Mar 12 13:09:40 crc systemd[1]: Starting Kubernetes Kubelet...
Mar 12 13:09:40 crc restorecon[4688]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 12 13:09:40 crc restorecon[4688]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc 
restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 13:09:40 crc 
restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 12 
13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin
to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 
crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 
13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 12 13:09:40 crc 
restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc 
restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc 
restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:40 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 12 13:09:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 13:09:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 12 13:09:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 12 13:09:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 13:09:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 12 13:09:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 
12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 
crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc 
restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc 
restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc 
restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc 
restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 12 13:09:41 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 12 13:09:41 crc restorecon[4688]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 12 13:09:41 crc restorecon[4688]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0
Mar 12 13:09:42 crc kubenswrapper[4778]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 12 13:09:42 crc kubenswrapper[4778]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Mar 12 13:09:42 crc kubenswrapper[4778]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 12 13:09:42 crc kubenswrapper[4778]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 12 13:09:42 crc kubenswrapper[4778]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 12 13:09:42 crc kubenswrapper[4778]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.006699    4778 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.009959    4778 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.009980    4778 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.009987    4778 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.009993    4778 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.009999    4778 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010004    4778 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010012    4778 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010017    4778 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010022    4778 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010027    4778 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010032    4778 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010037    4778 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010042    4778 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010047    4778 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010052    4778 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010056    4778 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010061    4778 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010066    4778 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010071    4778 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010084    4778 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010089    4778 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010094    4778 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010099    4778 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010104    4778 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010109    4778 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010113    4778 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010118    4778 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010123    4778 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010129    4778 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010133    4778 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010138    4778 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010144    4778 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010150    4778 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010155    4778 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010160    4778 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010167    4778 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010174    4778 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010179    4778 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010208    4778 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010216    4778 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010224    4778 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010230    4778 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010237    4778 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010243    4778 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010249    4778 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010255    4778 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 12
13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010260 4778 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010265 4778 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010270 4778 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010275 4778 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010280 4778 feature_gate.go:330] unrecognized feature gate: Example Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010285 4778 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010290 4778 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010295 4778 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010300 4778 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010306 4778 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010311 4778 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010316 4778 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010321 4778 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010326 4778 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010331 4778 
feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010338 4778 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010342 4778 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010347 4778 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010352 4778 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010357 4778 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010361 4778 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010366 4778 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010371 4778 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010376 4778 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.010380 4778 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.011802 4778 flags.go:64] FLAG: --address="0.0.0.0" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.011817 4778 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.011826 4778 flags.go:64] FLAG: --anonymous-auth="true" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.011834 4778 flags.go:64] FLAG: --application-metrics-count-limit="100" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.011841 4778 flags.go:64] FLAG: 
--authentication-token-webhook="false" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.011847 4778 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.011855 4778 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.011861 4778 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.011868 4778 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.011874 4778 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.011880 4778 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.011886 4778 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.011891 4778 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.011897 4778 flags.go:64] FLAG: --cgroup-root="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.011902 4778 flags.go:64] FLAG: --cgroups-per-qos="true" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.011908 4778 flags.go:64] FLAG: --client-ca-file="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.011913 4778 flags.go:64] FLAG: --cloud-config="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.011919 4778 flags.go:64] FLAG: --cloud-provider="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.011925 4778 flags.go:64] FLAG: --cluster-dns="[]" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.011933 4778 flags.go:64] FLAG: --cluster-domain="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.011941 4778 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Mar 12 13:09:42 crc 
kubenswrapper[4778]: I0312 13:09:42.011947 4778 flags.go:64] FLAG: --config-dir="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.011952 4778 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.011958 4778 flags.go:64] FLAG: --container-log-max-files="5" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.011966 4778 flags.go:64] FLAG: --container-log-max-size="10Mi" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.011971 4778 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.011977 4778 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.011983 4778 flags.go:64] FLAG: --containerd-namespace="k8s.io" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.011988 4778 flags.go:64] FLAG: --contention-profiling="false" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.011994 4778 flags.go:64] FLAG: --cpu-cfs-quota="true" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.011999 4778 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012005 4778 flags.go:64] FLAG: --cpu-manager-policy="none" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012011 4778 flags.go:64] FLAG: --cpu-manager-policy-options="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012018 4778 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012023 4778 flags.go:64] FLAG: --enable-controller-attach-detach="true" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012028 4778 flags.go:64] FLAG: --enable-debugging-handlers="true" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012034 4778 flags.go:64] FLAG: --enable-load-reader="false" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012039 4778 
flags.go:64] FLAG: --enable-server="true" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012044 4778 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012051 4778 flags.go:64] FLAG: --event-burst="100" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012057 4778 flags.go:64] FLAG: --event-qps="50" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012062 4778 flags.go:64] FLAG: --event-storage-age-limit="default=0" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012068 4778 flags.go:64] FLAG: --event-storage-event-limit="default=0" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012073 4778 flags.go:64] FLAG: --eviction-hard="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012080 4778 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012085 4778 flags.go:64] FLAG: --eviction-minimum-reclaim="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012091 4778 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012096 4778 flags.go:64] FLAG: --eviction-soft="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012102 4778 flags.go:64] FLAG: --eviction-soft-grace-period="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012132 4778 flags.go:64] FLAG: --exit-on-lock-contention="false" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012141 4778 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012148 4778 flags.go:64] FLAG: --experimental-mounter-path="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012156 4778 flags.go:64] FLAG: --fail-cgroupv1="false" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012164 4778 flags.go:64] FLAG: --fail-swap-on="true" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012171 4778 
flags.go:64] FLAG: --feature-gates="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012179 4778 flags.go:64] FLAG: --file-check-frequency="20s" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012204 4778 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012211 4778 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012216 4778 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012222 4778 flags.go:64] FLAG: --healthz-port="10248" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012228 4778 flags.go:64] FLAG: --help="false" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012234 4778 flags.go:64] FLAG: --hostname-override="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012239 4778 flags.go:64] FLAG: --housekeeping-interval="10s" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012245 4778 flags.go:64] FLAG: --http-check-frequency="20s" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012250 4778 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012256 4778 flags.go:64] FLAG: --image-credential-provider-config="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012261 4778 flags.go:64] FLAG: --image-gc-high-threshold="85" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012267 4778 flags.go:64] FLAG: --image-gc-low-threshold="80" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012272 4778 flags.go:64] FLAG: --image-service-endpoint="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012277 4778 flags.go:64] FLAG: --kernel-memcg-notification="false" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012283 4778 flags.go:64] FLAG: --kube-api-burst="100" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012289 4778 flags.go:64] FLAG: 
--kube-api-content-type="application/vnd.kubernetes.protobuf" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012294 4778 flags.go:64] FLAG: --kube-api-qps="50" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012300 4778 flags.go:64] FLAG: --kube-reserved="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012305 4778 flags.go:64] FLAG: --kube-reserved-cgroup="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012311 4778 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012317 4778 flags.go:64] FLAG: --kubelet-cgroups="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012322 4778 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012328 4778 flags.go:64] FLAG: --lock-file="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012333 4778 flags.go:64] FLAG: --log-cadvisor-usage="false" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012339 4778 flags.go:64] FLAG: --log-flush-frequency="5s" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012344 4778 flags.go:64] FLAG: --log-json-info-buffer-size="0" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012353 4778 flags.go:64] FLAG: --log-json-split-stream="false" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012359 4778 flags.go:64] FLAG: --log-text-info-buffer-size="0" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012364 4778 flags.go:64] FLAG: --log-text-split-stream="false" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012370 4778 flags.go:64] FLAG: --logging-format="text" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012376 4778 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012382 4778 flags.go:64] FLAG: --make-iptables-util-chains="true" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 
13:09:42.012387 4778 flags.go:64] FLAG: --manifest-url="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012392 4778 flags.go:64] FLAG: --manifest-url-header="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012400 4778 flags.go:64] FLAG: --max-housekeeping-interval="15s" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012405 4778 flags.go:64] FLAG: --max-open-files="1000000" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012413 4778 flags.go:64] FLAG: --max-pods="110" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012419 4778 flags.go:64] FLAG: --maximum-dead-containers="-1" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012424 4778 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012430 4778 flags.go:64] FLAG: --memory-manager-policy="None" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012435 4778 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012441 4778 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012447 4778 flags.go:64] FLAG: --node-ip="192.168.126.11" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012452 4778 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012464 4778 flags.go:64] FLAG: --node-status-max-images="50" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012470 4778 flags.go:64] FLAG: --node-status-update-frequency="10s" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012475 4778 flags.go:64] FLAG: --oom-score-adj="-999" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012481 4778 flags.go:64] FLAG: --pod-cidr="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012487 4778 flags.go:64] FLAG: 
--pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012494 4778 flags.go:64] FLAG: --pod-manifest-path="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012499 4778 flags.go:64] FLAG: --pod-max-pids="-1" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012505 4778 flags.go:64] FLAG: --pods-per-core="0" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012511 4778 flags.go:64] FLAG: --port="10250" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012517 4778 flags.go:64] FLAG: --protect-kernel-defaults="false" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012522 4778 flags.go:64] FLAG: --provider-id="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012528 4778 flags.go:64] FLAG: --qos-reserved="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012534 4778 flags.go:64] FLAG: --read-only-port="10255" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012540 4778 flags.go:64] FLAG: --register-node="true" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012545 4778 flags.go:64] FLAG: --register-schedulable="true" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012551 4778 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012561 4778 flags.go:64] FLAG: --registry-burst="10" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012567 4778 flags.go:64] FLAG: --registry-qps="5" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012572 4778 flags.go:64] FLAG: --reserved-cpus="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012578 4778 flags.go:64] FLAG: --reserved-memory="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012585 4778 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 
13:09:42.012591 4778 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012597 4778 flags.go:64] FLAG: --rotate-certificates="false" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012602 4778 flags.go:64] FLAG: --rotate-server-certificates="false" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012608 4778 flags.go:64] FLAG: --runonce="false" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012613 4778 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012619 4778 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012625 4778 flags.go:64] FLAG: --seccomp-default="false" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012631 4778 flags.go:64] FLAG: --serialize-image-pulls="true" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012636 4778 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012642 4778 flags.go:64] FLAG: --storage-driver-db="cadvisor" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012647 4778 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012653 4778 flags.go:64] FLAG: --storage-driver-password="root" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012659 4778 flags.go:64] FLAG: --storage-driver-secure="false" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012664 4778 flags.go:64] FLAG: --storage-driver-table="stats" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012669 4778 flags.go:64] FLAG: --storage-driver-user="root" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012675 4778 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012680 4778 flags.go:64] FLAG: --sync-frequency="1m0s" Mar 12 
13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012687 4778 flags.go:64] FLAG: --system-cgroups="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012693 4778 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012701 4778 flags.go:64] FLAG: --system-reserved-cgroup="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012706 4778 flags.go:64] FLAG: --tls-cert-file="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012712 4778 flags.go:64] FLAG: --tls-cipher-suites="[]" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012719 4778 flags.go:64] FLAG: --tls-min-version="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012725 4778 flags.go:64] FLAG: --tls-private-key-file="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012730 4778 flags.go:64] FLAG: --topology-manager-policy="none" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012736 4778 flags.go:64] FLAG: --topology-manager-policy-options="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012742 4778 flags.go:64] FLAG: --topology-manager-scope="container" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012749 4778 flags.go:64] FLAG: --v="2" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012756 4778 flags.go:64] FLAG: --version="false" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012763 4778 flags.go:64] FLAG: --vmodule="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012770 4778 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.012776 4778 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.012906 4778 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.012913 4778 feature_gate.go:330] unrecognized feature 
gate: SignatureStores Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.012919 4778 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.012925 4778 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.012930 4778 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.012935 4778 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.012940 4778 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.012945 4778 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.012951 4778 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.012955 4778 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.012960 4778 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.012967 4778 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.012972 4778 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.012977 4778 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.012982 4778 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.012986 4778 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.012992 4778 
feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.012996 4778 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.013001 4778 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.013006 4778 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.013010 4778 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.013015 4778 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.013022 4778 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.013028 4778 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.013034 4778 feature_gate.go:330] unrecognized feature gate: Example Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.013040 4778 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.013046 4778 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.013052 4778 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.013057 4778 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.013062 4778 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.013067 4778 feature_gate.go:330] 
unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.013072 4778 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.013078 4778 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.013082 4778 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.013087 4778 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.013092 4778 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.013097 4778 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.013102 4778 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.013107 4778 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.013112 4778 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.013117 4778 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.013121 4778 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.013128 4778 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.013133 4778 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.013139 4778 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.013143 4778 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.013148 4778 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.013154 4778 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.013159 4778 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.013164 4778 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.013169 4778 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.013174 4778 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.013178 4778 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.013202 4778 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.013207 4778 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.013212 4778 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.013216 4778 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.013222 4778 
feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.013228 4778 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.013232 4778 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.013237 4778 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.013244 4778 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.013254 4778 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.013260 4778 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.013266 4778 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.013273 4778 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.013279 4778 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.013284 4778 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.013290 4778 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.013295 4778 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.013300 4778 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.014035 4778 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.026235 4778 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.026292 4778 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026378 4778 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026387 4778 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026392 4778 feature_gate.go:330] 
unrecognized feature gate: SignatureStores Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026396 4778 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026400 4778 feature_gate.go:330] unrecognized feature gate: Example Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026404 4778 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026408 4778 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026412 4778 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026416 4778 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026419 4778 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026423 4778 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026427 4778 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026430 4778 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026434 4778 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026438 4778 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026442 4778 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026446 4778 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 12 13:09:42 crc 
kubenswrapper[4778]: W0312 13:09:42.026451 4778 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026455 4778 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026458 4778 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026462 4778 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026466 4778 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026470 4778 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026473 4778 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026480 4778 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026484 4778 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026488 4778 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026492 4778 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026496 4778 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026499 4778 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026503 4778 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026509 4778 feature_gate.go:330] unrecognized feature gate: 
IngressControllerLBSubnetsAWS Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026514 4778 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026517 4778 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026521 4778 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026525 4778 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026529 4778 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026533 4778 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026537 4778 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026543 4778 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026549 4778 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026555 4778 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026562 4778 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026566 4778 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026571 4778 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026575 4778 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026579 4778 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026584 4778 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026587 4778 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026594 4778 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026597 4778 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026601 4778 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026605 4778 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026609 4778 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026613 4778 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026617 4778 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026622 4778 feature_gate.go:353] Setting GA 
feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026630 4778 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026634 4778 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026638 4778 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026643 4778 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026647 4778 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026651 4778 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026656 4778 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026661 4778 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026666 4778 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026671 4778 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026676 4778 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026680 4778 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026684 4778 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026689 4778 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.026697 4778 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026895 4778 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026905 4778 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026910 4778 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026914 4778 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026918 4778 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026922 4778 feature_gate.go:330] 
unrecognized feature gate: ClusterMonitoringConfig Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026927 4778 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026931 4778 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026936 4778 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026944 4778 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026949 4778 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026954 4778 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026958 4778 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026963 4778 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026967 4778 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026971 4778 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026976 4778 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026981 4778 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026986 4778 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. 
It will be removed in a future release. Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026990 4778 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026994 4778 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.026998 4778 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.027002 4778 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.027007 4778 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.027013 4778 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.027017 4778 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.027021 4778 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.027025 4778 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.027028 4778 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.027032 4778 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.027036 4778 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.027041 4778 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.027045 4778 feature_gate.go:330] unrecognized feature gate: 
SigstoreImageVerification Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.027049 4778 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.027052 4778 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.027056 4778 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.027061 4778 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.027064 4778 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.027068 4778 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.027072 4778 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.027076 4778 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.027080 4778 feature_gate.go:330] unrecognized feature gate: Example Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.027084 4778 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.027089 4778 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.027093 4778 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.027096 4778 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.027100 4778 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.027104 4778 
feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.027108 4778 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.027112 4778 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.027115 4778 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.027119 4778 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.027123 4778 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.027127 4778 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.027131 4778 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.027134 4778 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.027138 4778 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.027142 4778 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.027146 4778 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.027150 4778 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.027154 4778 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.027158 4778 feature_gate.go:330] unrecognized feature gate: 
NetworkLiveMigration Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.027163 4778 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.027168 4778 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.027172 4778 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.027175 4778 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.027179 4778 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.027200 4778 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.027204 4778 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.027208 4778 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.027212 4778 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.027218 4778 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.027455 4778 server.go:940] "Client rotation is on, will bootstrap in background" Mar 12 13:09:42 crc kubenswrapper[4778]: E0312 13:09:42.034297 
4778 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2026-02-24 05:52:08 +0000 UTC" logger="UnhandledError" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.040530 4778 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.046630 4778 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.049009 4778 server.go:997] "Starting client certificate rotation" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.049069 4778 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.049248 4778 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.077421 4778 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.080729 4778 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 12 13:09:42 crc kubenswrapper[4778]: E0312 13:09:42.080887 4778 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.32:6443: connect: connection refused" logger="UnhandledError" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.097660 4778 log.go:25] "Validated CRI v1 runtime API" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 
13:09:42.134131 4778 log.go:25] "Validated CRI v1 image API" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.136952 4778 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.141766 4778 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-03-12-13-04-09-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.141814 4778 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.168613 4778 manager.go:217] Machine: {Timestamp:2026-03-12 13:09:42.165261278 +0000 UTC m=+0.613956754 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:65870ff3-f0f2-4ca4-b489-075d672e37ad BootID:9825271f-f529-4477-b3b1-2a00dbf9b03e Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 
HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:4d:1f:b0 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:4d:1f:b0 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:82:f4:d4 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:9f:64:26 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:cd:a9:3c Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:24:0d:7d Speed:-1 Mtu:1496} {Name:eth10 MacAddress:f6:1b:c7:d6:a8:65 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:a6:c9:da:e9:2e:05 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] 
Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] 
UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.169011 4778 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.169232 4778 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.170526 4778 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.170825 4778 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.170926 4778 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.171383 4778 topology_manager.go:138] "Creating topology manager with none policy" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.171403 4778 container_manager_linux.go:303] "Creating device plugin manager" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.171927 4778 manager.go:142] "Creating Device Plugin manager" 
path="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.171977 4778 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.172327 4778 state_mem.go:36] "Initialized new in-memory state store" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.172460 4778 server.go:1245] "Using root directory" path="/var/lib/kubelet" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.176915 4778 kubelet.go:418] "Attempting to sync node with API server" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.176984 4778 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.177034 4778 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.177060 4778 kubelet.go:324] "Adding apiserver pod source" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.177081 4778 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.184849 4778 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.32:6443: connect: connection refused Mar 12 13:09:42 crc kubenswrapper[4778]: E0312 13:09:42.185090 4778 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.32:6443: connect: connection refused" logger="UnhandledError" Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.186619 4778 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.32:6443: connect: connection refused Mar 12 13:09:42 crc kubenswrapper[4778]: E0312 13:09:42.186743 4778 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.32:6443: connect: connection refused" logger="UnhandledError" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.187278 4778 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.188444 4778 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.190113 4778 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.192086 4778 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.192133 4778 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.192153 4778 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.192172 4778 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.192231 4778 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.192251 4778 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.192269 4778 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.192298 4778 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.192320 4778 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.192342 4778 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.192366 4778 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.192385 4778 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.193520 4778 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.194298 4778 server.go:1280] "Started kubelet" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.195294 4778 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.195331 4778 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.196523 4778 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.196488 4778 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.32:6443: connect: connection refused Mar 12 13:09:42 crc systemd[1]: Started Kubernetes Kubelet. Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.198364 4778 server.go:460] "Adding debug handlers to kubelet server" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.198456 4778 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.198502 4778 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 12 13:09:42 crc kubenswrapper[4778]: E0312 13:09:42.199178 4778 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.199327 4778 volume_manager.go:287] "The desired_state_of_world populator starts" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.199344 4778 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.199425 4778 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 12 13:09:42 crc 
kubenswrapper[4778]: I0312 13:09:42.200936 4778 factory.go:55] Registering systemd factory Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.200975 4778 factory.go:221] Registration of the systemd container factory successfully Mar 12 13:09:42 crc kubenswrapper[4778]: E0312 13:09:42.200862 4778 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.32:6443: connect: connection refused" interval="200ms" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.201380 4778 factory.go:153] Registering CRI-O factory Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.201392 4778 factory.go:221] Registration of the crio container factory successfully Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.201453 4778 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.201485 4778 factory.go:103] Registering Raw factory Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.201501 4778 manager.go:1196] Started watching for new ooms in manager Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.202029 4778 manager.go:319] Starting recovery of all containers Mar 12 13:09:42 crc kubenswrapper[4778]: E0312 13:09:42.202642 4778 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.32:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189c1a09b06f665b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:42.194251355 +0000 UTC m=+0.642946971,LastTimestamp:2026-03-12 13:09:42.194251355 +0000 UTC m=+0.642946971,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.207052 4778 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.32:6443: connect: connection refused Mar 12 13:09:42 crc kubenswrapper[4778]: E0312 13:09:42.207134 4778 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.32:6443: connect: connection refused" logger="UnhandledError" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.216440 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.216516 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.216540 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" 
seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.216562 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.216582 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.216605 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.216624 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.216644 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.216668 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.216688 
4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.216708 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.216729 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.216747 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.216769 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.216790 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.216810 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.216830 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.216847 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.216866 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.216883 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.216902 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.216922 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.216940 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.216958 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.216979 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.216998 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.217021 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.217042 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.217062 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.217080 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.217100 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.217117 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.217136 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.217153 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" 
seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.217172 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.217224 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.217253 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.217278 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.217301 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.217323 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: 
I0312 13:09:42.217341 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.217358 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.217375 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.217394 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.217411 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.217428 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.217447 4778 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.217466 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.217483 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.217501 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.217521 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.217540 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.217566 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.217586 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.217606 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.217627 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.217648 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.217665 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.217682 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" 
seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.217702 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.217722 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.217740 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.217761 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.217779 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.217796 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Mar 12 13:09:42 crc 
kubenswrapper[4778]: I0312 13:09:42.217814 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.217832 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.217851 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.217869 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.217886 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.217905 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.217926 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.217943 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.217960 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.217978 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.217996 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.218013 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.218033 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.218065 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.218084 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.218103 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.218127 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.218149 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.218171 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.218235 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.218262 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.218285 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.218307 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.218332 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.218355 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.218380 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.218404 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.218426 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.218448 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.218474 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.218496 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.218519 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.218542 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.218564 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.218589 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.218613 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.218639 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.218664 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.218688 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.218722 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.218811 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.218847 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.218877 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.218904 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.219027 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.219062 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.219090 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.219118 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.219146 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.219173 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.219232 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.219257 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.219281 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.219303 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.219327 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.219349 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.219370 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.219393 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.219416 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.219440 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.219464 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.219495 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.219519 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.219543 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.219566 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.219587 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.219608 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.219632 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.219658 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.219680 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.219714 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.222053 4778 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.222104 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.222134 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.222159 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.222212 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.222237 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.222264 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.222287 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.222309 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.222331 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.222355 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.222379 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.222404 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.222428 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.222451 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.222481 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.222506 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.222533 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.222572 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.222597 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.222644 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.222670 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.222702 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.222727 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.222833 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.222863 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.222887 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.222941 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.222970 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.222997 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.223022 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.223045 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.223070 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.223132 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.223157 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.223211 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.223237 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.223264 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.223291 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.223315 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.223340 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.223365 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.223388 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.223412 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.223437 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.223463 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.223486 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.223511 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.223537 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.223562 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.223585 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.223609 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783"
volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.223634 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.223660 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.223688 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.223713 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.223736 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.223761 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" 
volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.223786 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.223810 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.223837 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.223864 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.223891 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.223914 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.223936 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.223959 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.223981 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.224004 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.224029 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.224057 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" 
seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.224085 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.224110 4778 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.224133 4778 reconstruct.go:97] "Volume reconstruction finished" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.224149 4778 reconciler.go:26] "Reconciler: start to sync state" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.231759 4778 manager.go:324] Recovery completed Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.248056 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.250071 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.250234 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.250321 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.250407 4778 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.251979 4778 cpu_manager.go:225] "Starting CPU manager" policy="none" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.252021 4778 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.252048 4778 state_mem.go:36] "Initialized new in-memory state store" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.252447 4778 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.252516 4778 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.252554 4778 kubelet.go:2335] "Starting kubelet main sync loop" Mar 12 13:09:42 crc kubenswrapper[4778]: E0312 13:09:42.252759 4778 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.253929 4778 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.32:6443: connect: connection refused Mar 12 13:09:42 crc kubenswrapper[4778]: E0312 13:09:42.254066 4778 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.32:6443: connect: connection refused" logger="UnhandledError" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.274100 4778 policy_none.go:49] "None policy: Start" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.275319 4778 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 12 
13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.275356 4778 state_mem.go:35] "Initializing new in-memory state store" Mar 12 13:09:42 crc kubenswrapper[4778]: E0312 13:09:42.300026 4778 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.341241 4778 manager.go:334] "Starting Device Plugin manager" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.341294 4778 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.341306 4778 server.go:79] "Starting device plugin registration server" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.341774 4778 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.341786 4778 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.341914 4778 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.342079 4778 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.342090 4778 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.353257 4778 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc"] Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.353434 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Mar 12 13:09:42 crc kubenswrapper[4778]: E0312 13:09:42.354448 4778 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.354770 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.354797 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.354806 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.354909 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.355433 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.355496 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.355787 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.355811 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.355822 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.355936 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.356081 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.356111 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.356448 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.356472 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.356480 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.356778 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.356822 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.356838 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.356943 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.357006 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.357018 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.357061 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:09:42 crc 
kubenswrapper[4778]: I0312 13:09:42.357259 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.357328 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.358471 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.358509 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.358520 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.358620 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.358983 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.359011 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.359331 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.359350 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.359385 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.359711 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.359761 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.359793 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.359822 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.359838 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.359860 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.360155 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.360213 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.364302 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.364337 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.364350 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:09:42 crc kubenswrapper[4778]: E0312 13:09:42.402219 4778 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.32:6443: connect: connection refused" interval="400ms" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.426077 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.426121 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.426144 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.426165 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.426213 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.426253 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.426275 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.426297 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.426318 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.426399 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.426460 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.426508 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.426529 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod 
\"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.426549 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.426569 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.442297 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.444127 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.444225 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.444240 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.444298 4778 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 12 13:09:42 crc kubenswrapper[4778]: E0312 13:09:42.445006 4778 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.32:6443: connect: connection refused" node="crc" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 
13:09:42.528238 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.528709 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.528732 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.528409 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.528775 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.528794 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.528756 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.528824 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.528844 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.528914 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.528938 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: 
\"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.528984 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.529004 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.529065 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.529086 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.529119 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 
13:09:42.529061 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.529137 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.529162 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.529208 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.529155 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.529240 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: 
\"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.529304 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.529308 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.529406 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.529506 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.529421 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.529560 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.529638 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.529672 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.646094 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.647806 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.647860 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.647875 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.647904 4778 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 12 13:09:42 crc kubenswrapper[4778]: E0312 13:09:42.648368 4778 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.32:6443: connect: 
connection refused" node="crc" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.692800 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.701074 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.724981 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.740958 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 12 13:09:42 crc kubenswrapper[4778]: I0312 13:09:42.746897 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.765043 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-4b2c618daa0e6b1ef75cf194ff09b66dce8d1fa88a553a44267830757f6def27 WatchSource:0}: Error finding container 4b2c618daa0e6b1ef75cf194ff09b66dce8d1fa88a553a44267830757f6def27: Status 404 returned error can't find the container with id 4b2c618daa0e6b1ef75cf194ff09b66dce8d1fa88a553a44267830757f6def27 Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.767539 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-55142ceb5ed2e803358fa384604caea104bf219bd93a4c658c8c9de8b15c7840 WatchSource:0}: Error finding container 55142ceb5ed2e803358fa384604caea104bf219bd93a4c658c8c9de8b15c7840: Status 404 returned error can't find the container 
with id 55142ceb5ed2e803358fa384604caea104bf219bd93a4c658c8c9de8b15c7840 Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.774312 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-9ebcfe5c002ded96ed4d0793e3666de18a1bc1a593a0d0a806fbe9671a9172bc WatchSource:0}: Error finding container 9ebcfe5c002ded96ed4d0793e3666de18a1bc1a593a0d0a806fbe9671a9172bc: Status 404 returned error can't find the container with id 9ebcfe5c002ded96ed4d0793e3666de18a1bc1a593a0d0a806fbe9671a9172bc Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.777672 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-c1f6f7ac237d1a5f10ed7348949f2143a938a6e6fe34647955863ea671f2185f WatchSource:0}: Error finding container c1f6f7ac237d1a5f10ed7348949f2143a938a6e6fe34647955863ea671f2185f: Status 404 returned error can't find the container with id c1f6f7ac237d1a5f10ed7348949f2143a938a6e6fe34647955863ea671f2185f Mar 12 13:09:42 crc kubenswrapper[4778]: W0312 13:09:42.786207 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-46ba979cb11342a0959a7f6691dc84d9fad32fa382e87214f1b3485080cc98b3 WatchSource:0}: Error finding container 46ba979cb11342a0959a7f6691dc84d9fad32fa382e87214f1b3485080cc98b3: Status 404 returned error can't find the container with id 46ba979cb11342a0959a7f6691dc84d9fad32fa382e87214f1b3485080cc98b3 Mar 12 13:09:42 crc kubenswrapper[4778]: E0312 13:09:42.803375 4778 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.32:6443: connect: connection refused" 
interval="800ms" Mar 12 13:09:43 crc kubenswrapper[4778]: I0312 13:09:43.049446 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:09:43 crc kubenswrapper[4778]: I0312 13:09:43.051068 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:09:43 crc kubenswrapper[4778]: I0312 13:09:43.051099 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:09:43 crc kubenswrapper[4778]: I0312 13:09:43.051110 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:09:43 crc kubenswrapper[4778]: I0312 13:09:43.051132 4778 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 12 13:09:43 crc kubenswrapper[4778]: E0312 13:09:43.051456 4778 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.32:6443: connect: connection refused" node="crc" Mar 12 13:09:43 crc kubenswrapper[4778]: I0312 13:09:43.198249 4778 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.32:6443: connect: connection refused Mar 12 13:09:43 crc kubenswrapper[4778]: W0312 13:09:43.216310 4778 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.32:6443: connect: connection refused Mar 12 13:09:43 crc kubenswrapper[4778]: E0312 13:09:43.216400 4778 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.32:6443: connect: connection refused" logger="UnhandledError" Mar 12 13:09:43 crc kubenswrapper[4778]: I0312 13:09:43.257827 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"55142ceb5ed2e803358fa384604caea104bf219bd93a4c658c8c9de8b15c7840"} Mar 12 13:09:43 crc kubenswrapper[4778]: I0312 13:09:43.258904 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4b2c618daa0e6b1ef75cf194ff09b66dce8d1fa88a553a44267830757f6def27"} Mar 12 13:09:43 crc kubenswrapper[4778]: I0312 13:09:43.260051 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"46ba979cb11342a0959a7f6691dc84d9fad32fa382e87214f1b3485080cc98b3"} Mar 12 13:09:43 crc kubenswrapper[4778]: I0312 13:09:43.262255 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"c1f6f7ac237d1a5f10ed7348949f2143a938a6e6fe34647955863ea671f2185f"} Mar 12 13:09:43 crc kubenswrapper[4778]: I0312 13:09:43.263631 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"9ebcfe5c002ded96ed4d0793e3666de18a1bc1a593a0d0a806fbe9671a9172bc"} Mar 12 13:09:43 crc kubenswrapper[4778]: W0312 13:09:43.353800 4778 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.32:6443: connect: connection refused Mar 12 13:09:43 crc kubenswrapper[4778]: E0312 13:09:43.353870 4778 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.32:6443: connect: connection refused" logger="UnhandledError" Mar 12 13:09:43 crc kubenswrapper[4778]: W0312 13:09:43.364790 4778 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.32:6443: connect: connection refused Mar 12 13:09:43 crc kubenswrapper[4778]: E0312 13:09:43.364933 4778 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.32:6443: connect: connection refused" logger="UnhandledError" Mar 12 13:09:43 crc kubenswrapper[4778]: W0312 13:09:43.483073 4778 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.32:6443: connect: connection refused Mar 12 13:09:43 crc kubenswrapper[4778]: E0312 13:09:43.483228 4778 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.32:6443: connect: connection refused" 
logger="UnhandledError" Mar 12 13:09:43 crc kubenswrapper[4778]: E0312 13:09:43.603914 4778 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.32:6443: connect: connection refused" interval="1.6s" Mar 12 13:09:43 crc kubenswrapper[4778]: E0312 13:09:43.816628 4778 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.32:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189c1a09b06f665b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:42.194251355 +0000 UTC m=+0.642946971,LastTimestamp:2026-03-12 13:09:42.194251355 +0000 UTC m=+0.642946971,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:09:43 crc kubenswrapper[4778]: I0312 13:09:43.852526 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:09:43 crc kubenswrapper[4778]: I0312 13:09:43.856912 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:09:43 crc kubenswrapper[4778]: I0312 13:09:43.856970 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:09:43 crc kubenswrapper[4778]: I0312 13:09:43.856983 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:09:43 crc kubenswrapper[4778]: I0312 13:09:43.857017 4778 kubelet_node_status.go:76] 
"Attempting to register node" node="crc" Mar 12 13:09:43 crc kubenswrapper[4778]: E0312 13:09:43.857643 4778 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.32:6443: connect: connection refused" node="crc" Mar 12 13:09:44 crc kubenswrapper[4778]: I0312 13:09:44.197463 4778 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.32:6443: connect: connection refused Mar 12 13:09:44 crc kubenswrapper[4778]: I0312 13:09:44.251060 4778 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 12 13:09:44 crc kubenswrapper[4778]: E0312 13:09:44.252374 4778 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.32:6443: connect: connection refused" logger="UnhandledError" Mar 12 13:09:44 crc kubenswrapper[4778]: I0312 13:09:44.274465 4778 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e64aa9b1a15198d88b5f38b8ad0abdeef89430869b6f25c73e2f45806c539964" exitCode=0 Mar 12 13:09:44 crc kubenswrapper[4778]: I0312 13:09:44.274565 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"e64aa9b1a15198d88b5f38b8ad0abdeef89430869b6f25c73e2f45806c539964"} Mar 12 13:09:44 crc kubenswrapper[4778]: I0312 13:09:44.274693 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:09:44 crc kubenswrapper[4778]: 
I0312 13:09:44.275907 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:09:44 crc kubenswrapper[4778]: I0312 13:09:44.275969 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:09:44 crc kubenswrapper[4778]: I0312 13:09:44.275988 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:09:44 crc kubenswrapper[4778]: I0312 13:09:44.276788 4778 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="a347cade99b7bdbe676a020faf0a90b281672f16c4f580455856786ed781d3f9" exitCode=0 Mar 12 13:09:44 crc kubenswrapper[4778]: I0312 13:09:44.276878 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"a347cade99b7bdbe676a020faf0a90b281672f16c4f580455856786ed781d3f9"} Mar 12 13:09:44 crc kubenswrapper[4778]: I0312 13:09:44.276943 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:09:44 crc kubenswrapper[4778]: I0312 13:09:44.278124 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:09:44 crc kubenswrapper[4778]: I0312 13:09:44.278156 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:09:44 crc kubenswrapper[4778]: I0312 13:09:44.278173 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:09:44 crc kubenswrapper[4778]: I0312 13:09:44.279147 4778 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="9ddeb961084ae4041feb2ac05c9fdd2f5c11b4bdc5f5f33878c9ad9e83a2e1a2" exitCode=0 Mar 12 13:09:44 crc kubenswrapper[4778]: 
I0312 13:09:44.279241 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"9ddeb961084ae4041feb2ac05c9fdd2f5c11b4bdc5f5f33878c9ad9e83a2e1a2"} Mar 12 13:09:44 crc kubenswrapper[4778]: I0312 13:09:44.279323 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:09:44 crc kubenswrapper[4778]: I0312 13:09:44.279326 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:09:44 crc kubenswrapper[4778]: I0312 13:09:44.280588 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:09:44 crc kubenswrapper[4778]: I0312 13:09:44.280636 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:09:44 crc kubenswrapper[4778]: I0312 13:09:44.280653 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:09:44 crc kubenswrapper[4778]: I0312 13:09:44.280706 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:09:44 crc kubenswrapper[4778]: I0312 13:09:44.280759 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:09:44 crc kubenswrapper[4778]: I0312 13:09:44.281022 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:09:44 crc kubenswrapper[4778]: I0312 13:09:44.282761 4778 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="7cf73fb2fb0de0ce76c16b7db59c94484062b1f4fc5b6df9633c4740f5bbbc0c" exitCode=0 Mar 12 13:09:44 crc kubenswrapper[4778]: I0312 13:09:44.282948 4778 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"7cf73fb2fb0de0ce76c16b7db59c94484062b1f4fc5b6df9633c4740f5bbbc0c"} Mar 12 13:09:44 crc kubenswrapper[4778]: I0312 13:09:44.282974 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:09:44 crc kubenswrapper[4778]: I0312 13:09:44.284065 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:09:44 crc kubenswrapper[4778]: I0312 13:09:44.284105 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:09:44 crc kubenswrapper[4778]: I0312 13:09:44.284121 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:09:44 crc kubenswrapper[4778]: I0312 13:09:44.286798 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2cf827947c686099ca3c6afad51d866f4ee1d557bc64cc1c70f6213fd4198df2"} Mar 12 13:09:44 crc kubenswrapper[4778]: I0312 13:09:44.286844 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"62d772ee1ff9d986b4311494a08c8763bd91704fda6cd9c6f067c98205a4067d"} Mar 12 13:09:45 crc kubenswrapper[4778]: I0312 13:09:45.197811 4778 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.32:6443: connect: connection refused Mar 12 13:09:45 crc kubenswrapper[4778]: E0312 13:09:45.204545 4778 controller.go:145] "Failed to ensure lease exists, 
will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.32:6443: connect: connection refused" interval="3.2s" Mar 12 13:09:45 crc kubenswrapper[4778]: I0312 13:09:45.290593 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:09:45 crc kubenswrapper[4778]: I0312 13:09:45.290590 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"981038910a82f8dc9ffff22e601a748571a56541b59c187d9ce4f5d500febd58"} Mar 12 13:09:45 crc kubenswrapper[4778]: I0312 13:09:45.290708 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6c2656063b4947b28fa0ac1759e349c80fc039346869b1c1d6daad75e93ad407"} Mar 12 13:09:45 crc kubenswrapper[4778]: I0312 13:09:45.290722 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"bfcd0839d0f910ecfd92ecc2db64e4ef06fd90bfda52f24a751f8bf1cf112d8c"} Mar 12 13:09:45 crc kubenswrapper[4778]: I0312 13:09:45.291494 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:09:45 crc kubenswrapper[4778]: I0312 13:09:45.291521 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:09:45 crc kubenswrapper[4778]: I0312 13:09:45.291530 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:09:45 crc kubenswrapper[4778]: I0312 13:09:45.293721 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f5918a46253f4d68b9bc62ba4357dd2ae6baff245e6b4ca06e44eb7e9b7af9df"} Mar 12 13:09:45 crc kubenswrapper[4778]: I0312 13:09:45.293752 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8141466f1a3447b31eeaeb92f1b2ac9e8ddef4ba3e9a60f2ce9a775c3cce0a5b"} Mar 12 13:09:45 crc kubenswrapper[4778]: I0312 13:09:45.293766 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:09:45 crc kubenswrapper[4778]: I0312 13:09:45.294554 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:09:45 crc kubenswrapper[4778]: I0312 13:09:45.294602 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:09:45 crc kubenswrapper[4778]: I0312 13:09:45.294620 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:09:45 crc kubenswrapper[4778]: I0312 13:09:45.306791 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"bdfb81ab3f0178dc8064bd278e9e5cc42b3b2fda7282bb869d2f385b423e57d0"} Mar 12 13:09:45 crc kubenswrapper[4778]: I0312 13:09:45.306851 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"bc7259359df220c534d265305ee3ca44e7bcdce8da0d8b164132e02f7ed72e51"} Mar 12 13:09:45 crc kubenswrapper[4778]: I0312 13:09:45.306871 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2d60adb329e51ce7d877de68c1386f904ef0f717c82a5bfb69ab18438a4e536a"} Mar 12 13:09:45 crc kubenswrapper[4778]: I0312 13:09:45.306891 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7f640289dea724d5668fc009d628345ea104b2bbc9bc3471e42c3ec5f9acada1"} Mar 12 13:09:45 crc kubenswrapper[4778]: I0312 13:09:45.308637 4778 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="3701d4b9c229934646d070a25b4bf944ac544d227ff9ba89fb1885cecfb562de" exitCode=0 Mar 12 13:09:45 crc kubenswrapper[4778]: I0312 13:09:45.308699 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"3701d4b9c229934646d070a25b4bf944ac544d227ff9ba89fb1885cecfb562de"} Mar 12 13:09:45 crc kubenswrapper[4778]: I0312 13:09:45.308711 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:09:45 crc kubenswrapper[4778]: I0312 13:09:45.309355 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:09:45 crc kubenswrapper[4778]: I0312 13:09:45.309378 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:09:45 crc kubenswrapper[4778]: I0312 13:09:45.309386 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:09:45 crc kubenswrapper[4778]: I0312 13:09:45.311873 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"7698145a8f9a3b12ca021d55f406bc6adf7e139c7e32156ced11a20de194608c"} Mar 12 13:09:45 crc kubenswrapper[4778]: I0312 13:09:45.311971 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:09:45 crc kubenswrapper[4778]: I0312 13:09:45.312907 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:09:45 crc kubenswrapper[4778]: I0312 13:09:45.312934 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:09:45 crc kubenswrapper[4778]: I0312 13:09:45.312953 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:09:45 crc kubenswrapper[4778]: I0312 13:09:45.458322 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:09:45 crc kubenswrapper[4778]: I0312 13:09:45.459497 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:09:45 crc kubenswrapper[4778]: I0312 13:09:45.459534 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:09:45 crc kubenswrapper[4778]: I0312 13:09:45.459545 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:09:45 crc kubenswrapper[4778]: I0312 13:09:45.459569 4778 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 12 13:09:45 crc kubenswrapper[4778]: E0312 13:09:45.460028 4778 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.32:6443: connect: connection refused" node="crc" Mar 12 13:09:45 crc kubenswrapper[4778]: W0312 13:09:45.692414 4778 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.32:6443: connect: connection refused Mar 12 13:09:45 crc kubenswrapper[4778]: E0312 13:09:45.692510 4778 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.32:6443: connect: connection refused" logger="UnhandledError" Mar 12 13:09:45 crc kubenswrapper[4778]: W0312 13:09:45.878599 4778 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.32:6443: connect: connection refused Mar 12 13:09:45 crc kubenswrapper[4778]: E0312 13:09:45.878700 4778 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.32:6443: connect: connection refused" logger="UnhandledError" Mar 12 13:09:46 crc kubenswrapper[4778]: I0312 13:09:46.319126 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"096e595c20026e7987447e330ca68a13e51edc53903377d3649a915efba351ab"} Mar 12 13:09:46 crc kubenswrapper[4778]: I0312 13:09:46.319491 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:09:46 crc kubenswrapper[4778]: I0312 13:09:46.321038 4778 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Mar 12 13:09:46 crc kubenswrapper[4778]: I0312 13:09:46.321092 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:09:46 crc kubenswrapper[4778]: I0312 13:09:46.321117 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:09:46 crc kubenswrapper[4778]: I0312 13:09:46.324846 4778 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="9e0d7207d43b4b2bb79583cb1bb2f31034392eb4193b9b3b2f547f474d335250" exitCode=0 Mar 12 13:09:46 crc kubenswrapper[4778]: I0312 13:09:46.324905 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"9e0d7207d43b4b2bb79583cb1bb2f31034392eb4193b9b3b2f547f474d335250"} Mar 12 13:09:46 crc kubenswrapper[4778]: I0312 13:09:46.324971 4778 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 13:09:46 crc kubenswrapper[4778]: I0312 13:09:46.325015 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:09:46 crc kubenswrapper[4778]: I0312 13:09:46.325021 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:09:46 crc kubenswrapper[4778]: I0312 13:09:46.325016 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:09:46 crc kubenswrapper[4778]: I0312 13:09:46.326343 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:09:46 crc kubenswrapper[4778]: I0312 13:09:46.327286 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:09:46 crc kubenswrapper[4778]: I0312 13:09:46.327335 4778 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:09:46 crc kubenswrapper[4778]: I0312 13:09:46.327353 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:09:46 crc kubenswrapper[4778]: I0312 13:09:46.328369 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:09:46 crc kubenswrapper[4778]: I0312 13:09:46.328414 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:09:46 crc kubenswrapper[4778]: I0312 13:09:46.328434 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:09:46 crc kubenswrapper[4778]: I0312 13:09:46.329143 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:09:46 crc kubenswrapper[4778]: I0312 13:09:46.329227 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:09:46 crc kubenswrapper[4778]: I0312 13:09:46.329253 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:09:46 crc kubenswrapper[4778]: I0312 13:09:46.330271 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:09:46 crc kubenswrapper[4778]: I0312 13:09:46.330321 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:09:46 crc kubenswrapper[4778]: I0312 13:09:46.330339 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:09:46 crc kubenswrapper[4778]: I0312 13:09:46.829165 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 12 
13:09:47 crc kubenswrapper[4778]: I0312 13:09:47.203457 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 13:09:47 crc kubenswrapper[4778]: I0312 13:09:47.210148 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 13:09:47 crc kubenswrapper[4778]: I0312 13:09:47.332090 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2dc82a592c28b74aef165a164cc4fe4e2e38b6fb48e59f499476a252197e3fc9"} Mar 12 13:09:47 crc kubenswrapper[4778]: I0312 13:09:47.332154 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:09:47 crc kubenswrapper[4778]: I0312 13:09:47.332213 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3b52b689d66d254a521c980330e792ecbcce1102f39f97d6149bf48ad24c5de2"} Mar 12 13:09:47 crc kubenswrapper[4778]: I0312 13:09:47.332230 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"78e77ceb524173a1cdbf6c93b730412dcd8b6aedcee06c40fb757cc8e738e380"} Mar 12 13:09:47 crc kubenswrapper[4778]: I0312 13:09:47.332241 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"90e2266711bd32e96e742549772474d9fa43d8f368021e8a7aba3fd1c7b0b87b"} Mar 12 13:09:47 crc kubenswrapper[4778]: I0312 13:09:47.332154 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:09:47 crc kubenswrapper[4778]: I0312 13:09:47.332345 4778 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:09:47 crc kubenswrapper[4778]: I0312 13:09:47.332848 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 13:09:47 crc kubenswrapper[4778]: I0312 13:09:47.333447 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:09:47 crc kubenswrapper[4778]: I0312 13:09:47.333474 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:09:47 crc kubenswrapper[4778]: I0312 13:09:47.333482 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:09:47 crc kubenswrapper[4778]: I0312 13:09:47.333577 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:09:47 crc kubenswrapper[4778]: I0312 13:09:47.333609 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:09:47 crc kubenswrapper[4778]: I0312 13:09:47.333626 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:09:47 crc kubenswrapper[4778]: I0312 13:09:47.334489 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:09:47 crc kubenswrapper[4778]: I0312 13:09:47.334522 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:09:47 crc kubenswrapper[4778]: I0312 13:09:47.334534 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:09:47 crc kubenswrapper[4778]: I0312 13:09:47.419763 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 13:09:48 crc kubenswrapper[4778]: I0312 13:09:48.108397 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 13:09:48 crc kubenswrapper[4778]: I0312 13:09:48.302451 4778 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 12 13:09:48 crc kubenswrapper[4778]: I0312 13:09:48.338805 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f046d558bf242476327e1ee39ea82ebe104caa081df71caa51a716490d8a6b21"} Mar 12 13:09:48 crc kubenswrapper[4778]: I0312 13:09:48.338889 4778 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 13:09:48 crc kubenswrapper[4778]: I0312 13:09:48.338956 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:09:48 crc kubenswrapper[4778]: I0312 13:09:48.338974 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:09:48 crc kubenswrapper[4778]: I0312 13:09:48.339003 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:09:48 crc kubenswrapper[4778]: I0312 13:09:48.340565 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:09:48 crc kubenswrapper[4778]: I0312 13:09:48.340608 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:09:48 crc kubenswrapper[4778]: I0312 13:09:48.340632 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:09:48 crc kubenswrapper[4778]: I0312 13:09:48.340736 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Mar 12 13:09:48 crc kubenswrapper[4778]: I0312 13:09:48.340778 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:09:48 crc kubenswrapper[4778]: I0312 13:09:48.340778 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:09:48 crc kubenswrapper[4778]: I0312 13:09:48.340825 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:09:48 crc kubenswrapper[4778]: I0312 13:09:48.340793 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:09:48 crc kubenswrapper[4778]: I0312 13:09:48.340867 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:09:48 crc kubenswrapper[4778]: I0312 13:09:48.420413 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 13:09:48 crc kubenswrapper[4778]: I0312 13:09:48.435990 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 13:09:48 crc kubenswrapper[4778]: I0312 13:09:48.661031 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:09:48 crc kubenswrapper[4778]: I0312 13:09:48.662676 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:09:48 crc kubenswrapper[4778]: I0312 13:09:48.662730 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:09:48 crc kubenswrapper[4778]: I0312 13:09:48.662741 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:09:48 crc 
kubenswrapper[4778]: I0312 13:09:48.662771 4778 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 12 13:09:49 crc kubenswrapper[4778]: I0312 13:09:49.341744 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:09:49 crc kubenswrapper[4778]: I0312 13:09:49.341845 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:09:49 crc kubenswrapper[4778]: I0312 13:09:49.341845 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:09:49 crc kubenswrapper[4778]: I0312 13:09:49.343131 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:09:49 crc kubenswrapper[4778]: I0312 13:09:49.343263 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:09:49 crc kubenswrapper[4778]: I0312 13:09:49.343297 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:09:49 crc kubenswrapper[4778]: I0312 13:09:49.343668 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:09:49 crc kubenswrapper[4778]: I0312 13:09:49.344355 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:09:49 crc kubenswrapper[4778]: I0312 13:09:49.344388 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:09:49 crc kubenswrapper[4778]: I0312 13:09:49.344765 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:09:49 crc kubenswrapper[4778]: I0312 13:09:49.344795 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 
13:09:49 crc kubenswrapper[4778]: I0312 13:09:49.344804 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:09:50 crc kubenswrapper[4778]: I0312 13:09:50.344278 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:09:50 crc kubenswrapper[4778]: I0312 13:09:50.345070 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:09:50 crc kubenswrapper[4778]: I0312 13:09:50.345099 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:09:50 crc kubenswrapper[4778]: I0312 13:09:50.345107 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:09:50 crc kubenswrapper[4778]: I0312 13:09:50.622529 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Mar 12 13:09:50 crc kubenswrapper[4778]: I0312 13:09:50.622713 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:09:50 crc kubenswrapper[4778]: I0312 13:09:50.623866 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:09:50 crc kubenswrapper[4778]: I0312 13:09:50.623905 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:09:50 crc kubenswrapper[4778]: I0312 13:09:50.623914 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:09:52 crc kubenswrapper[4778]: I0312 13:09:52.188566 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 13:09:52 crc kubenswrapper[4778]: I0312 13:09:52.188812 4778 kubelet_node_status.go:401] "Setting 
node annotation to enable volume controller attach/detach" Mar 12 13:09:52 crc kubenswrapper[4778]: I0312 13:09:52.190642 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:09:52 crc kubenswrapper[4778]: I0312 13:09:52.190711 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:09:52 crc kubenswrapper[4778]: I0312 13:09:52.190730 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:09:52 crc kubenswrapper[4778]: E0312 13:09:52.355273 4778 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 12 13:09:55 crc kubenswrapper[4778]: I0312 13:09:55.189014 4778 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 13:09:55 crc kubenswrapper[4778]: I0312 13:09:55.189107 4778 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 13:09:55 crc kubenswrapper[4778]: W0312 13:09:55.997520 4778 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:09:55Z is after 2026-02-23T05:33:13Z 
Mar 12 13:09:55 crc kubenswrapper[4778]: E0312 13:09:55.997771 4778 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:09:55Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 12 13:09:55 crc kubenswrapper[4778]: I0312 13:09:55.998381 4778 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:09:55Z is after 2026-02-23T05:33:13Z Mar 12 13:09:55 crc kubenswrapper[4778]: E0312 13:09:55.999009 4778 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:09:55Z is after 2026-02-23T05:33:13Z" node="crc" Mar 12 13:09:56 crc kubenswrapper[4778]: W0312 13:09:56.001079 4778 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:09:56Z is after 2026-02-23T05:33:13Z Mar 12 13:09:56 crc kubenswrapper[4778]: E0312 13:09:56.001172 4778 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:09:56Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 12 13:09:56 crc kubenswrapper[4778]: W0312 13:09:56.006913 4778 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:09:56Z is after 2026-02-23T05:33:13Z Mar 12 13:09:56 crc kubenswrapper[4778]: E0312 13:09:56.007001 4778 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:09:56Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 12 13:09:56 crc kubenswrapper[4778]: W0312 13:09:56.009392 4778 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:09:56Z is after 2026-02-23T05:33:13Z Mar 12 13:09:56 crc kubenswrapper[4778]: E0312 13:09:56.009460 4778 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:09:56Z is after 2026-02-23T05:33:13Z" 
logger="UnhandledError" Mar 12 13:09:56 crc kubenswrapper[4778]: I0312 13:09:56.012496 4778 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 12 13:09:56 crc kubenswrapper[4778]: I0312 13:09:56.012659 4778 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Mar 12 13:09:56 crc kubenswrapper[4778]: E0312 13:09:56.012898 4778 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:09:56Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 12 13:09:56 crc kubenswrapper[4778]: E0312 13:09:56.013123 4778 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:09:56Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189c1a09b06f665b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:42.194251355 +0000 UTC m=+0.642946971,LastTimestamp:2026-03-12 13:09:42.194251355 +0000 UTC m=+0.642946971,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:09:56 crc kubenswrapper[4778]: E0312 13:09:56.015415 4778 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:09:56Z is after 2026-02-23T05:33:13Z" interval="6.4s" Mar 12 13:09:56 crc kubenswrapper[4778]: I0312 13:09:56.027820 4778 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 12 13:09:56 crc kubenswrapper[4778]: I0312 13:09:56.027879 4778 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Mar 12 13:09:56 crc kubenswrapper[4778]: I0312 13:09:56.161708 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Mar 12 13:09:56 crc kubenswrapper[4778]: I0312 13:09:56.161893 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:09:56 crc kubenswrapper[4778]: I0312 13:09:56.162954 4778 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:09:56 crc kubenswrapper[4778]: I0312 13:09:56.162998 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:09:56 crc kubenswrapper[4778]: I0312 13:09:56.163011 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:09:56 crc kubenswrapper[4778]: I0312 13:09:56.200159 4778 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:09:56Z is after 2026-02-23T05:33:13Z Mar 12 13:09:56 crc kubenswrapper[4778]: I0312 13:09:56.200700 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Mar 12 13:09:56 crc kubenswrapper[4778]: I0312 13:09:56.358599 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Mar 12 13:09:56 crc kubenswrapper[4778]: I0312 13:09:56.360431 4778 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="096e595c20026e7987447e330ca68a13e51edc53903377d3649a915efba351ab" exitCode=255 Mar 12 13:09:56 crc kubenswrapper[4778]: I0312 13:09:56.360543 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:09:56 crc kubenswrapper[4778]: I0312 13:09:56.360564 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"096e595c20026e7987447e330ca68a13e51edc53903377d3649a915efba351ab"} Mar 12 13:09:56 crc kubenswrapper[4778]: I0312 13:09:56.360721 4778 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:09:56 crc kubenswrapper[4778]: I0312 13:09:56.361253 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:09:56 crc kubenswrapper[4778]: I0312 13:09:56.361320 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:09:56 crc kubenswrapper[4778]: I0312 13:09:56.361344 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:09:56 crc kubenswrapper[4778]: I0312 13:09:56.361569 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:09:56 crc kubenswrapper[4778]: I0312 13:09:56.361596 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:09:56 crc kubenswrapper[4778]: I0312 13:09:56.361606 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:09:56 crc kubenswrapper[4778]: I0312 13:09:56.362158 4778 scope.go:117] "RemoveContainer" containerID="096e595c20026e7987447e330ca68a13e51edc53903377d3649a915efba351ab" Mar 12 13:09:56 crc kubenswrapper[4778]: I0312 13:09:56.374839 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Mar 12 13:09:57 crc kubenswrapper[4778]: I0312 13:09:57.199338 4778 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:09:57Z is after 2026-02-23T05:33:13Z Mar 12 13:09:57 crc kubenswrapper[4778]: I0312 13:09:57.365539 4778 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 12 13:09:57 crc kubenswrapper[4778]: I0312 13:09:57.366499 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Mar 12 13:09:57 crc kubenswrapper[4778]: I0312 13:09:57.368566 4778 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="37e5f37a4b11b4c76513fd0ebf7036a8c7d1c8547248167f22dc4b34cfae47df" exitCode=255 Mar 12 13:09:57 crc kubenswrapper[4778]: I0312 13:09:57.368607 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"37e5f37a4b11b4c76513fd0ebf7036a8c7d1c8547248167f22dc4b34cfae47df"} Mar 12 13:09:57 crc kubenswrapper[4778]: I0312 13:09:57.368824 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:09:57 crc kubenswrapper[4778]: I0312 13:09:57.368917 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:09:57 crc kubenswrapper[4778]: I0312 13:09:57.368830 4778 scope.go:117] "RemoveContainer" containerID="096e595c20026e7987447e330ca68a13e51edc53903377d3649a915efba351ab" Mar 12 13:09:57 crc kubenswrapper[4778]: I0312 13:09:57.369861 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:09:57 crc kubenswrapper[4778]: I0312 13:09:57.369895 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:09:57 crc kubenswrapper[4778]: I0312 13:09:57.369906 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:09:57 crc 
kubenswrapper[4778]: I0312 13:09:57.370584 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:09:57 crc kubenswrapper[4778]: I0312 13:09:57.370620 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:09:57 crc kubenswrapper[4778]: I0312 13:09:57.370630 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:09:57 crc kubenswrapper[4778]: I0312 13:09:57.371283 4778 scope.go:117] "RemoveContainer" containerID="37e5f37a4b11b4c76513fd0ebf7036a8c7d1c8547248167f22dc4b34cfae47df" Mar 12 13:09:57 crc kubenswrapper[4778]: E0312 13:09:57.371519 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 12 13:09:58 crc kubenswrapper[4778]: I0312 13:09:58.029503 4778 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 13:09:58 crc kubenswrapper[4778]: I0312 13:09:58.117434 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 13:09:58 crc kubenswrapper[4778]: I0312 13:09:58.237647 4778 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:09:58Z is after 2026-02-23T05:33:13Z Mar 12 13:09:58 crc kubenswrapper[4778]: I0312 13:09:58.372605 4778 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 12 13:09:58 crc kubenswrapper[4778]: I0312 13:09:58.374284 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:09:58 crc kubenswrapper[4778]: I0312 13:09:58.375043 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:09:58 crc kubenswrapper[4778]: I0312 13:09:58.375080 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:09:58 crc kubenswrapper[4778]: I0312 13:09:58.375090 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:09:58 crc kubenswrapper[4778]: I0312 13:09:58.375681 4778 scope.go:117] "RemoveContainer" containerID="37e5f37a4b11b4c76513fd0ebf7036a8c7d1c8547248167f22dc4b34cfae47df" Mar 12 13:09:58 crc kubenswrapper[4778]: E0312 13:09:58.375910 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 12 13:09:58 crc kubenswrapper[4778]: I0312 13:09:58.379058 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 13:09:58 crc kubenswrapper[4778]: I0312 13:09:58.425505 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 13:09:58 crc kubenswrapper[4778]: I0312 13:09:58.425673 4778 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Mar 12 13:09:58 crc kubenswrapper[4778]: I0312 13:09:58.427023 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:09:58 crc kubenswrapper[4778]: I0312 13:09:58.427066 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:09:58 crc kubenswrapper[4778]: I0312 13:09:58.427077 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:09:59 crc kubenswrapper[4778]: I0312 13:09:59.202706 4778 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:09:59Z is after 2026-02-23T05:33:13Z Mar 12 13:09:59 crc kubenswrapper[4778]: I0312 13:09:59.377005 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:09:59 crc kubenswrapper[4778]: I0312 13:09:59.378088 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:09:59 crc kubenswrapper[4778]: I0312 13:09:59.378156 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:09:59 crc kubenswrapper[4778]: I0312 13:09:59.378214 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:09:59 crc kubenswrapper[4778]: I0312 13:09:59.379096 4778 scope.go:117] "RemoveContainer" containerID="37e5f37a4b11b4c76513fd0ebf7036a8c7d1c8547248167f22dc4b34cfae47df" Mar 12 13:09:59 crc kubenswrapper[4778]: E0312 13:09:59.379418 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 12 13:10:00 crc kubenswrapper[4778]: I0312 13:10:00.201040 4778 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:00Z is after 2026-02-23T05:33:13Z Mar 12 13:10:00 crc kubenswrapper[4778]: I0312 13:10:00.379303 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:10:00 crc kubenswrapper[4778]: I0312 13:10:00.380516 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:10:00 crc kubenswrapper[4778]: I0312 13:10:00.380555 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:10:00 crc kubenswrapper[4778]: I0312 13:10:00.380572 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:10:00 crc kubenswrapper[4778]: I0312 13:10:00.381553 4778 scope.go:117] "RemoveContainer" containerID="37e5f37a4b11b4c76513fd0ebf7036a8c7d1c8547248167f22dc4b34cfae47df" Mar 12 13:10:00 crc kubenswrapper[4778]: E0312 13:10:00.381834 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 12 13:10:00 crc kubenswrapper[4778]: W0312 13:10:00.595558 4778 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:00Z is after 2026-02-23T05:33:13Z Mar 12 13:10:00 crc kubenswrapper[4778]: E0312 13:10:00.595704 4778 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:00Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 12 13:10:01 crc kubenswrapper[4778]: I0312 13:10:01.199968 4778 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:01Z is after 2026-02-23T05:33:13Z Mar 12 13:10:02 crc kubenswrapper[4778]: I0312 13:10:02.200257 4778 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:02Z is after 2026-02-23T05:33:13Z Mar 12 13:10:02 crc kubenswrapper[4778]: W0312 13:10:02.282637 4778 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:02Z is after 2026-02-23T05:33:13Z Mar 12 13:10:02 crc kubenswrapper[4778]: E0312 13:10:02.282741 4778 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:02Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 12 13:10:02 crc kubenswrapper[4778]: E0312 13:10:02.355944 4778 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 12 13:10:02 crc kubenswrapper[4778]: I0312 13:10:02.399763 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:10:02 crc kubenswrapper[4778]: I0312 13:10:02.401133 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:10:02 crc kubenswrapper[4778]: I0312 13:10:02.401208 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:10:02 crc kubenswrapper[4778]: I0312 13:10:02.401223 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:10:02 crc kubenswrapper[4778]: I0312 13:10:02.401253 4778 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 12 13:10:02 crc kubenswrapper[4778]: E0312 13:10:02.405600 4778 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:02Z is after 2026-02-23T05:33:13Z" node="crc" Mar 12 13:10:02 crc kubenswrapper[4778]: E0312 13:10:02.420323 4778 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:02Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 12 13:10:03 crc kubenswrapper[4778]: I0312 13:10:03.202399 4778 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:03Z is after 2026-02-23T05:33:13Z Mar 12 13:10:03 crc kubenswrapper[4778]: I0312 13:10:03.534518 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 13:10:03 crc kubenswrapper[4778]: I0312 13:10:03.534753 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:10:03 crc kubenswrapper[4778]: I0312 13:10:03.536481 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:10:03 crc kubenswrapper[4778]: I0312 13:10:03.536557 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:10:03 crc kubenswrapper[4778]: I0312 13:10:03.536582 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:10:03 crc kubenswrapper[4778]: I0312 13:10:03.537904 4778 scope.go:117] "RemoveContainer" containerID="37e5f37a4b11b4c76513fd0ebf7036a8c7d1c8547248167f22dc4b34cfae47df" 
Mar 12 13:10:03 crc kubenswrapper[4778]: E0312 13:10:03.538262 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 12 13:10:04 crc kubenswrapper[4778]: I0312 13:10:04.133958 4778 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 12 13:10:04 crc kubenswrapper[4778]: E0312 13:10:04.137650 4778 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:04Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 12 13:10:04 crc kubenswrapper[4778]: I0312 13:10:04.200901 4778 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:04Z is after 2026-02-23T05:33:13Z Mar 12 13:10:05 crc kubenswrapper[4778]: W0312 13:10:05.160343 4778 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:05Z is after 
2026-02-23T05:33:13Z Mar 12 13:10:05 crc kubenswrapper[4778]: E0312 13:10:05.160453 4778 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:05Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 12 13:10:05 crc kubenswrapper[4778]: I0312 13:10:05.189527 4778 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 13:10:05 crc kubenswrapper[4778]: I0312 13:10:05.189643 4778 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 13:10:05 crc kubenswrapper[4778]: I0312 13:10:05.199964 4778 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:05Z is after 2026-02-23T05:33:13Z Mar 12 13:10:06 crc kubenswrapper[4778]: E0312 13:10:06.019937 4778 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:06Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189c1a09b06f665b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:42.194251355 +0000 UTC m=+0.642946971,LastTimestamp:2026-03-12 13:09:42.194251355 +0000 UTC m=+0.642946971,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:06 crc kubenswrapper[4778]: W0312 13:10:06.177960 4778 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:06Z is after 2026-02-23T05:33:13Z Mar 12 13:10:06 crc kubenswrapper[4778]: E0312 13:10:06.178057 4778 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:06Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 12 13:10:06 crc kubenswrapper[4778]: I0312 13:10:06.202297 4778 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:06Z is after 2026-02-23T05:33:13Z Mar 12 13:10:07 crc kubenswrapper[4778]: I0312 13:10:07.202797 4778 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:07Z is after 2026-02-23T05:33:13Z Mar 12 13:10:08 crc kubenswrapper[4778]: I0312 13:10:08.202324 4778 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:08Z is after 2026-02-23T05:33:13Z Mar 12 13:10:09 crc kubenswrapper[4778]: I0312 13:10:09.200535 4778 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:09Z is after 2026-02-23T05:33:13Z Mar 12 13:10:09 crc kubenswrapper[4778]: I0312 13:10:09.406348 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:10:09 crc kubenswrapper[4778]: I0312 13:10:09.408171 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:10:09 crc kubenswrapper[4778]: I0312 13:10:09.408244 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:10:09 crc kubenswrapper[4778]: I0312 13:10:09.408257 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 
13:10:09 crc kubenswrapper[4778]: I0312 13:10:09.408285 4778 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 12 13:10:09 crc kubenswrapper[4778]: E0312 13:10:09.413246 4778 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:09Z is after 2026-02-23T05:33:13Z" node="crc" Mar 12 13:10:09 crc kubenswrapper[4778]: E0312 13:10:09.426862 4778 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:09Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 12 13:10:10 crc kubenswrapper[4778]: I0312 13:10:10.200955 4778 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:10Z is after 2026-02-23T05:33:13Z Mar 12 13:10:11 crc kubenswrapper[4778]: I0312 13:10:11.202466 4778 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:11Z is after 2026-02-23T05:33:13Z Mar 12 13:10:12 crc kubenswrapper[4778]: I0312 13:10:12.200131 4778 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-03-12T13:10:12Z is after 2026-02-23T05:33:13Z Mar 12 13:10:12 crc kubenswrapper[4778]: E0312 13:10:12.356683 4778 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 12 13:10:12 crc kubenswrapper[4778]: W0312 13:10:12.981516 4778 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:12Z is after 2026-02-23T05:33:13Z Mar 12 13:10:12 crc kubenswrapper[4778]: E0312 13:10:12.981602 4778 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:12Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 12 13:10:13 crc kubenswrapper[4778]: I0312 13:10:13.203073 4778 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:13Z is after 2026-02-23T05:33:13Z Mar 12 13:10:13 crc kubenswrapper[4778]: W0312 13:10:13.675557 4778 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-03-12T13:10:13Z is after 2026-02-23T05:33:13Z
Mar 12 13:10:13 crc kubenswrapper[4778]: E0312 13:10:13.676364 4778 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:13Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Mar 12 13:10:14 crc kubenswrapper[4778]: I0312 13:10:14.199940 4778 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:14Z is after 2026-02-23T05:33:13Z
Mar 12 13:10:15 crc kubenswrapper[4778]: I0312 13:10:15.131123 4778 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:47382->192.168.126.11:10357: read: connection reset by peer" start-of-body=
Mar 12 13:10:15 crc kubenswrapper[4778]: I0312 13:10:15.131227 4778 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:47382->192.168.126.11:10357: read: connection reset by peer"
Mar 12 13:10:15 crc kubenswrapper[4778]: I0312 13:10:15.131307 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 12 13:10:15 crc kubenswrapper[4778]: I0312 13:10:15.131499 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 13:10:15 crc kubenswrapper[4778]: I0312 13:10:15.132955 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 13:10:15 crc kubenswrapper[4778]: I0312 13:10:15.132987 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 13:10:15 crc kubenswrapper[4778]: I0312 13:10:15.133004 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 13:10:15 crc kubenswrapper[4778]: I0312 13:10:15.133690 4778 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"2cf827947c686099ca3c6afad51d866f4ee1d557bc64cc1c70f6213fd4198df2"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted"
Mar 12 13:10:15 crc kubenswrapper[4778]: I0312 13:10:15.133908 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://2cf827947c686099ca3c6afad51d866f4ee1d557bc64cc1c70f6213fd4198df2" gracePeriod=30
Mar 12 13:10:15 crc kubenswrapper[4778]: I0312 13:10:15.200566 4778 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:15Z is after 2026-02-23T05:33:13Z
Mar 12 13:10:15 crc kubenswrapper[4778]: I0312 13:10:15.423786 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log"
Mar 12 13:10:15 crc kubenswrapper[4778]: I0312 13:10:15.424354 4778 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="2cf827947c686099ca3c6afad51d866f4ee1d557bc64cc1c70f6213fd4198df2" exitCode=255
Mar 12 13:10:15 crc kubenswrapper[4778]: I0312 13:10:15.424422 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"2cf827947c686099ca3c6afad51d866f4ee1d557bc64cc1c70f6213fd4198df2"}
Mar 12 13:10:15 crc kubenswrapper[4778]: I0312 13:10:15.424463 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"cc7b2a12646c299c75286fc95cf2a8fa35fd83031ce3daebec42030d966274ae"}
Mar 12 13:10:15 crc kubenswrapper[4778]: I0312 13:10:15.424551 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 13:10:15 crc kubenswrapper[4778]: I0312 13:10:15.425253 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 13:10:15 crc kubenswrapper[4778]: I0312 13:10:15.425278 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 13:10:15 crc kubenswrapper[4778]: I0312 13:10:15.425288 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 13:10:16 crc kubenswrapper[4778]: E0312 13:10:16.024601 4778 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:16Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189c1a09b06f665b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:42.194251355 +0000 UTC m=+0.642946971,LastTimestamp:2026-03-12 13:09:42.194251355 +0000 UTC m=+0.642946971,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 12 13:10:16 crc kubenswrapper[4778]: I0312 13:10:16.201021 4778 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:16Z is after 2026-02-23T05:33:13Z
Mar 12 13:10:16 crc kubenswrapper[4778]: I0312 13:10:16.413672 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 13:10:16 crc kubenswrapper[4778]: I0312 13:10:16.414799 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 13:10:16 crc kubenswrapper[4778]: I0312 13:10:16.414863 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 13:10:16 crc kubenswrapper[4778]: I0312 13:10:16.414875 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 13:10:16 crc kubenswrapper[4778]: I0312 13:10:16.414900 4778 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Mar 12 13:10:16 crc kubenswrapper[4778]: E0312 13:10:16.418296 4778 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:16Z is after 2026-02-23T05:33:13Z" node="crc"
Mar 12 13:10:16 crc kubenswrapper[4778]: E0312 13:10:16.430298 4778 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:16Z is after 2026-02-23T05:33:13Z" interval="7s"
Mar 12 13:10:17 crc kubenswrapper[4778]: I0312 13:10:17.202311 4778 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 12 13:10:17 crc kubenswrapper[4778]: I0312 13:10:17.253304 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 13:10:17 crc kubenswrapper[4778]: I0312 13:10:17.254907 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 13:10:17 crc kubenswrapper[4778]: I0312 13:10:17.254969 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 13:10:17 crc kubenswrapper[4778]: I0312 13:10:17.254989 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 13:10:17 crc kubenswrapper[4778]: I0312 13:10:17.255864 4778 scope.go:117] "RemoveContainer" containerID="37e5f37a4b11b4c76513fd0ebf7036a8c7d1c8547248167f22dc4b34cfae47df"
Mar 12 13:10:18 crc kubenswrapper[4778]: I0312 13:10:18.201645 4778 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 12 13:10:18 crc kubenswrapper[4778]: I0312 13:10:18.433459 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Mar 12 13:10:18 crc kubenswrapper[4778]: I0312 13:10:18.435146 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d1bfe956cf856eb5d8ec2a24a00da9f1dfcad215016a56650f0b3dec0dffaa4d"}
Mar 12 13:10:18 crc kubenswrapper[4778]: I0312 13:10:18.435434 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 13:10:18 crc kubenswrapper[4778]: I0312 13:10:18.436132 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 12 13:10:18 crc kubenswrapper[4778]: I0312 13:10:18.436284 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 13:10:18 crc kubenswrapper[4778]: I0312 13:10:18.436722 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 13:10:18 crc kubenswrapper[4778]: I0312 13:10:18.436769 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 13:10:18 crc kubenswrapper[4778]: I0312 13:10:18.436786 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 13:10:18 crc kubenswrapper[4778]: I0312 13:10:18.437116 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 13:10:18 crc kubenswrapper[4778]: I0312 13:10:18.437143 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 13:10:18 crc kubenswrapper[4778]: I0312 13:10:18.437153 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 13:10:19 crc kubenswrapper[4778]: I0312 13:10:19.202161 4778 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 12 13:10:19 crc kubenswrapper[4778]: W0312 13:10:19.432767 4778 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Mar 12 13:10:19 crc kubenswrapper[4778]: E0312 13:10:19.432845 4778 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
Mar 12 13:10:19 crc kubenswrapper[4778]: I0312 13:10:19.440460 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log"
Mar 12 13:10:19 crc kubenswrapper[4778]: I0312 13:10:19.441426 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Mar 12 13:10:19 crc kubenswrapper[4778]: I0312 13:10:19.444914 4778 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d1bfe956cf856eb5d8ec2a24a00da9f1dfcad215016a56650f0b3dec0dffaa4d" exitCode=255
Mar 12 13:10:19 crc kubenswrapper[4778]: I0312 13:10:19.444968 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"d1bfe956cf856eb5d8ec2a24a00da9f1dfcad215016a56650f0b3dec0dffaa4d"}
Mar 12 13:10:19 crc kubenswrapper[4778]: I0312 13:10:19.445053 4778 scope.go:117] "RemoveContainer" containerID="37e5f37a4b11b4c76513fd0ebf7036a8c7d1c8547248167f22dc4b34cfae47df"
Mar 12 13:10:19 crc kubenswrapper[4778]: I0312 13:10:19.445296 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 13:10:19 crc kubenswrapper[4778]: I0312 13:10:19.446711 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 13:10:19 crc kubenswrapper[4778]: I0312 13:10:19.446780 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 13:10:19 crc kubenswrapper[4778]: I0312 13:10:19.446809 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 13:10:19 crc kubenswrapper[4778]: I0312 13:10:19.447736 4778 scope.go:117] "RemoveContainer" containerID="d1bfe956cf856eb5d8ec2a24a00da9f1dfcad215016a56650f0b3dec0dffaa4d"
Mar 12 13:10:19 crc kubenswrapper[4778]: E0312 13:10:19.448101 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Mar 12 13:10:20 crc kubenswrapper[4778]: I0312 13:10:20.204830 4778 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 12 13:10:20 crc kubenswrapper[4778]: I0312 13:10:20.450575 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log"
Mar 12 13:10:21 crc kubenswrapper[4778]: I0312 13:10:21.128136 4778 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Mar 12 13:10:21 crc kubenswrapper[4778]: I0312 13:10:21.148633 4778 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Mar 12 13:10:21 crc kubenswrapper[4778]: I0312 13:10:21.205529 4778 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 12 13:10:22 crc kubenswrapper[4778]: I0312 13:10:22.188904 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 12 13:10:22 crc kubenswrapper[4778]: I0312 13:10:22.189262 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 13:10:22 crc kubenswrapper[4778]: I0312 13:10:22.191096 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 13:10:22 crc kubenswrapper[4778]: I0312 13:10:22.191159 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 13:10:22 crc kubenswrapper[4778]: I0312 13:10:22.191197 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 13:10:22 crc kubenswrapper[4778]: I0312 13:10:22.204937 4778 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 12 13:10:22 crc kubenswrapper[4778]: E0312 13:10:22.356833 4778 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Mar 12 13:10:23 crc kubenswrapper[4778]: I0312 13:10:23.203072 4778 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 12 13:10:23 crc kubenswrapper[4778]: I0312 13:10:23.418745 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 13:10:23 crc kubenswrapper[4778]: I0312 13:10:23.420479 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 13:10:23 crc kubenswrapper[4778]: I0312 13:10:23.420553 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 13:10:23 crc kubenswrapper[4778]: I0312 13:10:23.420576 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 13:10:23 crc kubenswrapper[4778]: I0312 13:10:23.420621 4778 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Mar 12 13:10:23 crc kubenswrapper[4778]: E0312 13:10:23.425896 4778 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Mar 12 13:10:23 crc kubenswrapper[4778]: E0312 13:10:23.436900 4778 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Mar 12 13:10:23 crc kubenswrapper[4778]: I0312 13:10:23.534992 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 12 13:10:23 crc kubenswrapper[4778]: I0312 13:10:23.535331 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 12 13:10:23 crc kubenswrapper[4778]: I0312 13:10:23.536912 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 13:10:23 crc kubenswrapper[4778]: I0312 13:10:23.536980 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 13:10:23 crc kubenswrapper[4778]: I0312 13:10:23.536996 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 13:10:23 crc kubenswrapper[4778]: I0312 13:10:23.537726 4778 scope.go:117] "RemoveContainer" containerID="d1bfe956cf856eb5d8ec2a24a00da9f1dfcad215016a56650f0b3dec0dffaa4d"
Mar 12 13:10:23 crc kubenswrapper[4778]: E0312 13:10:23.537942 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Mar 12 13:10:24 crc kubenswrapper[4778]: I0312 13:10:24.206444 4778 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 12 13:10:25 crc kubenswrapper[4778]: I0312 13:10:25.189907 4778 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 12 13:10:25 crc kubenswrapper[4778]: I0312 13:10:25.189995 4778 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 12 13:10:25 crc kubenswrapper[4778]: I0312 13:10:25.203652 4778 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.030717 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c1a09b06f665b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:42.194251355 +0000 UTC m=+0.642946971,LastTimestamp:2026-03-12 13:09:42.194251355 +0000 UTC m=+0.642946971,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.035255 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c1a09b3c5643c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:42.250218556 +0000 UTC m=+0.698913972,LastTimestamp:2026-03-12 13:09:42.250218556 +0000 UTC m=+0.698913972,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.040096 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c1a09b3c6c56f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:42.250308975 +0000 UTC m=+0.699004391,LastTimestamp:2026-03-12 13:09:42.250308975 +0000 UTC m=+0.699004391,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.045016 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c1a09b3c7f7c1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:42.250387393 +0000 UTC m=+0.699082799,LastTimestamp:2026-03-12 13:09:42.250387393 +0000 UTC m=+0.699082799,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.050174 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c1a09b9f3e256 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:42.35392879 +0000 UTC m=+0.802624186,LastTimestamp:2026-03-12 13:09:42.35392879 +0000 UTC m=+0.802624186,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.057669 4778 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c1a09b3c5643c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c1a09b3c5643c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:42.250218556 +0000 UTC m=+0.698913972,LastTimestamp:2026-03-12 13:09:42.354787306 +0000 UTC m=+0.803482702,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.064863 4778 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c1a09b3c6c56f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c1a09b3c6c56f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:42.250308975 +0000 UTC m=+0.699004391,LastTimestamp:2026-03-12 13:09:42.354803178 +0000 UTC m=+0.803498574,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.071772 4778 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c1a09b3c7f7c1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c1a09b3c7f7c1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:42.250387393 +0000 UTC m=+0.699082799,LastTimestamp:2026-03-12 13:09:42.354811408 +0000 UTC m=+0.803506804,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.075729 4778 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c1a09b3c5643c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c1a09b3c5643c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:42.250218556 +0000 UTC m=+0.698913972,LastTimestamp:2026-03-12 13:09:42.355802648 +0000 UTC m=+0.804498044,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.079909 4778 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c1a09b3c6c56f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c1a09b3c6c56f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:42.250308975 +0000 UTC m=+0.699004391,LastTimestamp:2026-03-12 13:09:42.35581722 +0000 UTC m=+0.804512616,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.082948 4778 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c1a09b3c7f7c1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c1a09b3c7f7c1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:42.250387393 +0000 UTC m=+0.699082799,LastTimestamp:2026-03-12 13:09:42.355827061 +0000 UTC m=+0.804522457,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.086985 4778 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c1a09b3c5643c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c1a09b3c5643c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:42.250218556 +0000 UTC m=+0.698913972,LastTimestamp:2026-03-12 13:09:42.356463415 +0000 UTC m=+0.805158811,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.091357 4778 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c1a09b3c6c56f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c1a09b3c6c56f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:42.250308975 +0000 UTC m=+0.699004391,LastTimestamp:2026-03-12 13:09:42.356477246 +0000 UTC m=+0.805172642,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.096412 4778 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c1a09b3c7f7c1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c1a09b3c7f7c1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:42.250387393 +0000 UTC m=+0.699082799,LastTimestamp:2026-03-12 13:09:42.356484557 +0000 UTC m=+0.805179953,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.101335 4778 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c1a09b3c5643c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c1a09b3c5643c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:42.250218556 +0000 UTC m=+0.698913972,LastTimestamp:2026-03-12 13:09:42.356803539 +0000 UTC m=+0.805498975,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.106889 4778 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c1a09b3c6c56f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c1a09b3c6c56f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:42.250308975 +0000 UTC m=+0.699004391,LastTimestamp:2026-03-12 13:09:42.356832252 +0000 UTC m=+0.805527678,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.112650 4778 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c1a09b3c7f7c1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c1a09b3c7f7c1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:42.250387393 +0000 UTC m=+0.699082799,LastTimestamp:2026-03-12 13:09:42.356847553 +0000 UTC m=+0.805542979,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.117271 4778 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c1a09b3c5643c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c1a09b3c5643c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:42.250218556 +0000 UTC m=+0.698913972,LastTimestamp:2026-03-12 13:09:42.356957044 +0000 UTC m=+0.805652450,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.122045 4778 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c1a09b3c6c56f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c1a09b3c6c56f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:42.250308975 +0000 UTC m=+0.699004391,LastTimestamp:2026-03-12 13:09:42.35701414 +0000 UTC m=+0.805709546,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.126846 4778 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c1a09b3c7f7c1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c1a09b3c7f7c1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:42.250387393 +0000 UTC m=+0.699082799,LastTimestamp:2026-03-12 13:09:42.357024771 +0000 UTC m=+0.805720177,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.131160 4778 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c1a09b3c5643c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c1a09b3c5643c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status
is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:42.250218556 +0000 UTC m=+0.698913972,LastTimestamp:2026-03-12 13:09:42.35850399 +0000 UTC m=+0.807199396,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.135870 4778 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c1a09b3c6c56f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c1a09b3c6c56f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:42.250308975 +0000 UTC m=+0.699004391,LastTimestamp:2026-03-12 13:09:42.358516441 +0000 UTC m=+0.807211837,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.139985 4778 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c1a09b3c7f7c1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c1a09b3c7f7c1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:42.250387393 +0000 UTC 
m=+0.699082799,LastTimestamp:2026-03-12 13:09:42.358525592 +0000 UTC m=+0.807220988,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.144333 4778 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c1a09b3c5643c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c1a09b3c5643c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:42.250218556 +0000 UTC m=+0.698913972,LastTimestamp:2026-03-12 13:09:42.359345795 +0000 UTC m=+0.808041191,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.149241 4778 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189c1a09b3c6c56f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189c1a09b3c6c56f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:42.250308975 +0000 UTC m=+0.699004391,LastTimestamp:2026-03-12 13:09:42.359356106 +0000 UTC m=+0.808051502,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.155177 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c1a09d2ec8392 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:42.772876178 +0000 UTC m=+1.221571574,LastTimestamp:2026-03-12 13:09:42.772876178 +0000 UTC m=+1.221571574,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.162222 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c1a09d2f1f29c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:42.773232284 +0000 UTC m=+1.221927720,LastTimestamp:2026-03-12 13:09:42.773232284 +0000 UTC m=+1.221927720,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.169711 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c1a09d342598a openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:42.778501514 +0000 UTC m=+1.227196930,LastTimestamp:2026-03-12 13:09:42.778501514 +0000 UTC m=+1.227196930,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.174588 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" 
event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189c1a09d3b75462 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:42.786167906 +0000 UTC m=+1.234863302,LastTimestamp:2026-03-12 13:09:42.786167906 +0000 UTC m=+1.234863302,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.179117 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c1a09d4bf7a2f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:42.803479087 +0000 UTC m=+1.252174493,LastTimestamp:2026-03-12 13:09:42.803479087 +0000 UTC m=+1.252174493,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 
13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.187138 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c1a0a057ac0f1 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:43.621058801 +0000 UTC m=+2.069754187,LastTimestamp:2026-03-12 13:09:43.621058801 +0000 UTC m=+2.069754187,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.193437 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c1a0a05913be5 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:43.622532069 +0000 UTC m=+2.071227475,LastTimestamp:2026-03-12 13:09:43.622532069 +0000 UTC 
m=+2.071227475,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: I0312 13:10:26.198255 4778 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.198336 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189c1a0a05c87177 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:43.626150263 +0000 UTC m=+2.074845669,LastTimestamp:2026-03-12 13:09:43.626150263 +0000 UTC m=+2.074845669,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.201529 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c1a0a05dcd64e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:43.627486798 +0000 UTC m=+2.076182204,LastTimestamp:2026-03-12 13:09:43.627486798 +0000 UTC m=+2.076182204,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.205304 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c1a0a0612d4ca openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:43.631025354 +0000 UTC m=+2.079720760,LastTimestamp:2026-03-12 13:09:43.631025354 +0000 UTC m=+2.079720760,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.209792 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c1a0a0647c085 
openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:43.634493573 +0000 UTC m=+2.083188979,LastTimestamp:2026-03-12 13:09:43.634493573 +0000 UTC m=+2.083188979,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.213856 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c1a0a06949447 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:43.639528519 +0000 UTC m=+2.088223915,LastTimestamp:2026-03-12 13:09:43.639528519 +0000 UTC m=+2.088223915,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.218419 4778 event.go:359] "Server rejected event 
(will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189c1a0a06a91d05 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:43.640874245 +0000 UTC m=+2.089569651,LastTimestamp:2026-03-12 13:09:43.640874245 +0000 UTC m=+2.089569651,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.223089 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c1a0a06ae8b5a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:43.64123017 +0000 UTC m=+2.089925586,LastTimestamp:2026-03-12 13:09:43.64123017 +0000 UTC m=+2.089925586,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.228388 4778 event.go:359] "Server rejected event (will not 
retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c1a0a06b97ba3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:43.641947043 +0000 UTC m=+2.090642449,LastTimestamp:2026-03-12 13:09:43.641947043 +0000 UTC m=+2.090642449,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.233324 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c1a0a07f1b8e9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:43.662409961 +0000 UTC m=+2.111105377,LastTimestamp:2026-03-12 13:09:43.662409961 +0000 UTC m=+2.111105377,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.238380 4778 event.go:359] "Server rejected event (will not 
retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c1a0a196f74ff openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:43.955862783 +0000 UTC m=+2.404558219,LastTimestamp:2026-03-12 13:09:43.955862783 +0000 UTC m=+2.404558219,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.242057 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c1a0a1af68cfa openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:43.981493498 +0000 UTC m=+2.430188924,LastTimestamp:2026-03-12 13:09:43.981493498 +0000 UTC m=+2.430188924,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.248936 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c1a0a1b100281 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:43.983161985 +0000 UTC m=+2.431857381,LastTimestamp:2026-03-12 13:09:43.983161985 +0000 UTC m=+2.431857381,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.256136 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c1a0a2cb2a380 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:44.279032704 +0000 UTC m=+2.727728140,LastTimestamp:2026-03-12 13:09:44.279032704 +0000 UTC m=+2.727728140,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.260565 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c1a0a2cc23d39 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:44.280055097 +0000 UTC m=+2.728750533,LastTimestamp:2026-03-12 13:09:44.280055097 +0000 UTC m=+2.728750533,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.266240 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c1a0a2cdef7ab openshift-kube-controller-manager 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:44.281937835 +0000 UTC m=+2.730633261,LastTimestamp:2026-03-12 13:09:44.281937835 +0000 UTC m=+2.730633261,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.270664 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189c1a0a2cf61c46 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:44.283454534 +0000 UTC m=+2.732149950,LastTimestamp:2026-03-12 13:09:44.283454534 +0000 UTC m=+2.732149950,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.275705 4778 event.go:359] "Server rejected event (will not 
retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c1a0a2d4e1965 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:44.289220965 +0000 UTC m=+2.737916371,LastTimestamp:2026-03-12 13:09:44.289220965 +0000 UTC m=+2.737916371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.281785 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c1a0a2e8c3641 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:44.310068801 +0000 UTC m=+2.758764207,LastTimestamp:2026-03-12 13:09:44.310068801 +0000 UTC 
m=+2.758764207,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.283479 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c1a0a2ea66ed5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:44.311787221 +0000 UTC m=+2.760482627,LastTimestamp:2026-03-12 13:09:44.311787221 +0000 UTC m=+2.760482627,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.288258 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c1a0a3a407626 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:44.506431014 +0000 UTC m=+2.955126410,LastTimestamp:2026-03-12 13:09:44.506431014 +0000 UTC m=+2.955126410,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.292910 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c1a0a3a495e9e openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:44.507014814 +0000 UTC m=+2.955710210,LastTimestamp:2026-03-12 13:09:44.507014814 +0000 UTC m=+2.955710210,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.297589 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189c1a0a3a4b53bd 
openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:44.507143101 +0000 UTC m=+2.955838497,LastTimestamp:2026-03-12 13:09:44.507143101 +0000 UTC m=+2.955838497,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.302343 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c1a0a3a506350 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:44.507474768 +0000 UTC m=+2.956170164,LastTimestamp:2026-03-12 13:09:44.507474768 +0000 UTC m=+2.956170164,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.306942 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c1a0a3a53868f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:44.507680399 +0000 UTC m=+2.956375795,LastTimestamp:2026-03-12 13:09:44.507680399 +0000 UTC m=+2.956375795,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.312042 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c1a0a3b88d1af openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:44.527950255 +0000 UTC m=+2.976645651,LastTimestamp:2026-03-12 13:09:44.527950255 +0000 UTC m=+2.976645651,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.315664 4778 event.go:359] "Server 
rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c1a0a3b99a073 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:44.529051763 +0000 UTC m=+2.977747159,LastTimestamp:2026-03-12 13:09:44.529051763 +0000 UTC m=+2.977747159,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.321900 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189c1a0a3c134a22 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:44.537025058 +0000 UTC m=+2.985720454,LastTimestamp:2026-03-12 13:09:44.537025058 +0000 UTC 
m=+2.985720454,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.334978 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c1a0a3c145eca openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:44.537095882 +0000 UTC m=+2.985791278,LastTimestamp:2026-03-12 13:09:44.537095882 +0000 UTC m=+2.985791278,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.343407 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c1a0a3c160604 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 
13:09:44.537204228 +0000 UTC m=+2.985899624,LastTimestamp:2026-03-12 13:09:44.537204228 +0000 UTC m=+2.985899624,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.351665 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c1a0a3c160f1e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:44.537206558 +0000 UTC m=+2.985901954,LastTimestamp:2026-03-12 13:09:44.537206558 +0000 UTC m=+2.985901954,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.357473 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c1a0a3c2d3201 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:44.538722817 +0000 UTC m=+2.987418233,LastTimestamp:2026-03-12 13:09:44.538722817 +0000 UTC m=+2.987418233,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.364314 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c1a0a4592aaf6 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:44.696367862 +0000 UTC m=+3.145063258,LastTimestamp:2026-03-12 13:09:44.696367862 +0000 UTC m=+3.145063258,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.373092 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c1a0a45c620c6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:44.699740358 +0000 UTC m=+3.148435754,LastTimestamp:2026-03-12 13:09:44.699740358 +0000 UTC m=+3.148435754,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.378642 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c1a0a467e4df3 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:44.711810547 +0000 UTC m=+3.160505953,LastTimestamp:2026-03-12 13:09:44.711810547 +0000 UTC m=+3.160505953,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.384134 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" 
event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c1a0a468c0832 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:44.712710194 +0000 UTC m=+3.161405590,LastTimestamp:2026-03-12 13:09:44.712710194 +0000 UTC m=+3.161405590,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.389966 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c1a0a4709016c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:44.72090046 +0000 UTC m=+3.169595856,LastTimestamp:2026-03-12 13:09:44.72090046 +0000 UTC m=+3.169595856,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 
13:10:26.395767 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c1a0a471795d8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:44.72185596 +0000 UTC m=+3.170551356,LastTimestamp:2026-03-12 13:09:44.72185596 +0000 UTC m=+3.170551356,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.401634 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c1a0a51622f10 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:44.894517008 +0000 UTC 
m=+3.343212404,LastTimestamp:2026-03-12 13:09:44.894517008 +0000 UTC m=+3.343212404,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.406576 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c1a0a516601a1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:44.894767521 +0000 UTC m=+3.343462917,LastTimestamp:2026-03-12 13:09:44.894767521 +0000 UTC m=+3.343462917,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.411292 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189c1a0a5256eec0 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started 
container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:44.910556864 +0000 UTC m=+3.359252250,LastTimestamp:2026-03-12 13:09:44.910556864 +0000 UTC m=+3.359252250,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.415658 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c1a0a526bc931 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:44.911923505 +0000 UTC m=+3.360618891,LastTimestamp:2026-03-12 13:09:44.911923505 +0000 UTC m=+3.360618891,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.420171 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c1a0a527b131b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:44.912925467 +0000 UTC m=+3.361620863,LastTimestamp:2026-03-12 13:09:44.912925467 +0000 UTC m=+3.361620863,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.426989 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c1a0a5fc2b160 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:45.135722848 +0000 UTC m=+3.584418244,LastTimestamp:2026-03-12 13:09:45.135722848 +0000 UTC m=+3.584418244,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.432716 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create 
resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c1a0a608f1168 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:45.149116776 +0000 UTC m=+3.597812172,LastTimestamp:2026-03-12 13:09:45.149116776 +0000 UTC m=+3.597812172,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.439609 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c1a0a60a53c9f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:45.150569631 +0000 UTC m=+3.599265027,LastTimestamp:2026-03-12 13:09:45.150569631 +0000 UTC m=+3.599265027,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.444812 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c1a0a6a31e74e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:45.31078331 +0000 UTC m=+3.759478706,LastTimestamp:2026-03-12 13:09:45.31078331 +0000 UTC m=+3.759478706,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.451525 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c1a0a6beee97e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:45.33994739 
+0000 UTC m=+3.788642786,LastTimestamp:2026-03-12 13:09:45.33994739 +0000 UTC m=+3.788642786,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.458944 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c1a0a6cd6e5d1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:45.355150801 +0000 UTC m=+3.803846197,LastTimestamp:2026-03-12 13:09:45.355150801 +0000 UTC m=+3.803846197,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.465577 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c1a0a7815cad1 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container 
etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:45.543822033 +0000 UTC m=+3.992517429,LastTimestamp:2026-03-12 13:09:45.543822033 +0000 UTC m=+3.992517429,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.473482 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c1a0a79948cc9 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:45.568906441 +0000 UTC m=+4.017601837,LastTimestamp:2026-03-12 13:09:45.568906441 +0000 UTC m=+4.017601837,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.482447 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c1a0aa714b647 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:46.332280391 +0000 UTC m=+4.780975827,LastTimestamp:2026-03-12 13:09:46.332280391 +0000 UTC m=+4.780975827,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.489040 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c1a0ab151033a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:46.50400441 +0000 UTC m=+4.952699806,LastTimestamp:2026-03-12 13:09:46.50400441 +0000 UTC m=+4.952699806,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.494682 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c1a0ab21c20fa openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:46.517315834 +0000 UTC m=+4.966011230,LastTimestamp:2026-03-12 13:09:46.517315834 +0000 UTC m=+4.966011230,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.501197 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c1a0ab22eeac2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:46.518547138 +0000 UTC m=+4.967242534,LastTimestamp:2026-03-12 13:09:46.518547138 +0000 UTC m=+4.967242534,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.507315 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c1a0abceacfcb openshift-etcd 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:46.698633163 +0000 UTC m=+5.147328559,LastTimestamp:2026-03-12 13:09:46.698633163 +0000 UTC m=+5.147328559,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.512382 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c1a0abe0ad0fc openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:46.717507836 +0000 UTC m=+5.166203222,LastTimestamp:2026-03-12 13:09:46.717507836 +0000 UTC m=+5.166203222,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.518830 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c1a0abe1fdad0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:46.718886608 +0000 UTC m=+5.167582004,LastTimestamp:2026-03-12 13:09:46.718886608 +0000 UTC m=+5.167582004,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.523030 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c1a0acbaf2c42 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:46.946382914 +0000 UTC m=+5.395078350,LastTimestamp:2026-03-12 13:09:46.946382914 +0000 UTC m=+5.395078350,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.526943 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c1a0acc9757db 
openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:46.961598427 +0000 UTC m=+5.410293843,LastTimestamp:2026-03-12 13:09:46.961598427 +0000 UTC m=+5.410293843,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.531214 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c1a0accb38f8d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:46.963447693 +0000 UTC m=+5.412143129,LastTimestamp:2026-03-12 13:09:46.963447693 +0000 UTC m=+5.412143129,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.535233 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c1a0ada258c5c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:47.189021788 +0000 UTC m=+5.637717194,LastTimestamp:2026-03-12 13:09:47.189021788 +0000 UTC m=+5.637717194,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.541091 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c1a0adb37edfd openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:47.207003645 +0000 UTC m=+5.655699081,LastTimestamp:2026-03-12 13:09:47.207003645 +0000 UTC m=+5.655699081,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.545536 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c1a0adb5277c8 
openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:47.208742856 +0000 UTC m=+5.657438302,LastTimestamp:2026-03-12 13:09:47.208742856 +0000 UTC m=+5.657438302,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.549356 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189c1a0ae761d5da openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:47.41107657 +0000 UTC m=+5.859771966,LastTimestamp:2026-03-12 13:09:47.41107657 +0000 UTC m=+5.859771966,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.552470 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.189c1a0ae80be07a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:47.42222041 +0000 UTC m=+5.870915806,LastTimestamp:2026-03-12 13:09:47.42222041 +0000 UTC m=+5.870915806,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.556943 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 12 13:10:26 crc kubenswrapper[4778]: &Event{ObjectMeta:{kube-controller-manager-crc.189c1a0cb6fcc016 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Mar 12 13:10:26 crc kubenswrapper[4778]: body: Mar 12 13:10:26 crc kubenswrapper[4778]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:55.189080086 +0000 UTC m=+13.637775492,LastTimestamp:2026-03-12 13:09:55.189080086 +0000 UTC m=+13.637775492,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} 
Mar 12 13:10:26 crc kubenswrapper[4778]: > Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.561144 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c1a0cb6fdddec openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:55.18915326 +0000 UTC m=+13.637848676,LastTimestamp:2026-03-12 13:09:55.18915326 +0000 UTC m=+13.637848676,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.564329 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Mar 12 13:10:26 crc kubenswrapper[4778]: &Event{ObjectMeta:{kube-apiserver-crc.189c1a0ce8134087 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Mar 12 13:10:26 
crc kubenswrapper[4778]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 12 13:10:26 crc kubenswrapper[4778]: Mar 12 13:10:26 crc kubenswrapper[4778]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:56.012638343 +0000 UTC m=+14.461333749,LastTimestamp:2026-03-12 13:09:56.012638343 +0000 UTC m=+14.461333749,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 12 13:10:26 crc kubenswrapper[4778]: > Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.567314 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c1a0ce814c327 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:56.012737319 +0000 UTC m=+14.461432725,LastTimestamp:2026-03-12 13:09:56.012737319 +0000 UTC m=+14.461432725,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.570469 4778 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189c1a0ce8134087\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in 
the namespace \"openshift-kube-apiserver\"" event=< Mar 12 13:10:26 crc kubenswrapper[4778]: &Event{ObjectMeta:{kube-apiserver-crc.189c1a0ce8134087 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Mar 12 13:10:26 crc kubenswrapper[4778]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 12 13:10:26 crc kubenswrapper[4778]: Mar 12 13:10:26 crc kubenswrapper[4778]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:56.012638343 +0000 UTC m=+14.461333749,LastTimestamp:2026-03-12 13:09:56.027862377 +0000 UTC m=+14.476557783,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 12 13:10:26 crc kubenswrapper[4778]: > Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.573658 4778 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189c1a0ce814c327\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c1a0ce814c327 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 
403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:56.012737319 +0000 UTC m=+14.461432725,LastTimestamp:2026-03-12 13:09:56.027908029 +0000 UTC m=+14.476603435,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.577157 4778 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189c1a0a60a53c9f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c1a0a60a53c9f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:45.150569631 +0000 UTC m=+3.599265027,LastTimestamp:2026-03-12 13:09:56.362921586 +0000 UTC m=+14.811616982,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.581076 4778 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189c1a0a6beee97e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c1a0a6beee97e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] 
[] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:45.33994739 +0000 UTC m=+3.788642786,LastTimestamp:2026-03-12 13:09:56.537305644 +0000 UTC m=+14.986001040,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.584389 4778 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189c1a0a6cd6e5d1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189c1a0a6cd6e5d1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:45.355150801 +0000 UTC m=+3.803846197,LastTimestamp:2026-03-12 13:09:56.543341638 +0000 UTC m=+14.992037034,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.588661 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-controller-manager\"" event=< Mar 12 13:10:26 crc kubenswrapper[4778]: &Event{ObjectMeta:{kube-controller-manager-crc.189c1a0f0b10ac88 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 12 13:10:26 crc kubenswrapper[4778]: body: Mar 12 13:10:26 crc kubenswrapper[4778]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:10:05.189606536 +0000 UTC m=+23.638301972,LastTimestamp:2026-03-12 13:10:05.189606536 +0000 UTC m=+23.638301972,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 12 13:10:26 crc kubenswrapper[4778]: > Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.591805 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c1a0f0b12176e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded 
while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:10:05.189699438 +0000 UTC m=+23.638394874,LastTimestamp:2026-03-12 13:10:05.189699438 +0000 UTC m=+23.638394874,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.593214 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 12 13:10:26 crc kubenswrapper[4778]: &Event{ObjectMeta:{kube-controller-manager-crc.189c1a115ba16f46 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": read tcp 192.168.126.11:47382->192.168.126.11:10357: read: connection reset by peer Mar 12 13:10:26 crc kubenswrapper[4778]: body: Mar 12 13:10:26 crc kubenswrapper[4778]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:10:15.131205446 +0000 UTC m=+33.579900882,LastTimestamp:2026-03-12 13:10:15.131205446 +0000 UTC m=+33.579900882,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 12 13:10:26 crc kubenswrapper[4778]: > Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.595597 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c1a115ba26d12 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:47382->192.168.126.11:10357: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:10:15.131270418 +0000 UTC m=+33.579965854,LastTimestamp:2026-03-12 13:10:15.131270418 +0000 UTC m=+33.579965854,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.598594 4778 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c1a115bca7578 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Killing,Message:Container cluster-policy-controller failed startup probe, will be restarted,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:10:15.133894008 +0000 UTC m=+33.582589444,LastTimestamp:2026-03-12 13:10:15.133894008 +0000 UTC m=+33.582589444,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.601918 4778 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189c1a0a0647c085\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c1a0a0647c085 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:43.634493573 +0000 UTC m=+2.083188979,LastTimestamp:2026-03-12 13:10:15.165684508 +0000 UTC m=+33.614379944,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.605102 4778 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189c1a0a196f74ff\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c1a0a196f74ff openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:43.955862783 +0000 UTC m=+2.404558219,LastTimestamp:2026-03-12 13:10:15.382984139 +0000 UTC m=+33.831679545,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.608323 4778 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189c1a0a1af68cfa\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c1a0a1af68cfa openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:09:43.981493498 +0000 UTC m=+2.430188924,LastTimestamp:2026-03-12 13:10:15.392822862 +0000 UTC m=+33.841518258,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.612817 4778 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189c1a0f0b10ac88\" is forbidden: User \"system:anonymous\" 
cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 12 13:10:26 crc kubenswrapper[4778]: &Event{ObjectMeta:{kube-controller-manager-crc.189c1a0f0b10ac88 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 12 13:10:26 crc kubenswrapper[4778]: body: Mar 12 13:10:26 crc kubenswrapper[4778]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:10:05.189606536 +0000 UTC m=+23.638301972,LastTimestamp:2026-03-12 13:10:25.18997446 +0000 UTC m=+43.638669886,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 12 13:10:26 crc kubenswrapper[4778]: > Mar 12 13:10:26 crc kubenswrapper[4778]: E0312 13:10:26.616861 4778 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189c1a0f0b12176e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189c1a0f0b12176e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get 
\"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:10:05.189699438 +0000 UTC m=+23.638394874,LastTimestamp:2026-03-12 13:10:25.190036202 +0000 UTC m=+43.638731628,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:10:27 crc kubenswrapper[4778]: I0312 13:10:27.203937 4778 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 13:10:28 crc kubenswrapper[4778]: I0312 13:10:28.029302 4778 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 13:10:28 crc kubenswrapper[4778]: I0312 13:10:28.029536 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:10:28 crc kubenswrapper[4778]: I0312 13:10:28.030707 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:10:28 crc kubenswrapper[4778]: I0312 13:10:28.030746 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:10:28 crc kubenswrapper[4778]: I0312 13:10:28.030761 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:10:28 crc kubenswrapper[4778]: I0312 13:10:28.031316 4778 scope.go:117] "RemoveContainer" containerID="d1bfe956cf856eb5d8ec2a24a00da9f1dfcad215016a56650f0b3dec0dffaa4d" Mar 12 13:10:28 crc kubenswrapper[4778]: E0312 13:10:28.031492 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 12 13:10:28 crc kubenswrapper[4778]: I0312 13:10:28.202236 4778 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 13:10:28 crc kubenswrapper[4778]: W0312 13:10:28.569011 4778 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Mar 12 13:10:28 crc kubenswrapper[4778]: E0312 13:10:28.569094 4778 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 12 13:10:29 crc kubenswrapper[4778]: I0312 13:10:29.203037 4778 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 13:10:29 crc kubenswrapper[4778]: W0312 13:10:29.343416 4778 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Mar 12 13:10:29 crc kubenswrapper[4778]: E0312 13:10:29.343478 4778 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 12 13:10:30 crc kubenswrapper[4778]: I0312 13:10:30.202064 4778 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 13:10:30 crc kubenswrapper[4778]: I0312 13:10:30.426703 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:10:30 crc kubenswrapper[4778]: I0312 13:10:30.428291 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:10:30 crc kubenswrapper[4778]: I0312 13:10:30.428919 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:10:30 crc kubenswrapper[4778]: I0312 13:10:30.428947 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:10:30 crc kubenswrapper[4778]: I0312 13:10:30.428983 4778 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 12 13:10:30 crc kubenswrapper[4778]: E0312 13:10:30.434971 4778 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 12 13:10:30 crc kubenswrapper[4778]: E0312 13:10:30.443245 4778 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 12 13:10:31 crc 
kubenswrapper[4778]: I0312 13:10:31.203607 4778 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 13:10:32 crc kubenswrapper[4778]: I0312 13:10:32.193709 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 13:10:32 crc kubenswrapper[4778]: I0312 13:10:32.194066 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:10:32 crc kubenswrapper[4778]: I0312 13:10:32.195143 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:10:32 crc kubenswrapper[4778]: I0312 13:10:32.195176 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:10:32 crc kubenswrapper[4778]: I0312 13:10:32.195207 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:10:32 crc kubenswrapper[4778]: I0312 13:10:32.197440 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 13:10:32 crc kubenswrapper[4778]: I0312 13:10:32.200920 4778 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 13:10:32 crc kubenswrapper[4778]: E0312 13:10:32.357143 4778 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 12 13:10:32 crc kubenswrapper[4778]: I0312 13:10:32.484251 4778 kubelet_node_status.go:401] "Setting node annotation to 
enable volume controller attach/detach" Mar 12 13:10:32 crc kubenswrapper[4778]: I0312 13:10:32.485005 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:10:32 crc kubenswrapper[4778]: I0312 13:10:32.485068 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:10:32 crc kubenswrapper[4778]: I0312 13:10:32.485079 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:10:33 crc kubenswrapper[4778]: I0312 13:10:33.201594 4778 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 13:10:34 crc kubenswrapper[4778]: I0312 13:10:34.203417 4778 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 13:10:35 crc kubenswrapper[4778]: I0312 13:10:35.202616 4778 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 13:10:36 crc kubenswrapper[4778]: I0312 13:10:36.204492 4778 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 13:10:36 crc kubenswrapper[4778]: W0312 13:10:36.832326 4778 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" 
cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Mar 12 13:10:36 crc kubenswrapper[4778]: E0312 13:10:36.832679 4778 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 12 13:10:36 crc kubenswrapper[4778]: I0312 13:10:36.836170 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 12 13:10:36 crc kubenswrapper[4778]: I0312 13:10:36.836418 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:10:36 crc kubenswrapper[4778]: I0312 13:10:36.837742 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:10:36 crc kubenswrapper[4778]: I0312 13:10:36.837780 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:10:36 crc kubenswrapper[4778]: I0312 13:10:36.837789 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:10:37 crc kubenswrapper[4778]: I0312 13:10:37.204828 4778 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 13:10:37 crc kubenswrapper[4778]: I0312 13:10:37.435945 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:10:37 crc kubenswrapper[4778]: I0312 13:10:37.437573 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Mar 12 13:10:37 crc kubenswrapper[4778]: I0312 13:10:37.437680 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:10:37 crc kubenswrapper[4778]: I0312 13:10:37.437699 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:10:37 crc kubenswrapper[4778]: I0312 13:10:37.437733 4778 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 12 13:10:37 crc kubenswrapper[4778]: E0312 13:10:37.443037 4778 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 12 13:10:37 crc kubenswrapper[4778]: E0312 13:10:37.450808 4778 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 12 13:10:38 crc kubenswrapper[4778]: I0312 13:10:38.204066 4778 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 13:10:39 crc kubenswrapper[4778]: I0312 13:10:39.202225 4778 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 13:10:40 crc kubenswrapper[4778]: I0312 13:10:40.203417 4778 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group 
"storage.k8s.io" at the cluster scope Mar 12 13:10:41 crc kubenswrapper[4778]: I0312 13:10:41.201373 4778 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 13:10:42 crc kubenswrapper[4778]: I0312 13:10:42.203518 4778 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 13:10:42 crc kubenswrapper[4778]: I0312 13:10:42.252940 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:10:42 crc kubenswrapper[4778]: I0312 13:10:42.254136 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:10:42 crc kubenswrapper[4778]: I0312 13:10:42.254454 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:10:42 crc kubenswrapper[4778]: I0312 13:10:42.254507 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:10:42 crc kubenswrapper[4778]: I0312 13:10:42.255136 4778 scope.go:117] "RemoveContainer" containerID="d1bfe956cf856eb5d8ec2a24a00da9f1dfcad215016a56650f0b3dec0dffaa4d" Mar 12 13:10:42 crc kubenswrapper[4778]: E0312 13:10:42.357331 4778 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 12 13:10:42 crc kubenswrapper[4778]: I0312 13:10:42.512251 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 12 13:10:42 crc kubenswrapper[4778]: I0312 
13:10:42.514222 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"14c7f2ade3aac502f0534414554216096b45d4f78e81f8ec213064a6205efdbd"} Mar 12 13:10:42 crc kubenswrapper[4778]: I0312 13:10:42.514419 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:10:42 crc kubenswrapper[4778]: I0312 13:10:42.515609 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:10:42 crc kubenswrapper[4778]: I0312 13:10:42.515644 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:10:42 crc kubenswrapper[4778]: I0312 13:10:42.515654 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:10:43 crc kubenswrapper[4778]: I0312 13:10:43.201825 4778 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 13:10:43 crc kubenswrapper[4778]: I0312 13:10:43.518678 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 12 13:10:43 crc kubenswrapper[4778]: I0312 13:10:43.519137 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 12 13:10:43 crc kubenswrapper[4778]: I0312 13:10:43.521480 4778 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="14c7f2ade3aac502f0534414554216096b45d4f78e81f8ec213064a6205efdbd" exitCode=255 Mar 12 
13:10:43 crc kubenswrapper[4778]: I0312 13:10:43.521558 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"14c7f2ade3aac502f0534414554216096b45d4f78e81f8ec213064a6205efdbd"} Mar 12 13:10:43 crc kubenswrapper[4778]: I0312 13:10:43.521609 4778 scope.go:117] "RemoveContainer" containerID="d1bfe956cf856eb5d8ec2a24a00da9f1dfcad215016a56650f0b3dec0dffaa4d" Mar 12 13:10:43 crc kubenswrapper[4778]: I0312 13:10:43.521837 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:10:43 crc kubenswrapper[4778]: I0312 13:10:43.523982 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:10:43 crc kubenswrapper[4778]: I0312 13:10:43.524012 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:10:43 crc kubenswrapper[4778]: I0312 13:10:43.524021 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:10:43 crc kubenswrapper[4778]: I0312 13:10:43.524493 4778 scope.go:117] "RemoveContainer" containerID="14c7f2ade3aac502f0534414554216096b45d4f78e81f8ec213064a6205efdbd" Mar 12 13:10:43 crc kubenswrapper[4778]: E0312 13:10:43.524640 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 12 13:10:43 crc kubenswrapper[4778]: I0312 13:10:43.534947 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 
13:10:44 crc kubenswrapper[4778]: I0312 13:10:44.203455 4778 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 13:10:44 crc kubenswrapper[4778]: I0312 13:10:44.444117 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:10:44 crc kubenswrapper[4778]: I0312 13:10:44.445468 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:10:44 crc kubenswrapper[4778]: I0312 13:10:44.445503 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:10:44 crc kubenswrapper[4778]: I0312 13:10:44.445512 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:10:44 crc kubenswrapper[4778]: I0312 13:10:44.445534 4778 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 12 13:10:44 crc kubenswrapper[4778]: E0312 13:10:44.450040 4778 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 12 13:10:44 crc kubenswrapper[4778]: E0312 13:10:44.455194 4778 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 12 13:10:44 crc kubenswrapper[4778]: I0312 13:10:44.525347 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 12 13:10:44 crc kubenswrapper[4778]: 
I0312 13:10:44.526943 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:10:44 crc kubenswrapper[4778]: I0312 13:10:44.527667 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:10:44 crc kubenswrapper[4778]: I0312 13:10:44.527708 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:10:44 crc kubenswrapper[4778]: I0312 13:10:44.527722 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:10:44 crc kubenswrapper[4778]: I0312 13:10:44.528328 4778 scope.go:117] "RemoveContainer" containerID="14c7f2ade3aac502f0534414554216096b45d4f78e81f8ec213064a6205efdbd" Mar 12 13:10:44 crc kubenswrapper[4778]: E0312 13:10:44.528524 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 12 13:10:45 crc kubenswrapper[4778]: I0312 13:10:45.205405 4778 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 13:10:46 crc kubenswrapper[4778]: I0312 13:10:46.200928 4778 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 13:10:47 crc kubenswrapper[4778]: I0312 13:10:47.201455 4778 csi_plugin.go:884] Failed to contact API 
server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 12 13:10:47 crc kubenswrapper[4778]: I0312 13:10:47.207452 4778 csr.go:261] certificate signing request csr-zkwnc is approved, waiting to be issued Mar 12 13:10:47 crc kubenswrapper[4778]: I0312 13:10:47.217109 4778 csr.go:257] certificate signing request csr-zkwnc is issued Mar 12 13:10:47 crc kubenswrapper[4778]: I0312 13:10:47.235891 4778 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Mar 12 13:10:48 crc kubenswrapper[4778]: I0312 13:10:48.029709 4778 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 13:10:48 crc kubenswrapper[4778]: I0312 13:10:48.029879 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:10:48 crc kubenswrapper[4778]: I0312 13:10:48.030903 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:10:48 crc kubenswrapper[4778]: I0312 13:10:48.030936 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:10:48 crc kubenswrapper[4778]: I0312 13:10:48.030945 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:10:48 crc kubenswrapper[4778]: I0312 13:10:48.031493 4778 scope.go:117] "RemoveContainer" containerID="14c7f2ade3aac502f0534414554216096b45d4f78e81f8ec213064a6205efdbd" Mar 12 13:10:48 crc kubenswrapper[4778]: E0312 13:10:48.031663 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 12 13:10:48 crc kubenswrapper[4778]: I0312 13:10:48.048723 4778 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Mar 12 13:10:48 crc kubenswrapper[4778]: I0312 13:10:48.218114 4778 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-11-26 20:54:44.158263742 +0000 UTC Mar 12 13:10:48 crc kubenswrapper[4778]: I0312 13:10:48.218152 4778 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6223h43m55.940115094s for next certificate rotation Mar 12 13:10:49 crc kubenswrapper[4778]: I0312 13:10:49.877473 4778 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 12 13:10:51 crc kubenswrapper[4778]: I0312 13:10:51.450913 4778 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 12 13:10:51 crc kubenswrapper[4778]: I0312 13:10:51.452409 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:10:51 crc kubenswrapper[4778]: I0312 13:10:51.452450 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:10:51 crc kubenswrapper[4778]: I0312 13:10:51.452463 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:10:51 crc kubenswrapper[4778]: I0312 13:10:51.452587 4778 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 12 13:10:51 crc kubenswrapper[4778]: I0312 13:10:51.465079 4778 kubelet_node_status.go:115] "Node was previously registered" node="crc" Mar 12 13:10:51 crc kubenswrapper[4778]: I0312 
13:10:51.465389 4778 kubelet_node_status.go:79] "Successfully registered node" node="crc" Mar 12 13:10:51 crc kubenswrapper[4778]: E0312 13:10:51.465418 4778 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Mar 12 13:10:51 crc kubenswrapper[4778]: I0312 13:10:51.469785 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:10:51 crc kubenswrapper[4778]: I0312 13:10:51.469831 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:10:51 crc kubenswrapper[4778]: I0312 13:10:51.469853 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:10:51 crc kubenswrapper[4778]: I0312 13:10:51.469884 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:10:51 crc kubenswrapper[4778]: I0312 13:10:51.469906 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:10:51Z","lastTransitionTime":"2026-03-12T13:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:10:51 crc kubenswrapper[4778]: E0312 13:10:51.488360 4778 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:10:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:10:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:10:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:10:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9825271f-f529-4477-b3b1-2a00dbf9b03e\\\",\\\"systemUUID\\\":\\\"65870ff3-f0f2-4ca4-b489-075d672e37ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 13:10:51 crc kubenswrapper[4778]: I0312 13:10:51.492436 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:10:51 crc kubenswrapper[4778]: I0312 13:10:51.492472 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:10:51 crc kubenswrapper[4778]: I0312 13:10:51.492481 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:10:51 crc kubenswrapper[4778]: I0312 13:10:51.492495 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:10:51 crc kubenswrapper[4778]: I0312 13:10:51.492511 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:10:51Z","lastTransitionTime":"2026-03-12T13:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:10:51 crc kubenswrapper[4778]: E0312 13:10:51.502689 4778 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:10:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:10:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:10:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:10:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9825271f-f529-4477-b3b1-2a00dbf9b03e\\\",\\\"systemUUID\\\":\\\"65870ff3-f0f2-4ca4-b489-075d672e37ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 13:10:51 crc kubenswrapper[4778]: I0312 13:10:51.505652 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:10:51 crc kubenswrapper[4778]: I0312 13:10:51.505679 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:10:51 crc kubenswrapper[4778]: I0312 13:10:51.505691 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:10:51 crc kubenswrapper[4778]: I0312 13:10:51.505706 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:10:51 crc kubenswrapper[4778]: I0312 13:10:51.505719 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:10:51Z","lastTransitionTime":"2026-03-12T13:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:10:51 crc kubenswrapper[4778]: E0312 13:10:51.519346 4778 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:10:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:10:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:10:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:10:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9825271f-f529-4477-b3b1-2a00dbf9b03e\\\",\\\"systemUUID\\\":\\\"65870ff3-f0f2-4ca4-b489-075d672e37ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 13:10:51 crc kubenswrapper[4778]: I0312 13:10:51.523797 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:10:51 crc kubenswrapper[4778]: I0312 13:10:51.523834 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:10:51 crc kubenswrapper[4778]: I0312 13:10:51.523843 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:10:51 crc kubenswrapper[4778]: I0312 13:10:51.523858 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:10:51 crc kubenswrapper[4778]: I0312 13:10:51.523868 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:10:51Z","lastTransitionTime":"2026-03-12T13:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:10:51 crc kubenswrapper[4778]: E0312 13:10:51.536886 4778 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:10:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:10:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:10:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:10:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9825271f-f529-4477-b3b1-2a00dbf9b03e\\\",\\\"systemUUID\\\":\\\"65870ff3-f0f2-4ca4-b489-075d672e37ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 13:10:51 crc kubenswrapper[4778]: E0312 13:10:51.537291 4778 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 12 13:10:51 crc kubenswrapper[4778]: E0312 13:10:51.537386 4778 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 13:10:51 crc kubenswrapper[4778]: E0312 13:10:51.637900 4778 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 13:10:51 crc kubenswrapper[4778]: E0312 13:10:51.738800 4778 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 13:10:51 crc kubenswrapper[4778]: E0312 13:10:51.839600 4778 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 13:10:51 crc kubenswrapper[4778]: E0312 13:10:51.940601 4778 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 13:10:52 crc kubenswrapper[4778]: E0312 13:10:52.040944 4778 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 13:10:52 crc kubenswrapper[4778]: E0312 13:10:52.141463 4778 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 13:10:52 crc kubenswrapper[4778]: E0312 13:10:52.241942 4778 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 13:10:52 crc kubenswrapper[4778]: E0312 13:10:52.342647 4778 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 13:10:52 crc kubenswrapper[4778]: E0312 13:10:52.357431 4778 
eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 12 13:10:52 crc kubenswrapper[4778]: E0312 13:10:52.443155 4778 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 13:10:52 crc kubenswrapper[4778]: E0312 13:10:52.543821 4778 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 13:10:52 crc kubenswrapper[4778]: E0312 13:10:52.644237 4778 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 13:10:52 crc kubenswrapper[4778]: E0312 13:10:52.744613 4778 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 13:10:52 crc kubenswrapper[4778]: E0312 13:10:52.845622 4778 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 13:10:52 crc kubenswrapper[4778]: E0312 13:10:52.946334 4778 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 13:10:53 crc kubenswrapper[4778]: E0312 13:10:53.047090 4778 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 13:10:53 crc kubenswrapper[4778]: E0312 13:10:53.147570 4778 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 13:10:53 crc kubenswrapper[4778]: E0312 13:10:53.247839 4778 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 13:10:53 crc kubenswrapper[4778]: E0312 13:10:53.349002 4778 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 13:10:53 crc kubenswrapper[4778]: E0312 13:10:53.450174 4778 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" 
Mar 12 13:10:53 crc kubenswrapper[4778]: E0312 13:10:53.551127 4778 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 13:10:53 crc kubenswrapper[4778]: E0312 13:10:53.652160 4778 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 13:10:53 crc kubenswrapper[4778]: E0312 13:10:53.752492 4778 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 13:10:53 crc kubenswrapper[4778]: E0312 13:10:53.853353 4778 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 13:10:53 crc kubenswrapper[4778]: E0312 13:10:53.953747 4778 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 13:10:54 crc kubenswrapper[4778]: E0312 13:10:54.053873 4778 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 13:10:54 crc kubenswrapper[4778]: E0312 13:10:54.154258 4778 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 13:10:54 crc kubenswrapper[4778]: E0312 13:10:54.255443 4778 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 13:10:54 crc kubenswrapper[4778]: E0312 13:10:54.355810 4778 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 13:10:54 crc kubenswrapper[4778]: E0312 13:10:54.456154 4778 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 13:10:54 crc kubenswrapper[4778]: E0312 13:10:54.556705 4778 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 13:10:54 crc kubenswrapper[4778]: E0312 13:10:54.657602 4778 kubelet_node_status.go:503] "Error getting the current node 
from lister" err="node \"crc\" not found" Mar 12 13:10:54 crc kubenswrapper[4778]: E0312 13:10:54.758392 4778 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 13:10:54 crc kubenswrapper[4778]: E0312 13:10:54.859246 4778 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 13:10:54 crc kubenswrapper[4778]: E0312 13:10:54.959935 4778 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 13:10:55 crc kubenswrapper[4778]: E0312 13:10:55.060839 4778 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 13:10:55 crc kubenswrapper[4778]: E0312 13:10:55.161895 4778 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 13:10:55 crc kubenswrapper[4778]: E0312 13:10:55.262837 4778 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 13:10:55 crc kubenswrapper[4778]: E0312 13:10:55.363565 4778 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 13:10:55 crc kubenswrapper[4778]: E0312 13:10:55.464003 4778 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 13:10:55 crc kubenswrapper[4778]: E0312 13:10:55.564596 4778 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 13:10:55 crc kubenswrapper[4778]: E0312 13:10:55.665090 4778 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 13:10:55 crc kubenswrapper[4778]: E0312 13:10:55.765526 4778 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 13:10:55 crc kubenswrapper[4778]: E0312 13:10:55.866698 4778 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 13:10:55 crc kubenswrapper[4778]: E0312 13:10:55.967300 4778 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 13:10:56 crc kubenswrapper[4778]: E0312 13:10:56.067743 4778 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 13:10:56 crc kubenswrapper[4778]: E0312 13:10:56.168629 4778 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 13:10:56 crc kubenswrapper[4778]: E0312 13:10:56.269245 4778 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 13:10:56 crc kubenswrapper[4778]: E0312 13:10:56.370359 4778 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 13:10:56 crc kubenswrapper[4778]: E0312 13:10:56.471264 4778 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 13:10:56 crc kubenswrapper[4778]: E0312 13:10:56.572084 4778 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 13:10:56 crc kubenswrapper[4778]: E0312 13:10:56.672888 4778 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 13:10:56 crc kubenswrapper[4778]: E0312 13:10:56.773819 4778 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 13:10:56 crc kubenswrapper[4778]: E0312 13:10:56.874646 4778 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 13:10:56 crc kubenswrapper[4778]: E0312 13:10:56.975416 4778 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 13:10:57 crc 
kubenswrapper[4778]: E0312 13:10:57.075948 4778 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 13:10:57 crc kubenswrapper[4778]: E0312 13:10:57.176279 4778 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 13:10:57 crc kubenswrapper[4778]: E0312 13:10:57.276714 4778 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 13:10:57 crc kubenswrapper[4778]: E0312 13:10:57.377361 4778 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 13:10:57 crc kubenswrapper[4778]: E0312 13:10:57.478436 4778 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 13:10:57 crc kubenswrapper[4778]: E0312 13:10:57.579558 4778 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 12 13:10:57 crc kubenswrapper[4778]: I0312 13:10:57.647897 4778 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 12 13:10:57 crc kubenswrapper[4778]: I0312 13:10:57.682634 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:10:57 crc kubenswrapper[4778]: I0312 13:10:57.682719 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:10:57 crc kubenswrapper[4778]: I0312 13:10:57.682745 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:10:57 crc kubenswrapper[4778]: I0312 13:10:57.682780 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:10:57 crc kubenswrapper[4778]: I0312 13:10:57.682805 4778 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:10:57Z","lastTransitionTime":"2026-03-12T13:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:10:57 crc kubenswrapper[4778]: I0312 13:10:57.785372 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:10:57 crc kubenswrapper[4778]: I0312 13:10:57.785430 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:10:57 crc kubenswrapper[4778]: I0312 13:10:57.785451 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:10:57 crc kubenswrapper[4778]: I0312 13:10:57.785478 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:10:57 crc kubenswrapper[4778]: I0312 13:10:57.785504 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:10:57Z","lastTransitionTime":"2026-03-12T13:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:10:57 crc kubenswrapper[4778]: I0312 13:10:57.888071 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:10:57 crc kubenswrapper[4778]: I0312 13:10:57.888117 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:10:57 crc kubenswrapper[4778]: I0312 13:10:57.888127 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:10:57 crc kubenswrapper[4778]: I0312 13:10:57.888140 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:10:57 crc kubenswrapper[4778]: I0312 13:10:57.888152 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:10:57Z","lastTransitionTime":"2026-03-12T13:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:10:57 crc kubenswrapper[4778]: I0312 13:10:57.990301 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:10:57 crc kubenswrapper[4778]: I0312 13:10:57.990355 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:10:57 crc kubenswrapper[4778]: I0312 13:10:57.990370 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:10:57 crc kubenswrapper[4778]: I0312 13:10:57.990385 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:10:57 crc kubenswrapper[4778]: I0312 13:10:57.990397 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:10:57Z","lastTransitionTime":"2026-03-12T13:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.092214 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.092252 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.092260 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.092275 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.092284 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:10:58Z","lastTransitionTime":"2026-03-12T13:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.194300 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.194334 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.194346 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.194362 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.194374 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:10:58Z","lastTransitionTime":"2026-03-12T13:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.218079 4778 apiserver.go:52] "Watching apiserver" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.226530 4778 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.227035 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-qdxm2","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sww7j","openshift-image-registry/node-ca-4dfhs","openshift-machine-config-operator/machine-config-daemon-2qx88","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-node-identity/network-node-identity-vrzqb","openshift-ovn-kubernetes/ovnkube-node-8bcc9","openshift-multus/multus-fhcz6","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-multus/multus-additional-cni-plugins-rsshp","openshift-multus/network-metrics-daemon-rz9vw","openshift-network-diagnostics/network-check-target-xd92c"] Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.227637 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.227755 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.227810 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.227846 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.227883 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.227956 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.228133 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" Mar 12 13:10:58 crc kubenswrapper[4778]: E0312 13:10:58.228142 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 13:10:58 crc kubenswrapper[4778]: E0312 13:10:58.228432 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 13:10:58 crc kubenswrapper[4778]: E0312 13:10:58.228457 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.229247 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.229276 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-4dfhs" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.229246 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-qdxm2" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.229337 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-fhcz6" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.229445 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sww7j" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.230877 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.231068 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rz9vw" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.231227 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-rsshp" Mar 12 13:10:58 crc kubenswrapper[4778]: E0312 13:10:58.231363 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rz9vw" podUID="0b59b25a-3acc-4d06-b91d-575f45463520" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.232148 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.232767 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.232892 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.233078 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.235467 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.238552 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.241067 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.241723 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.241805 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.241812 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 12 13:10:58 
crc kubenswrapper[4778]: I0312 13:10:58.241807 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.243965 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.244107 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.244131 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.244155 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.244194 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.244266 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.244271 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.244323 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.244356 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.244378 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.244384 
4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.244386 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.244496 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.244505 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.244599 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.244624 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.244665 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.244685 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.244723 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.244507 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.244815 4778 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.244846 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.244862 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.244941 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.245303 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.261320 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.276618 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bcc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.284413 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qdxm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7050ddd9-aa01-4af7-9046-208f85f50a86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jspwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qdxm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.295284 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rsshp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a1f8eaa-ac07-4478-be5d-0742de6b43c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rsshp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.295924 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.295970 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.295980 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.295996 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.296005 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:10:58Z","lastTransitionTime":"2026-03-12T13:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.302594 4778 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.305276 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dfhs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfefcab6-a931-413e-8763-0f63f17911cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssbrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dfhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.308555 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.308707 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.308812 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.309391 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.309518 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.310164 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.309093 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). 
InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.309318 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.309456 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.309701 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.310234 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.310355 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.310399 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.310428 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.310457 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.310504 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.310534 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.310564 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.310593 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.310618 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.310644 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: 
\"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.310670 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.310688 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.310700 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.310729 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.310758 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod 
\"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.310787 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.310910 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.311146 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.311363 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.311514 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.311562 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.311568 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.311584 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.311590 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.311636 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.311657 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.311676 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.311693 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.311709 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" 
(UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.311727 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.311748 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.311765 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.311782 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.311800 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: 
\"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.311817 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.311833 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.311849 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.311867 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.311921 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.311942 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.311960 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.311977 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.312008 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.312024 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.312044 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Mar 12 13:10:58 crc 
kubenswrapper[4778]: I0312 13:10:58.312062 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.312080 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.312096 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.312115 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.312131 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.312148 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.312163 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.312180 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.312210 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.312228 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.312244 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 
13:10:58.312259 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.312279 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.312295 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.312331 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.312350 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.312366 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: 
\"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.312383 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.312400 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.312415 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.312432 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.312447 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.312462 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.312477 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.312493 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.312510 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.312524 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.312584 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.312603 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.312619 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.312635 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.312655 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.312669 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.312684 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.312701 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.312716 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.312732 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.312749 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.312765 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 12 13:10:58 crc 
kubenswrapper[4778]: I0312 13:10:58.312783 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.312799 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.312813 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.312829 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.312846 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.312863 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.312880 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.312896 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.312914 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.312931 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.312947 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 
13:10:58.312964 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.312979 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.312996 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.313012 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.313028 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.313043 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.313060 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.313076 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.313092 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.313109 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.313167 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 12 13:10:58 crc 
kubenswrapper[4778]: I0312 13:10:58.313195 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.313214 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.313241 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.313260 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.313276 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.313294 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.313311 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.313330 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.313347 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.313363 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.313379 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 
13:10:58.313394 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.313411 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.313427 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.313443 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.313460 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.313476 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.313493 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.313510 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.313526 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.313541 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.313559 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: 
\"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.313574 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.313591 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.313609 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.313624 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.313642 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.313659 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.313675 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.313693 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.313709 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.313729 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.313745 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.313762 4778 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.313778 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.313795 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.313812 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.313829 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.313846 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: 
\"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.313862 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.313879 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.313899 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.313916 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.313932 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 12 
13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.313949 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.313966 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.313981 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.313999 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.314015 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.314034 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.314050 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.314067 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.314084 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.314101 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.314117 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 12 
13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.314134 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.314151 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.314168 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.314218 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.314240 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.314257 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.314273 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.314290 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.314310 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.314326 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.314343 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " 
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.314360 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.314378 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.314394 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.314412 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.314429 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.314446 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.314462 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.314478 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.314498 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.314519 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.314543 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 13:10:58 crc 
kubenswrapper[4778]: I0312 13:10:58.314567 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.314588 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.314609 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.314626 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.314646 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.314664 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: 
\"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.314682 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.314706 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.314732 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.314752 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.314770 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 12 13:10:58 crc 
kubenswrapper[4778]: I0312 13:10:58.314787 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.311720 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.314854 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.314892 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1e7037a8-a966-4df0-9f94-fe2dd3e2de6e-cni-binary-copy\") pod \"multus-fhcz6\" (UID: \"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\") " pod="openshift-multus/multus-fhcz6" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.314909 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/1e7037a8-a966-4df0-9f94-fe2dd3e2de6e-host-run-k8s-cni-cncf-io\") pod \"multus-fhcz6\" (UID: 
\"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\") " pod="openshift-multus/multus-fhcz6" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.314926 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-run-openvswitch\") pod \"ovnkube-node-8bcc9\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.314945 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.314962 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8a1f8eaa-ac07-4478-be5d-0742de6b43c4-cni-binary-copy\") pod \"multus-additional-cni-plugins-rsshp\" (UID: \"8a1f8eaa-ac07-4478-be5d-0742de6b43c4\") " pod="openshift-multus/multus-additional-cni-plugins-rsshp" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.314980 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1e7037a8-a966-4df0-9f94-fe2dd3e2de6e-os-release\") pod \"multus-fhcz6\" (UID: \"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\") " pod="openshift-multus/multus-fhcz6" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.315001 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" 
(UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.315018 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.312052 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.312250 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.312744 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.313615 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.313631 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.313661 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.313390 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.313930 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.314342 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.314449 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.314841 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.315039 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4hrx\" (UniqueName: \"kubernetes.io/projected/8a1f8eaa-ac07-4478-be5d-0742de6b43c4-kube-api-access-w4hrx\") pod \"multus-additional-cni-plugins-rsshp\" (UID: \"8a1f8eaa-ac07-4478-be5d-0742de6b43c4\") " pod="openshift-multus/multus-additional-cni-plugins-rsshp"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.315385 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1e7037a8-a966-4df0-9f94-fe2dd3e2de6e-multus-conf-dir\") pod \"multus-fhcz6\" (UID: \"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\") " pod="openshift-multus/multus-fhcz6"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.315428 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-host-run-ovn-kubernetes\") pod \"ovnkube-node-8bcc9\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.315460 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-host-cni-netd\") pod \"ovnkube-node-8bcc9\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.315499 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/24438fc6-dab0-4a9e-8b97-2532da76d9cd-rootfs\") pod \"machine-config-daemon-2qx88\" (UID: \"24438fc6-dab0-4a9e-8b97-2532da76d9cd\") " pod="openshift-machine-config-operator/machine-config-daemon-2qx88"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.315527 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cfefcab6-a931-413e-8763-0f63f17911cd-host\") pod \"node-ca-4dfhs\" (UID: \"cfefcab6-a931-413e-8763-0f63f17911cd\") " pod="openshift-image-registry/node-ca-4dfhs"
Mar 12 13:10:58 crc kubenswrapper[4778]: E0312 13:10:58.315541 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 13:10:58.815521159 +0000 UTC m=+77.264216555 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.315571 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g92p9\" (UniqueName: \"kubernetes.io/projected/de004a2f-3061-4aae-aa57-389219c71023-kube-api-access-g92p9\") pod \"ovnkube-control-plane-749d76644c-sww7j\" (UID: \"de004a2f-3061-4aae-aa57-389219c71023\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sww7j"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.315598 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdj5w\" (UniqueName: \"kubernetes.io/projected/0b59b25a-3acc-4d06-b91d-575f45463520-kube-api-access-cdj5w\") pod \"network-metrics-daemon-rz9vw\" (UID: \"0b59b25a-3acc-4d06-b91d-575f45463520\") " pod="openshift-multus/network-metrics-daemon-rz9vw"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.315618 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-ovnkube-config\") pod \"ovnkube-node-8bcc9\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.315652 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-host-slash\") pod \"ovnkube-node-8bcc9\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.315670 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8a1f8eaa-ac07-4478-be5d-0742de6b43c4-cnibin\") pod \"multus-additional-cni-plugins-rsshp\" (UID: \"8a1f8eaa-ac07-4478-be5d-0742de6b43c4\") " pod="openshift-multus/multus-additional-cni-plugins-rsshp"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.315689 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8a1f8eaa-ac07-4478-be5d-0742de6b43c4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rsshp\" (UID: \"8a1f8eaa-ac07-4478-be5d-0742de6b43c4\") " pod="openshift-multus/multus-additional-cni-plugins-rsshp"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.315707 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1e7037a8-a966-4df0-9f94-fe2dd3e2de6e-host-var-lib-cni-bin\") pod \"multus-fhcz6\" (UID: \"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\") " pod="openshift-multus/multus-fhcz6"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.315725 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-host-run-netns\") pod \"ovnkube-node-8bcc9\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.315742 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/de004a2f-3061-4aae-aa57-389219c71023-env-overrides\") pod \"ovnkube-control-plane-749d76644c-sww7j\" (UID: \"de004a2f-3061-4aae-aa57-389219c71023\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sww7j"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.315759 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.315786 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/24438fc6-dab0-4a9e-8b97-2532da76d9cd-mcd-auth-proxy-config\") pod \"machine-config-daemon-2qx88\" (UID: \"24438fc6-dab0-4a9e-8b97-2532da76d9cd\") " pod="openshift-machine-config-operator/machine-config-daemon-2qx88"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.315809 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/de004a2f-3061-4aae-aa57-389219c71023-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-sww7j\" (UID: \"de004a2f-3061-4aae-aa57-389219c71023\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sww7j"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.315829 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-762lp\" (UniqueName: \"kubernetes.io/projected/1e7037a8-a966-4df0-9f94-fe2dd3e2de6e-kube-api-access-762lp\") pod \"multus-fhcz6\" (UID: \"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\") " pod="openshift-multus/multus-fhcz6"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.315848 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-var-lib-openvswitch\") pod \"ovnkube-node-8bcc9\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.315866 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/de004a2f-3061-4aae-aa57-389219c71023-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-sww7j\" (UID: \"de004a2f-3061-4aae-aa57-389219c71023\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sww7j"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.315894 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.315911 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8a1f8eaa-ac07-4478-be5d-0742de6b43c4-os-release\") pod \"multus-additional-cni-plugins-rsshp\" (UID: \"8a1f8eaa-ac07-4478-be5d-0742de6b43c4\") " pod="openshift-multus/multus-additional-cni-plugins-rsshp"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.315932 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.315948 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-run-systemd\") pod \"ovnkube-node-8bcc9\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.315964 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-node-log\") pod \"ovnkube-node-8bcc9\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.315979 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-log-socket\") pod \"ovnkube-node-8bcc9\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.316000 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.316017 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhn9v\" (UniqueName: \"kubernetes.io/projected/24438fc6-dab0-4a9e-8b97-2532da76d9cd-kube-api-access-rhn9v\") pod \"machine-config-daemon-2qx88\" (UID: \"24438fc6-dab0-4a9e-8b97-2532da76d9cd\") " pod="openshift-machine-config-operator/machine-config-daemon-2qx88"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.316035 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8a1f8eaa-ac07-4478-be5d-0742de6b43c4-system-cni-dir\") pod \"multus-additional-cni-plugins-rsshp\" (UID: \"8a1f8eaa-ac07-4478-be5d-0742de6b43c4\") " pod="openshift-multus/multus-additional-cni-plugins-rsshp"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.316053 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1e7037a8-a966-4df0-9f94-fe2dd3e2de6e-etc-kubernetes\") pod \"multus-fhcz6\" (UID: \"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\") " pod="openshift-multus/multus-fhcz6"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.316069 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-host-kubelet\") pod \"ovnkube-node-8bcc9\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.316085 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-host-cni-bin\") pod \"ovnkube-node-8bcc9\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.316102 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/24438fc6-dab0-4a9e-8b97-2532da76d9cd-proxy-tls\") pod \"machine-config-daemon-2qx88\" (UID: \"24438fc6-dab0-4a9e-8b97-2532da76d9cd\") " pod="openshift-machine-config-operator/machine-config-daemon-2qx88"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.316118 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/cfefcab6-a931-413e-8763-0f63f17911cd-serviceca\") pod \"node-ca-4dfhs\" (UID: \"cfefcab6-a931-413e-8763-0f63f17911cd\") " pod="openshift-image-registry/node-ca-4dfhs"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.316133 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b59b25a-3acc-4d06-b91d-575f45463520-metrics-certs\") pod \"network-metrics-daemon-rz9vw\" (UID: \"0b59b25a-3acc-4d06-b91d-575f45463520\") " pod="openshift-multus/network-metrics-daemon-rz9vw"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.316148 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1e7037a8-a966-4df0-9f94-fe2dd3e2de6e-multus-cni-dir\") pod \"multus-fhcz6\" (UID: \"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\") " pod="openshift-multus/multus-fhcz6"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.316167 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-env-overrides\") pod \"ovnkube-node-8bcc9\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.316204 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-ovnkube-script-lib\") pod \"ovnkube-node-8bcc9\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.316234 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.316258 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/7050ddd9-aa01-4af7-9046-208f85f50a86-hosts-file\") pod \"node-resolver-qdxm2\" (UID: \"7050ddd9-aa01-4af7-9046-208f85f50a86\") " pod="openshift-dns/node-resolver-qdxm2"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.316279 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1e7037a8-a966-4df0-9f94-fe2dd3e2de6e-cnibin\") pod \"multus-fhcz6\" (UID: \"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\") " pod="openshift-multus/multus-fhcz6"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.316296 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/1e7037a8-a966-4df0-9f94-fe2dd3e2de6e-multus-socket-dir-parent\") pod \"multus-fhcz6\" (UID: \"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\") " pod="openshift-multus/multus-fhcz6"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.316311 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1e7037a8-a966-4df0-9f94-fe2dd3e2de6e-host-run-netns\") pod \"multus-fhcz6\" (UID: \"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\") " pod="openshift-multus/multus-fhcz6"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.316331 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/1e7037a8-a966-4df0-9f94-fe2dd3e2de6e-host-var-lib-cni-multus\") pod \"multus-fhcz6\" (UID: \"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\") " pod="openshift-multus/multus-fhcz6"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.316353 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/1e7037a8-a966-4df0-9f94-fe2dd3e2de6e-multus-daemon-config\") pod \"multus-fhcz6\" (UID: \"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\") " pod="openshift-multus/multus-fhcz6"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.316377 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.316401 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.316430 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-run-ovn\") pod \"ovnkube-node-8bcc9\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.316454 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-schvw\" (UniqueName: \"kubernetes.io/projected/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-kube-api-access-schvw\") pod \"ovnkube-node-8bcc9\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.316491 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.316567 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.316518 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssbrl\" (UniqueName: \"kubernetes.io/projected/cfefcab6-a931-413e-8763-0f63f17911cd-kube-api-access-ssbrl\") pod \"node-ca-4dfhs\" (UID: \"cfefcab6-a931-413e-8763-0f63f17911cd\") " pod="openshift-image-registry/node-ca-4dfhs"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.316710 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jspwn\" (UniqueName: \"kubernetes.io/projected/7050ddd9-aa01-4af7-9046-208f85f50a86-kube-api-access-jspwn\") pod \"node-resolver-qdxm2\" (UID: \"7050ddd9-aa01-4af7-9046-208f85f50a86\") " pod="openshift-dns/node-resolver-qdxm2"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.316736 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1e7037a8-a966-4df0-9f94-fe2dd3e2de6e-system-cni-dir\") pod \"multus-fhcz6\" (UID: \"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\") " pod="openshift-multus/multus-fhcz6"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.316738 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.316762 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1e7037a8-a966-4df0-9f94-fe2dd3e2de6e-host-var-lib-kubelet\") pod \"multus-fhcz6\" (UID: \"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\") " pod="openshift-multus/multus-fhcz6"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.316787 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/1e7037a8-a966-4df0-9f94-fe2dd3e2de6e-hostroot\") pod \"multus-fhcz6\" (UID: \"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\") " pod="openshift-multus/multus-fhcz6"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.316809 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-systemd-units\") pod \"ovnkube-node-8bcc9\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.316834 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-etc-openvswitch\") pod \"ovnkube-node-8bcc9\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.316838 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.316857 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-ovn-node-metrics-cert\") pod \"ovnkube-node-8bcc9\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.316887 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.316888 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.316924 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8a1f8eaa-ac07-4478-be5d-0742de6b43c4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rsshp\" (UID: \"8a1f8eaa-ac07-4478-be5d-0742de6b43c4\") " pod="openshift-multus/multus-additional-cni-plugins-rsshp"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.317404 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.317898 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.317976 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.318017 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.318324 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.318483 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.318549 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.318691 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.318775 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.318843 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.318856 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.318897 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.318915 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.319084 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.319167 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.319293 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.319721 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.319740 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.319756 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.320004 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.320029 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.320028 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.319788 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.320139 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.320176 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.320231 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.320277 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.320833 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.321346 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.321400 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.321550 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.322006 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.322783 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/1e7037a8-a966-4df0-9f94-fe2dd3e2de6e-host-run-multus-certs\") pod \"multus-fhcz6\" (UID: \"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\") " pod="openshift-multus/multus-fhcz6" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.322875 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.322971 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8bcc9\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" Mar 12 13:10:58 crc kubenswrapper[4778]: E0312 13:10:58.323167 4778 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 
12 13:10:58 crc kubenswrapper[4778]: E0312 13:10:58.323341 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-12 13:10:58.823274799 +0000 UTC m=+77.271970235 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.323360 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.323534 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.323539 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.323723 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.323888 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.323910 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.324628 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.325945 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.326062 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.323540 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.326274 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.326346 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: E0312 13:10:58.326460 4778 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 12 13:10:58 crc kubenswrapper[4778]: E0312 13:10:58.326592 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-12 13:10:58.826574015 +0000 UTC m=+77.275269461 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.326901 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.328344 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.328401 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.328431 4778 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.328610 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.328651 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). 
InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.328669 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.329017 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.328692 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.328950 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.329062 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.328961 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.329091 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.329673 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.329707 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.329828 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.329873 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.330020 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.330363 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.330825 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.330960 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.331089 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.331250 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.331509 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.331735 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.331860 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.331811 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.331924 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.331929 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.332261 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.332437 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.332513 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.332467 4778 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.332670 4778 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.332673 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.332701 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.332707 4778 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.332931 4778 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.332952 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.332963 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.332974 4778 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.332985 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.332996 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod 
"22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.333053 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.333087 4778 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.333101 4778 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.333112 4778 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.333123 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.333137 4778 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.333146 4778 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.333175 4778 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.333206 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.333217 4778 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.333229 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.333232 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.333242 4778 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.333256 4778 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.333290 4778 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.333302 4778 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.333311 4778 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.333323 4778 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.333332 4778 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 
13:10:58.333557 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.333760 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.334243 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.341694 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: E0312 13:10:58.343112 4778 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 13:10:58 crc kubenswrapper[4778]: E0312 13:10:58.343154 4778 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 13:10:58 crc kubenswrapper[4778]: E0312 13:10:58.343169 4778 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 13:10:58 crc kubenswrapper[4778]: E0312 13:10:58.343261 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-12 13:10:58.843239015 +0000 UTC m=+77.291934411 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.343550 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.344201 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.346434 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fhcz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-762lp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fhcz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 13:10:58 crc kubenswrapper[4778]: E0312 13:10:58.347491 4778 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 13:10:58 crc kubenswrapper[4778]: E0312 13:10:58.347604 4778 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 13:10:58 crc kubenswrapper[4778]: E0312 13:10:58.347679 4778 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.347738 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.347762 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.347717 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: E0312 13:10:58.348745 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-12 13:10:58.848705376 +0000 UTC m=+77.297400792 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.348832 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.354738 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.354797 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.355234 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.355252 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.355800 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.356200 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.356238 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.356350 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.356673 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.356833 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.356927 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). 
InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.357025 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.357482 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.356932 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.358764 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.359567 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.359849 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.359983 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.360211 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.360217 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.362525 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.360564 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.360616 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.360715 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.360888 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.360888 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.361140 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.361216 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.361285 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.362212 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sww7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de004a2f-3061-4aae-aa57-389219c71023\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sww7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.362642 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.361911 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.361928 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.362953 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.362253 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.362373 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.362487 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.362496 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.362534 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.362652 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.364465 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.366523 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.366657 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.366900 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.366910 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.367043 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.367220 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.367504 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.367521 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.368047 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.368289 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.369525 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.369879 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.369947 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.370107 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.370668 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.371000 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.371318 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.371374 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.371541 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.371626 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.371687 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.371917 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.372015 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.372093 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.372109 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.372411 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.372663 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.372816 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.372859 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.373269 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.373297 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.373323 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.373334 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.373375 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.373382 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.373522 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.373549 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24438fc6-dab0-4a9e-8b97-2532da76d9cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2qx88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.373927 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.373974 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.374075 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.374329 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.374429 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.374781 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.380404 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.380699 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.386227 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.391590 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.394925 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rz9vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b59b25a-3acc-4d06-b91d-575f45463520\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rz9vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.399691 4778 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.399754 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.399768 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.399786 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.399800 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:10:58Z","lastTransitionTime":"2026-03-12T13:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.404650 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.406781 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.408398 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.413248 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.421949 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.432966 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.434227 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/24438fc6-dab0-4a9e-8b97-2532da76d9cd-mcd-auth-proxy-config\") pod \"machine-config-daemon-2qx88\" (UID: \"24438fc6-dab0-4a9e-8b97-2532da76d9cd\") " pod="openshift-machine-config-operator/machine-config-daemon-2qx88" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 
13:10:58.434257 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/de004a2f-3061-4aae-aa57-389219c71023-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-sww7j\" (UID: \"de004a2f-3061-4aae-aa57-389219c71023\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sww7j" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.434275 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-762lp\" (UniqueName: \"kubernetes.io/projected/1e7037a8-a966-4df0-9f94-fe2dd3e2de6e-kube-api-access-762lp\") pod \"multus-fhcz6\" (UID: \"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\") " pod="openshift-multus/multus-fhcz6" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.434292 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-var-lib-openvswitch\") pod \"ovnkube-node-8bcc9\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.434307 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-run-systemd\") pod \"ovnkube-node-8bcc9\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.434323 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-node-log\") pod \"ovnkube-node-8bcc9\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.434339 4778 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/de004a2f-3061-4aae-aa57-389219c71023-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-sww7j\" (UID: \"de004a2f-3061-4aae-aa57-389219c71023\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sww7j" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.434362 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8a1f8eaa-ac07-4478-be5d-0742de6b43c4-os-release\") pod \"multus-additional-cni-plugins-rsshp\" (UID: \"8a1f8eaa-ac07-4478-be5d-0742de6b43c4\") " pod="openshift-multus/multus-additional-cni-plugins-rsshp" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.434377 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-log-socket\") pod \"ovnkube-node-8bcc9\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.434391 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-host-kubelet\") pod \"ovnkube-node-8bcc9\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.434404 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-host-cni-bin\") pod \"ovnkube-node-8bcc9\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.434425 4778 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-rhn9v\" (UniqueName: \"kubernetes.io/projected/24438fc6-dab0-4a9e-8b97-2532da76d9cd-kube-api-access-rhn9v\") pod \"machine-config-daemon-2qx88\" (UID: \"24438fc6-dab0-4a9e-8b97-2532da76d9cd\") " pod="openshift-machine-config-operator/machine-config-daemon-2qx88" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.434440 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8a1f8eaa-ac07-4478-be5d-0742de6b43c4-system-cni-dir\") pod \"multus-additional-cni-plugins-rsshp\" (UID: \"8a1f8eaa-ac07-4478-be5d-0742de6b43c4\") " pod="openshift-multus/multus-additional-cni-plugins-rsshp" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.434455 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1e7037a8-a966-4df0-9f94-fe2dd3e2de6e-etc-kubernetes\") pod \"multus-fhcz6\" (UID: \"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\") " pod="openshift-multus/multus-fhcz6" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.434469 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-ovnkube-script-lib\") pod \"ovnkube-node-8bcc9\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.434483 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/24438fc6-dab0-4a9e-8b97-2532da76d9cd-proxy-tls\") pod \"machine-config-daemon-2qx88\" (UID: \"24438fc6-dab0-4a9e-8b97-2532da76d9cd\") " pod="openshift-machine-config-operator/machine-config-daemon-2qx88" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.434497 4778 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/cfefcab6-a931-413e-8763-0f63f17911cd-serviceca\") pod \"node-ca-4dfhs\" (UID: \"cfefcab6-a931-413e-8763-0f63f17911cd\") " pod="openshift-image-registry/node-ca-4dfhs" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.434513 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b59b25a-3acc-4d06-b91d-575f45463520-metrics-certs\") pod \"network-metrics-daemon-rz9vw\" (UID: \"0b59b25a-3acc-4d06-b91d-575f45463520\") " pod="openshift-multus/network-metrics-daemon-rz9vw" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.434527 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1e7037a8-a966-4df0-9f94-fe2dd3e2de6e-multus-cni-dir\") pod \"multus-fhcz6\" (UID: \"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\") " pod="openshift-multus/multus-fhcz6" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.434541 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-env-overrides\") pod \"ovnkube-node-8bcc9\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.434555 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/7050ddd9-aa01-4af7-9046-208f85f50a86-hosts-file\") pod \"node-resolver-qdxm2\" (UID: \"7050ddd9-aa01-4af7-9046-208f85f50a86\") " pod="openshift-dns/node-resolver-qdxm2" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.434568 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/1e7037a8-a966-4df0-9f94-fe2dd3e2de6e-cnibin\") pod \"multus-fhcz6\" (UID: \"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\") " pod="openshift-multus/multus-fhcz6" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.434581 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/1e7037a8-a966-4df0-9f94-fe2dd3e2de6e-multus-socket-dir-parent\") pod \"multus-fhcz6\" (UID: \"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\") " pod="openshift-multus/multus-fhcz6" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.434597 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.434610 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-run-ovn\") pod \"ovnkube-node-8bcc9\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.434624 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-schvw\" (UniqueName: \"kubernetes.io/projected/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-kube-api-access-schvw\") pod \"ovnkube-node-8bcc9\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.434639 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1e7037a8-a966-4df0-9f94-fe2dd3e2de6e-host-run-netns\") pod \"multus-fhcz6\" 
(UID: \"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\") " pod="openshift-multus/multus-fhcz6" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.434653 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/1e7037a8-a966-4df0-9f94-fe2dd3e2de6e-host-var-lib-cni-multus\") pod \"multus-fhcz6\" (UID: \"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\") " pod="openshift-multus/multus-fhcz6" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.434667 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/1e7037a8-a966-4df0-9f94-fe2dd3e2de6e-multus-daemon-config\") pod \"multus-fhcz6\" (UID: \"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\") " pod="openshift-multus/multus-fhcz6" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.434681 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/1e7037a8-a966-4df0-9f94-fe2dd3e2de6e-hostroot\") pod \"multus-fhcz6\" (UID: \"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\") " pod="openshift-multus/multus-fhcz6" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.434695 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-systemd-units\") pod \"ovnkube-node-8bcc9\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.435294 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-etc-openvswitch\") pod \"ovnkube-node-8bcc9\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" Mar 12 13:10:58 crc 
kubenswrapper[4778]: I0312 13:10:58.434740 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1e7037a8-a966-4df0-9f94-fe2dd3e2de6e-etc-kubernetes\") pod \"multus-fhcz6\" (UID: \"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\") " pod="openshift-multus/multus-fhcz6" Mar 12 13:10:58 crc kubenswrapper[4778]: E0312 13:10:58.434788 4778 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 13:10:58 crc kubenswrapper[4778]: E0312 13:10:58.435502 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b59b25a-3acc-4d06-b91d-575f45463520-metrics-certs podName:0b59b25a-3acc-4d06-b91d-575f45463520 nodeName:}" failed. No retries permitted until 2026-03-12 13:10:58.935482845 +0000 UTC m=+77.384178241 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0b59b25a-3acc-4d06-b91d-575f45463520-metrics-certs") pod "network-metrics-daemon-rz9vw" (UID: "0b59b25a-3acc-4d06-b91d-575f45463520") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.434867 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-systemd-units\") pod \"ovnkube-node-8bcc9\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.434909 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/1e7037a8-a966-4df0-9f94-fe2dd3e2de6e-multus-socket-dir-parent\") pod \"multus-fhcz6\" (UID: \"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\") " pod="openshift-multus/multus-fhcz6" Mar 12 13:10:58 crc 
kubenswrapper[4778]: I0312 13:10:58.435526 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-env-overrides\") pod \"ovnkube-node-8bcc9\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.434950 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-run-ovn\") pod \"ovnkube-node-8bcc9\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.435560 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-etc-openvswitch\") pod \"ovnkube-node-8bcc9\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.435169 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1e7037a8-a966-4df0-9f94-fe2dd3e2de6e-host-run-netns\") pod \"multus-fhcz6\" (UID: \"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\") " pod="openshift-multus/multus-fhcz6" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.435222 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/1e7037a8-a966-4df0-9f94-fe2dd3e2de6e-host-var-lib-cni-multus\") pod \"multus-fhcz6\" (UID: \"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\") " pod="openshift-multus/multus-fhcz6" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.434931 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.435580 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-ovnkube-script-lib\") pod \"ovnkube-node-8bcc9\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.435564 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/24438fc6-dab0-4a9e-8b97-2532da76d9cd-mcd-auth-proxy-config\") pod \"machine-config-daemon-2qx88\" (UID: \"24438fc6-dab0-4a9e-8b97-2532da76d9cd\") " pod="openshift-machine-config-operator/machine-config-daemon-2qx88" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.434777 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1e7037a8-a966-4df0-9f94-fe2dd3e2de6e-cnibin\") pod \"multus-fhcz6\" (UID: \"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\") " pod="openshift-multus/multus-fhcz6" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.435626 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8a1f8eaa-ac07-4478-be5d-0742de6b43c4-os-release\") pod \"multus-additional-cni-plugins-rsshp\" (UID: \"8a1f8eaa-ac07-4478-be5d-0742de6b43c4\") " pod="openshift-multus/multus-additional-cni-plugins-rsshp" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.434839 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1e7037a8-a966-4df0-9f94-fe2dd3e2de6e-multus-cni-dir\") 
pod \"multus-fhcz6\" (UID: \"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\") " pod="openshift-multus/multus-fhcz6" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.435674 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/7050ddd9-aa01-4af7-9046-208f85f50a86-hosts-file\") pod \"node-resolver-qdxm2\" (UID: \"7050ddd9-aa01-4af7-9046-208f85f50a86\") " pod="openshift-dns/node-resolver-qdxm2" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.435694 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssbrl\" (UniqueName: \"kubernetes.io/projected/cfefcab6-a931-413e-8763-0f63f17911cd-kube-api-access-ssbrl\") pod \"node-ca-4dfhs\" (UID: \"cfefcab6-a931-413e-8763-0f63f17911cd\") " pod="openshift-image-registry/node-ca-4dfhs" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.435720 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jspwn\" (UniqueName: \"kubernetes.io/projected/7050ddd9-aa01-4af7-9046-208f85f50a86-kube-api-access-jspwn\") pod \"node-resolver-qdxm2\" (UID: \"7050ddd9-aa01-4af7-9046-208f85f50a86\") " pod="openshift-dns/node-resolver-qdxm2" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.435741 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1e7037a8-a966-4df0-9f94-fe2dd3e2de6e-system-cni-dir\") pod \"multus-fhcz6\" (UID: \"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\") " pod="openshift-multus/multus-fhcz6" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.435761 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1e7037a8-a966-4df0-9f94-fe2dd3e2de6e-host-var-lib-kubelet\") pod \"multus-fhcz6\" (UID: \"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\") " pod="openshift-multus/multus-fhcz6" Mar 12 13:10:58 
crc kubenswrapper[4778]: I0312 13:10:58.435781 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-ovn-node-metrics-cert\") pod \"ovnkube-node-8bcc9\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.435802 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8bcc9\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.435827 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8a1f8eaa-ac07-4478-be5d-0742de6b43c4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rsshp\" (UID: \"8a1f8eaa-ac07-4478-be5d-0742de6b43c4\") " pod="openshift-multus/multus-additional-cni-plugins-rsshp" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.435848 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/1e7037a8-a966-4df0-9f94-fe2dd3e2de6e-host-run-multus-certs\") pod \"multus-fhcz6\" (UID: \"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\") " pod="openshift-multus/multus-fhcz6" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.435857 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/1e7037a8-a966-4df0-9f94-fe2dd3e2de6e-multus-daemon-config\") pod \"multus-fhcz6\" (UID: \"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\") " pod="openshift-multus/multus-fhcz6" Mar 12 13:10:58 crc 
kubenswrapper[4778]: I0312 13:10:58.435878 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1e7037a8-a966-4df0-9f94-fe2dd3e2de6e-cni-binary-copy\") pod \"multus-fhcz6\" (UID: \"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\") " pod="openshift-multus/multus-fhcz6" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.435898 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/1e7037a8-a966-4df0-9f94-fe2dd3e2de6e-host-run-k8s-cni-cncf-io\") pod \"multus-fhcz6\" (UID: \"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\") " pod="openshift-multus/multus-fhcz6" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.435902 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/1e7037a8-a966-4df0-9f94-fe2dd3e2de6e-hostroot\") pod \"multus-fhcz6\" (UID: \"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\") " pod="openshift-multus/multus-fhcz6" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.435918 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-run-openvswitch\") pod \"ovnkube-node-8bcc9\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.435947 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8a1f8eaa-ac07-4478-be5d-0742de6b43c4-cni-binary-copy\") pod \"multus-additional-cni-plugins-rsshp\" (UID: \"8a1f8eaa-ac07-4478-be5d-0742de6b43c4\") " pod="openshift-multus/multus-additional-cni-plugins-rsshp" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.435967 4778 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1e7037a8-a966-4df0-9f94-fe2dd3e2de6e-os-release\") pod \"multus-fhcz6\" (UID: \"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\") " pod="openshift-multus/multus-fhcz6" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.435682 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-log-socket\") pod \"ovnkube-node-8bcc9\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.435989 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4hrx\" (UniqueName: \"kubernetes.io/projected/8a1f8eaa-ac07-4478-be5d-0742de6b43c4-kube-api-access-w4hrx\") pod \"multus-additional-cni-plugins-rsshp\" (UID: \"8a1f8eaa-ac07-4478-be5d-0742de6b43c4\") " pod="openshift-multus/multus-additional-cni-plugins-rsshp" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.436016 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1e7037a8-a966-4df0-9f94-fe2dd3e2de6e-multus-conf-dir\") pod \"multus-fhcz6\" (UID: \"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\") " pod="openshift-multus/multus-fhcz6" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.436041 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-host-run-ovn-kubernetes\") pod \"ovnkube-node-8bcc9\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.436066 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-host-cni-netd\") pod \"ovnkube-node-8bcc9\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.436093 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdj5w\" (UniqueName: \"kubernetes.io/projected/0b59b25a-3acc-4d06-b91d-575f45463520-kube-api-access-cdj5w\") pod \"network-metrics-daemon-rz9vw\" (UID: \"0b59b25a-3acc-4d06-b91d-575f45463520\") " pod="openshift-multus/network-metrics-daemon-rz9vw" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.436115 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-ovnkube-config\") pod \"ovnkube-node-8bcc9\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.436137 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/24438fc6-dab0-4a9e-8b97-2532da76d9cd-rootfs\") pod \"machine-config-daemon-2qx88\" (UID: \"24438fc6-dab0-4a9e-8b97-2532da76d9cd\") " pod="openshift-machine-config-operator/machine-config-daemon-2qx88" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.436156 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cfefcab6-a931-413e-8763-0f63f17911cd-host\") pod \"node-ca-4dfhs\" (UID: \"cfefcab6-a931-413e-8763-0f63f17911cd\") " pod="openshift-image-registry/node-ca-4dfhs" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.436177 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g92p9\" (UniqueName: 
\"kubernetes.io/projected/de004a2f-3061-4aae-aa57-389219c71023-kube-api-access-g92p9\") pod \"ovnkube-control-plane-749d76644c-sww7j\" (UID: \"de004a2f-3061-4aae-aa57-389219c71023\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sww7j" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.436224 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-host-slash\") pod \"ovnkube-node-8bcc9\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.436246 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8a1f8eaa-ac07-4478-be5d-0742de6b43c4-cnibin\") pod \"multus-additional-cni-plugins-rsshp\" (UID: \"8a1f8eaa-ac07-4478-be5d-0742de6b43c4\") " pod="openshift-multus/multus-additional-cni-plugins-rsshp" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.436272 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-run-systemd\") pod \"ovnkube-node-8bcc9\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.436315 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-host-cni-bin\") pod \"ovnkube-node-8bcc9\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.436317 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-node-log\") pod \"ovnkube-node-8bcc9\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.436358 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8a1f8eaa-ac07-4478-be5d-0742de6b43c4-system-cni-dir\") pod \"multus-additional-cni-plugins-rsshp\" (UID: \"8a1f8eaa-ac07-4478-be5d-0742de6b43c4\") " pod="openshift-multus/multus-additional-cni-plugins-rsshp" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.436359 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-host-kubelet\") pod \"ovnkube-node-8bcc9\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.436089 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-var-lib-openvswitch\") pod \"ovnkube-node-8bcc9\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.436411 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1e7037a8-a966-4df0-9f94-fe2dd3e2de6e-os-release\") pod \"multus-fhcz6\" (UID: \"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\") " pod="openshift-multus/multus-fhcz6" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.436451 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-host-run-ovn-kubernetes\") pod 
\"ovnkube-node-8bcc9\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.436506 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-host-cni-netd\") pod \"ovnkube-node-8bcc9\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.436517 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1e7037a8-a966-4df0-9f94-fe2dd3e2de6e-multus-conf-dir\") pod \"multus-fhcz6\" (UID: \"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\") " pod="openshift-multus/multus-fhcz6" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.436620 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cfefcab6-a931-413e-8763-0f63f17911cd-host\") pod \"node-ca-4dfhs\" (UID: \"cfefcab6-a931-413e-8763-0f63f17911cd\") " pod="openshift-image-registry/node-ca-4dfhs" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.436656 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/24438fc6-dab0-4a9e-8b97-2532da76d9cd-rootfs\") pod \"machine-config-daemon-2qx88\" (UID: \"24438fc6-dab0-4a9e-8b97-2532da76d9cd\") " pod="openshift-machine-config-operator/machine-config-daemon-2qx88" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.436687 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-host-slash\") pod \"ovnkube-node-8bcc9\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" Mar 12 13:10:58 crc kubenswrapper[4778]: 
I0312 13:10:58.436688 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/1e7037a8-a966-4df0-9f94-fe2dd3e2de6e-host-run-multus-certs\") pod \"multus-fhcz6\" (UID: \"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\") " pod="openshift-multus/multus-fhcz6" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.436735 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1e7037a8-a966-4df0-9f94-fe2dd3e2de6e-host-var-lib-kubelet\") pod \"multus-fhcz6\" (UID: \"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\") " pod="openshift-multus/multus-fhcz6" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.436796 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1e7037a8-a966-4df0-9f94-fe2dd3e2de6e-system-cni-dir\") pod \"multus-fhcz6\" (UID: \"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\") " pod="openshift-multus/multus-fhcz6" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.436852 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8bcc9\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.436268 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8a1f8eaa-ac07-4478-be5d-0742de6b43c4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rsshp\" (UID: \"8a1f8eaa-ac07-4478-be5d-0742de6b43c4\") " pod="openshift-multus/multus-additional-cni-plugins-rsshp" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.436989 4778 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/de004a2f-3061-4aae-aa57-389219c71023-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-sww7j\" (UID: \"de004a2f-3061-4aae-aa57-389219c71023\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sww7j" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.436974 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1e7037a8-a966-4df0-9f94-fe2dd3e2de6e-host-var-lib-cni-bin\") pod \"multus-fhcz6\" (UID: \"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\") " pod="openshift-multus/multus-fhcz6" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.437050 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-host-run-netns\") pod \"ovnkube-node-8bcc9\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.437052 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/1e7037a8-a966-4df0-9f94-fe2dd3e2de6e-host-run-k8s-cni-cncf-io\") pod \"multus-fhcz6\" (UID: \"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\") " pod="openshift-multus/multus-fhcz6" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.437070 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/de004a2f-3061-4aae-aa57-389219c71023-env-overrides\") pod \"ovnkube-control-plane-749d76644c-sww7j\" (UID: \"de004a2f-3061-4aae-aa57-389219c71023\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sww7j" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.437096 4778 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.437168 4778 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.437202 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.437218 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.437233 4778 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.437248 4778 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.437260 4778 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 
13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.437272 4778 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.437273 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-run-openvswitch\") pod \"ovnkube-node-8bcc9\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.437285 4778 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.437317 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.437322 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.437337 4778 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.437348 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/8a1f8eaa-ac07-4478-be5d-0742de6b43c4-cnibin\") pod \"multus-additional-cni-plugins-rsshp\" (UID: \"8a1f8eaa-ac07-4478-be5d-0742de6b43c4\") " pod="openshift-multus/multus-additional-cni-plugins-rsshp" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.437351 4778 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.437372 4778 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.437385 4778 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.437396 4778 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.437408 4778 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.437421 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.437433 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: 
\"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.437446 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.437458 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.437470 4778 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.437443 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1e7037a8-a966-4df0-9f94-fe2dd3e2de6e-host-var-lib-cni-bin\") pod \"multus-fhcz6\" (UID: \"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\") " pod="openshift-multus/multus-fhcz6" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.437500 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-host-run-netns\") pod \"ovnkube-node-8bcc9\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.437482 4778 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 
13:10:58.437621 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.437626 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8a1f8eaa-ac07-4478-be5d-0742de6b43c4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rsshp\" (UID: \"8a1f8eaa-ac07-4478-be5d-0742de6b43c4\") " pod="openshift-multus/multus-additional-cni-plugins-rsshp" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.437714 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1e7037a8-a966-4df0-9f94-fe2dd3e2de6e-cni-binary-copy\") pod \"multus-fhcz6\" (UID: \"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\") " pod="openshift-multus/multus-fhcz6" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.437636 4778 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.437574 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-ovnkube-config\") pod \"ovnkube-node-8bcc9\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.437767 4778 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc 
kubenswrapper[4778]: I0312 13:10:58.437790 4778 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.437805 4778 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.437821 4778 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.437836 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.438105 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/cfefcab6-a931-413e-8763-0f63f17911cd-serviceca\") pod \"node-ca-4dfhs\" (UID: \"cfefcab6-a931-413e-8763-0f63f17911cd\") " pod="openshift-image-registry/node-ca-4dfhs" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.438139 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/de004a2f-3061-4aae-aa57-389219c71023-env-overrides\") pod \"ovnkube-control-plane-749d76644c-sww7j\" (UID: \"de004a2f-3061-4aae-aa57-389219c71023\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sww7j" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.439067 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/8a1f8eaa-ac07-4478-be5d-0742de6b43c4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rsshp\" (UID: \"8a1f8eaa-ac07-4478-be5d-0742de6b43c4\") " pod="openshift-multus/multus-additional-cni-plugins-rsshp" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.440243 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8a1f8eaa-ac07-4478-be5d-0742de6b43c4-cni-binary-copy\") pod \"multus-additional-cni-plugins-rsshp\" (UID: \"8a1f8eaa-ac07-4478-be5d-0742de6b43c4\") " pod="openshift-multus/multus-additional-cni-plugins-rsshp" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.440738 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.440767 4778 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.440781 4778 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.440800 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.440812 4778 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: 
I0312 13:10:58.440898 4778 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.440914 4778 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.440927 4778 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.440944 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.440965 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.441238 4778 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.442489 4778 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.442510 4778 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.442529 4778 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.442543 4778 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.442558 4778 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.442571 4778 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.442589 4778 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.442653 4778 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.442664 4778 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.442677 4778 
reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.442689 4778 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.442700 4778 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.442709 4778 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.442719 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.442729 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.442738 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.442747 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: 
\"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.442756 4778 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.442767 4778 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.442777 4778 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.442787 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.442797 4778 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.442806 4778 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.442815 4778 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.442824 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.442833 4778 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.442842 4778 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.442851 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.442860 4778 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.442868 4778 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.442877 4778 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.442887 4778 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.442895 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.442904 4778 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.442912 4778 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.442922 4778 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.442931 4778 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.442941 4778 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.442950 4778 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.442959 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.442967 4778 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.442976 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.442985 4778 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.442993 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443002 4778 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" 
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443011 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443020 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443030 4778 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443038 4778 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443048 4778 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443056 4778 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443064 4778 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443072 4778 reconciler_common.go:293] 
"Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443082 4778 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443090 4778 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443098 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443110 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443119 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443129 4778 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443138 4778 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443147 4778 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443155 4778 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443163 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443172 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443201 4778 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443213 4778 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443222 4778 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node 
\"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443230 4778 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443240 4778 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443249 4778 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443257 4778 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443266 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443275 4778 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443283 4778 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443292 4778 
reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443327 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443337 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443345 4778 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443354 4778 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443363 4778 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443373 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443381 4778 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443390 4778 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443400 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443409 4778 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443418 4778 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443427 4778 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443436 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443445 4778 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443453 4778 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443463 4778 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443476 4778 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443484 4778 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443494 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443504 4778 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443513 4778 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443523 4778 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443531 4778 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443545 4778 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443553 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443561 4778 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443570 4778 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443579 4778 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443587 4778 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443595 4778 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443605 4778 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443613 4778 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443622 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443632 4778 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443639 4778 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443650 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443658 4778 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443667 4778 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443675 4778 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443684 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443692 4778 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443701 4778 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443709 4778 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443718 4778 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443726 4778 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443735 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443743 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443750 4778 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443761 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443769 4778 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443777 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443786 4778 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443795 4778 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.443803 4778 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.444171 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-ovn-node-metrics-cert\") pod \"ovnkube-node-8bcc9\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.447580 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/24438fc6-dab0-4a9e-8b97-2532da76d9cd-proxy-tls\") pod \"machine-config-daemon-2qx88\" (UID: \"24438fc6-dab0-4a9e-8b97-2532da76d9cd\") " pod="openshift-machine-config-operator/machine-config-daemon-2qx88"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.452578 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/de004a2f-3061-4aae-aa57-389219c71023-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-sww7j\" (UID: \"de004a2f-3061-4aae-aa57-389219c71023\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sww7j"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.454591 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssbrl\" (UniqueName: \"kubernetes.io/projected/cfefcab6-a931-413e-8763-0f63f17911cd-kube-api-access-ssbrl\") pod \"node-ca-4dfhs\" (UID: \"cfefcab6-a931-413e-8763-0f63f17911cd\") " pod="openshift-image-registry/node-ca-4dfhs"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.455248 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-schvw\" (UniqueName: \"kubernetes.io/projected/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-kube-api-access-schvw\") pod \"ovnkube-node-8bcc9\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.455990 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4hrx\" (UniqueName: \"kubernetes.io/projected/8a1f8eaa-ac07-4478-be5d-0742de6b43c4-kube-api-access-w4hrx\") pod \"multus-additional-cni-plugins-rsshp\" (UID: \"8a1f8eaa-ac07-4478-be5d-0742de6b43c4\") " pod="openshift-multus/multus-additional-cni-plugins-rsshp"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.456164 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdj5w\" (UniqueName: \"kubernetes.io/projected/0b59b25a-3acc-4d06-b91d-575f45463520-kube-api-access-cdj5w\") pod \"network-metrics-daemon-rz9vw\" (UID: \"0b59b25a-3acc-4d06-b91d-575f45463520\") " pod="openshift-multus/network-metrics-daemon-rz9vw"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.456415 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-762lp\" (UniqueName: \"kubernetes.io/projected/1e7037a8-a966-4df0-9f94-fe2dd3e2de6e-kube-api-access-762lp\") pod \"multus-fhcz6\" (UID: \"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\") " pod="openshift-multus/multus-fhcz6"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.456942 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhn9v\" (UniqueName: \"kubernetes.io/projected/24438fc6-dab0-4a9e-8b97-2532da76d9cd-kube-api-access-rhn9v\") pod \"machine-config-daemon-2qx88\" (UID: \"24438fc6-dab0-4a9e-8b97-2532da76d9cd\") " pod="openshift-machine-config-operator/machine-config-daemon-2qx88"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.458126 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jspwn\" (UniqueName: \"kubernetes.io/projected/7050ddd9-aa01-4af7-9046-208f85f50a86-kube-api-access-jspwn\") pod \"node-resolver-qdxm2\" (UID: \"7050ddd9-aa01-4af7-9046-208f85f50a86\") " pod="openshift-dns/node-resolver-qdxm2"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.459218 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g92p9\" (UniqueName: \"kubernetes.io/projected/de004a2f-3061-4aae-aa57-389219c71023-kube-api-access-g92p9\") pod \"ovnkube-control-plane-749d76644c-sww7j\" (UID: \"de004a2f-3061-4aae-aa57-389219c71023\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sww7j"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.502157 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.502224 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.502240 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.502261 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.502275 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:10:58Z","lastTransitionTime":"2026-03-12T13:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.550325 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.556568 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-2qx88"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.567411 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Mar 12 13:10:58 crc kubenswrapper[4778]: W0312 13:10:58.569014 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-1eda1cd4295e4701797cceb7174e91b8d6499a94d1e66ba504e25839618a7acf WatchSource:0}: Error finding container 1eda1cd4295e4701797cceb7174e91b8d6499a94d1e66ba504e25839618a7acf: Status 404 returned error can't find the container with id 1eda1cd4295e4701797cceb7174e91b8d6499a94d1e66ba504e25839618a7acf
Mar 12 13:10:58 crc kubenswrapper[4778]: W0312 13:10:58.570170 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24438fc6_dab0_4a9e_8b97_2532da76d9cd.slice/crio-9fb8cc13c65fc315644001720a3ed3b49ecd4b57157eaf28de06428d98a7432e WatchSource:0}: Error finding container 9fb8cc13c65fc315644001720a3ed3b49ecd4b57157eaf28de06428d98a7432e: Status 404 returned error can't find the container with id 9fb8cc13c65fc315644001720a3ed3b49ecd4b57157eaf28de06428d98a7432e
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.575618 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.584950 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9"
Mar 12 13:10:58 crc kubenswrapper[4778]: W0312 13:10:58.587472 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-1c02416320777aa48a416466c459c00560f0e4d96499a38198a553fa887aee3a WatchSource:0}: Error finding container 1c02416320777aa48a416466c459c00560f0e4d96499a38198a553fa887aee3a: Status 404 returned error can't find the container with id 1c02416320777aa48a416466c459c00560f0e4d96499a38198a553fa887aee3a
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.593360 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-fhcz6"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.599419 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sww7j"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.604853 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-qdxm2"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.605935 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.605972 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.605994 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.606020 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.606041 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:10:58Z","lastTransitionTime":"2026-03-12T13:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 13:10:58 crc kubenswrapper[4778]: W0312 13:10:58.605926 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-4e8f021e062407652b69c8ca2911a1a25e12ea0074598fd72d17a5661a7e9d4d WatchSource:0}: Error finding container 4e8f021e062407652b69c8ca2911a1a25e12ea0074598fd72d17a5661a7e9d4d: Status 404 returned error can't find the container with id 4e8f021e062407652b69c8ca2911a1a25e12ea0074598fd72d17a5661a7e9d4d
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.614859 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-4dfhs"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.617760 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-rsshp"
Mar 12 13:10:58 crc kubenswrapper[4778]: W0312 13:10:58.631291 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65cd795e_eb6e_4995_a4c1_9dea6f425ac5.slice/crio-591e87d9e47004fc9c6fc7b24484cec488177d8e0820b4787eb9618d9e5051df WatchSource:0}: Error finding container 591e87d9e47004fc9c6fc7b24484cec488177d8e0820b4787eb9618d9e5051df: Status 404 returned error can't find the container with id 591e87d9e47004fc9c6fc7b24484cec488177d8e0820b4787eb9618d9e5051df
Mar 12 13:10:58 crc kubenswrapper[4778]: W0312 13:10:58.662220 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e7037a8_a966_4df0_9f94_fe2dd3e2de6e.slice/crio-99cbed59a9719cac7008be58a65d409090b8ab2da26045a2ec67cfe3d360061a WatchSource:0}: Error finding container 99cbed59a9719cac7008be58a65d409090b8ab2da26045a2ec67cfe3d360061a: Status 404 returned error can't find the container with id 99cbed59a9719cac7008be58a65d409090b8ab2da26045a2ec67cfe3d360061a
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.710774 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.710818 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.710832 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.710849 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.710863 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:10:58Z","lastTransitionTime":"2026-03-12T13:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.813801 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.813831 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.813840 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.813854 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.813863 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:10:58Z","lastTransitionTime":"2026-03-12T13:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.849141 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.849279 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 12 13:10:58 crc kubenswrapper[4778]: E0312 13:10:58.849322 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 13:10:59.849289384 +0000 UTC m=+78.297984790 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 12 13:10:58 crc kubenswrapper[4778]: E0312 13:10:58.849360 4778 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.849368 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 12 13:10:58 crc kubenswrapper[4778]: E0312 13:10:58.849419 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-12 13:10:59.849403667 +0000 UTC m=+78.298099123 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.849461 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.849493 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 12 13:10:58 crc kubenswrapper[4778]: E0312 13:10:58.849562 4778 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 12 13:10:58 crc kubenswrapper[4778]: E0312 13:10:58.849579 4778 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 12 13:10:58 crc kubenswrapper[4778]: E0312 13:10:58.849591 4778 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 12 13:10:58 crc kubenswrapper[4778]: E0312 13:10:58.849596 4778 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Mar 12 13:10:58 crc kubenswrapper[4778]: E0312 13:10:58.849600 4778 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 12 13:10:58 crc kubenswrapper[4778]: E0312 13:10:58.849652 4778 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 12 13:10:58 crc kubenswrapper[4778]: E0312 13:10:58.849663 4778 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 12 13:10:58 crc kubenswrapper[4778]: E0312 13:10:58.849630 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-12 13:10:59.849619353 +0000 UTC m=+78.298314989 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 12 13:10:58 crc kubenswrapper[4778]: E0312 13:10:58.849708 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-12 13:10:59.849695985 +0000 UTC m=+78.298391381 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Mar 12 13:10:58 crc kubenswrapper[4778]: E0312 13:10:58.849721 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-12 13:10:59.849713895 +0000 UTC m=+78.298409291 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.917261 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.917310 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.917318 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.917333 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.917913 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:10:58Z","lastTransitionTime":"2026-03-12T13:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 13:10:58 crc kubenswrapper[4778]: E0312 13:10:58.950207 4778 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 12 13:10:58 crc kubenswrapper[4778]: I0312 13:10:58.950963 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b59b25a-3acc-4d06-b91d-575f45463520-metrics-certs\") pod \"network-metrics-daemon-rz9vw\" (UID: \"0b59b25a-3acc-4d06-b91d-575f45463520\") " pod="openshift-multus/network-metrics-daemon-rz9vw"
Mar 12 13:10:58 crc kubenswrapper[4778]: E0312 13:10:58.951030 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b59b25a-3acc-4d06-b91d-575f45463520-metrics-certs podName:0b59b25a-3acc-4d06-b91d-575f45463520 nodeName:}" failed. No retries permitted until 2026-03-12 13:10:59.951012749 +0000 UTC m=+78.399708145 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0b59b25a-3acc-4d06-b91d-575f45463520-metrics-certs") pod "network-metrics-daemon-rz9vw" (UID: "0b59b25a-3acc-4d06-b91d-575f45463520") : object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.022720 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.022874 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.022954 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.023041 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.023112 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:10:59Z","lastTransitionTime":"2026-03-12T13:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.125805 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.125842 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.125850 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.125864 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.125875 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:10:59Z","lastTransitionTime":"2026-03-12T13:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.239607 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.239639 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.239650 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.239666 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.239681 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:10:59Z","lastTransitionTime":"2026-03-12T13:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.342349 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.342414 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.342433 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.342454 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.342465 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:10:59Z","lastTransitionTime":"2026-03-12T13:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.445028 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.445061 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.445071 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.445083 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.445398 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:10:59Z","lastTransitionTime":"2026-03-12T13:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.548394 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.548468 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.548486 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.548882 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.549120 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:10:59Z","lastTransitionTime":"2026-03-12T13:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.566113 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"1c02416320777aa48a416466c459c00560f0e4d96499a38198a553fa887aee3a"} Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.568229 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-qdxm2" event={"ID":"7050ddd9-aa01-4af7-9046-208f85f50a86","Type":"ContainerStarted","Data":"9af31ab4c27bb06d5a44a1c279e04f1b6f243054e271214ef771db4f0dc65e3b"} Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.568266 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-qdxm2" event={"ID":"7050ddd9-aa01-4af7-9046-208f85f50a86","Type":"ContainerStarted","Data":"ebdcb6fb90e54db80d802a165c170454f2513f546e2fcead9989de5a3a3734f4"} Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.570670 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fhcz6" event={"ID":"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e","Type":"ContainerStarted","Data":"5da98f94c85e3a8cd05c447fb097a078968eea25419a2b22f8abe956ef1dbaac"} Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.570727 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fhcz6" event={"ID":"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e","Type":"ContainerStarted","Data":"99cbed59a9719cac7008be58a65d409090b8ab2da26045a2ec67cfe3d360061a"} Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.572689 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"fb7a47e7099405d73886322b00b013bedee4fb573fa60c9b92d6be3311e65c17"} Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.572737 4778 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"4e8f021e062407652b69c8ca2911a1a25e12ea0074598fd72d17a5661a7e9d4d"} Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.574484 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sww7j" event={"ID":"de004a2f-3061-4aae-aa57-389219c71023","Type":"ContainerStarted","Data":"d377b0d5d0a854761257d7bc21a111aed96f85d302bf0c024e021f04cc555fa6"} Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.574515 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sww7j" event={"ID":"de004a2f-3061-4aae-aa57-389219c71023","Type":"ContainerStarted","Data":"478fb92ee4748af677ac761928a4173b506a3e56cf622279e2b2a0e322d4aef8"} Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.574528 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sww7j" event={"ID":"de004a2f-3061-4aae-aa57-389219c71023","Type":"ContainerStarted","Data":"96a2eabb09f2f7cd28d7f361a5a9eae956e1a74ad6b85c92f5ec73e7a303d94e"} Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.575919 4778 generic.go:334] "Generic (PLEG): container finished" podID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" containerID="ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e" exitCode=0 Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.576007 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" event={"ID":"65cd795e-eb6e-4995-a4c1-9dea6f425ac5","Type":"ContainerDied","Data":"ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e"} Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.576040 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" event={"ID":"65cd795e-eb6e-4995-a4c1-9dea6f425ac5","Type":"ContainerStarted","Data":"591e87d9e47004fc9c6fc7b24484cec488177d8e0820b4787eb9618d9e5051df"} Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.578998 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"c4059dae21c8267dcec17364a3073a0f25addb6c308620992e9e609b5f5a32e7"} Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.579047 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"4c7ffc17b778f7bd099f0cc70b4e8bcfd77f9d45a9a47de9fedbe270a49f2826"} Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.579063 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"1eda1cd4295e4701797cceb7174e91b8d6499a94d1e66ba504e25839618a7acf"} Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.582879 4778 generic.go:334] "Generic (PLEG): container finished" podID="8a1f8eaa-ac07-4478-be5d-0742de6b43c4" containerID="f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057" exitCode=0 Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.583022 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rsshp" event={"ID":"8a1f8eaa-ac07-4478-be5d-0742de6b43c4","Type":"ContainerDied","Data":"f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057"} Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.583070 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rsshp" 
event={"ID":"8a1f8eaa-ac07-4478-be5d-0742de6b43c4","Type":"ContainerStarted","Data":"d9ed48aead91feac56fb2c82c53067915d8ab5047b52c7c84d4c0fa72f58c6ae"} Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.586824 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-4dfhs" event={"ID":"cfefcab6-a931-413e-8763-0f63f17911cd","Type":"ContainerStarted","Data":"eecca419cc264b25f1883aac864cc545f0daf973e3b288bc8ea00a8b91e1f124"} Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.587261 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-4dfhs" event={"ID":"cfefcab6-a931-413e-8763-0f63f17911cd","Type":"ContainerStarted","Data":"7b6dfac531975a0fc0d0d91f8f34d5737bae2604221aa20505e2c947f0e29ecd"} Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.588298 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:59Z is after 2025-08-24T17:21:41Z" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.588806 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" event={"ID":"24438fc6-dab0-4a9e-8b97-2532da76d9cd","Type":"ContainerStarted","Data":"a19a882eebff25a2613c68847fcf737648da24f5c8d7648edebb2cb00b6b8950"} Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.588892 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" 
event={"ID":"24438fc6-dab0-4a9e-8b97-2532da76d9cd","Type":"ContainerStarted","Data":"14daba92184fca91c6930d5b3e821f88408e0fd40a7793f2d70f82df7c9444ce"} Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.588914 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" event={"ID":"24438fc6-dab0-4a9e-8b97-2532da76d9cd","Type":"ContainerStarted","Data":"9fb8cc13c65fc315644001720a3ed3b49ecd4b57157eaf28de06428d98a7432e"} Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.606835 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fhcz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-762lp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fhcz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:59Z is after 2025-08-24T17:21:41Z" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.619515 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sww7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de004a2f-3061-4aae-aa57-389219c71023\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sww7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:59Z is after 2025-08-24T17:21:41Z" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.633062 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dfhs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfefcab6-a931-413e-8763-0f63f17911cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssbrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dfhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:59Z is after 2025-08-24T17:21:41Z" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.644371 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:59Z is after 2025-08-24T17:21:41Z" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.653536 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.653561 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.653571 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.653585 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.653594 4778 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:10:59Z","lastTransitionTime":"2026-03-12T13:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.655160 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24438fc6-dab0-4a9e-8b97-2532da76d9cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2qx88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:59Z is after 2025-08-24T17:21:41Z" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.671470 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:59Z is after 2025-08-24T17:21:41Z" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.686525 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:59Z is after 2025-08-24T17:21:41Z" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.698540 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:59Z is after 2025-08-24T17:21:41Z" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.710204 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:59Z is after 2025-08-24T17:21:41Z" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.720899 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rz9vw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b59b25a-3acc-4d06-b91d-575f45463520\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rz9vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:59Z is after 2025-08-24T17:21:41Z" Mar 12 13:10:59 crc 
kubenswrapper[4778]: I0312 13:10:59.738305 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bcc9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:59Z is after 2025-08-24T17:21:41Z" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.747764 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qdxm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7050ddd9-aa01-4af7-9046-208f85f50a86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9af31ab4c27bb06d5a44a1c279e04f1b6f243054e271214ef771db4f0dc65e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T
13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jspwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qdxm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:59Z is after 2025-08-24T17:21:41Z" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.757938 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.757961 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.757987 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.758001 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.758013 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:10:59Z","lastTransitionTime":"2026-03-12T13:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.761639 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rsshp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a1f8eaa-ac07-4478-be5d-0742de6b43c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rsshp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:59Z is after 2025-08-24T17:21:41Z" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.780356 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-con
troller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bcc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:59Z is after 2025-08-24T17:21:41Z" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.790818 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qdxm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7050ddd9-aa01-4af7-9046-208f85f50a86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9af31ab4c27bb06d
5a44a1c279e04f1b6f243054e271214ef771db4f0dc65e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jspwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qdxm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:59Z is after 2025-08-24T17:21:41Z" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.803717 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rsshp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a1f8eaa-ac07-4478-be5d-0742de6b43c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rsshp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:59Z is after 2025-08-24T17:21:41Z" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.815441 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:59Z is after 2025-08-24T17:21:41Z" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.830507 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fhcz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5da98f94c85e3a8cd05c447fb097a078968eea25419a2b22f8abe956ef1dbaac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-762lp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fhcz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:59Z is after 2025-08-24T17:21:41Z" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.844853 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sww7j" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de004a2f-3061-4aae-aa57-389219c71023\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478fb92ee4748af677ac761928a4173b506a3e56cf622279e2b2a0e322d4aef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d377b0d5d
0a854761257d7bc21a111aed96f85d302bf0c024e021f04cc555fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sww7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:59Z is after 2025-08-24T17:21:41Z" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.853730 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dfhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfefcab6-a931-413e-8763-0f63f17911cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eecca419cc264b25f1883aac864cc545f0daf973e3b288bc8ea00a8b91e1f124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssbrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dfhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:59Z is after 2025-08-24T17:21:41Z" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.860001 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.860036 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.860049 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.860065 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.860078 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:10:59Z","lastTransitionTime":"2026-03-12T13:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.862511 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 13:10:59 crc kubenswrapper[4778]: E0312 13:10:59.862680 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 13:11:01.862652486 +0000 UTC m=+80.311347882 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.862681 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.862762 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 13:10:59 crc kubenswrapper[4778]: E0312 13:10:59.862776 4778 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.862816 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:10:59 crc kubenswrapper[4778]: E0312 13:10:59.862837 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-12 13:11:01.86282103 +0000 UTC m=+80.311516436 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.862860 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 13:10:59 crc kubenswrapper[4778]: E0312 13:10:59.862881 4778 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 12 13:10:59 crc kubenswrapper[4778]: E0312 13:10:59.862917 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-12 13:11:01.862908463 +0000 UTC m=+80.311603859 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 12 13:10:59 crc kubenswrapper[4778]: E0312 13:10:59.862972 4778 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 13:10:59 crc kubenswrapper[4778]: E0312 13:10:59.862986 4778 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 13:10:59 crc kubenswrapper[4778]: E0312 13:10:59.862989 4778 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 13:10:59 crc kubenswrapper[4778]: E0312 13:10:59.862999 4778 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 13:10:59 crc kubenswrapper[4778]: E0312 13:10:59.863004 4778 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 13:10:59 crc kubenswrapper[4778]: E0312 13:10:59.863017 4778 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 13:10:59 crc kubenswrapper[4778]: E0312 13:10:59.863030 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-12 13:11:01.863020026 +0000 UTC m=+80.311715522 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 13:10:59 crc kubenswrapper[4778]: E0312 13:10:59.863045 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-12 13:11:01.863037066 +0000 UTC m=+80.311732552 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.865944 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:59Z is after 2025-08-24T17:21:41Z" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.877242 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24438fc6-dab0-4a9e-8b97-2532da76d9cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19a882eebff25a2613c68847fcf737648da24f5c8d7648edebb2cb00b6b8950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14daba92184fca91c6930d5b3e821f88408e0fd4
0a7793f2d70f82df7c9444ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2qx88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:59Z is after 2025-08-24T17:21:41Z" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.887951 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4059dae21c8267dcec17364a3073a0f25addb6c308620992e9e609b5f5a32e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7ffc17b778f7bd099f0cc70b4e8bcfd77f9d45a9a47de9fedbe270a49f2826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:59Z is after 2025-08-24T17:21:41Z" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.898243 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:59Z is after 2025-08-24T17:21:41Z" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.910297 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:59Z is after 2025-08-24T17:21:41Z" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.922004 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb7a47e7099405d73886322b00b013bedee4fb573fa60c9b92d6be3311e65c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:59Z is after 2025-08-24T17:21:41Z" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.935508 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rz9vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b59b25a-3acc-4d06-b91d-575f45463520\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rz9vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:59Z is after 2025-08-24T17:21:41Z" Mar 12 13:10:59 crc 
kubenswrapper[4778]: I0312 13:10:59.962144 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.962220 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.962241 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.962264 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.962279 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:10:59Z","lastTransitionTime":"2026-03-12T13:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:10:59 crc kubenswrapper[4778]: I0312 13:10:59.963440 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b59b25a-3acc-4d06-b91d-575f45463520-metrics-certs\") pod \"network-metrics-daemon-rz9vw\" (UID: \"0b59b25a-3acc-4d06-b91d-575f45463520\") " pod="openshift-multus/network-metrics-daemon-rz9vw" Mar 12 13:10:59 crc kubenswrapper[4778]: E0312 13:10:59.963538 4778 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 13:10:59 crc kubenswrapper[4778]: E0312 13:10:59.963584 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b59b25a-3acc-4d06-b91d-575f45463520-metrics-certs podName:0b59b25a-3acc-4d06-b91d-575f45463520 nodeName:}" failed. No retries permitted until 2026-03-12 13:11:01.963572321 +0000 UTC m=+80.412267717 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0b59b25a-3acc-4d06-b91d-575f45463520-metrics-certs") pod "network-metrics-daemon-rz9vw" (UID: "0b59b25a-3acc-4d06-b91d-575f45463520") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.065128 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.065223 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.065248 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.065276 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.065297 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:00Z","lastTransitionTime":"2026-03-12T13:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.168136 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.168204 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.168214 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.168228 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.168237 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:00Z","lastTransitionTime":"2026-03-12T13:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.252955 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rz9vw" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.253106 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.253102 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:11:00 crc kubenswrapper[4778]: E0312 13:11:00.253354 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.253510 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 13:11:00 crc kubenswrapper[4778]: E0312 13:11:00.253690 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rz9vw" podUID="0b59b25a-3acc-4d06-b91d-575f45463520" Mar 12 13:11:00 crc kubenswrapper[4778]: E0312 13:11:00.253840 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 13:11:00 crc kubenswrapper[4778]: E0312 13:11:00.254001 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.256936 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.257622 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.258848 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.259491 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.260491 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.261063 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.261681 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.262608 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.263283 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.264142 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.264654 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.265661 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.266137 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.266650 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.267565 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.268055 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.268958 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.269384 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.269936 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.271918 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.272174 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.272311 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:00 crc 
kubenswrapper[4778]: I0312 13:11:00.272334 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.272364 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.272388 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:00Z","lastTransitionTime":"2026-03-12T13:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.273481 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.275073 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.276363 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.278305 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.279699 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.282176 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.283655 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.284475 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.285426 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.286207 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.286939 4778 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.287098 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.289079 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.289880 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.290638 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.293358 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.294536 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.296078 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.297039 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.298441 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.299491 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.300613 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.301740 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.302826 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.303685 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.305235 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.306046 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.307041 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.308210 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.310118 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.310856 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.312258 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.312918 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.314008 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.321730 4778 scope.go:117] "RemoveContainer" containerID="14c7f2ade3aac502f0534414554216096b45d4f78e81f8ec213064a6205efdbd" Mar 12 13:11:00 crc kubenswrapper[4778]: E0312 13:11:00.321941 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.323416 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.374958 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.375017 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.375035 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.375061 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.375079 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:00Z","lastTransitionTime":"2026-03-12T13:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.477731 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.477975 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.478124 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.478257 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.478352 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:00Z","lastTransitionTime":"2026-03-12T13:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.580923 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.581388 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.581487 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.581616 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.581740 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:00Z","lastTransitionTime":"2026-03-12T13:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.596278 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" event={"ID":"65cd795e-eb6e-4995-a4c1-9dea6f425ac5","Type":"ContainerStarted","Data":"1325a4d5784843f5ef2dc629d3410d154ea07de5cd70cfae54d7111fe3e1ea3e"} Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.596314 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" event={"ID":"65cd795e-eb6e-4995-a4c1-9dea6f425ac5","Type":"ContainerStarted","Data":"8f7250fa81a99a607a87f5f2fe4a85fcc2afbf7bfc12845bec4b5cbbed7784c4"} Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.596324 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" event={"ID":"65cd795e-eb6e-4995-a4c1-9dea6f425ac5","Type":"ContainerStarted","Data":"1d8f2c481e6f3f8845b5e195a7fde8ae6415d25fb1701c25d93be7af1f4ef8f6"} Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.596333 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" event={"ID":"65cd795e-eb6e-4995-a4c1-9dea6f425ac5","Type":"ContainerStarted","Data":"b22f94192e4eee991a699c32f338a8d452d8fc0ce5dfa1197694a237697c4500"} Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.597930 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rsshp" event={"ID":"8a1f8eaa-ac07-4478-be5d-0742de6b43c4","Type":"ContainerStarted","Data":"deca991c71450137550fdb82a01b81aaa63e6be64a6d7a96438f6b3d83a8bb5f"} Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.598807 4778 scope.go:117] "RemoveContainer" containerID="14c7f2ade3aac502f0534414554216096b45d4f78e81f8ec213064a6205efdbd" Mar 12 13:11:00 crc kubenswrapper[4778]: E0312 13:11:00.599055 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.610773 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:00Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.625593 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24438fc6-dab0-4a9e-8b97-2532da76d9cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19a882eebff25a2613c68847fcf737648da24f5c8d7648edebb2cb00b6b8950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14daba92184fca91c6930d5b3e821f88408e0fd4
0a7793f2d70f82df7c9444ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2qx88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:00Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.641439 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:00Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.658717 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb7a47e7099405d73886322b00b013bedee4fb573fa60c9b92d6be3311e65c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:00Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.670995 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rz9vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b59b25a-3acc-4d06-b91d-575f45463520\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rz9vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:00Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:00 crc 
kubenswrapper[4778]: I0312 13:11:00.683701 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.683742 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.683753 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.683769 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.683787 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:00Z","lastTransitionTime":"2026-03-12T13:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.684887 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4059dae21c8267dcec17364a3073a0f25addb6c308620992e9e609b5f5a32e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7ffc17b778f7bd099f0cc70b4e8bcfd77f9d45a9a47de9fedbe270a49f2826\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:00Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.695287 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:00Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.706563 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qdxm2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7050ddd9-aa01-4af7-9046-208f85f50a86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9af31ab4c27bb06d5a44a1c279e04f1b6f243054e271214ef771db4f0dc65e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jspwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qdxm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:00Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.727162 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rsshp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a1f8eaa-ac07-4478-be5d-0742de6b43c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deca991c71450137550fdb82a01b81aaa63e6be64a6d7a96438f6b3d83a8bb5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rsshp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:00Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.744894 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d15dec8c-5c3e-4103-a5b1-6ee7ff5990ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f640289dea724d5668fc009d628345ea104b2bbc9bc3471e42c3ec5f9acada1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc7259359df220c534d265305ee3ca44e7bcdce8da0d8b164132e02f7ed72e51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d60adb329e51ce7d877de68c1386f904ef0f717c82a5bfb69ab18438a4e536a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c7f2ade3aac502f0534414554216096b45d4f78e81f8ec213064a6205efdbd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c7f2ade3aac502f0534414554216096b45d4f78e81f8ec213064a6205efdbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T13:10:43Z\\\",\\\"message\\\":\\\"file observer\\\\nW0312 13:10:42.840582 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 13:10:42.841010 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 13:10:42.843036 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-461564172/tls.crt::/tmp/serving-cert-461564172/tls.key\\\\\\\"\\\\nI0312 13:10:43.350873 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 13:10:43.364662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 13:10:43.364721 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 13:10:43.365498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 13:10:43.365555 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 13:10:43.376143 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 13:10:43.376224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 13:10:43.376255 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 13:10:43.376279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 13:10:43.376301 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 13:10:43.376324 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 13:10:43.376350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 13:10:43.376614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 13:10:43.379532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfb81ab3f0178dc8064bd278e9e5cc42b3b2fda7282bb869d2f385b423e57d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e64aa9b1a15198d88b5f38b8ad0abdeef89430869b6f25c73e2f45806c539964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64aa9b1a15198d88b5f38b8ad0abdeef89430869b6f25c73e2f45806c539964\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:00Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.772792 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bcc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:00Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.786174 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.786414 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.786481 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.786553 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.786630 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:00Z","lastTransitionTime":"2026-03-12T13:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.787634 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fhcz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5da98f94c85e3a8cd05c447fb097a078968eea25419a2b22f8abe956ef1dbaac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-762lp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fhcz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:00Z 
is after 2025-08-24T17:21:41Z" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.800709 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sww7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de004a2f-3061-4aae-aa57-389219c71023\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478fb92ee4748af677ac761928a4173b506a3e56cf622279e2b2a0e322d4aef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d377b0d5d0a854761257d7bc21a111aed96f85d302bf0c024e021f04cc555fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sww7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:00Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.825121 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dfhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfefcab6-a931-413e-8763-0f63f17911cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eecca419cc264b25f1883aac864cc545f0daf973e3b288bc8ea00a8b91e1f124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssbrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dfhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:00Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.842856 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:00Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.890387 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.890427 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.890440 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.890459 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.890472 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:00Z","lastTransitionTime":"2026-03-12T13:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.992717 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.992761 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.992771 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.992786 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:00 crc kubenswrapper[4778]: I0312 13:11:00.992797 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:00Z","lastTransitionTime":"2026-03-12T13:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.094702 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.094733 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.094749 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.094767 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.094778 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:01Z","lastTransitionTime":"2026-03-12T13:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.196778 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.197049 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.197060 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.197097 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.197107 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:01Z","lastTransitionTime":"2026-03-12T13:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.299755 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.300009 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.300104 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.300201 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.300270 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:01Z","lastTransitionTime":"2026-03-12T13:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.403766 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.404003 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.404088 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.404163 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.404269 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:01Z","lastTransitionTime":"2026-03-12T13:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.507522 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.507563 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.507577 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.507596 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.507608 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:01Z","lastTransitionTime":"2026-03-12T13:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.603384 4778 generic.go:334] "Generic (PLEG): container finished" podID="8a1f8eaa-ac07-4478-be5d-0742de6b43c4" containerID="deca991c71450137550fdb82a01b81aaa63e6be64a6d7a96438f6b3d83a8bb5f" exitCode=0 Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.603505 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rsshp" event={"ID":"8a1f8eaa-ac07-4478-be5d-0742de6b43c4","Type":"ContainerDied","Data":"deca991c71450137550fdb82a01b81aaa63e6be64a6d7a96438f6b3d83a8bb5f"} Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.608004 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" event={"ID":"65cd795e-eb6e-4995-a4c1-9dea6f425ac5","Type":"ContainerStarted","Data":"78da788cc1d96e866afcf18edff24c064972e022b1b4c3f5bec3175da5e989fa"} Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.608042 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" event={"ID":"65cd795e-eb6e-4995-a4c1-9dea6f425ac5","Type":"ContainerStarted","Data":"2818aadc56c24df41309aa63fddf44dac870f041f206d59db4d8b8f88f728483"} Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.608959 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.609000 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.609011 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.609030 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 
13:11:01.609045 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:01Z","lastTransitionTime":"2026-03-12T13:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.621378 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:01Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.635934 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24438fc6-dab0-4a9e-8b97-2532da76d9cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19a882eebff25a2613c68847fcf737648da24f5c8d7648edebb2cb00b6b8950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14daba92184fca91c6930d5b3e821f88408e0fd4
0a7793f2d70f82df7c9444ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2qx88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:01Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.649089 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4059dae21c8267dcec17364a3073a0f25addb6c308620992e9e609b5f5a32e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7ffc17b778f7bd099f0cc70b4e8bcfd77f9d45a9a47de9fedbe270a49f2826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:01Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.660813 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:01Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.674112 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:01Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.688332 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb7a47e7099405d73886322b00b013bedee4fb573fa60c9b92d6be3311e65c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:01Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.698492 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rz9vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b59b25a-3acc-4d06-b91d-575f45463520\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rz9vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:01Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:01 crc 
kubenswrapper[4778]: I0312 13:11:01.711432 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.711485 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.711495 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.711509 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.711521 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:01Z","lastTransitionTime":"2026-03-12T13:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.712509 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d15dec8c-5c3e-4103-a5b1-6ee7ff5990ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f640289dea724d5668fc009d628345ea104b2bbc9bc3471e42c3ec5f9acada1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc7259359df220c534d265305ee3ca44e7bcdce8da0d8b164132e02f7ed72e51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2d60adb329e51ce7d877de68c1386f904ef0f717c82a5bfb69ab18438a4e536a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c7f2ade3aac502f0534414554216096b45d4f78e81f8ec213064a6205efdbd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c7f2ade3aac502f0534414554216096b45d4f78e81f8ec213064a6205efdbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T13:10:43Z\\\",\\\"message\\\":\\\"file observer\\\\nW0312 13:10:42.840582 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 13:10:42.841010 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 13:10:42.843036 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-461564172/tls.crt::/tmp/serving-cert-461564172/tls.key\\\\\\\"\\\\nI0312 13:10:43.350873 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 13:10:43.364662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 13:10:43.364721 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 13:10:43.365498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 13:10:43.365555 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 13:10:43.376143 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 13:10:43.376224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 13:10:43.376255 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 13:10:43.376279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 13:10:43.376301 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 13:10:43.376324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 13:10:43.376350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 13:10:43.376614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 13:10:43.379532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfb81ab3f0178dc8064bd278e9e5cc42b3b2fda7282bb869d2f385b423e57d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e64aa9b1a15198d88b5f38b8ad0abdeef89430869b6f25c73e2f45806c539964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64aa9b1a15198d88b5f38b8ad0abdeef89430869b6f25c73e2f45806c539964\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:01Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.731629 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wai
ting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\
\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bcc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:01Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.744077 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qdxm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7050ddd9-aa01-4af7-9046-208f85f50a86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9af31ab4c27bb06d5a44a1c279e04f1b6f243054e271214ef771db4f0dc65e3b\\\",
\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jspwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qdxm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:01Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.757891 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rsshp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a1f8eaa-ac07-4478-be5d-0742de6b43c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deca991c71450137550fdb82a01b81aaa63e6be64a6d7a96438f6b3d83a8bb5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://deca991c71450137550fdb82a01b81aaa63e6be64a6d7a96438f6b3d83a8bb5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rsshp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:01Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.771260 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:01Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.778681 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.778711 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.778723 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.778739 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.778747 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:01Z","lastTransitionTime":"2026-03-12T13:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.788680 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fhcz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5da98f94c85e3a8cd05c447fb097a078968eea25419a2b22f8abe956ef1dbaac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-762lp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fhcz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:01Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:01 crc kubenswrapper[4778]: E0312 13:11:01.789252 4778 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9825271f-f529-4477-b3b1-2a00dbf9b03e\\\",\\\"systemUUID\\\":\\\"65870ff3-f0f2-4ca4-b489-075d672e37ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:01Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.792795 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.792823 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.792834 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.792849 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.792860 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:01Z","lastTransitionTime":"2026-03-12T13:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.801299 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sww7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de004a2f-3061-4aae-aa57-389219c71023\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478fb92ee4748af677ac761928a4173b506a3e56cf622279e2b2a0e322d4aef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d377b0d5d0a854761257d7bc21a111aed96f85d302bf0c024e021f04cc555fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sww7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:01Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:01 crc kubenswrapper[4778]: E0312 13:11:01.808824 4778 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9825271f-f529-4477-b3b1-2a00dbf9b03e\\\",\\\"systemUUID\\\":\\\"65870ff3-f0f2-4ca4-b489-075d672e37ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:01Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.810382 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dfhs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfefcab6-a931-413e-8763-0f63f17911cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eecca419cc264b25f1883aac864cc545f0daf973e3b288bc8ea00a8b91e1f124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssbrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dfhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:01Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.811946 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.811973 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.811984 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.812000 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.812011 4778 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:01Z","lastTransitionTime":"2026-03-12T13:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:01 crc kubenswrapper[4778]: E0312 13:11:01.822261 4778 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9825271f-f529-4477-b3b1-2a00dbf9b03e\\\",\\\"systemUUID\\\":\\\"65870ff3-f0f2-4ca4-b489-075d672e37ad\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:01Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.825929 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.825988 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.826003 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.826022 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.826035 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:01Z","lastTransitionTime":"2026-03-12T13:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:01 crc kubenswrapper[4778]: E0312 13:11:01.836595 4778 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9825271f-f529-4477-b3b1-2a00dbf9b03e\\\",\\\"systemUUID\\\":\\\"65870ff3-f0f2-4ca4-b489-075d672e37ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:01Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.840663 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.840805 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.840888 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.840969 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.841030 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:01Z","lastTransitionTime":"2026-03-12T13:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:01 crc kubenswrapper[4778]: E0312 13:11:01.855978 4778 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9825271f-f529-4477-b3b1-2a00dbf9b03e\\\",\\\"systemUUID\\\":\\\"65870ff3-f0f2-4ca4-b489-075d672e37ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:01Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:01 crc kubenswrapper[4778]: E0312 13:11:01.856103 4778 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.857593 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.857633 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.857644 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.857656 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.857665 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:01Z","lastTransitionTime":"2026-03-12T13:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.898171 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 13:11:01 crc kubenswrapper[4778]: E0312 13:11:01.898340 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 13:11:05.89830846 +0000 UTC m=+84.347003866 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.898403 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.898521 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:11:01 crc kubenswrapper[4778]: E0312 13:11:01.898580 4778 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.898570 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 13:11:01 crc kubenswrapper[4778]: E0312 13:11:01.898609 4778 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 13:11:01 crc kubenswrapper[4778]: E0312 13:11:01.898621 4778 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.898652 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:11:01 crc 
kubenswrapper[4778]: E0312 13:11:01.898667 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-12 13:11:05.898653479 +0000 UTC m=+84.347348875 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 13:11:01 crc kubenswrapper[4778]: E0312 13:11:01.898731 4778 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 12 13:11:01 crc kubenswrapper[4778]: E0312 13:11:01.898767 4778 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 12 13:11:01 crc kubenswrapper[4778]: E0312 13:11:01.898813 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-12 13:11:05.898795302 +0000 UTC m=+84.347490698 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 12 13:11:01 crc kubenswrapper[4778]: E0312 13:11:01.898836 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-12 13:11:05.898826993 +0000 UTC m=+84.347522479 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 12 13:11:01 crc kubenswrapper[4778]: E0312 13:11:01.898928 4778 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 13:11:01 crc kubenswrapper[4778]: E0312 13:11:01.899006 4778 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 13:11:01 crc kubenswrapper[4778]: E0312 13:11:01.899025 4778 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 13:11:01 crc kubenswrapper[4778]: E0312 13:11:01.899121 4778 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-12 13:11:05.89909594 +0000 UTC m=+84.347791336 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.959760 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.959814 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.959829 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.959850 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.959864 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:01Z","lastTransitionTime":"2026-03-12T13:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:01 crc kubenswrapper[4778]: I0312 13:11:01.999340 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b59b25a-3acc-4d06-b91d-575f45463520-metrics-certs\") pod \"network-metrics-daemon-rz9vw\" (UID: \"0b59b25a-3acc-4d06-b91d-575f45463520\") " pod="openshift-multus/network-metrics-daemon-rz9vw" Mar 12 13:11:01 crc kubenswrapper[4778]: E0312 13:11:01.999537 4778 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 13:11:01 crc kubenswrapper[4778]: E0312 13:11:01.999619 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b59b25a-3acc-4d06-b91d-575f45463520-metrics-certs podName:0b59b25a-3acc-4d06-b91d-575f45463520 nodeName:}" failed. No retries permitted until 2026-03-12 13:11:05.999600544 +0000 UTC m=+84.448295940 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0b59b25a-3acc-4d06-b91d-575f45463520-metrics-certs") pod "network-metrics-daemon-rz9vw" (UID: "0b59b25a-3acc-4d06-b91d-575f45463520") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.063573 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.063609 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.063621 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.063639 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.063653 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:02Z","lastTransitionTime":"2026-03-12T13:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.165705 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.165742 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.165753 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.165769 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.165780 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:02Z","lastTransitionTime":"2026-03-12T13:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.253469 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rz9vw" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.253489 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 13:11:02 crc kubenswrapper[4778]: E0312 13:11:02.253726 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rz9vw" podUID="0b59b25a-3acc-4d06-b91d-575f45463520" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.253761 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.253747 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 13:11:02 crc kubenswrapper[4778]: E0312 13:11:02.253898 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 13:11:02 crc kubenswrapper[4778]: E0312 13:11:02.254045 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 13:11:02 crc kubenswrapper[4778]: E0312 13:11:02.254250 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.268409 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.268479 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.268506 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.268538 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.268558 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:02Z","lastTransitionTime":"2026-03-12T13:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.274468 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb7a47e7099405d73886322b00b013bedee4fb573fa60c9b92d6be3311e65c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:02Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.290782 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rz9vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b59b25a-3acc-4d06-b91d-575f45463520\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rz9vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:02Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:02 crc 
kubenswrapper[4778]: I0312 13:11:02.308654 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4059dae21c8267dcec17364a3073a0f25addb6c308620992e9e609b5f5a32e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7ffc17b778f7bd099f0cc70b4e8bcfd77f9d45a9a47de9fedbe270a49f2826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:02Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.327493 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:02Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.341908 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:02Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.367035 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rsshp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a1f8eaa-ac07-4478-be5d-0742de6b43c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deca991c71450137550fdb82a01b81aaa63e6be64a6d7a96438f6b3d83a8bb5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://deca991c71450137550fdb82a01b81aaa63e6be64a6d7a96438f6b3d83a8bb5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rsshp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-03-12T13:11:02Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.371052 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.371115 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.371133 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.371158 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.371175 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:02Z","lastTransitionTime":"2026-03-12T13:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.393897 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d15dec8c-5c3e-4103-a5b1-6ee7ff5990ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f640289dea724d5668fc009d628345ea104b2bbc9bc3471e42c3ec5f9acada1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc7259359df220c534d265305ee3ca44e7bcdce8da0d8b164132e02f7ed72e51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2d60adb329e51ce7d877de68c1386f904ef0f717c82a5bfb69ab18438a4e536a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c7f2ade3aac502f0534414554216096b45d4f78e81f8ec213064a6205efdbd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c7f2ade3aac502f0534414554216096b45d4f78e81f8ec213064a6205efdbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T13:10:43Z\\\",\\\"message\\\":\\\"file observer\\\\nW0312 13:10:42.840582 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 13:10:42.841010 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 13:10:42.843036 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-461564172/tls.crt::/tmp/serving-cert-461564172/tls.key\\\\\\\"\\\\nI0312 13:10:43.350873 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 13:10:43.364662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 13:10:43.364721 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 13:10:43.365498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 13:10:43.365555 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 13:10:43.376143 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 13:10:43.376224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 13:10:43.376255 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 13:10:43.376279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 13:10:43.376301 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 13:10:43.376324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 13:10:43.376350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 13:10:43.376614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 13:10:43.379532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfb81ab3f0178dc8064bd278e9e5cc42b3b2fda7282bb869d2f385b423e57d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e64aa9b1a15198d88b5f38b8ad0abdeef89430869b6f25c73e2f45806c539964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64aa9b1a15198d88b5f38b8ad0abdeef89430869b6f25c73e2f45806c539964\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:02Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.423608 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wai
ting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\
\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bcc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:02Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.435345 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qdxm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7050ddd9-aa01-4af7-9046-208f85f50a86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9af31ab4c27bb06d5a44a1c279e04f1b6f243054e271214ef771db4f0dc65e3b\\\",
\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jspwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qdxm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:02Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.448324 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sww7j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de004a2f-3061-4aae-aa57-389219c71023\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478fb92ee4748af677ac761928a4173b506a3e56cf622279e2b2a0e322d4aef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d377b0d5d0a854761257d7bc21a111aed96f8
5d302bf0c024e021f04cc555fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sww7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:02Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.458078 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dfhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfefcab6-a931-413e-8763-0f63f17911cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eecca419cc264b25f1883aac864cc545f0daf973e3b288bc8ea00a8b91e1f124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssbrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dfhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:02Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.470079 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:02Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.473950 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.474138 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.474277 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.474391 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.474485 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:02Z","lastTransitionTime":"2026-03-12T13:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.486637 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fhcz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5da98f94c85e3a8cd05c447fb097a078968eea25419a2b22f8abe956ef1dbaac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-762lp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fhcz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:02Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.501274 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:02Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.514980 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24438fc6-dab0-4a9e-8b97-2532da76d9cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19a882eebff25a2613c68847fcf737648da24f5c8d7648edebb2cb00b6b8950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14daba92184fca91c6930d5b3e821f88408e0fd4
0a7793f2d70f82df7c9444ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2qx88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:02Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.577287 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.577334 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.577344 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:02 crc 
kubenswrapper[4778]: I0312 13:11:02.577365 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.577375 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:02Z","lastTransitionTime":"2026-03-12T13:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.614998 4778 generic.go:334] "Generic (PLEG): container finished" podID="8a1f8eaa-ac07-4478-be5d-0742de6b43c4" containerID="2ae3ee2ab1f6fdf579b28f6ddf2010f9ac048dec5c7668dc467152185d4e1028" exitCode=0 Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.615110 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rsshp" event={"ID":"8a1f8eaa-ac07-4478-be5d-0742de6b43c4","Type":"ContainerDied","Data":"2ae3ee2ab1f6fdf579b28f6ddf2010f9ac048dec5c7668dc467152185d4e1028"} Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.617158 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"aa219bcd71a6f1ae8a889a0409c2bbf61d1efac6a57ad8a22fefe6915e9d15be"} Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.634129 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:02Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.651873 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24438fc6-dab0-4a9e-8b97-2532da76d9cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19a882eebff25a2613c68847fcf737648da24f5c8d7648edebb2cb00b6b8950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14daba92184fca91c6930d5b3e821f88408e0fd4
0a7793f2d70f82df7c9444ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2qx88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:02Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.668620 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:02Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.680534 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.680570 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.680581 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.680595 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.680606 4778 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:02Z","lastTransitionTime":"2026-03-12T13:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.685419 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:02Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.705390 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb7a47e7099405d73886322b00b013bedee4fb573fa60c9b92d6be3311e65c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:02Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.718507 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rz9vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b59b25a-3acc-4d06-b91d-575f45463520\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rz9vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:02Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:02 crc 
kubenswrapper[4778]: I0312 13:11:02.733084 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4059dae21c8267dcec17364a3073a0f25addb6c308620992e9e609b5f5a32e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7ffc17b778f7bd099f0cc70b4e8bcfd77f9d45a9a47de9fedbe270a49f2826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:02Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.749417 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bcc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:02Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.759974 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qdxm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7050ddd9-aa01-4af7-9046-208f85f50a86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9af31ab4c27bb06d5a44a1c279e04f1b6f243054e271214ef771db4f0dc65e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jspwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qdxm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:02Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.774303 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rsshp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a1f8eaa-ac07-4478-be5d-0742de6b43c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deca991c71450137550fdb82a01b81aaa63e6be64a6d7a96438f6b3d83a8bb5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://deca991c71450137550fdb82a01b81aaa63e6be64a6d7a96438f6b3d83a8bb5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae3ee2ab1f6fdf579b28f6ddf2010f9ac048dec5c7668dc467152185d4e1028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ae3ee2ab1f6fdf579b28f6ddf2010f9ac048dec5c7668dc467152185d4e1028\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rsshp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:02Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 
13:11:02.782666 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.782692 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.782700 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.782718 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.782729 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:02Z","lastTransitionTime":"2026-03-12T13:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.790261 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d15dec8c-5c3e-4103-a5b1-6ee7ff5990ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f640289dea724d5668fc009d628345ea104b2bbc9bc3471e42c3ec5f9acada1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc7259359df220c534d265305ee3ca44e7bcdce8da0d8b164132e02f7ed72e51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2d60adb329e51ce7d877de68c1386f904ef0f717c82a5bfb69ab18438a4e536a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c7f2ade3aac502f0534414554216096b45d4f78e81f8ec213064a6205efdbd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c7f2ade3aac502f0534414554216096b45d4f78e81f8ec213064a6205efdbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T13:10:43Z\\\",\\\"message\\\":\\\"file observer\\\\nW0312 13:10:42.840582 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 13:10:42.841010 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 13:10:42.843036 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-461564172/tls.crt::/tmp/serving-cert-461564172/tls.key\\\\\\\"\\\\nI0312 13:10:43.350873 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 13:10:43.364662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 13:10:43.364721 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 13:10:43.365498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 13:10:43.365555 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 13:10:43.376143 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 13:10:43.376224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 13:10:43.376255 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 13:10:43.376279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 13:10:43.376301 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 13:10:43.376324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 13:10:43.376350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 13:10:43.376614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 13:10:43.379532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfb81ab3f0178dc8064bd278e9e5cc42b3b2fda7282bb869d2f385b423e57d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e64aa9b1a15198d88b5f38b8ad0abdeef89430869b6f25c73e2f45806c539964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64aa9b1a15198d88b5f38b8ad0abdeef89430869b6f25c73e2f45806c539964\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:02Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.802086 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:02Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.813368 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fhcz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5da98f94c85e3a8cd05c447fb097a078968eea25419a2b22f8abe956ef1dbaac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-762lp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fhcz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:02Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.823804 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sww7j" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de004a2f-3061-4aae-aa57-389219c71023\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478fb92ee4748af677ac761928a4173b506a3e56cf622279e2b2a0e322d4aef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d377b0d5d
0a854761257d7bc21a111aed96f85d302bf0c024e021f04cc555fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sww7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:02Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.838057 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dfhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfefcab6-a931-413e-8763-0f63f17911cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eecca419cc264b25f1883aac864cc545f0daf973e3b288bc8ea00a8b91e1f124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssbrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dfhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:02Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.849900 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:02Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.860211 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24438fc6-dab0-4a9e-8b97-2532da76d9cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19a882eebff25a2613c68847fcf737648da24f5c8d7648edebb2cb00b6b8950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14daba92184fca91c6930d5b3e821f88408e0fd4
0a7793f2d70f82df7c9444ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2qx88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:02Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.876644 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4059dae21c8267dcec17364a3073a0f25addb6c308620992e9e609b5f5a32e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7ffc17b778f7bd099f0cc70b4e8bcfd77f9d45a9a47de9fedbe270a49f2826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:02Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.886393 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.886433 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.886441 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.886455 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.886465 4778 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:02Z","lastTransitionTime":"2026-03-12T13:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.889955 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa219bcd71a6f1ae8a889a0409c2bbf61d1efac6a57ad8a22fefe6915e9d15be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:02Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.901808 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:02Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.920014 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb7a47e7099405d73886322b00b013bedee4fb573fa60c9b92d6be3311e65c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:02Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.939687 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rz9vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b59b25a-3acc-4d06-b91d-575f45463520\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rz9vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:02Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:02 crc 
kubenswrapper[4778]: I0312 13:11:02.962275 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d15dec8c-5c3e-4103-a5b1-6ee7ff5990ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f640289dea724d5668fc009d628345ea104b2bbc9bc3471e42c3ec5f9acada1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc7259359df220c534d265305ee3ca44e7bcdce8da0d8b164132e02f7ed72e51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2d60adb329e51ce7d877de68c1386f904ef0f717c82a5bfb69ab18438a4e536a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c7f2ade3aac502f0534414554216096b45d4f78e81f8ec213064a6205efdbd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c7f2ade3aac502f0534414554216096b45d4f78e81f8ec213064a6205efdbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T13:10:43Z\\\",\\\"message\\\":\\\"file observer\\\\nW0312 13:10:42.840582 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 13:10:42.841010 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 13:10:42.843036 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-461564172/tls.crt::/tmp/serving-cert-461564172/tls.key\\\\\\\"\\\\nI0312 13:10:43.350873 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 13:10:43.364662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 13:10:43.364721 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 13:10:43.365498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 13:10:43.365555 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 13:10:43.376143 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 13:10:43.376224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 13:10:43.376255 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 13:10:43.376279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 13:10:43.376301 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 13:10:43.376324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 13:10:43.376350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 13:10:43.376614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 13:10:43.379532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfb81ab3f0178dc8064bd278e9e5cc42b3b2fda7282bb869d2f385b423e57d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e64aa9b1a15198d88b5f38b8ad0abdeef89430869b6f25c73e2f45806c539964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64aa9b1a15198d88b5f38b8ad0abdeef89430869b6f25c73e2f45806c539964\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:02Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.988765 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.988805 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.988814 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.988826 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.988835 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:02Z","lastTransitionTime":"2026-03-12T13:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:02 crc kubenswrapper[4778]: I0312 13:11:02.990021 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bcc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:02Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:02.999963 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qdxm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7050ddd9-aa01-4af7-9046-208f85f50a86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9af31ab4c27bb06d5a44a1c279e04f1b6f243054e271214ef771db4f0dc65e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jspwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qdxm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:02Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.015207 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rsshp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a1f8eaa-ac07-4478-be5d-0742de6b43c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deca991c71450137550fdb82a01b81aaa63e6be64a6d7a96438f6b3d83a8bb5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://deca991c71450137550fdb82a01b81aaa63e6be64a6d7a96438f6b3d83a8bb5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae3ee2ab1f6fdf579b28f6ddf2010f9ac048dec5c7668dc467152185d4e1028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ae3ee2ab1f6fdf579b28f6ddf2010f9ac048dec5c7668dc467152185d4e1028\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rsshp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:03Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 
13:11:03.030019 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:03Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.047506 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fhcz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5da98f94c85e3a8cd05c447fb097a078968eea25419a2b22f8abe956ef1dbaac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-762lp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fhcz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:03Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.060364 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sww7j" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de004a2f-3061-4aae-aa57-389219c71023\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478fb92ee4748af677ac761928a4173b506a3e56cf622279e2b2a0e322d4aef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d377b0d5d
0a854761257d7bc21a111aed96f85d302bf0c024e021f04cc555fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sww7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:03Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.072108 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dfhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfefcab6-a931-413e-8763-0f63f17911cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eecca419cc264b25f1883aac864cc545f0daf973e3b288bc8ea00a8b91e1f124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssbrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dfhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:03Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.091308 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.091344 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.091352 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.091364 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.091373 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:03Z","lastTransitionTime":"2026-03-12T13:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.193294 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.193347 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.193364 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.193387 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.193402 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:03Z","lastTransitionTime":"2026-03-12T13:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.296086 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.296140 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.296157 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.296179 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.296237 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:03Z","lastTransitionTime":"2026-03-12T13:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.398883 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.398936 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.398954 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.398977 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.398995 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:03Z","lastTransitionTime":"2026-03-12T13:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.501362 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.501431 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.501455 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.501487 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.501508 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:03Z","lastTransitionTime":"2026-03-12T13:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.603893 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.603935 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.603950 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.603970 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.603987 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:03Z","lastTransitionTime":"2026-03-12T13:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.622395 4778 generic.go:334] "Generic (PLEG): container finished" podID="8a1f8eaa-ac07-4478-be5d-0742de6b43c4" containerID="3b22a0b8a6e5c59e8195280cbe1579af847c709f8b6245df5a16df5af602f11d" exitCode=0 Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.622439 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rsshp" event={"ID":"8a1f8eaa-ac07-4478-be5d-0742de6b43c4","Type":"ContainerDied","Data":"3b22a0b8a6e5c59e8195280cbe1579af847c709f8b6245df5a16df5af602f11d"} Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.627456 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" event={"ID":"65cd795e-eb6e-4995-a4c1-9dea6f425ac5","Type":"ContainerStarted","Data":"6bc4107e3fb5708a2acf2664ce876af4c682377a9b4ee393230b78d5b021552d"} Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.641600 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dfhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfefcab6-a931-413e-8763-0f63f17911cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eecca419cc264b25f1883aac864cc545f0daf973e3b288bc8ea00a8b91e1f124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssbrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dfhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:03Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.656260 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:03Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.671284 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fhcz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5da98f94c85e3a8cd05c447fb097a078968eea25419a2b22f8abe956ef1dbaac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-762lp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fhcz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:03Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.683425 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sww7j" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de004a2f-3061-4aae-aa57-389219c71023\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478fb92ee4748af677ac761928a4173b506a3e56cf622279e2b2a0e322d4aef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d377b0d5d
0a854761257d7bc21a111aed96f85d302bf0c024e021f04cc555fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sww7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:03Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.697636 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24438fc6-dab0-4a9e-8b97-2532da76d9cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19a882eebff25a2613c68847fcf737648da24f5c8d7648edebb2cb00b6b8950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14daba92184fca91c6930d5b3e821f88408e0fd4
0a7793f2d70f82df7c9444ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2qx88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:03Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.707472 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.707516 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.707533 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:03 crc 
kubenswrapper[4778]: I0312 13:11:03.707557 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.707573 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:03Z","lastTransitionTime":"2026-03-12T13:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.709446 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:03Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.719354 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rz9vw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b59b25a-3acc-4d06-b91d-575f45463520\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rz9vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:03Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:03 crc 
kubenswrapper[4778]: I0312 13:11:03.736328 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4059dae21c8267dcec17364a3073a0f25addb6c308620992e9e609b5f5a32e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7ffc17b778f7bd099f0cc70b4e8bcfd77f9d45a9a47de9fedbe270a49f2826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:03Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.749432 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa219bcd71a6f1ae8a889a0409c2bbf61d1efac6a57ad8a22fefe6915e9d15be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-12T13:11:03Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.760003 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:03Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.771419 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb7a47e7099405d73886322b00b013bedee4fb573fa60c9b92d6be3311e65c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:03Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.783590 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d15dec8c-5c3e-4103-a5b1-6ee7ff5990ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f640289dea724d5668fc009d628345ea104b2bbc9bc3471e42c3ec5f9acada1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc7259359df220c534d265305ee3ca44e7bcdce8da0d8b164132e02f7ed72e51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2d60adb329e51ce7d877de68c1386f904ef0f717c82a5bfb69ab18438a4e536a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c7f2ade3aac502f0534414554216096b45d4f78e81f8ec213064a6205efdbd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c7f2ade3aac502f0534414554216096b45d4f78e81f8ec213064a6205efdbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T13:10:43Z\\\",\\\"message\\\":\\\"file observer\\\\nW0312 13:10:42.840582 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 13:10:42.841010 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 13:10:42.843036 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-461564172/tls.crt::/tmp/serving-cert-461564172/tls.key\\\\\\\"\\\\nI0312 13:10:43.350873 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 13:10:43.364662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 13:10:43.364721 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 13:10:43.365498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 13:10:43.365555 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 13:10:43.376143 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 13:10:43.376224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 13:10:43.376255 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 13:10:43.376279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 13:10:43.376301 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 13:10:43.376324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 13:10:43.376350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 13:10:43.376614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 13:10:43.379532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfb81ab3f0178dc8064bd278e9e5cc42b3b2fda7282bb869d2f385b423e57d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e64aa9b1a15198d88b5f38b8ad0abdeef89430869b6f25c73e2f45806c539964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64aa9b1a15198d88b5f38b8ad0abdeef89430869b6f25c73e2f45806c539964\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:03Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.799589 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wai
ting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\
\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bcc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:03Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.809297 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qdxm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7050ddd9-aa01-4af7-9046-208f85f50a86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9af31ab4c27bb06d5a44a1c279e04f1b6f243054e271214ef771db4f0dc65e3b\\\",
\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jspwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qdxm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:03Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.811691 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.811724 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.811735 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.811750 4778 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.811763 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:03Z","lastTransitionTime":"2026-03-12T13:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.826275 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rsshp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a1f8eaa-ac07-4478-be5d-0742de6b43c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers 
with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursi
veReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deca991c71450137550fdb82a01b81aaa63e6be64a6d7a96438f6b3d83a8bb5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://deca991c71450137550fdb82a01b81aaa63e6be64a6d7a96438f6b3d83a8bb5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae3ee2ab1f6fdf579b28f6ddf2010f9ac048dec5c7668dc467152185d4e1028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cce
eeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ae3ee2ab1f6fdf579b28f6ddf2010f9ac048dec5c7668dc467152185d4e1028\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b22a0b8a6e5c59e8195280cbe1579af847c709f8b6245df5a16df5af602f11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b22a0b8a6e5c59e8195280cbe1579af847c709f8b6245df5a16df5af602f11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":
\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:
58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rsshp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:03Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.914722 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.914767 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.914781 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.914797 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:03 crc kubenswrapper[4778]: I0312 13:11:03.914809 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:03Z","lastTransitionTime":"2026-03-12T13:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.017871 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.017923 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.017946 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.017970 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.017987 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:04Z","lastTransitionTime":"2026-03-12T13:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.122396 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.122468 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.122489 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.122512 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.122533 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:04Z","lastTransitionTime":"2026-03-12T13:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.225620 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.225697 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.225719 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.225749 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.225772 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:04Z","lastTransitionTime":"2026-03-12T13:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.253650 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rz9vw" Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.253724 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.253893 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 13:11:04 crc kubenswrapper[4778]: E0312 13:11:04.253885 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rz9vw" podUID="0b59b25a-3acc-4d06-b91d-575f45463520" Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.253936 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:11:04 crc kubenswrapper[4778]: E0312 13:11:04.254338 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 13:11:04 crc kubenswrapper[4778]: E0312 13:11:04.254527 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 13:11:04 crc kubenswrapper[4778]: E0312 13:11:04.254682 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.269269 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.329083 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.329132 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.329149 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.329174 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.329220 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:04Z","lastTransitionTime":"2026-03-12T13:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.431727 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.431846 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.431867 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.431891 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.431909 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:04Z","lastTransitionTime":"2026-03-12T13:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.535097 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.535179 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.535253 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.535297 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.535349 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:04Z","lastTransitionTime":"2026-03-12T13:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.634220 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rsshp" event={"ID":"8a1f8eaa-ac07-4478-be5d-0742de6b43c4","Type":"ContainerStarted","Data":"253b2ec5086a2db48bb42ae6024bab9ca832325f9d96cd6ff6944ded362161e1"} Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.638468 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.638529 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.638548 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.638574 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.638592 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:04Z","lastTransitionTime":"2026-03-12T13:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.659915 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d15dec8c-5c3e-4103-a5b1-6ee7ff5990ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f640289dea724d5668fc009d628345ea104b2bbc9bc3471e42c3ec5f9acada1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc7259359df220c534d265305ee3ca44e7bcdce8da0d8b164132e02f7ed72e51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2d60adb329e51ce7d877de68c1386f904ef0f717c82a5bfb69ab18438a4e536a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c7f2ade3aac502f0534414554216096b45d4f78e81f8ec213064a6205efdbd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c7f2ade3aac502f0534414554216096b45d4f78e81f8ec213064a6205efdbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T13:10:43Z\\\",\\\"message\\\":\\\"file observer\\\\nW0312 13:10:42.840582 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 13:10:42.841010 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 13:10:42.843036 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-461564172/tls.crt::/tmp/serving-cert-461564172/tls.key\\\\\\\"\\\\nI0312 13:10:43.350873 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 13:10:43.364662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 13:10:43.364721 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 13:10:43.365498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 13:10:43.365555 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 13:10:43.376143 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 13:10:43.376224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 13:10:43.376255 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 13:10:43.376279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 13:10:43.376301 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 13:10:43.376324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 13:10:43.376350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 13:10:43.376614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 13:10:43.379532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfb81ab3f0178dc8064bd278e9e5cc42b3b2fda7282bb869d2f385b423e57d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e64aa9b1a15198d88b5f38b8ad0abdeef89430869b6f25c73e2f45806c539964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64aa9b1a15198d88b5f38b8ad0abdeef89430869b6f25c73e2f45806c539964\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:04Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.686367 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wai
ting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\
\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bcc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:04Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.700440 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qdxm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7050ddd9-aa01-4af7-9046-208f85f50a86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9af31ab4c27bb06d5a44a1c279e04f1b6f243054e271214ef771db4f0dc65e3b\\\",
\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jspwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qdxm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:04Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.717978 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rsshp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a1f8eaa-ac07-4478-be5d-0742de6b43c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deca991c71450137550fdb82a01b81aaa63e6be64a6d7a96438f6b3d83a8bb5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://deca991c71450137550fdb82a01b81aaa63e6be64a6d7a96438f6b3d83a8bb5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae3ee2ab1f6fdf579b28f6ddf2010f9ac048dec5c7668dc467152185d4e1028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ae3ee2ab1f6fdf579b28f6ddf2010f9ac048dec5c7668dc467152185d4e1028\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b22a0b8a6e5c59e8195280cbe1579af847c709f8b6245df5a16df5af602f11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b22a0b8a6e5c59e8195280cbe1579af847c709f8b6245df5a16df5af602f11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://253b2ec5086a2db48bb42ae6024bab9ca832325f9d96cd6ff6944ded362161e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4h
rx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rsshp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:04Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.733276 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:04Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.740569 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.740620 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.740639 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.740660 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.740675 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:04Z","lastTransitionTime":"2026-03-12T13:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.747595 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fhcz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5da98f94c85e3a8cd05c447fb097a078968eea25419a2b22f8abe956ef1dbaac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-
binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-762lp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fhcz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:04Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.760883 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sww7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de004a2f-3061-4aae-aa57-389219c71023\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478fb92ee4748af677ac761928a4173b506a3e56cf622279e2b2a0e322d4aef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d377b0d5d0a854761257d7bc21a111aed96f85d302bf0c024e021f04cc555fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sww7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:04Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:04 crc 
kubenswrapper[4778]: I0312 13:11:04.770745 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dfhs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfefcab6-a931-413e-8763-0f63f17911cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eecca419cc264b25f1883aac864cc545f0daf973e3b288bc8ea00a8b91e1f124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s
sbrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dfhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:04Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.783087 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cae155c-6ba6-44c1-9814-759fda7c3c86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7698145a8f9a3b12ca021d55f406bc6adf7e139c7e32156ced11a20de194608c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d664
38c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ddeb961084ae4041feb2ac05c9fdd2f5c11b4bdc5f5f33878c9ad9e83a2e1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ddeb961084ae4041feb2ac05c9fdd2f5c11b4bdc5f5f33878c9ad9e83a2e1a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:04Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.795923 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:04Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.810904 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24438fc6-dab0-4a9e-8b97-2532da76d9cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19a882eebff25a2613c68847fcf737648da24f5c8d7648edebb2cb00b6b8950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14daba92184fca91c6930d5b3e821f88408e0fd4
0a7793f2d70f82df7c9444ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2qx88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:04Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.829274 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4059dae21c8267dcec17364a3073a0f25addb6c308620992e9e609b5f5a32e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7ffc17b778f7bd099f0cc70b4e8bcfd77f9d45a9a47de9fedbe270a49f2826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:04Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.842255 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa219bcd71a6f1ae8a889a0409c2bbf61d1efac6a57ad8a22fefe6915e9d15be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-12T13:11:04Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.843374 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.843410 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.843424 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.843441 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.843453 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:04Z","lastTransitionTime":"2026-03-12T13:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.856612 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:04Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.871802 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb7a47e7099405d73886322b00b013bedee4fb573fa60c9b92d6be3311e65c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:04Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.884616 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rz9vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b59b25a-3acc-4d06-b91d-575f45463520\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rz9vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:04Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:04 crc 
kubenswrapper[4778]: I0312 13:11:04.946240 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.946326 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.946344 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.946399 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:04 crc kubenswrapper[4778]: I0312 13:11:04.946417 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:04Z","lastTransitionTime":"2026-03-12T13:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.048779 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.048824 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.048834 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.048851 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.048860 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:05Z","lastTransitionTime":"2026-03-12T13:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.151306 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.151355 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.151368 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.151385 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.151396 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:05Z","lastTransitionTime":"2026-03-12T13:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.254024 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.254095 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.254123 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.254150 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.254175 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:05Z","lastTransitionTime":"2026-03-12T13:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.357606 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.358034 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.358059 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.358081 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.358092 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:05Z","lastTransitionTime":"2026-03-12T13:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.460618 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.460684 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.460703 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.460728 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.460748 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:05Z","lastTransitionTime":"2026-03-12T13:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.563505 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.563545 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.563556 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.563574 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.563587 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:05Z","lastTransitionTime":"2026-03-12T13:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.644324 4778 generic.go:334] "Generic (PLEG): container finished" podID="8a1f8eaa-ac07-4478-be5d-0742de6b43c4" containerID="253b2ec5086a2db48bb42ae6024bab9ca832325f9d96cd6ff6944ded362161e1" exitCode=0 Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.644412 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rsshp" event={"ID":"8a1f8eaa-ac07-4478-be5d-0742de6b43c4","Type":"ContainerDied","Data":"253b2ec5086a2db48bb42ae6024bab9ca832325f9d96cd6ff6944ded362161e1"} Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.666366 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.666440 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.666462 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.666494 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.666518 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:05Z","lastTransitionTime":"2026-03-12T13:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.667426 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:05Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.687766 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fhcz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5da98f94c85e3a8cd05c447fb097a078968eea25419a2b22f8abe956ef1dbaac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-762lp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fhcz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:05Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.703667 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sww7j" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de004a2f-3061-4aae-aa57-389219c71023\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478fb92ee4748af677ac761928a4173b506a3e56cf622279e2b2a0e322d4aef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d377b0d5d
0a854761257d7bc21a111aed96f85d302bf0c024e021f04cc555fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sww7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:05Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.719947 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dfhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfefcab6-a931-413e-8763-0f63f17911cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eecca419cc264b25f1883aac864cc545f0daf973e3b288bc8ea00a8b91e1f124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssbrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dfhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:05Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.735221 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cae155c-6ba6-44c1-9814-759fda7c3c86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7698145a8f9a3b12ca021d55f406bc6adf7e139c7e32156ced11a20de194608c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a
42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ddeb961084ae4041feb2ac05c9fdd2f5c11b4bdc5f5f33878c9ad9e83a2e1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ddeb961084ae4041feb2ac05c9fdd2f5c11b4bdc5f5f33878c9ad9e83a2e1a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:05Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:05 crc 
kubenswrapper[4778]: I0312 13:11:05.753597 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:05Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.766336 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24438fc6-dab0-4a9e-8b97-2532da76d9cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19a882eebff25a2613c68847fcf737648da24f5c8d7648edebb2cb00b6b8950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14daba92184fca91c6930d5b3e821f88408e0fd4
0a7793f2d70f82df7c9444ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2qx88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:05Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.769537 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.769567 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.769579 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:05 crc 
kubenswrapper[4778]: I0312 13:11:05.769594 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.769608 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:05Z","lastTransitionTime":"2026-03-12T13:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.780776 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa219bcd71a6f1ae8a889a0409c2bbf61d1efac6a57ad8a22fefe6915e9d15be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:05Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.793737 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:05Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.806418 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb7a47e7099405d73886322b00b013bedee4fb573fa60c9b92d6be3311e65c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:05Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.819461 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rz9vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b59b25a-3acc-4d06-b91d-575f45463520\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rz9vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:05Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:05 crc 
kubenswrapper[4778]: I0312 13:11:05.831934 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4059dae21c8267dcec17364a3073a0f25addb6c308620992e9e609b5f5a32e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7ffc17b778f7bd099f0cc70b4e8bcfd77f9d45a9a47de9fedbe270a49f2826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:05Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.848088 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bcc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:05Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.858406 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qdxm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7050ddd9-aa01-4af7-9046-208f85f50a86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9af31ab4c27bb06d5a44a1c279e04f1b6f243054e271214ef771db4f0dc65e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jspwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qdxm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:05Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.871904 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.871947 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.871961 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.871910 4778 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-rsshp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a1f8eaa-ac07-4478-be5d-0742de6b43c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deca991c71450137550fdb82a01b81aaa63e6be64a6d7a96438f6b3d83a8bb5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://deca991c71450137550fdb82a01b81aaa63e6be64a6d7a96438f6b3d83a8bb5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae3ee2ab1f6fdf579b28f6ddf2010f9ac048dec5c7668dc467152185d4e1028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ae3ee2ab1f6fdf579b28f6ddf2010f9ac048dec5c7668dc467152185d4e1028\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b22a0b8a6e5c59e8195280cbe1579af847c709f8b6245df5a16df5af602f11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b22a0b8a6e5c59e8195280cbe1579af847c709f8b6245df5a16df5af602f11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://253b2ec5086a2db48bb42ae6024bab9ca832325f9d96cd6ff6944ded362161e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://253b2ec5086a2db48bb42ae6024bab9ca832325f9d96cd6ff6944ded362161e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rsshp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:05Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.871982 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.872139 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:05Z","lastTransitionTime":"2026-03-12T13:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.890073 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d15dec8c-5c3e-4103-a5b1-6ee7ff5990ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f640289dea724d5668fc009d628345ea104b2bbc9bc3471e42c3ec5f9acada1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc7259359df220c534d265305ee3ca44e7bcdce8da0d8b164132e02f7ed72e51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2d60adb329e51ce7d877de68c1386f904ef0f717c82a5bfb69ab18438a4e536a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c7f2ade3aac502f0534414554216096b45d4f78e81f8ec213064a6205efdbd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c7f2ade3aac502f0534414554216096b45d4f78e81f8ec213064a6205efdbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T13:10:43Z\\\",\\\"message\\\":\\\"file observer\\\\nW0312 13:10:42.840582 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 13:10:42.841010 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 13:10:42.843036 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-461564172/tls.crt::/tmp/serving-cert-461564172/tls.key\\\\\\\"\\\\nI0312 13:10:43.350873 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 13:10:43.364662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 13:10:43.364721 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 13:10:43.365498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 13:10:43.365555 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 13:10:43.376143 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 13:10:43.376224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 13:10:43.376255 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 13:10:43.376279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 13:10:43.376301 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 13:10:43.376324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 13:10:43.376350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 13:10:43.376614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 13:10:43.379532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfb81ab3f0178dc8064bd278e9e5cc42b3b2fda7282bb869d2f385b423e57d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e64aa9b1a15198d88b5f38b8ad0abdeef89430869b6f25c73e2f45806c539964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64aa9b1a15198d88b5f38b8ad0abdeef89430869b6f25c73e2f45806c539964\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:05Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.942090 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.942221 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.942244 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 13:11:05 crc kubenswrapper[4778]: E0312 13:11:05.942355 4778 configmap.go:193] Couldn't get configMap 
openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 12 13:11:05 crc kubenswrapper[4778]: E0312 13:11:05.942416 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-12 13:11:13.942399475 +0000 UTC m=+92.391094861 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 12 13:11:05 crc kubenswrapper[4778]: E0312 13:11:05.942493 4778 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 13:11:05 crc kubenswrapper[4778]: E0312 13:11:05.942526 4778 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 13:11:05 crc kubenswrapper[4778]: E0312 13:11:05.942540 4778 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 13:11:05 crc kubenswrapper[4778]: E0312 13:11:05.942574 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-03-12 13:11:13.942563049 +0000 UTC m=+92.391258445 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 13:11:05 crc kubenswrapper[4778]: E0312 13:11:05.942617 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 13:11:13.9426063 +0000 UTC m=+92.391301696 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.942775 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.942800 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" 
(UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 13:11:05 crc kubenswrapper[4778]: E0312 13:11:05.942882 4778 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 12 13:11:05 crc kubenswrapper[4778]: E0312 13:11:05.942916 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-12 13:11:13.942906188 +0000 UTC m=+92.391601584 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 12 13:11:05 crc kubenswrapper[4778]: E0312 13:11:05.942989 4778 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 13:11:05 crc kubenswrapper[4778]: E0312 13:11:05.943002 4778 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 13:11:05 crc kubenswrapper[4778]: E0312 13:11:05.943009 4778 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not 
registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 13:11:05 crc kubenswrapper[4778]: E0312 13:11:05.943042 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-12 13:11:13.943032071 +0000 UTC m=+92.391727467 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.974014 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.974040 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.974047 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.974060 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:05 crc kubenswrapper[4778]: I0312 13:11:05.974069 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:05Z","lastTransitionTime":"2026-03-12T13:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file 
in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.043960 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b59b25a-3acc-4d06-b91d-575f45463520-metrics-certs\") pod \"network-metrics-daemon-rz9vw\" (UID: \"0b59b25a-3acc-4d06-b91d-575f45463520\") " pod="openshift-multus/network-metrics-daemon-rz9vw" Mar 12 13:11:06 crc kubenswrapper[4778]: E0312 13:11:06.044093 4778 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 13:11:06 crc kubenswrapper[4778]: E0312 13:11:06.044150 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b59b25a-3acc-4d06-b91d-575f45463520-metrics-certs podName:0b59b25a-3acc-4d06-b91d-575f45463520 nodeName:}" failed. No retries permitted until 2026-03-12 13:11:14.04413152 +0000 UTC m=+92.492826916 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0b59b25a-3acc-4d06-b91d-575f45463520-metrics-certs") pod "network-metrics-daemon-rz9vw" (UID: "0b59b25a-3acc-4d06-b91d-575f45463520") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.076562 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.076586 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.076594 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.076607 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.076617 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:06Z","lastTransitionTime":"2026-03-12T13:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.179568 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.179704 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.179724 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.179746 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.179762 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:06Z","lastTransitionTime":"2026-03-12T13:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.252873 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.252907 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.252935 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rz9vw" Mar 12 13:11:06 crc kubenswrapper[4778]: E0312 13:11:06.252987 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 13:11:06 crc kubenswrapper[4778]: E0312 13:11:06.253120 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rz9vw" podUID="0b59b25a-3acc-4d06-b91d-575f45463520" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.253156 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 13:11:06 crc kubenswrapper[4778]: E0312 13:11:06.253263 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 13:11:06 crc kubenswrapper[4778]: E0312 13:11:06.253355 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.282099 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.282143 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.282155 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.282171 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.282197 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:06Z","lastTransitionTime":"2026-03-12T13:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.384205 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.384241 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.384249 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.384262 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.384271 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:06Z","lastTransitionTime":"2026-03-12T13:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.487061 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.487108 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.487121 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.487137 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.487148 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:06Z","lastTransitionTime":"2026-03-12T13:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.589985 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.590452 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.590466 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.590489 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.590506 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:06Z","lastTransitionTime":"2026-03-12T13:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.662627 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" event={"ID":"65cd795e-eb6e-4995-a4c1-9dea6f425ac5","Type":"ContainerStarted","Data":"a5522a1e716ec4cabcf4776dde35c17f9f0a89250cd85474f5a7f94fe8943b1e"} Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.662911 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.662970 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.662984 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.666746 4778 generic.go:334] "Generic (PLEG): container finished" podID="8a1f8eaa-ac07-4478-be5d-0742de6b43c4" containerID="1cb8f9537926237c4932ef2a9fb701804e03e132f2f56dd9d0e928b7340b1eeb" exitCode=0 Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.666788 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rsshp" event={"ID":"8a1f8eaa-ac07-4478-be5d-0742de6b43c4","Type":"ContainerDied","Data":"1cb8f9537926237c4932ef2a9fb701804e03e132f2f56dd9d0e928b7340b1eeb"} Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.683268 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:06Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.688072 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.694971 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.695008 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.695018 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.695031 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.695041 4778 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:06Z","lastTransitionTime":"2026-03-12T13:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.695486 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.702633 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb7a47e7099405d73886322b00b013bedee4fb573fa60c9b92d6be3311e65c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:06Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.713333 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rz9vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b59b25a-3acc-4d06-b91d-575f45463520\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rz9vw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:06Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.726944 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4059dae21c8267dcec17364a3073a0f25addb6c308620992e9e609b5f5a32e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7ffc17b778f7bd099f0cc70b4e8bcfd77f9d45a9a47de9fedbe270a49f2826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:06Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.738539 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa219bcd71a6f1ae8a889a0409c2bbf61d1efac6a57ad8a22fefe6915e9d15be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-12T13:11:06Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.750805 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qdxm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7050ddd9-aa01-4af7-9046-208f85f50a86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9af31ab4c27bb06d5a44a1c279e04f1b6f243054e271214ef771db4f0dc65e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-jspwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qdxm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:06Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.765622 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rsshp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a1f8eaa-ac07-4478-be5d-0742de6b43c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deca991c71450137550fdb82a01b81aaa63e6be64a6d7a96438f6b3d83a8bb5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://deca991c71450137550fdb82a01b81aaa63e6be64a6d7a96438f6b3d83a8bb5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae3ee2ab1f6fdf579b28f6ddf2010f9ac048
dec5c7668dc467152185d4e1028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ae3ee2ab1f6fdf579b28f6ddf2010f9ac048dec5c7668dc467152185d4e1028\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b22a0b8a6e5c59e8195280cbe1579af847c709f8b6245df5a16df5af602f11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b22a0b8a6e5c59e8195280cbe1579af847c709f8b6245df5a16df5af602f11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
3-12T13:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://253b2ec5086a2db48bb42ae6024bab9ca832325f9d96cd6ff6944ded362161e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://253b2ec5086a2db48bb42ae6024bab9ca832325f9d96cd6ff6944ded362161e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rsshp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:06Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.779256 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d15dec8c-5c3e-4103-a5b1-6ee7ff5990ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f640289dea724d5668fc009d628345ea104b2bbc9bc3471e42c3ec5f9acada1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc7259359df220c534d265305ee3ca44e7bcdce8da0d8b164132e02f7ed72e51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d60adb329e51ce7d877de68c1386f904ef0f717c82a5bfb69ab18438a4e536a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c7f2ade3aac502f0534414554216096b45d4f78e81f8ec213064a6205efdbd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c7f2ade3aac502f0534414554216096b45d4f78e81f8ec213064a6205efdbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T13:10:43Z\\\",\\\"message\\\":\\\"file observer\\\\nW0312 13:10:42.840582 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 13:10:42.841010 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 13:10:42.843036 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-461564172/tls.crt::/tmp/serving-cert-461564172/tls.key\\\\\\\"\\\\nI0312 13:10:43.350873 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 13:10:43.364662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 13:10:43.364721 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 13:10:43.365498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 13:10:43.365555 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 13:10:43.376143 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 13:10:43.376224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 13:10:43.376255 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 13:10:43.376279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 13:10:43.376301 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 13:10:43.376324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 13:10:43.376350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 13:10:43.376614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 13:10:43.379532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfb81ab3f0178dc8064bd278e9e5cc42b3b2fda7282bb869d2f385b423e57d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e64aa9b1a15198d88b5f38b8ad0abdeef89430869b6f25c73e2f45806c539964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64
aa9b1a15198d88b5f38b8ad0abdeef89430869b6f25c73e2f45806c539964\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:06Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.795883 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7250fa81a99a607a87f5f2fe4a85fcc2afbf7bfc12845bec4b5cbbed7784c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1325a4d5784843f5ef2dc629d3410d154ea07de5cd70cfae54d7111fe3e1ea3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"stat
e\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78da788cc1d96e866afcf18edff24c064972e022b1b4c3f5bec3175da5e989fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2818aadc56c24df41309aa63fddf44dac870f041f206d59db4d8b8f88f728483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d8f2c481e6f3f8845b5e195a7fde8ae6415d25fb1701c25d93be7af1f4ef8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f94192e4eee991a699c32f338a8d452d8fc0ce5dfa1197694a237697c4500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5522a1e716ec4cabcf4776dde35c17f9f0a89250cd85474f5a7f94fe8943b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"
},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bc4107e3fb5708a2acf2664ce876af4c682377a9b4ee393230b78d5b021
552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mount
Path\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bcc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:06Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.796940 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.796975 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.796988 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.797003 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.797013 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:06Z","lastTransitionTime":"2026-03-12T13:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.807492 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fhcz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5da98f94c85e3a8cd05c447fb097a078968eea25419a2b22f8abe956ef1dbaac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-762lp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fhcz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:06Z 
is after 2025-08-24T17:21:41Z" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.816503 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sww7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de004a2f-3061-4aae-aa57-389219c71023\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478fb92ee4748af677ac761928a4173b506a3e56cf622279e2b2a0e322d4aef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d377b0d5d0a854761257d7bc21a111aed96f85d302bf0c024e021f04cc555fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sww7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:06Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.828549 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dfhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfefcab6-a931-413e-8763-0f63f17911cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eecca419cc264b25f1883aac864cc545f0daf973e3b288bc8ea00a8b91e1f124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssbrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dfhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:06Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.841630 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:06Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.851335 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:06Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.862822 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24438fc6-dab0-4a9e-8b97-2532da76d9cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19a882eebff25a2613c68847fcf737648da24f5c8d7648edebb2cb00b6b8950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14daba92184fca91c6930d5b3e821f88408e0fd4
0a7793f2d70f82df7c9444ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2qx88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:06Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.874276 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cae155c-6ba6-44c1-9814-759fda7c3c86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7698145a8f9a3b12ca021d55f406bc6adf7e139c7e32156ced11a20de194608c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ddeb961084ae4041feb2ac05c9fdd2f5c11b4bdc5f5f33878c9ad9e83a2e1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ddeb961084ae4041feb2ac05c9fdd2f5c11b4bdc5f5f33878c9ad9e83a2e1a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:06Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.886206 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:06Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.897893 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24438fc6-dab0-4a9e-8b97-2532da76d9cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19a882eebff25a2613c68847fcf737648da24f5c8d7648edebb2cb00b6b8950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14daba92184fca91c6930d5b3e821f88408e0fd4
0a7793f2d70f82df7c9444ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2qx88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:06Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.899775 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.899804 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.899816 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:06 crc 
kubenswrapper[4778]: I0312 13:11:06.899834 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.899844 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:06Z","lastTransitionTime":"2026-03-12T13:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.908698 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cae155c-6ba6-44c1-9814-759fda7c3c86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7698145a8f9a3b12ca021d55f406bc6adf7e139c7e32156ced11a20de194608c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ddeb961084ae4041feb2ac05c9fdd2f5c11b4bdc5f5f33878c9ad9e83a2e1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ddeb961084ae4041feb2ac05c9fdd2f5c11b4bdc5f5f33878c9ad9e83a2e1a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-12T13:11:06Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.920075 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:06Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.933383 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb7a47e7099405d73886322b00b013bedee4fb573fa60c9b92d6be3311e65c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:06Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.943071 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rz9vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b59b25a-3acc-4d06-b91d-575f45463520\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rz9vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:06Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:06 crc 
kubenswrapper[4778]: I0312 13:11:06.952581 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4059dae21c8267dcec17364a3073a0f25addb6c308620992e9e609b5f5a32e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7ffc17b778f7bd099f0cc70b4e8bcfd77f9d45a9a47de9fedbe270a49f2826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:06Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.961320 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa219bcd71a6f1ae8a889a0409c2bbf61d1efac6a57ad8a22fefe6915e9d15be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-12T13:11:06Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.969934 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qdxm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7050ddd9-aa01-4af7-9046-208f85f50a86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9af31ab4c27bb06d5a44a1c279e04f1b6f243054e271214ef771db4f0dc65e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-jspwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qdxm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:06Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.981959 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rsshp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a1f8eaa-ac07-4478-be5d-0742de6b43c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers 
with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursi
veReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deca991c71450137550fdb82a01b81aaa63e6be64a6d7a96438f6b3d83a8bb5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://deca991c71450137550fdb82a01b81aaa63e6be64a6d7a96438f6b3d83a8bb5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae3ee2ab1f6fdf579b28f6ddf2010f9ac048dec5c7668dc467152185d4e1028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cce
eeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ae3ee2ab1f6fdf579b28f6ddf2010f9ac048dec5c7668dc467152185d4e1028\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b22a0b8a6e5c59e8195280cbe1579af847c709f8b6245df5a16df5af602f11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b22a0b8a6e5c59e8195280cbe1579af847c709f8b6245df5a16df5af602f11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":
\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://253b2ec5086a2db48bb42ae6024bab9ca832325f9d96cd6ff6944ded362161e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://253b2ec5086a2db48bb42ae6024bab9ca832325f9d96cd6ff6944ded362161e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cb8f9537926237c4932ef2a9fb701804e03e132f2f56dd9d0e928b7340b1eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whe
reabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cb8f9537926237c4932ef2a9fb701804e03e132f2f56dd9d0e928b7340b1eeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rsshp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:06Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:06 crc kubenswrapper[4778]: I0312 13:11:06.993017 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d15dec8c-5c3e-4103-a5b1-6ee7ff5990ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"message\\\":\\\"containers 
with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f640289dea724d5668fc009d628345ea104b2bbc9bc3471e42c3ec5f9acada1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc7259359df220c534d265305ee3ca44e7bcdce8da0d8b164132e02f7ed72e51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d60adb329e51ce7d877de68c1386f904ef0f717c82a5bfb69ab18438a4e536a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c7f2ade3aac502f0534414554216096b45d4f78e81f8ec213064a6205efdbd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c7f2ade3aac502f0534414554216096b45d4f78e81f8ec213064a6205efdbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T13:10:43Z\\\",\\\"message\\\":\\\"file observer\\\\nW0312 13:10:42.840582 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 13:10:42.841010 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 13:10:42.843036 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-461564172/tls.crt::/tmp/serving-cert-461564172/tls.key\\\\\\\"\\\\nI0312 13:10:43.350873 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 13:10:43.364662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 13:10:43.364721 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 13:10:43.365498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 13:10:43.365555 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 13:10:43.376143 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 13:10:43.376224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 13:10:43.376255 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 13:10:43.376279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 13:10:43.376301 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 13:10:43.376324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 13:10:43.376350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 13:10:43.376614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 13:10:43.379532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfb81ab3f0178dc8064bd278e9e5cc42b3b2fda7282bb869d2f385b423e57d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e64aa9b1a15198d88b5f38b8ad0abdeef89430869b6f25c73e2f45806c539964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64
aa9b1a15198d88b5f38b8ad0abdeef89430869b6f25c73e2f45806c539964\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:06Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.001620 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.001652 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.001662 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.001675 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.001683 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:07Z","lastTransitionTime":"2026-03-12T13:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.009827 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7250fa81a99a607a87f5f2fe4a85fcc2afbf7bfc12845bec4b5cbbed7784c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1325a4d5784843f5ef2dc629d3410d154ea07de5cd70cfae54d7111fe3e1ea3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78da788cc1d96e866afcf18edff24c064972e022b1b4c3f5bec3175da5e989fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2818aadc56c24df41309aa63fddf44dac870f041f206d59db4d8b8f88f728483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d8f2c481e6f3f8845b5e195a7fde8ae6415d25fb1701c25d93be7af1f4ef8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f94192e4eee991a699c32f338a8d452d8fc0ce5dfa1197694a237697c4500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5522a1e716ec4cabcf4776dde35c17f9f0a89250cd85474f5a7f94fe8943b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bc4107e3fb5708a2acf2664ce876af4c682377a9b4ee393230b78d5b021552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bcc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:07Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.022601 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fhcz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5da98f94c85e3a8cd05c447fb097a078968eea25419a2b22f8abe956ef1dbaac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-762lp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":
\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fhcz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:07Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.036628 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sww7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de004a2f-3061-4aae-aa57-389219c71023\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478fb92ee4748af677ac761928a4173b506a3e56cf622279e2b2a0e322d4aef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d377b0d5d0a854761257d7bc21a111aed96f85d302bf0c024e021f04cc555fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sww7j\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:07Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.047600 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dfhs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfefcab6-a931-413e-8763-0f63f17911cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eecca419cc264b25f1883aac864cc545f0daf973e3b288bc8ea00a8b91e1f124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\
\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssbrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dfhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:07Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.081331 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:07Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.104598 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.104625 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.104634 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 
13:11:07.104649 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.104658 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:07Z","lastTransitionTime":"2026-03-12T13:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.208585 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.208663 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.208686 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.208711 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.208728 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:07Z","lastTransitionTime":"2026-03-12T13:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.311348 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.311401 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.311417 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.311437 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.311450 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:07Z","lastTransitionTime":"2026-03-12T13:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.413782 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.413819 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.413832 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.413848 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.413860 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:07Z","lastTransitionTime":"2026-03-12T13:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.515810 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.515840 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.515850 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.515863 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.515871 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:07Z","lastTransitionTime":"2026-03-12T13:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.625537 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.625597 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.625607 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.625628 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.625640 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:07Z","lastTransitionTime":"2026-03-12T13:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.663399 4778 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.675667 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rsshp" event={"ID":"8a1f8eaa-ac07-4478-be5d-0742de6b43c4","Type":"ContainerStarted","Data":"f732882ddde9d0d0c1d1ef218276d4e14df3a1b36e4e956912efef4873092b28"} Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.699872 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4059dae21c8267dcec17364a3073a0f25addb6c308620992e9e609b5f5a32e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T
13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7ffc17b778f7bd099f0cc70b4e8bcfd77f9d45a9a47de9fedbe270a49f2826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:07Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.718264 4778 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa219bcd71a6f1ae8a889a0409c2bbf61d1efac6a57ad8a22fefe6915e9d15be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:07Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.728141 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.728201 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.728226 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.728242 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.728253 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:07Z","lastTransitionTime":"2026-03-12T13:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.738334 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:07Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.758296 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb7a47e7099405d73886322b00b013bedee4fb573fa60c9b92d6be3311e65c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:07Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.777494 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rz9vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b59b25a-3acc-4d06-b91d-575f45463520\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rz9vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:07Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:07 crc 
kubenswrapper[4778]: I0312 13:11:07.791828 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d15dec8c-5c3e-4103-a5b1-6ee7ff5990ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f640289dea724d5668fc009d628345ea104b2bbc9bc3471e42c3ec5f9acada1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc7259359df220c534d265305ee3ca44e7bcdce8da0d8b164132e02f7ed72e51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2d60adb329e51ce7d877de68c1386f904ef0f717c82a5bfb69ab18438a4e536a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c7f2ade3aac502f0534414554216096b45d4f78e81f8ec213064a6205efdbd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c7f2ade3aac502f0534414554216096b45d4f78e81f8ec213064a6205efdbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T13:10:43Z\\\",\\\"message\\\":\\\"file observer\\\\nW0312 13:10:42.840582 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 13:10:42.841010 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 13:10:42.843036 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-461564172/tls.crt::/tmp/serving-cert-461564172/tls.key\\\\\\\"\\\\nI0312 13:10:43.350873 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 13:10:43.364662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 13:10:43.364721 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 13:10:43.365498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 13:10:43.365555 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 13:10:43.376143 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 13:10:43.376224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 13:10:43.376255 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 13:10:43.376279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 13:10:43.376301 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 13:10:43.376324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 13:10:43.376350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 13:10:43.376614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 13:10:43.379532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfb81ab3f0178dc8064bd278e9e5cc42b3b2fda7282bb869d2f385b423e57d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e64aa9b1a15198d88b5f38b8ad0abdeef89430869b6f25c73e2f45806c539964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64aa9b1a15198d88b5f38b8ad0abdeef89430869b6f25c73e2f45806c539964\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:07Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.810661 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7250fa81a99a607a87f5f2fe4a85fcc2afbf7bfc12845bec4b5cbbed7784c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1325a4d5784843f5ef2dc629d3410d154ea07de5cd70cfae54d7111fe3e1ea3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78da788cc1d96e866afcf18edff24c064972e022b1b4c3f5bec3175da5e989fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2818aadc56c24df41309aa63fddf44dac870f041f206d59db4d8b8f88f728483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d8f2c481e6f3f8845b5e195a7fde8ae6415d25fb1701c25d93be7af1f4ef8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f94192e4eee991a699c32f338a8d452d8fc0ce5dfa1197694a237697c4500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5522a1e716ec4cabcf4776dde35c17f9f0a89250cd85474f5a7f94fe8943b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bc4107e3fb5708a2acf2664ce876af4c682377a9b4ee393230b78d5b021552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bcc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:07Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.821947 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qdxm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7050ddd9-aa01-4af7-9046-208f85f50a86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9af31ab4c27bb06d5a44a1c279e04f1b6f243054e271214ef771db4f0dc65e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jspwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qdxm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:07Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.830662 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.830714 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.830733 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.830786 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.830809 4778 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:07Z","lastTransitionTime":"2026-03-12T13:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.836745 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rsshp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a1f8eaa-ac07-4478-be5d-0742de6b43c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f732882ddde9d0d0c1d1ef218276d4e14df3a1b36e4e956912efef4873092b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deca991c71450137550fdb82a01b81aaa63e6be64a6d7a96438f6b3d83a8bb5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://deca991c71450137550fdb82a01b81aaa63e6be64a6d7a96438f6b3d83a8bb5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae3ee2ab1f6fdf579b28f6ddf2010f9ac048dec5c7668dc467152185d4e1028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ae3ee2ab1f6fdf579b28f6ddf2010f9ac048dec5c7668dc467152185d4e1028\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b22a0b8a6e5c59e8195280cbe1579af847c709f8b6245df5a16df5af602f11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b22a0b8a6e5c59e8195280cbe1579af847c709f8b6245df5a16df5af602f11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://253b2ec5086a2db48bb42ae6024bab9ca832325f9d96cd6ff6944ded362161e1\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://253b2ec5086a2db48bb42ae6024bab9ca832325f9d96cd6ff6944ded362161e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cb8f9537926237c4932ef2a9fb701804e03e132f2f56dd9d0e928b7340b1eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cb8f9537926237c4932ef2a9fb701804e03e132f2f56dd9d0e928b7340b1eeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rsshp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:07Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.848857 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:07Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.861549 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fhcz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5da98f94c85e3a8cd05c447fb097a078968eea25419a2b22f8abe956ef1dbaac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-762lp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fhcz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:07Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.873669 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sww7j" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de004a2f-3061-4aae-aa57-389219c71023\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478fb92ee4748af677ac761928a4173b506a3e56cf622279e2b2a0e322d4aef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d377b0d5d
0a854761257d7bc21a111aed96f85d302bf0c024e021f04cc555fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sww7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:07Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.884387 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dfhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfefcab6-a931-413e-8763-0f63f17911cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eecca419cc264b25f1883aac864cc545f0daf973e3b288bc8ea00a8b91e1f124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssbrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dfhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:07Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.899406 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cae155c-6ba6-44c1-9814-759fda7c3c86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7698145a8f9a3b12ca021d55f406bc6adf7e139c7e32156ced11a20de194608c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a
42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ddeb961084ae4041feb2ac05c9fdd2f5c11b4bdc5f5f33878c9ad9e83a2e1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ddeb961084ae4041feb2ac05c9fdd2f5c11b4bdc5f5f33878c9ad9e83a2e1a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:07Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:07 crc 
kubenswrapper[4778]: I0312 13:11:07.920748 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:07Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.933090 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.933122 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.933129 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.933142 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.933152 4778 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:07Z","lastTransitionTime":"2026-03-12T13:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:07 crc kubenswrapper[4778]: I0312 13:11:07.936358 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24438fc6-dab0-4a9e-8b97-2532da76d9cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19a882eebff25a2613c68847fcf737648da24f5c8d7648edebb2cb00b6b8950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14daba92184fca91c6930d5b3e821f88408e0fd40a7793f2d70f82df7c9444ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2qx88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-03-12T13:11:07Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.035386 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.035441 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.035452 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.035467 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.035476 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:08Z","lastTransitionTime":"2026-03-12T13:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.138421 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.138466 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.138480 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.138499 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.138512 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:08Z","lastTransitionTime":"2026-03-12T13:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.241149 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.241196 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.241206 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.241218 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.241227 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:08Z","lastTransitionTime":"2026-03-12T13:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.253552 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.253552 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.253605 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:11:08 crc kubenswrapper[4778]: E0312 13:11:08.253710 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.253751 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rz9vw" Mar 12 13:11:08 crc kubenswrapper[4778]: E0312 13:11:08.253842 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 13:11:08 crc kubenswrapper[4778]: E0312 13:11:08.253935 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rz9vw" podUID="0b59b25a-3acc-4d06-b91d-575f45463520" Mar 12 13:11:08 crc kubenswrapper[4778]: E0312 13:11:08.254040 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.343383 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.343419 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.343428 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.343466 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.343482 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:08Z","lastTransitionTime":"2026-03-12T13:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.445954 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.445980 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.445987 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.446000 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.446008 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:08Z","lastTransitionTime":"2026-03-12T13:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.548812 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.548863 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.548878 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.548899 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.548917 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:08Z","lastTransitionTime":"2026-03-12T13:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.651196 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.651237 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.651249 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.651268 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.651279 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:08Z","lastTransitionTime":"2026-03-12T13:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.682475 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8bcc9_65cd795e-eb6e-4995-a4c1-9dea6f425ac5/ovnkube-controller/0.log" Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.687079 4778 generic.go:334] "Generic (PLEG): container finished" podID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" containerID="a5522a1e716ec4cabcf4776dde35c17f9f0a89250cd85474f5a7f94fe8943b1e" exitCode=1 Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.687146 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" event={"ID":"65cd795e-eb6e-4995-a4c1-9dea6f425ac5","Type":"ContainerDied","Data":"a5522a1e716ec4cabcf4776dde35c17f9f0a89250cd85474f5a7f94fe8943b1e"} Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.688517 4778 scope.go:117] "RemoveContainer" containerID="a5522a1e716ec4cabcf4776dde35c17f9f0a89250cd85474f5a7f94fe8943b1e" Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.706402 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sww7j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de004a2f-3061-4aae-aa57-389219c71023\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478fb92ee4748af677ac761928a4173b506a3e56cf622279e2b2a0e322d4aef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d377b0d5d0a854761257d7bc21a111aed96f8
5d302bf0c024e021f04cc555fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sww7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:08Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.726021 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dfhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfefcab6-a931-413e-8763-0f63f17911cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eecca419cc264b25f1883aac864cc545f0daf973e3b288bc8ea00a8b91e1f124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssbrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dfhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:08Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.752034 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:08Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.754314 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.754360 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.754376 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.754398 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.754413 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:08Z","lastTransitionTime":"2026-03-12T13:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.767381 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fhcz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5da98f94c85e3a8cd05c447fb097a078968eea25419a2b22f8abe956ef1dbaac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-762lp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fhcz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:08Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.783498 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:08Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.794531 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24438fc6-dab0-4a9e-8b97-2532da76d9cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19a882eebff25a2613c68847fcf737648da24f5c8d7648edebb2cb00b6b8950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14daba92184fca91c6930d5b3e821f88408e0fd4
0a7793f2d70f82df7c9444ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2qx88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:08Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.805448 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cae155c-6ba6-44c1-9814-759fda7c3c86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7698145a8f9a3b12ca021d55f406bc6adf7e139c7e32156ced11a20de194608c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ddeb961084ae4041feb2ac05c9fdd2f5c11b4bdc5f5f33878c9ad9e83a2e1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ddeb961084ae4041feb2ac05c9fdd2f5c11b4bdc5f5f33878c9ad9e83a2e1a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:08Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.817540 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb7a47e7099405d73886322b00b013bedee4fb573fa60c9b92d6be3311e65c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:08Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.829575 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rz9vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b59b25a-3acc-4d06-b91d-575f45463520\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rz9vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:08Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:08 crc 
kubenswrapper[4778]: I0312 13:11:08.847826 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4059dae21c8267dcec17364a3073a0f25addb6c308620992e9e609b5f5a32e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7ffc17b778f7bd099f0cc70b4e8bcfd77f9d45a9a47de9fedbe270a49f2826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:08Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.856028 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.856057 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.856067 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.856080 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 
13:11:08.856089 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:08Z","lastTransitionTime":"2026-03-12T13:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.862551 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa219bcd71a6f1ae8a889a0409c2bbf61d1efac6a57ad8a22fefe6915e9d15be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\
\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:08Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.876386 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:08Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.894291 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rsshp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a1f8eaa-ac07-4478-be5d-0742de6b43c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f732882ddde9d0d0c1d1ef218276d4e14df3a1b36e4e956912efef4873092b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deca991c71450137550fdb82a01b81aaa63e6be64a6d7a96438f6b3d83a8bb5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://deca991c71450137550fdb82a01b81aaa63e6be64a6d7a96438f6b3d83a8bb5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae3ee2ab1f6fdf579b28f6ddf2010f9ac048dec5c7668dc467152185d4e1028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ae3ee2ab1f6fdf579b28f6ddf2010f9ac048dec5c7668dc467152185d4e1028\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b22a
0b8a6e5c59e8195280cbe1579af847c709f8b6245df5a16df5af602f11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b22a0b8a6e5c59e8195280cbe1579af847c709f8b6245df5a16df5af602f11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://253b2ec5086a2db48bb42ae6024bab9ca832325f9d96cd6ff6944ded362161e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://253b2ec5086a2db48bb42ae6024bab9ca832325f9d96cd6ff6944ded362161e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cb8f9537926237c4932ef2a9fb701804e03e132f2f56dd9d0e928b7340b1eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cb8f9537926237c4932ef2a9fb701804e03e132f2f56dd9d0e928b7340b1eeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rsshp\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:08Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.906743 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d15dec8c-5c3e-4103-a5b1-6ee7ff5990ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f640289dea724d5668fc009d628345ea104b2bbc9bc3471e42c3ec5f9acada1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc7259359df220c534d265305ee3ca44e7bcdce8da0d8b164132e02f7ed72e51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2d60adb329e51ce7d877de68c1386f904ef0f717c82a5bfb69ab18438a4e536a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c7f2ade3aac502f0534414554216096b45d4f78e81f8ec213064a6205efdbd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c7f2ade3aac502f0534414554216096b45d4f78e81f8ec213064a6205efdbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T13:10:43Z\\\",\\\"message\\\":\\\"file observer\\\\nW0312 13:10:42.840582 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 13:10:42.841010 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 13:10:42.843036 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-461564172/tls.crt::/tmp/serving-cert-461564172/tls.key\\\\\\\"\\\\nI0312 13:10:43.350873 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 13:10:43.364662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 13:10:43.364721 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 13:10:43.365498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 13:10:43.365555 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 13:10:43.376143 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 13:10:43.376224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 13:10:43.376255 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 13:10:43.376279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 13:10:43.376301 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 13:10:43.376324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 13:10:43.376350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 13:10:43.376614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 13:10:43.379532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfb81ab3f0178dc8064bd278e9e5cc42b3b2fda7282bb869d2f385b423e57d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e64aa9b1a15198d88b5f38b8ad0abdeef89430869b6f25c73e2f45806c539964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64aa9b1a15198d88b5f38b8ad0abdeef89430869b6f25c73e2f45806c539964\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:08Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.931476 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7250fa81a99a607a87f5f2fe4a85fcc2afbf7bfc12845bec4b5cbbed7784c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1325a4d5784843f5ef2dc629d3410d154ea07de5cd70cfae54d7111fe3e1ea3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78da788cc1d96e866afcf18edff24c064972e022b1b4c3f5bec3175da5e989fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2818aadc56c24df41309aa63fddf44dac870f041f206d59db4d8b8f88f728483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d8f2c481e6f3f8845b5e195a7fde8ae6415d25fb1701c25d93be7af1f4ef8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f94192e4eee991a699c32f338a8d452d8fc0ce5dfa1197694a237697c4500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5522a1e716ec4cabcf4776dde35c17f9f0a89250cd85474f5a7f94fe8943b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5522a1e716ec4cabcf4776dde35c17f9f0a89250cd85474f5a7f94fe8943b1e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T13:11:08Z\\\",\\\"message\\\":\\\"from k8s.io/client-go/informers/factory.go:160\\\\nI0312 13:11:08.298033 6399 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0312 13:11:08.299002 6399 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0312 13:11:08.299030 6399 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0312 13:11:08.299060 6399 handler.go:208] Removed *v1.Node event handler 2\\\\nI0312 13:11:08.299080 6399 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0312 13:11:08.299101 6399 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0312 13:11:08.299110 6399 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0312 13:11:08.299130 6399 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0312 13:11:08.299148 6399 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0312 13:11:08.299157 6399 handler.go:208] Removed *v1.Node event handler 7\\\\nI0312 13:11:08.299161 6399 factory.go:656] Stopping watch factory\\\\nI0312 13:11:08.299172 6399 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0312 13:11:08.299205 6399 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0312 13:11:08.299247 6399 handler.go:208] Removed *v1.Pod 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bc4107e3fb5708a2acf2664ce876af4c682377a9b4ee393230b78d5b021552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bcc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:08Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.945533 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qdxm2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7050ddd9-aa01-4af7-9046-208f85f50a86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9af31ab4c27bb06d5a44a1c279e04f1b6f243054e271214ef771db4f0dc65e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jspwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qdxm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:08Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.959314 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.959392 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.959413 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.959440 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:08 crc kubenswrapper[4778]: I0312 13:11:08.959457 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:08Z","lastTransitionTime":"2026-03-12T13:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.062426 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.062468 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.062480 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.062500 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.062514 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:09Z","lastTransitionTime":"2026-03-12T13:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.165400 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.165464 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.165477 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.165494 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.165506 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:09Z","lastTransitionTime":"2026-03-12T13:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.281288 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.281325 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.281333 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.281346 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.281355 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:09Z","lastTransitionTime":"2026-03-12T13:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.384459 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.384504 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.384515 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.384533 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.384545 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:09Z","lastTransitionTime":"2026-03-12T13:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.486057 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.486107 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.486118 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.486133 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.486143 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:09Z","lastTransitionTime":"2026-03-12T13:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.587994 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.588045 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.588053 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.588067 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.588076 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:09Z","lastTransitionTime":"2026-03-12T13:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.690169 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.690219 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.690232 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.690249 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.690262 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:09Z","lastTransitionTime":"2026-03-12T13:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.692295 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8bcc9_65cd795e-eb6e-4995-a4c1-9dea6f425ac5/ovnkube-controller/1.log" Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.693158 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8bcc9_65cd795e-eb6e-4995-a4c1-9dea6f425ac5/ovnkube-controller/0.log" Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.696744 4778 generic.go:334] "Generic (PLEG): container finished" podID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" containerID="3fa32715eae6ff23b04c9b3865505ff2ed911d459033b9d6912866b5df2f8d22" exitCode=1 Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.696863 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" event={"ID":"65cd795e-eb6e-4995-a4c1-9dea6f425ac5","Type":"ContainerDied","Data":"3fa32715eae6ff23b04c9b3865505ff2ed911d459033b9d6912866b5df2f8d22"} Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.696922 4778 scope.go:117] "RemoveContainer" containerID="a5522a1e716ec4cabcf4776dde35c17f9f0a89250cd85474f5a7f94fe8943b1e" Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.697404 4778 scope.go:117] "RemoveContainer" containerID="3fa32715eae6ff23b04c9b3865505ff2ed911d459033b9d6912866b5df2f8d22" Mar 12 13:11:09 crc kubenswrapper[4778]: E0312 13:11:09.697629 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-8bcc9_openshift-ovn-kubernetes(65cd795e-eb6e-4995-a4c1-9dea6f425ac5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" podUID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.717608 4778 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/multus-additional-cni-plugins-rsshp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a1f8eaa-ac07-4478-be5d-0742de6b43c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f732882ddde9d0d0c1d1ef218276d4e14df3a1b36e4e956912efef4873092b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deca991c71450137550fdb82a01b81aaa63e6be64a6d7a96438f6b3d83a8bb5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://deca991c71450137550fdb82a01b81aaa63e6be64a6d7a96438f6b3d83a8bb5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03
-12T13:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae3ee2ab1f6fdf579b28f6ddf2010f9ac048dec5c7668dc467152185d4e1028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ae3ee2ab1f6fdf579b28f6ddf2010f9ac048dec5c7668dc467152185d4e1028\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b22a0b8a6e5c59e8195280cbe1579af847c709f8b6245df5a16df5af602f11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b22a0b8a6e5c59e8195280cbe1579af847c709f8b6245df5a16df5af602f11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://253b2ec5086a2db48bb42ae6024bab9ca832325f9d96cd6ff6944ded362161e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://253b2ec5086a2db48bb42ae6024bab9ca832325f9d96cd6ff6944d
ed362161e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cb8f9537926237c4932ef2a9fb701804e03e132f2f56dd9d0e928b7340b1eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cb8f9537926237c4932ef2a9fb701804e03e132f2f56dd9d0e928b7340b1eeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for 
pod \"openshift-multus\"/\"multus-additional-cni-plugins-rsshp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:09Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.735811 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d15dec8c-5c3e-4103-a5b1-6ee7ff5990ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f640289dea724d5668fc009d628345ea104b2bbc9bc3471e42c3ec5f9acada1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc7259359df220c534d265305ee3ca44e7bcdce8da0d8b164132e02f7ed72e51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2d60adb329e51ce7d877de68c1386f904ef0f717c82a5bfb69ab18438a4e536a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c7f2ade3aac502f0534414554216096b45d4f78e81f8ec213064a6205efdbd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c7f2ade3aac502f0534414554216096b45d4f78e81f8ec213064a6205efdbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T13:10:43Z\\\",\\\"message\\\":\\\"file observer\\\\nW0312 13:10:42.840582 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 13:10:42.841010 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 13:10:42.843036 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-461564172/tls.crt::/tmp/serving-cert-461564172/tls.key\\\\\\\"\\\\nI0312 13:10:43.350873 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 13:10:43.364662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 13:10:43.364721 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 13:10:43.365498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 13:10:43.365555 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 13:10:43.376143 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 13:10:43.376224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 13:10:43.376255 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 13:10:43.376279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 13:10:43.376301 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 13:10:43.376324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 13:10:43.376350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 13:10:43.376614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 13:10:43.379532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfb81ab3f0178dc8064bd278e9e5cc42b3b2fda7282bb869d2f385b423e57d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e64aa9b1a15198d88b5f38b8ad0abdeef89430869b6f25c73e2f45806c539964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64aa9b1a15198d88b5f38b8ad0abdeef89430869b6f25c73e2f45806c539964\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:09Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.754713 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7250fa81a99a607a87f5f2fe4a85fcc2afbf7bfc12845bec4b5cbbed7784c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1325a4d5784843f5ef2dc629d3410d154ea07de5cd70cfae54d7111fe3e1ea3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78da788cc1d96e866afcf18edff24c064972e022b1b4c3f5bec3175da5e989fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2818aadc56c24df41309aa63fddf44dac870f041f206d59db4d8b8f88f728483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d8f2c481e6f3f8845b5e195a7fde8ae6415d25fb1701c25d93be7af1f4ef8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f94192e4eee991a699c32f338a8d452d8fc0ce5dfa1197694a237697c4500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fa32715eae6ff23b04c9b3865505ff2ed911d459033b9d6912866b5df2f8d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5522a1e716ec4cabcf4776dde35c17f9f0a89250cd85474f5a7f94fe8943b1e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T13:11:08Z\\\",\\\"message\\\":\\\"from k8s.io/client-go/informers/factory.go:160\\\\nI0312 13:11:08.298033 6399 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0312 13:11:08.299002 6399 handler.go:190] Sending *v1.Node event handler 2 
for removal\\\\nI0312 13:11:08.299030 6399 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0312 13:11:08.299060 6399 handler.go:208] Removed *v1.Node event handler 2\\\\nI0312 13:11:08.299080 6399 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0312 13:11:08.299101 6399 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0312 13:11:08.299110 6399 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0312 13:11:08.299130 6399 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0312 13:11:08.299148 6399 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0312 13:11:08.299157 6399 handler.go:208] Removed *v1.Node event handler 7\\\\nI0312 13:11:08.299161 6399 factory.go:656] Stopping watch factory\\\\nI0312 13:11:08.299172 6399 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0312 13:11:08.299205 6399 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0312 13:11:08.299247 6399 handler.go:208] Removed *v1.Pod ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fa32715eae6ff23b04c9b3865505ff2ed911d459033b9d6912866b5df2f8d22\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T13:11:09Z\\\",\\\"message\\\":\\\"nil\\\\u003e UUID: UUIDName:}]\\\\nI0312 13:11:09.567525 6707 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.92 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {73135118-cf1b-4568-bd31-2f50308bf69d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0312 13:11:09.567216 6707 ovnkube.go:137] failed to run ovnkube: 
[failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:09Z is after 2025-08-24T17:21:41Z]\\\\nI0312 13:11:09.567573 6707 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Lo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"
/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bc4107e3fb5708a2acf2664ce876af4c682377a9b4ee393230b78d5b021552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bcc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:09Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.765248 4778 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-qdxm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7050ddd9-aa01-4af7-9046-208f85f50a86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9af31ab4c27bb06d5a44a1c279e04f1b6f243054e271214ef771db4f0dc65e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jspwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.1
68.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qdxm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:09Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.775014 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sww7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de004a2f-3061-4aae-aa57-389219c71023\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478fb92ee4748af677ac761928a4173b506a3e56cf622279e2b2a0e322d4aef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d377b0d5d0a854761257d7bc21a111aed96f85d302bf0c024e021f04cc555fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sww7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:09Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.785425 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dfhs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfefcab6-a931-413e-8763-0f63f17911cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eecca419cc264b25f1883aac864cc545f0daf973e3b288bc8ea00a8b91e1f124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssbrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dfhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:09Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.792519 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.792551 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.792564 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.792579 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.792590 4778 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:09Z","lastTransitionTime":"2026-03-12T13:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.798351 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:09Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.813002 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fhcz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5da98f94c85e3a8cd05c447fb097a078968eea25419a2b22f8abe956ef1dbaac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-762lp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fhcz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:09Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.826500 4778 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:09Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.840022 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24438fc6-dab0-4a9e-8b97-2532da76d9cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19a882eebff25a2613c68847fcf737648da24f5c8d7648edebb2cb00b6b8950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14daba92184fca91c6930d5b3e821f88408e0fd4
0a7793f2d70f82df7c9444ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2qx88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:09Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.849439 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cae155c-6ba6-44c1-9814-759fda7c3c86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7698145a8f9a3b12ca021d55f406bc6adf7e139c7e32156ced11a20de194608c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ddeb961084ae4041feb2ac05c9fdd2f5c11b4bdc5f5f33878c9ad9e83a2e1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ddeb961084ae4041feb2ac05c9fdd2f5c11b4bdc5f5f33878c9ad9e83a2e1a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:09Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.863162 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb7a47e7099405d73886322b00b013bedee4fb573fa60c9b92d6be3311e65c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:09Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.873387 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rz9vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b59b25a-3acc-4d06-b91d-575f45463520\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rz9vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:09Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:09 crc 
kubenswrapper[4778]: I0312 13:11:09.885503 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4059dae21c8267dcec17364a3073a0f25addb6c308620992e9e609b5f5a32e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7ffc17b778f7bd099f0cc70b4e8bcfd77f9d45a9a47de9fedbe270a49f2826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:09Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.894102 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.894126 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.894134 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.894148 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 
13:11:09.894157 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:09Z","lastTransitionTime":"2026-03-12T13:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.896747 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa219bcd71a6f1ae8a889a0409c2bbf61d1efac6a57ad8a22fefe6915e9d15be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\
\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:09Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.907775 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:09Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.996437 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.996498 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.996517 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:09 crc 
kubenswrapper[4778]: I0312 13:11:09.996556 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:09 crc kubenswrapper[4778]: I0312 13:11:09.996592 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:09Z","lastTransitionTime":"2026-03-12T13:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.099662 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.099734 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.099757 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.099785 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.099808 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:10Z","lastTransitionTime":"2026-03-12T13:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.201856 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.201883 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.201890 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.201903 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.201912 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:10Z","lastTransitionTime":"2026-03-12T13:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.253155 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.253214 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rz9vw" Mar 12 13:11:10 crc kubenswrapper[4778]: E0312 13:11:10.253331 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.253355 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 13:11:10 crc kubenswrapper[4778]: E0312 13:11:10.253479 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 13:11:10 crc kubenswrapper[4778]: E0312 13:11:10.253578 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rz9vw" podUID="0b59b25a-3acc-4d06-b91d-575f45463520" Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.253793 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 13:11:10 crc kubenswrapper[4778]: E0312 13:11:10.253912 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.304028 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.304075 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.304086 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.304107 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.304119 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:10Z","lastTransitionTime":"2026-03-12T13:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.407272 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.407314 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.407322 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.407350 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.407360 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:10Z","lastTransitionTime":"2026-03-12T13:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.510108 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.510170 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.510202 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.510225 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.510240 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:10Z","lastTransitionTime":"2026-03-12T13:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.612578 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.612619 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.612631 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.612648 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.612660 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:10Z","lastTransitionTime":"2026-03-12T13:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.703350 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8bcc9_65cd795e-eb6e-4995-a4c1-9dea6f425ac5/ovnkube-controller/1.log" Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.708796 4778 scope.go:117] "RemoveContainer" containerID="3fa32715eae6ff23b04c9b3865505ff2ed911d459033b9d6912866b5df2f8d22" Mar 12 13:11:10 crc kubenswrapper[4778]: E0312 13:11:10.709062 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-8bcc9_openshift-ovn-kubernetes(65cd795e-eb6e-4995-a4c1-9dea6f425ac5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" podUID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.714727 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.714797 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.714811 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.714835 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.714850 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:10Z","lastTransitionTime":"2026-03-12T13:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.723353 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sww7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de004a2f-3061-4aae-aa57-389219c71023\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478fb92ee4748af677ac761928a4173b506a3e56cf622279e2b2a0e322d4aef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d377b0d5d0a854761257d7bc21a111aed96f85d302bf0c024e021f04cc555fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sww7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:10Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.740047 4778 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-4dfhs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfefcab6-a931-413e-8763-0f63f17911cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eecca419cc264b25f1883aac864cc545f0daf973e3b288bc8ea00a8b91e1f124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssbrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dfhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:10Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.784836 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:10Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.818152 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.818225 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.818241 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.818264 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.818280 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:10Z","lastTransitionTime":"2026-03-12T13:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.820758 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fhcz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5da98f94c85e3a8cd05c447fb097a078968eea25419a2b22f8abe956ef1dbaac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-
binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-762lp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fhcz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:10Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.845376 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:10Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.856805 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24438fc6-dab0-4a9e-8b97-2532da76d9cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19a882eebff25a2613c68847fcf737648da24f5c8d7648edebb2cb00b6b8950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14daba92184fca91c6930d5b3e821f88408e0fd4
0a7793f2d70f82df7c9444ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2qx88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:10Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.867035 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cae155c-6ba6-44c1-9814-759fda7c3c86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7698145a8f9a3b12ca021d55f406bc6adf7e139c7e32156ced11a20de194608c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ddeb961084ae4041feb2ac05c9fdd2f5c11b4bdc5f5f33878c9ad9e83a2e1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ddeb961084ae4041feb2ac05c9fdd2f5c11b4bdc5f5f33878c9ad9e83a2e1a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:10Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.878502 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb7a47e7099405d73886322b00b013bedee4fb573fa60c9b92d6be3311e65c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:10Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.888272 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rz9vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b59b25a-3acc-4d06-b91d-575f45463520\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rz9vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:10Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:10 crc 
kubenswrapper[4778]: I0312 13:11:10.911813 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4059dae21c8267dcec17364a3073a0f25addb6c308620992e9e609b5f5a32e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7ffc17b778f7bd099f0cc70b4e8bcfd77f9d45a9a47de9fedbe270a49f2826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:10Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.920815 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.920853 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.920861 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.920875 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 
13:11:10.920885 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:10Z","lastTransitionTime":"2026-03-12T13:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.934288 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa219bcd71a6f1ae8a889a0409c2bbf61d1efac6a57ad8a22fefe6915e9d15be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\
\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:10Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.946135 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:10Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.967223 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rsshp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a1f8eaa-ac07-4478-be5d-0742de6b43c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f732882ddde9d0d0c1d1ef218276d4e14df3a1b36e4e956912efef4873092b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deca991c71450137550fdb82a01b81aaa63e6be64a6d7a96438f6b3d83a8bb5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://deca991c71450137550fdb82a01b81aaa63e6be64a6d7a96438f6b3d83a8bb5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae3ee2ab1f6fdf579b28f6ddf2010f9ac048dec5c7668dc467152185d4e1028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ae3ee2ab1f6fdf579b28f6ddf2010f9ac048dec5c7668dc467152185d4e1028\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b22a
0b8a6e5c59e8195280cbe1579af847c709f8b6245df5a16df5af602f11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b22a0b8a6e5c59e8195280cbe1579af847c709f8b6245df5a16df5af602f11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://253b2ec5086a2db48bb42ae6024bab9ca832325f9d96cd6ff6944ded362161e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://253b2ec5086a2db48bb42ae6024bab9ca832325f9d96cd6ff6944ded362161e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cb8f9537926237c4932ef2a9fb701804e03e132f2f56dd9d0e928b7340b1eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cb8f9537926237c4932ef2a9fb701804e03e132f2f56dd9d0e928b7340b1eeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rsshp\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:10Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:10 crc kubenswrapper[4778]: I0312 13:11:10.988686 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d15dec8c-5c3e-4103-a5b1-6ee7ff5990ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f640289dea724d5668fc009d628345ea104b2bbc9bc3471e42c3ec5f9acada1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc7259359df220c534d265305ee3ca44e7bcdce8da0d8b164132e02f7ed72e51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2d60adb329e51ce7d877de68c1386f904ef0f717c82a5bfb69ab18438a4e536a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c7f2ade3aac502f0534414554216096b45d4f78e81f8ec213064a6205efdbd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c7f2ade3aac502f0534414554216096b45d4f78e81f8ec213064a6205efdbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T13:10:43Z\\\",\\\"message\\\":\\\"file observer\\\\nW0312 13:10:42.840582 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 13:10:42.841010 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 13:10:42.843036 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-461564172/tls.crt::/tmp/serving-cert-461564172/tls.key\\\\\\\"\\\\nI0312 13:10:43.350873 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 13:10:43.364662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 13:10:43.364721 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 13:10:43.365498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 13:10:43.365555 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 13:10:43.376143 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 13:10:43.376224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 13:10:43.376255 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 13:10:43.376279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 13:10:43.376301 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 13:10:43.376324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 13:10:43.376350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 13:10:43.376614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 13:10:43.379532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfb81ab3f0178dc8064bd278e9e5cc42b3b2fda7282bb869d2f385b423e57d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e64aa9b1a15198d88b5f38b8ad0abdeef89430869b6f25c73e2f45806c539964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64aa9b1a15198d88b5f38b8ad0abdeef89430869b6f25c73e2f45806c539964\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:10Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:11 crc kubenswrapper[4778]: I0312 13:11:11.014219 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7250fa81a99a607a87f5f2fe4a85fcc2afbf7bfc12845bec4b5cbbed7784c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1325a4d5784843f5ef2dc629d3410d154ea07de5cd70cfae54d7111fe3e1ea3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78da788cc1d96e866afcf18edff24c064972e022b1b4c3f5bec3175da5e989fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2818aadc56c24df41309aa63fddf44dac870f041f206d59db4d8b8f88f728483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d8f2c481e6f3f8845b5e195a7fde8ae6415d25fb1701c25d93be7af1f4ef8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f94192e4eee991a699c32f338a8d452d8fc0ce5dfa1197694a237697c4500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fa32715eae6ff23b04c9b3865505ff2ed911d459033b9d6912866b5df2f8d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fa32715eae6ff23b04c9b3865505ff2ed911d459033b9d6912866b5df2f8d22\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T13:11:09Z\\\",\\\"message\\\":\\\"nil\\\\u003e UUID: UUIDName:}]\\\\nI0312 13:11:09.567525 6707 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.92 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{73135118-cf1b-4568-bd31-2f50308bf69d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0312 13:11:09.567216 6707 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:09Z is after 2025-08-24T17:21:41Z]\\\\nI0312 13:11:09.567573 6707 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Lo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8bcc9_openshift-ovn-kubernetes(65cd795e-eb6e-4995-a4c1-9dea6f425ac5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bc4107e3fb5708a2acf2664ce876af4c682377a9b4ee393230b78d5b021552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce861cdc0bd23c8ce1
b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bcc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:11Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:11 crc kubenswrapper[4778]: I0312 13:11:11.024010 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:11 crc kubenswrapper[4778]: I0312 13:11:11.024045 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:11 crc kubenswrapper[4778]: I0312 13:11:11.024055 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:11 crc kubenswrapper[4778]: I0312 13:11:11.024072 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:11 crc kubenswrapper[4778]: I0312 13:11:11.024083 4778 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:11Z","lastTransitionTime":"2026-03-12T13:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:11 crc kubenswrapper[4778]: I0312 13:11:11.028367 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qdxm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7050ddd9-aa01-4af7-9046-208f85f50a86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9af31ab4c27bb06d5a44a1c279e04f1b6f243054e271214ef771db4f0dc65e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jspwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qdxm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:11Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:11 crc kubenswrapper[4778]: I0312 13:11:11.127927 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:11 crc kubenswrapper[4778]: I0312 13:11:11.127969 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:11 crc kubenswrapper[4778]: I0312 13:11:11.127979 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:11 crc kubenswrapper[4778]: I0312 13:11:11.127995 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:11 crc kubenswrapper[4778]: I0312 13:11:11.128006 4778 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:11Z","lastTransitionTime":"2026-03-12T13:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:11 crc kubenswrapper[4778]: I0312 13:11:11.230122 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:11 crc kubenswrapper[4778]: I0312 13:11:11.230172 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:11 crc kubenswrapper[4778]: I0312 13:11:11.230200 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:11 crc kubenswrapper[4778]: I0312 13:11:11.230218 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:11 crc kubenswrapper[4778]: I0312 13:11:11.230230 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:11Z","lastTransitionTime":"2026-03-12T13:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:11 crc kubenswrapper[4778]: I0312 13:11:11.333073 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:11 crc kubenswrapper[4778]: I0312 13:11:11.333143 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:11 crc kubenswrapper[4778]: I0312 13:11:11.333161 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:11 crc kubenswrapper[4778]: I0312 13:11:11.333208 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:11 crc kubenswrapper[4778]: I0312 13:11:11.333225 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:11Z","lastTransitionTime":"2026-03-12T13:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:11 crc kubenswrapper[4778]: I0312 13:11:11.435963 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:11 crc kubenswrapper[4778]: I0312 13:11:11.436031 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:11 crc kubenswrapper[4778]: I0312 13:11:11.436057 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:11 crc kubenswrapper[4778]: I0312 13:11:11.436088 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:11 crc kubenswrapper[4778]: I0312 13:11:11.436113 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:11Z","lastTransitionTime":"2026-03-12T13:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:11 crc kubenswrapper[4778]: I0312 13:11:11.538971 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:11 crc kubenswrapper[4778]: I0312 13:11:11.539067 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:11 crc kubenswrapper[4778]: I0312 13:11:11.539082 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:11 crc kubenswrapper[4778]: I0312 13:11:11.539113 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:11 crc kubenswrapper[4778]: I0312 13:11:11.539129 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:11Z","lastTransitionTime":"2026-03-12T13:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:11 crc kubenswrapper[4778]: I0312 13:11:11.642079 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:11 crc kubenswrapper[4778]: I0312 13:11:11.642119 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:11 crc kubenswrapper[4778]: I0312 13:11:11.642130 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:11 crc kubenswrapper[4778]: I0312 13:11:11.642145 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:11 crc kubenswrapper[4778]: I0312 13:11:11.642158 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:11Z","lastTransitionTime":"2026-03-12T13:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:11 crc kubenswrapper[4778]: I0312 13:11:11.744380 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:11 crc kubenswrapper[4778]: I0312 13:11:11.744432 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:11 crc kubenswrapper[4778]: I0312 13:11:11.744442 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:11 crc kubenswrapper[4778]: I0312 13:11:11.744455 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:11 crc kubenswrapper[4778]: I0312 13:11:11.744465 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:11Z","lastTransitionTime":"2026-03-12T13:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:11 crc kubenswrapper[4778]: I0312 13:11:11.776739 4778 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 12 13:11:11 crc kubenswrapper[4778]: I0312 13:11:11.846168 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:11 crc kubenswrapper[4778]: I0312 13:11:11.846225 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:11 crc kubenswrapper[4778]: I0312 13:11:11.846237 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:11 crc kubenswrapper[4778]: I0312 13:11:11.846277 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:11 crc kubenswrapper[4778]: I0312 13:11:11.846287 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:11Z","lastTransitionTime":"2026-03-12T13:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:11 crc kubenswrapper[4778]: I0312 13:11:11.948571 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:11 crc kubenswrapper[4778]: I0312 13:11:11.948624 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:11 crc kubenswrapper[4778]: I0312 13:11:11.948638 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:11 crc kubenswrapper[4778]: I0312 13:11:11.948658 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:11 crc kubenswrapper[4778]: I0312 13:11:11.948673 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:11Z","lastTransitionTime":"2026-03-12T13:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.051452 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.051515 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.051528 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.051544 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.051555 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:12Z","lastTransitionTime":"2026-03-12T13:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.154251 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.154313 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.154331 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.154354 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.154375 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:12Z","lastTransitionTime":"2026-03-12T13:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.246771 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.246837 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.246850 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.246865 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.246877 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:12Z","lastTransitionTime":"2026-03-12T13:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.256672 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rz9vw" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.256786 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.256800 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:11:12 crc kubenswrapper[4778]: E0312 13:11:12.256891 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rz9vw" podUID="0b59b25a-3acc-4d06-b91d-575f45463520" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.256977 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 13:11:12 crc kubenswrapper[4778]: E0312 13:11:12.257001 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 13:11:12 crc kubenswrapper[4778]: E0312 13:11:12.257239 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 13:11:12 crc kubenswrapper[4778]: E0312 13:11:12.257220 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 13:11:12 crc kubenswrapper[4778]: E0312 13:11:12.267273 4778 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9825271f-f529-4477-b3b1-2a00dbf9b03e\\\",\\\"systemUUID\\\":\\\"65870ff3-f0f2-4ca4-b489-075d672e37ad\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:12Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.278127 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sww7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de004a2f-3061-4aae-aa57-389219c71023\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478fb92ee4748af67
7ac761928a4173b506a3e56cf622279e2b2a0e322d4aef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d377b0d5d0a854761257d7bc21a111aed96f85d302bf0c024e021f04cc555fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"pod
IP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sww7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:12Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.278372 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.278414 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.278430 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.278452 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.278614 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:12Z","lastTransitionTime":"2026-03-12T13:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.293378 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dfhs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfefcab6-a931-413e-8763-0f63f17911cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eecca419cc264b25f1883aac864cc545f0daf973e3b288bc8ea00a8b91e1f124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssbrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dfhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:12Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:12 crc kubenswrapper[4778]: E0312 13:11:12.298950 4778 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:12Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9825271f-f529-4477-b3b1-2a00dbf9b03e\\\",\\\"systemUUID\\\":\\\"65870ff3-f0f2-4ca4-b489-075d672e37ad\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:12Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.302876 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.302942 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.302962 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.302989 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.303006 4778 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:12Z","lastTransitionTime":"2026-03-12T13:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.325579 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:12Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:12 crc kubenswrapper[4778]: E0312 13:11:12.326006 4778 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:12Z\\\",\\\"message\\\":\\\"kubelet 
has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800
f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\
":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc300
5909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9825271f-f529-4477-b3b1-2a00dbf9b03e\\\",\\\"systemUUID\\\":\\\"65870ff3-f0f2-4ca4-b489-075d672e37ad\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:12Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.330713 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.330772 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.330795 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.330825 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.330862 4778 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:12Z","lastTransitionTime":"2026-03-12T13:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.349530 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fhcz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5da98f94c85e3a8cd05c447fb097a078968eea25419a2b22f8abe956ef1dbaac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-762lp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:
10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fhcz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:12Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:12 crc kubenswrapper[4778]: E0312 13:11:12.350274 4778 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9825271f-f529-4477-b3b1-2a00dbf9b03e\\\",\\\"systemUUID\\\":\\\"65870ff3-f0f2-4ca4-b489-075d672e37ad\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:12Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.354223 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.354274 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.354293 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.354316 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.354333 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:12Z","lastTransitionTime":"2026-03-12T13:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.367972 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:12Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:12 crc kubenswrapper[4778]: E0312 13:11:12.368093 4778 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9825271f-f529-4477-b3b1-2a00dbf9b03e\\\",\\\"systemUUID\\\":\\\"65870ff3-f0f2-4ca4-b489-075d672e37ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:12Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:12 crc kubenswrapper[4778]: E0312 13:11:12.368271 4778 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.370626 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.370693 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.370705 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.370722 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.370735 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:12Z","lastTransitionTime":"2026-03-12T13:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.385554 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24438fc6-dab0-4a9e-8b97-2532da76d9cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19a882eebff25a2613c68847fcf737648da24f5c8d7648edebb2cb00b6b8950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14daba92184fca91c6930d5b3e821f88408e0fd40a7793f2d70f82df7c9444ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2qx88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:12Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.396838 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cae155c-6ba6-44c1-9814-759fda7c3c86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7698145a8f9a3b12ca021d55f406bc6adf7e139c7e32156ced11a20de194608c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ddeb961084ae4041feb2ac05c9fdd2f5c11b4bdc5f5f33878c9ad9e83a2e1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ddeb961084ae4041feb2ac05c9fdd2f5c11b4bdc5f5f33878c9ad9e83a2e1a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:12Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.409992 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb7a47e7099405d73886322b00b013bedee4fb573fa60c9b92d6be3311e65c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:12Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.421753 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rz9vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b59b25a-3acc-4d06-b91d-575f45463520\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rz9vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:12Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:12 crc 
kubenswrapper[4778]: I0312 13:11:12.435687 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4059dae21c8267dcec17364a3073a0f25addb6c308620992e9e609b5f5a32e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7ffc17b778f7bd099f0cc70b4e8bcfd77f9d45a9a47de9fedbe270a49f2826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:12Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.446544 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa219bcd71a6f1ae8a889a0409c2bbf61d1efac6a57ad8a22fefe6915e9d15be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-12T13:11:12Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.457043 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:12Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.471846 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rsshp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a1f8eaa-ac07-4478-be5d-0742de6b43c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f732882ddde9d0d0c1d1ef218276d4e14df3a1b36e4e956912efef4873092b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deca991c71450137550fdb82a01b81aaa63e6be64a6d7a96438f6b3d83a8bb5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://deca991c71450137550fdb82a01b81aaa63e6be64a6d7a96438f6b3d83a8bb5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae3ee2ab1f6fdf579b28f6ddf2010f9ac048dec5c7668dc467152185d4e1028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ae3ee2ab1f6fdf579b28f6ddf2010f9ac048dec5c7668dc467152185d4e1028\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b22a
0b8a6e5c59e8195280cbe1579af847c709f8b6245df5a16df5af602f11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b22a0b8a6e5c59e8195280cbe1579af847c709f8b6245df5a16df5af602f11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://253b2ec5086a2db48bb42ae6024bab9ca832325f9d96cd6ff6944ded362161e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://253b2ec5086a2db48bb42ae6024bab9ca832325f9d96cd6ff6944ded362161e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cb8f9537926237c4932ef2a9fb701804e03e132f2f56dd9d0e928b7340b1eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cb8f9537926237c4932ef2a9fb701804e03e132f2f56dd9d0e928b7340b1eeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rsshp\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:12Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.473610 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.473658 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.473669 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.473684 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.473697 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:12Z","lastTransitionTime":"2026-03-12T13:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.485230 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d15dec8c-5c3e-4103-a5b1-6ee7ff5990ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f640289dea724d5668fc009d628345ea104b2bbc9bc3471e42c3ec5f9acada1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc7259359df220c534d265305ee3ca44e7bcdce8da0d8b164132e02f7ed72e51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2d60adb329e51ce7d877de68c1386f904ef0f717c82a5bfb69ab18438a4e536a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c7f2ade3aac502f0534414554216096b45d4f78e81f8ec213064a6205efdbd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c7f2ade3aac502f0534414554216096b45d4f78e81f8ec213064a6205efdbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T13:10:43Z\\\",\\\"message\\\":\\\"file observer\\\\nW0312 13:10:42.840582 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 13:10:42.841010 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 13:10:42.843036 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-461564172/tls.crt::/tmp/serving-cert-461564172/tls.key\\\\\\\"\\\\nI0312 13:10:43.350873 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 13:10:43.364662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 13:10:43.364721 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 13:10:43.365498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 13:10:43.365555 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 13:10:43.376143 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 13:10:43.376224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 13:10:43.376255 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 13:10:43.376279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 13:10:43.376301 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 13:10:43.376324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 13:10:43.376350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 13:10:43.376614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 13:10:43.379532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfb81ab3f0178dc8064bd278e9e5cc42b3b2fda7282bb869d2f385b423e57d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e64aa9b1a15198d88b5f38b8ad0abdeef89430869b6f25c73e2f45806c539964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64aa9b1a15198d88b5f38b8ad0abdeef89430869b6f25c73e2f45806c539964\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:12Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.504019 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7250fa81a99a607a87f5f2fe4a85fcc2afbf7bfc12845bec4b5cbbed7784c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1325a4d5784843f5ef2dc629d3410d154ea07de5cd70cfae54d7111fe3e1ea3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78da788cc1d96e866afcf18edff24c064972e022b1b4c3f5bec3175da5e989fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2818aadc56c24df41309aa63fddf44dac870f041f206d59db4d8b8f88f728483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d8f2c481e6f3f8845b5e195a7fde8ae6415d25fb1701c25d93be7af1f4ef8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f94192e4eee991a699c32f338a8d452d8fc0ce5dfa1197694a237697c4500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fa32715eae6ff23b04c9b3865505ff2ed911d459033b9d6912866b5df2f8d22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fa32715eae6ff23b04c9b3865505ff2ed911d459033b9d6912866b5df2f8d22\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T13:11:09Z\\\",\\\"message\\\":\\\"nil\\\\u003e UUID: UUIDName:}]\\\\nI0312 13:11:09.567525 6707 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.92 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{73135118-cf1b-4568-bd31-2f50308bf69d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0312 13:11:09.567216 6707 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:09Z is after 2025-08-24T17:21:41Z]\\\\nI0312 13:11:09.567573 6707 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Lo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8bcc9_openshift-ovn-kubernetes(65cd795e-eb6e-4995-a4c1-9dea6f425ac5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bc4107e3fb5708a2acf2664ce876af4c682377a9b4ee393230b78d5b021552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce861cdc0bd23c8ce1
b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bcc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:12Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.513668 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qdxm2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7050ddd9-aa01-4af7-9046-208f85f50a86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9af31ab4c27bb06d5a44a1c279e04f1b6f243054e271214ef771db4f0dc65e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jspwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qdxm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:12Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.576817 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.576891 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.576908 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.576932 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.576952 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:12Z","lastTransitionTime":"2026-03-12T13:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.680020 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.680083 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.680101 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.680125 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.680144 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:12Z","lastTransitionTime":"2026-03-12T13:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.782454 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.782492 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.782500 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.782513 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.782522 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:12Z","lastTransitionTime":"2026-03-12T13:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.885862 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.885918 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.885931 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.885951 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.885964 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:12Z","lastTransitionTime":"2026-03-12T13:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.989087 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.989148 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.989162 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.989208 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:12 crc kubenswrapper[4778]: I0312 13:11:12.989234 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:12Z","lastTransitionTime":"2026-03-12T13:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:13 crc kubenswrapper[4778]: I0312 13:11:13.091850 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:13 crc kubenswrapper[4778]: I0312 13:11:13.091903 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:13 crc kubenswrapper[4778]: I0312 13:11:13.091918 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:13 crc kubenswrapper[4778]: I0312 13:11:13.091938 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:13 crc kubenswrapper[4778]: I0312 13:11:13.091951 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:13Z","lastTransitionTime":"2026-03-12T13:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:13 crc kubenswrapper[4778]: I0312 13:11:13.195205 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:13 crc kubenswrapper[4778]: I0312 13:11:13.195238 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:13 crc kubenswrapper[4778]: I0312 13:11:13.195245 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:13 crc kubenswrapper[4778]: I0312 13:11:13.195257 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:13 crc kubenswrapper[4778]: I0312 13:11:13.195265 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:13Z","lastTransitionTime":"2026-03-12T13:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:13 crc kubenswrapper[4778]: I0312 13:11:13.297322 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:13 crc kubenswrapper[4778]: I0312 13:11:13.297384 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:13 crc kubenswrapper[4778]: I0312 13:11:13.297394 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:13 crc kubenswrapper[4778]: I0312 13:11:13.297409 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:13 crc kubenswrapper[4778]: I0312 13:11:13.297420 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:13Z","lastTransitionTime":"2026-03-12T13:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:13 crc kubenswrapper[4778]: I0312 13:11:13.400211 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:13 crc kubenswrapper[4778]: I0312 13:11:13.400244 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:13 crc kubenswrapper[4778]: I0312 13:11:13.400255 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:13 crc kubenswrapper[4778]: I0312 13:11:13.400272 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:13 crc kubenswrapper[4778]: I0312 13:11:13.400285 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:13Z","lastTransitionTime":"2026-03-12T13:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:13 crc kubenswrapper[4778]: I0312 13:11:13.503298 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:13 crc kubenswrapper[4778]: I0312 13:11:13.503346 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:13 crc kubenswrapper[4778]: I0312 13:11:13.503369 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:13 crc kubenswrapper[4778]: I0312 13:11:13.503393 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:13 crc kubenswrapper[4778]: I0312 13:11:13.503408 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:13Z","lastTransitionTime":"2026-03-12T13:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:13 crc kubenswrapper[4778]: I0312 13:11:13.606362 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:13 crc kubenswrapper[4778]: I0312 13:11:13.606418 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:13 crc kubenswrapper[4778]: I0312 13:11:13.606435 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:13 crc kubenswrapper[4778]: I0312 13:11:13.606457 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:13 crc kubenswrapper[4778]: I0312 13:11:13.606472 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:13Z","lastTransitionTime":"2026-03-12T13:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:13 crc kubenswrapper[4778]: I0312 13:11:13.709689 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:13 crc kubenswrapper[4778]: I0312 13:11:13.709741 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:13 crc kubenswrapper[4778]: I0312 13:11:13.709752 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:13 crc kubenswrapper[4778]: I0312 13:11:13.709767 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:13 crc kubenswrapper[4778]: I0312 13:11:13.709778 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:13Z","lastTransitionTime":"2026-03-12T13:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:13 crc kubenswrapper[4778]: I0312 13:11:13.811702 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:13 crc kubenswrapper[4778]: I0312 13:11:13.811729 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:13 crc kubenswrapper[4778]: I0312 13:11:13.811737 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:13 crc kubenswrapper[4778]: I0312 13:11:13.811749 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:13 crc kubenswrapper[4778]: I0312 13:11:13.811758 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:13Z","lastTransitionTime":"2026-03-12T13:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:13 crc kubenswrapper[4778]: I0312 13:11:13.914111 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:13 crc kubenswrapper[4778]: I0312 13:11:13.914155 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:13 crc kubenswrapper[4778]: I0312 13:11:13.914168 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:13 crc kubenswrapper[4778]: I0312 13:11:13.914203 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:13 crc kubenswrapper[4778]: I0312 13:11:13.914222 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:13Z","lastTransitionTime":"2026-03-12T13:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:14 crc kubenswrapper[4778]: I0312 13:11:14.016972 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:14 crc kubenswrapper[4778]: I0312 13:11:14.017068 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:14 crc kubenswrapper[4778]: I0312 13:11:14.017101 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:14 crc kubenswrapper[4778]: I0312 13:11:14.017134 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:14 crc kubenswrapper[4778]: I0312 13:11:14.017155 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:14Z","lastTransitionTime":"2026-03-12T13:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:14 crc kubenswrapper[4778]: I0312 13:11:14.033432 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 13:11:14 crc kubenswrapper[4778]: E0312 13:11:14.033583 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-12 13:11:30.03355455 +0000 UTC m=+108.482249946 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:11:14 crc kubenswrapper[4778]: I0312 13:11:14.033635 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 13:11:14 crc kubenswrapper[4778]: I0312 13:11:14.033711 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:11:14 crc kubenswrapper[4778]: I0312 13:11:14.033731 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 13:11:14 crc kubenswrapper[4778]: I0312 13:11:14.033766 4778 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:11:14 crc kubenswrapper[4778]: E0312 13:11:14.033822 4778 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 13:11:14 crc kubenswrapper[4778]: E0312 13:11:14.033833 4778 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 12 13:11:14 crc kubenswrapper[4778]: E0312 13:11:14.033844 4778 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 13:11:14 crc kubenswrapper[4778]: E0312 13:11:14.033886 4778 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 13:11:14 crc kubenswrapper[4778]: E0312 13:11:14.033821 4778 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 12 13:11:14 crc kubenswrapper[4778]: E0312 13:11:14.033888 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-03-12 13:11:30.033880848 +0000 UTC m=+108.482576244 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 12 13:11:14 crc kubenswrapper[4778]: E0312 13:11:14.033843 4778 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 13:11:14 crc kubenswrapper[4778]: E0312 13:11:14.033996 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-12 13:11:30.033975301 +0000 UTC m=+108.482670747 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 13:11:14 crc kubenswrapper[4778]: E0312 13:11:14.033996 4778 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 13:11:14 crc kubenswrapper[4778]: E0312 13:11:14.034013 4778 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 13:11:14 crc kubenswrapper[4778]: E0312 13:11:14.034017 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-12 13:11:30.034008722 +0000 UTC m=+108.482704228 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 12 13:11:14 crc kubenswrapper[4778]: E0312 13:11:14.034045 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-12 13:11:30.034033352 +0000 UTC m=+108.482728818 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 13:11:14 crc kubenswrapper[4778]: I0312 13:11:14.119762 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:14 crc kubenswrapper[4778]: I0312 13:11:14.119808 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:14 crc kubenswrapper[4778]: I0312 13:11:14.119819 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:14 crc kubenswrapper[4778]: I0312 13:11:14.119834 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:14 crc kubenswrapper[4778]: I0312 13:11:14.119847 4778 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:14Z","lastTransitionTime":"2026-03-12T13:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:14 crc kubenswrapper[4778]: I0312 13:11:14.134666 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b59b25a-3acc-4d06-b91d-575f45463520-metrics-certs\") pod \"network-metrics-daemon-rz9vw\" (UID: \"0b59b25a-3acc-4d06-b91d-575f45463520\") " pod="openshift-multus/network-metrics-daemon-rz9vw" Mar 12 13:11:14 crc kubenswrapper[4778]: E0312 13:11:14.134797 4778 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 13:11:14 crc kubenswrapper[4778]: E0312 13:11:14.134870 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b59b25a-3acc-4d06-b91d-575f45463520-metrics-certs podName:0b59b25a-3acc-4d06-b91d-575f45463520 nodeName:}" failed. No retries permitted until 2026-03-12 13:11:30.134853844 +0000 UTC m=+108.583549240 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0b59b25a-3acc-4d06-b91d-575f45463520-metrics-certs") pod "network-metrics-daemon-rz9vw" (UID: "0b59b25a-3acc-4d06-b91d-575f45463520") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 13:11:14 crc kubenswrapper[4778]: I0312 13:11:14.222603 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:14 crc kubenswrapper[4778]: I0312 13:11:14.222652 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:14 crc kubenswrapper[4778]: I0312 13:11:14.222662 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:14 crc kubenswrapper[4778]: I0312 13:11:14.222681 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:14 crc kubenswrapper[4778]: I0312 13:11:14.222691 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:14Z","lastTransitionTime":"2026-03-12T13:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:14 crc kubenswrapper[4778]: I0312 13:11:14.253143 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 13:11:14 crc kubenswrapper[4778]: I0312 13:11:14.253297 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 13:11:14 crc kubenswrapper[4778]: E0312 13:11:14.253473 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 13:11:14 crc kubenswrapper[4778]: I0312 13:11:14.253481 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rz9vw" Mar 12 13:11:14 crc kubenswrapper[4778]: I0312 13:11:14.253489 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:11:14 crc kubenswrapper[4778]: E0312 13:11:14.253601 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 13:11:14 crc kubenswrapper[4778]: E0312 13:11:14.254248 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rz9vw" podUID="0b59b25a-3acc-4d06-b91d-575f45463520" Mar 12 13:11:14 crc kubenswrapper[4778]: E0312 13:11:14.254359 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 13:11:14 crc kubenswrapper[4778]: I0312 13:11:14.254534 4778 scope.go:117] "RemoveContainer" containerID="14c7f2ade3aac502f0534414554216096b45d4f78e81f8ec213064a6205efdbd" Mar 12 13:11:14 crc kubenswrapper[4778]: E0312 13:11:14.254793 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 12 13:11:14 crc kubenswrapper[4778]: I0312 13:11:14.325583 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:14 crc kubenswrapper[4778]: I0312 13:11:14.325614 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:14 crc kubenswrapper[4778]: I0312 13:11:14.325630 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:14 crc kubenswrapper[4778]: I0312 13:11:14.325646 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:14 crc kubenswrapper[4778]: I0312 13:11:14.325657 4778 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:14Z","lastTransitionTime":"2026-03-12T13:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:14 crc kubenswrapper[4778]: I0312 13:11:14.428346 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:14 crc kubenswrapper[4778]: I0312 13:11:14.428415 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:14 crc kubenswrapper[4778]: I0312 13:11:14.428428 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:14 crc kubenswrapper[4778]: I0312 13:11:14.428446 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:14 crc kubenswrapper[4778]: I0312 13:11:14.428462 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:14Z","lastTransitionTime":"2026-03-12T13:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:14 crc kubenswrapper[4778]: I0312 13:11:14.531174 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:14 crc kubenswrapper[4778]: I0312 13:11:14.531258 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:14 crc kubenswrapper[4778]: I0312 13:11:14.531270 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:14 crc kubenswrapper[4778]: I0312 13:11:14.531289 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:14 crc kubenswrapper[4778]: I0312 13:11:14.531305 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:14Z","lastTransitionTime":"2026-03-12T13:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:14 crc kubenswrapper[4778]: I0312 13:11:14.634630 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:14 crc kubenswrapper[4778]: I0312 13:11:14.634703 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:14 crc kubenswrapper[4778]: I0312 13:11:14.634715 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:14 crc kubenswrapper[4778]: I0312 13:11:14.634729 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:14 crc kubenswrapper[4778]: I0312 13:11:14.634739 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:14Z","lastTransitionTime":"2026-03-12T13:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:14 crc kubenswrapper[4778]: I0312 13:11:14.737086 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:14 crc kubenswrapper[4778]: I0312 13:11:14.737157 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:14 crc kubenswrapper[4778]: I0312 13:11:14.737232 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:14 crc kubenswrapper[4778]: I0312 13:11:14.737262 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:14 crc kubenswrapper[4778]: I0312 13:11:14.737282 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:14Z","lastTransitionTime":"2026-03-12T13:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:14 crc kubenswrapper[4778]: I0312 13:11:14.840597 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:14 crc kubenswrapper[4778]: I0312 13:11:14.840898 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:14 crc kubenswrapper[4778]: I0312 13:11:14.840918 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:14 crc kubenswrapper[4778]: I0312 13:11:14.840955 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:14 crc kubenswrapper[4778]: I0312 13:11:14.840972 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:14Z","lastTransitionTime":"2026-03-12T13:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:14 crc kubenswrapper[4778]: I0312 13:11:14.943085 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:14 crc kubenswrapper[4778]: I0312 13:11:14.943133 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:14 crc kubenswrapper[4778]: I0312 13:11:14.943144 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:14 crc kubenswrapper[4778]: I0312 13:11:14.943161 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:14 crc kubenswrapper[4778]: I0312 13:11:14.943172 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:14Z","lastTransitionTime":"2026-03-12T13:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:15 crc kubenswrapper[4778]: I0312 13:11:15.045840 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:15 crc kubenswrapper[4778]: I0312 13:11:15.045885 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:15 crc kubenswrapper[4778]: I0312 13:11:15.045898 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:15 crc kubenswrapper[4778]: I0312 13:11:15.045914 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:15 crc kubenswrapper[4778]: I0312 13:11:15.045927 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:15Z","lastTransitionTime":"2026-03-12T13:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:15 crc kubenswrapper[4778]: I0312 13:11:15.148045 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:15 crc kubenswrapper[4778]: I0312 13:11:15.148107 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:15 crc kubenswrapper[4778]: I0312 13:11:15.148119 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:15 crc kubenswrapper[4778]: I0312 13:11:15.148134 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:15 crc kubenswrapper[4778]: I0312 13:11:15.148144 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:15Z","lastTransitionTime":"2026-03-12T13:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:15 crc kubenswrapper[4778]: I0312 13:11:15.250293 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:15 crc kubenswrapper[4778]: I0312 13:11:15.250353 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:15 crc kubenswrapper[4778]: I0312 13:11:15.250368 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:15 crc kubenswrapper[4778]: I0312 13:11:15.250391 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:15 crc kubenswrapper[4778]: I0312 13:11:15.250407 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:15Z","lastTransitionTime":"2026-03-12T13:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:15 crc kubenswrapper[4778]: I0312 13:11:15.353096 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:15 crc kubenswrapper[4778]: I0312 13:11:15.353153 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:15 crc kubenswrapper[4778]: I0312 13:11:15.353165 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:15 crc kubenswrapper[4778]: I0312 13:11:15.353198 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:15 crc kubenswrapper[4778]: I0312 13:11:15.353212 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:15Z","lastTransitionTime":"2026-03-12T13:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:15 crc kubenswrapper[4778]: I0312 13:11:15.456332 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:15 crc kubenswrapper[4778]: I0312 13:11:15.456383 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:15 crc kubenswrapper[4778]: I0312 13:11:15.456400 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:15 crc kubenswrapper[4778]: I0312 13:11:15.456419 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:15 crc kubenswrapper[4778]: I0312 13:11:15.456432 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:15Z","lastTransitionTime":"2026-03-12T13:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:15 crc kubenswrapper[4778]: I0312 13:11:15.562367 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:15 crc kubenswrapper[4778]: I0312 13:11:15.562427 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:15 crc kubenswrapper[4778]: I0312 13:11:15.562441 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:15 crc kubenswrapper[4778]: I0312 13:11:15.562462 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:15 crc kubenswrapper[4778]: I0312 13:11:15.562479 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:15Z","lastTransitionTime":"2026-03-12T13:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:15 crc kubenswrapper[4778]: I0312 13:11:15.665249 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:15 crc kubenswrapper[4778]: I0312 13:11:15.665291 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:15 crc kubenswrapper[4778]: I0312 13:11:15.665303 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:15 crc kubenswrapper[4778]: I0312 13:11:15.665319 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:15 crc kubenswrapper[4778]: I0312 13:11:15.665331 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:15Z","lastTransitionTime":"2026-03-12T13:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:15 crc kubenswrapper[4778]: I0312 13:11:15.768730 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:15 crc kubenswrapper[4778]: I0312 13:11:15.768816 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:15 crc kubenswrapper[4778]: I0312 13:11:15.768831 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:15 crc kubenswrapper[4778]: I0312 13:11:15.768862 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:15 crc kubenswrapper[4778]: I0312 13:11:15.768944 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:15Z","lastTransitionTime":"2026-03-12T13:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:15 crc kubenswrapper[4778]: I0312 13:11:15.871009 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:15 crc kubenswrapper[4778]: I0312 13:11:15.871050 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:15 crc kubenswrapper[4778]: I0312 13:11:15.871058 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:15 crc kubenswrapper[4778]: I0312 13:11:15.871074 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:15 crc kubenswrapper[4778]: I0312 13:11:15.871085 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:15Z","lastTransitionTime":"2026-03-12T13:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:15 crc kubenswrapper[4778]: I0312 13:11:15.974064 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:15 crc kubenswrapper[4778]: I0312 13:11:15.974116 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:15 crc kubenswrapper[4778]: I0312 13:11:15.974126 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:15 crc kubenswrapper[4778]: I0312 13:11:15.974146 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:15 crc kubenswrapper[4778]: I0312 13:11:15.974158 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:15Z","lastTransitionTime":"2026-03-12T13:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:16 crc kubenswrapper[4778]: I0312 13:11:16.076566 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:16 crc kubenswrapper[4778]: I0312 13:11:16.076611 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:16 crc kubenswrapper[4778]: I0312 13:11:16.076622 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:16 crc kubenswrapper[4778]: I0312 13:11:16.076638 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:16 crc kubenswrapper[4778]: I0312 13:11:16.076649 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:16Z","lastTransitionTime":"2026-03-12T13:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:16 crc kubenswrapper[4778]: I0312 13:11:16.178936 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:16 crc kubenswrapper[4778]: I0312 13:11:16.178976 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:16 crc kubenswrapper[4778]: I0312 13:11:16.178986 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:16 crc kubenswrapper[4778]: I0312 13:11:16.179000 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:16 crc kubenswrapper[4778]: I0312 13:11:16.179009 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:16Z","lastTransitionTime":"2026-03-12T13:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:16 crc kubenswrapper[4778]: I0312 13:11:16.253113 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 13:11:16 crc kubenswrapper[4778]: I0312 13:11:16.253144 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 13:11:16 crc kubenswrapper[4778]: I0312 13:11:16.253113 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rz9vw" Mar 12 13:11:16 crc kubenswrapper[4778]: I0312 13:11:16.253169 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:11:16 crc kubenswrapper[4778]: E0312 13:11:16.253280 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 13:11:16 crc kubenswrapper[4778]: E0312 13:11:16.253497 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rz9vw" podUID="0b59b25a-3acc-4d06-b91d-575f45463520" Mar 12 13:11:16 crc kubenswrapper[4778]: E0312 13:11:16.253521 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 13:11:16 crc kubenswrapper[4778]: E0312 13:11:16.253587 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 13:11:16 crc kubenswrapper[4778]: I0312 13:11:16.281713 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:16 crc kubenswrapper[4778]: I0312 13:11:16.281752 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:16 crc kubenswrapper[4778]: I0312 13:11:16.281763 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:16 crc kubenswrapper[4778]: I0312 13:11:16.281780 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:16 crc kubenswrapper[4778]: I0312 13:11:16.281793 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:16Z","lastTransitionTime":"2026-03-12T13:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:16 crc kubenswrapper[4778]: I0312 13:11:16.385303 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:16 crc kubenswrapper[4778]: I0312 13:11:16.385360 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:16 crc kubenswrapper[4778]: I0312 13:11:16.385376 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:16 crc kubenswrapper[4778]: I0312 13:11:16.385398 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:16 crc kubenswrapper[4778]: I0312 13:11:16.385414 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:16Z","lastTransitionTime":"2026-03-12T13:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:16 crc kubenswrapper[4778]: I0312 13:11:16.488872 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:16 crc kubenswrapper[4778]: I0312 13:11:16.488932 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:16 crc kubenswrapper[4778]: I0312 13:11:16.488947 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:16 crc kubenswrapper[4778]: I0312 13:11:16.488969 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:16 crc kubenswrapper[4778]: I0312 13:11:16.488982 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:16Z","lastTransitionTime":"2026-03-12T13:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:16 crc kubenswrapper[4778]: I0312 13:11:16.591966 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:16 crc kubenswrapper[4778]: I0312 13:11:16.592012 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:16 crc kubenswrapper[4778]: I0312 13:11:16.592024 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:16 crc kubenswrapper[4778]: I0312 13:11:16.592041 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:16 crc kubenswrapper[4778]: I0312 13:11:16.592056 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:16Z","lastTransitionTime":"2026-03-12T13:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:16 crc kubenswrapper[4778]: I0312 13:11:16.694747 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:16 crc kubenswrapper[4778]: I0312 13:11:16.694800 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:16 crc kubenswrapper[4778]: I0312 13:11:16.694818 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:16 crc kubenswrapper[4778]: I0312 13:11:16.694842 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:16 crc kubenswrapper[4778]: I0312 13:11:16.694859 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:16Z","lastTransitionTime":"2026-03-12T13:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:16 crc kubenswrapper[4778]: I0312 13:11:16.797755 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:16 crc kubenswrapper[4778]: I0312 13:11:16.797816 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:16 crc kubenswrapper[4778]: I0312 13:11:16.797834 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:16 crc kubenswrapper[4778]: I0312 13:11:16.797859 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:16 crc kubenswrapper[4778]: I0312 13:11:16.797877 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:16Z","lastTransitionTime":"2026-03-12T13:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:16 crc kubenswrapper[4778]: I0312 13:11:16.900323 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:16 crc kubenswrapper[4778]: I0312 13:11:16.900378 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:16 crc kubenswrapper[4778]: I0312 13:11:16.900394 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:16 crc kubenswrapper[4778]: I0312 13:11:16.900414 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:16 crc kubenswrapper[4778]: I0312 13:11:16.900430 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:16Z","lastTransitionTime":"2026-03-12T13:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:17 crc kubenswrapper[4778]: I0312 13:11:17.002796 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:17 crc kubenswrapper[4778]: I0312 13:11:17.002853 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:17 crc kubenswrapper[4778]: I0312 13:11:17.002870 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:17 crc kubenswrapper[4778]: I0312 13:11:17.002892 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:17 crc kubenswrapper[4778]: I0312 13:11:17.002910 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:17Z","lastTransitionTime":"2026-03-12T13:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:17 crc kubenswrapper[4778]: I0312 13:11:17.105083 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:17 crc kubenswrapper[4778]: I0312 13:11:17.105114 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:17 crc kubenswrapper[4778]: I0312 13:11:17.105123 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:17 crc kubenswrapper[4778]: I0312 13:11:17.105136 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:17 crc kubenswrapper[4778]: I0312 13:11:17.105145 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:17Z","lastTransitionTime":"2026-03-12T13:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:17 crc kubenswrapper[4778]: I0312 13:11:17.207492 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:17 crc kubenswrapper[4778]: I0312 13:11:17.207546 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:17 crc kubenswrapper[4778]: I0312 13:11:17.207563 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:17 crc kubenswrapper[4778]: I0312 13:11:17.207581 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:17 crc kubenswrapper[4778]: I0312 13:11:17.207594 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:17Z","lastTransitionTime":"2026-03-12T13:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:17 crc kubenswrapper[4778]: I0312 13:11:17.310756 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:17 crc kubenswrapper[4778]: I0312 13:11:17.310797 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:17 crc kubenswrapper[4778]: I0312 13:11:17.310809 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:17 crc kubenswrapper[4778]: I0312 13:11:17.310824 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:17 crc kubenswrapper[4778]: I0312 13:11:17.310833 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:17Z","lastTransitionTime":"2026-03-12T13:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:17 crc kubenswrapper[4778]: I0312 13:11:17.413427 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:17 crc kubenswrapper[4778]: I0312 13:11:17.413486 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:17 crc kubenswrapper[4778]: I0312 13:11:17.413505 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:17 crc kubenswrapper[4778]: I0312 13:11:17.413532 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:17 crc kubenswrapper[4778]: I0312 13:11:17.413557 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:17Z","lastTransitionTime":"2026-03-12T13:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:17 crc kubenswrapper[4778]: I0312 13:11:17.516602 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:17 crc kubenswrapper[4778]: I0312 13:11:17.516648 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:17 crc kubenswrapper[4778]: I0312 13:11:17.516663 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:17 crc kubenswrapper[4778]: I0312 13:11:17.516681 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:17 crc kubenswrapper[4778]: I0312 13:11:17.516693 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:17Z","lastTransitionTime":"2026-03-12T13:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:17 crc kubenswrapper[4778]: I0312 13:11:17.619258 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:17 crc kubenswrapper[4778]: I0312 13:11:17.619314 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:17 crc kubenswrapper[4778]: I0312 13:11:17.619323 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:17 crc kubenswrapper[4778]: I0312 13:11:17.619337 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:17 crc kubenswrapper[4778]: I0312 13:11:17.619345 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:17Z","lastTransitionTime":"2026-03-12T13:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:17 crc kubenswrapper[4778]: I0312 13:11:17.722216 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:17 crc kubenswrapper[4778]: I0312 13:11:17.722284 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:17 crc kubenswrapper[4778]: I0312 13:11:17.722303 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:17 crc kubenswrapper[4778]: I0312 13:11:17.722326 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:17 crc kubenswrapper[4778]: I0312 13:11:17.722348 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:17Z","lastTransitionTime":"2026-03-12T13:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:17 crc kubenswrapper[4778]: I0312 13:11:17.824830 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:17 crc kubenswrapper[4778]: I0312 13:11:17.824903 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:17 crc kubenswrapper[4778]: I0312 13:11:17.824920 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:17 crc kubenswrapper[4778]: I0312 13:11:17.824944 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:17 crc kubenswrapper[4778]: I0312 13:11:17.824960 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:17Z","lastTransitionTime":"2026-03-12T13:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:17 crc kubenswrapper[4778]: I0312 13:11:17.928058 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:17 crc kubenswrapper[4778]: I0312 13:11:17.928109 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:17 crc kubenswrapper[4778]: I0312 13:11:17.928121 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:17 crc kubenswrapper[4778]: I0312 13:11:17.928144 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:17 crc kubenswrapper[4778]: I0312 13:11:17.928156 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:17Z","lastTransitionTime":"2026-03-12T13:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:18 crc kubenswrapper[4778]: I0312 13:11:18.035080 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:18 crc kubenswrapper[4778]: I0312 13:11:18.035154 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:18 crc kubenswrapper[4778]: I0312 13:11:18.035169 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:18 crc kubenswrapper[4778]: I0312 13:11:18.035220 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:18 crc kubenswrapper[4778]: I0312 13:11:18.035243 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:18Z","lastTransitionTime":"2026-03-12T13:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:18 crc kubenswrapper[4778]: I0312 13:11:18.138757 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:18 crc kubenswrapper[4778]: I0312 13:11:18.138813 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:18 crc kubenswrapper[4778]: I0312 13:11:18.138827 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:18 crc kubenswrapper[4778]: I0312 13:11:18.138845 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:18 crc kubenswrapper[4778]: I0312 13:11:18.138858 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:18Z","lastTransitionTime":"2026-03-12T13:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:18 crc kubenswrapper[4778]: I0312 13:11:18.241840 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:18 crc kubenswrapper[4778]: I0312 13:11:18.241915 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:18 crc kubenswrapper[4778]: I0312 13:11:18.241930 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:18 crc kubenswrapper[4778]: I0312 13:11:18.241951 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:18 crc kubenswrapper[4778]: I0312 13:11:18.241964 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:18Z","lastTransitionTime":"2026-03-12T13:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:18 crc kubenswrapper[4778]: I0312 13:11:18.253361 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:11:18 crc kubenswrapper[4778]: I0312 13:11:18.253398 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 13:11:18 crc kubenswrapper[4778]: I0312 13:11:18.253446 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rz9vw" Mar 12 13:11:18 crc kubenswrapper[4778]: I0312 13:11:18.253373 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 13:11:18 crc kubenswrapper[4778]: E0312 13:11:18.253607 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 13:11:18 crc kubenswrapper[4778]: E0312 13:11:18.253714 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 13:11:18 crc kubenswrapper[4778]: E0312 13:11:18.253803 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rz9vw" podUID="0b59b25a-3acc-4d06-b91d-575f45463520" Mar 12 13:11:18 crc kubenswrapper[4778]: E0312 13:11:18.253928 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 13:11:18 crc kubenswrapper[4778]: I0312 13:11:18.345598 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:18 crc kubenswrapper[4778]: I0312 13:11:18.345639 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:18 crc kubenswrapper[4778]: I0312 13:11:18.345649 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:18 crc kubenswrapper[4778]: I0312 13:11:18.345666 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:18 crc kubenswrapper[4778]: I0312 13:11:18.345677 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:18Z","lastTransitionTime":"2026-03-12T13:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:18 crc kubenswrapper[4778]: I0312 13:11:18.449760 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:18 crc kubenswrapper[4778]: I0312 13:11:18.450383 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:18 crc kubenswrapper[4778]: I0312 13:11:18.450423 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:18 crc kubenswrapper[4778]: I0312 13:11:18.450451 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:18 crc kubenswrapper[4778]: I0312 13:11:18.450468 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:18Z","lastTransitionTime":"2026-03-12T13:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:18 crc kubenswrapper[4778]: I0312 13:11:18.553777 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:18 crc kubenswrapper[4778]: I0312 13:11:18.554242 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:18 crc kubenswrapper[4778]: I0312 13:11:18.554339 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:18 crc kubenswrapper[4778]: I0312 13:11:18.554500 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:18 crc kubenswrapper[4778]: I0312 13:11:18.554608 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:18Z","lastTransitionTime":"2026-03-12T13:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:18 crc kubenswrapper[4778]: I0312 13:11:18.657754 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:18 crc kubenswrapper[4778]: I0312 13:11:18.657821 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:18 crc kubenswrapper[4778]: I0312 13:11:18.657838 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:18 crc kubenswrapper[4778]: I0312 13:11:18.657861 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:18 crc kubenswrapper[4778]: I0312 13:11:18.657877 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:18Z","lastTransitionTime":"2026-03-12T13:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:18 crc kubenswrapper[4778]: I0312 13:11:18.760579 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:18 crc kubenswrapper[4778]: I0312 13:11:18.760644 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:18 crc kubenswrapper[4778]: I0312 13:11:18.760653 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:18 crc kubenswrapper[4778]: I0312 13:11:18.760671 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:18 crc kubenswrapper[4778]: I0312 13:11:18.760681 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:18Z","lastTransitionTime":"2026-03-12T13:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:18 crc kubenswrapper[4778]: I0312 13:11:18.862626 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:18 crc kubenswrapper[4778]: I0312 13:11:18.862659 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:18 crc kubenswrapper[4778]: I0312 13:11:18.862682 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:18 crc kubenswrapper[4778]: I0312 13:11:18.862696 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:18 crc kubenswrapper[4778]: I0312 13:11:18.862704 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:18Z","lastTransitionTime":"2026-03-12T13:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:18 crc kubenswrapper[4778]: I0312 13:11:18.966384 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:18 crc kubenswrapper[4778]: I0312 13:11:18.966479 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:18 crc kubenswrapper[4778]: I0312 13:11:18.966544 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:18 crc kubenswrapper[4778]: I0312 13:11:18.966571 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:18 crc kubenswrapper[4778]: I0312 13:11:18.966637 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:18Z","lastTransitionTime":"2026-03-12T13:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:19 crc kubenswrapper[4778]: I0312 13:11:19.069798 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:19 crc kubenswrapper[4778]: I0312 13:11:19.070367 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:19 crc kubenswrapper[4778]: I0312 13:11:19.070463 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:19 crc kubenswrapper[4778]: I0312 13:11:19.070588 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:19 crc kubenswrapper[4778]: I0312 13:11:19.070687 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:19Z","lastTransitionTime":"2026-03-12T13:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:19 crc kubenswrapper[4778]: I0312 13:11:19.174774 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:19 crc kubenswrapper[4778]: I0312 13:11:19.174830 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:19 crc kubenswrapper[4778]: I0312 13:11:19.174842 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:19 crc kubenswrapper[4778]: I0312 13:11:19.174866 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:19 crc kubenswrapper[4778]: I0312 13:11:19.174880 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:19Z","lastTransitionTime":"2026-03-12T13:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:19 crc kubenswrapper[4778]: I0312 13:11:19.278780 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:19 crc kubenswrapper[4778]: I0312 13:11:19.278828 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:19 crc kubenswrapper[4778]: I0312 13:11:19.278842 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:19 crc kubenswrapper[4778]: I0312 13:11:19.278863 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:19 crc kubenswrapper[4778]: I0312 13:11:19.278880 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:19Z","lastTransitionTime":"2026-03-12T13:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:19 crc kubenswrapper[4778]: I0312 13:11:19.408980 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:19 crc kubenswrapper[4778]: I0312 13:11:19.409050 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:19 crc kubenswrapper[4778]: I0312 13:11:19.409064 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:19 crc kubenswrapper[4778]: I0312 13:11:19.409092 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:19 crc kubenswrapper[4778]: I0312 13:11:19.409104 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:19Z","lastTransitionTime":"2026-03-12T13:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:19 crc kubenswrapper[4778]: I0312 13:11:19.511942 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:19 crc kubenswrapper[4778]: I0312 13:11:19.511995 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:19 crc kubenswrapper[4778]: I0312 13:11:19.512030 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:19 crc kubenswrapper[4778]: I0312 13:11:19.512050 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:19 crc kubenswrapper[4778]: I0312 13:11:19.512063 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:19Z","lastTransitionTime":"2026-03-12T13:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:19 crc kubenswrapper[4778]: I0312 13:11:19.615021 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:19 crc kubenswrapper[4778]: I0312 13:11:19.615080 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:19 crc kubenswrapper[4778]: I0312 13:11:19.615117 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:19 crc kubenswrapper[4778]: I0312 13:11:19.615136 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:19 crc kubenswrapper[4778]: I0312 13:11:19.615149 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:19Z","lastTransitionTime":"2026-03-12T13:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:19 crc kubenswrapper[4778]: I0312 13:11:19.718330 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:19 crc kubenswrapper[4778]: I0312 13:11:19.718365 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:19 crc kubenswrapper[4778]: I0312 13:11:19.718376 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:19 crc kubenswrapper[4778]: I0312 13:11:19.718395 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:19 crc kubenswrapper[4778]: I0312 13:11:19.718414 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:19Z","lastTransitionTime":"2026-03-12T13:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:19 crc kubenswrapper[4778]: I0312 13:11:19.821289 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:19 crc kubenswrapper[4778]: I0312 13:11:19.821326 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:19 crc kubenswrapper[4778]: I0312 13:11:19.821337 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:19 crc kubenswrapper[4778]: I0312 13:11:19.821353 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:19 crc kubenswrapper[4778]: I0312 13:11:19.821365 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:19Z","lastTransitionTime":"2026-03-12T13:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:19 crc kubenswrapper[4778]: I0312 13:11:19.923867 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:19 crc kubenswrapper[4778]: I0312 13:11:19.923909 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:19 crc kubenswrapper[4778]: I0312 13:11:19.923920 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:19 crc kubenswrapper[4778]: I0312 13:11:19.923935 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:19 crc kubenswrapper[4778]: I0312 13:11:19.923945 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:19Z","lastTransitionTime":"2026-03-12T13:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:20 crc kubenswrapper[4778]: I0312 13:11:20.026319 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:20 crc kubenswrapper[4778]: I0312 13:11:20.026372 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:20 crc kubenswrapper[4778]: I0312 13:11:20.026382 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:20 crc kubenswrapper[4778]: I0312 13:11:20.026396 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:20 crc kubenswrapper[4778]: I0312 13:11:20.026406 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:20Z","lastTransitionTime":"2026-03-12T13:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:20 crc kubenswrapper[4778]: I0312 13:11:20.128824 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:20 crc kubenswrapper[4778]: I0312 13:11:20.128865 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:20 crc kubenswrapper[4778]: I0312 13:11:20.128873 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:20 crc kubenswrapper[4778]: I0312 13:11:20.128887 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:20 crc kubenswrapper[4778]: I0312 13:11:20.128898 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:20Z","lastTransitionTime":"2026-03-12T13:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:20 crc kubenswrapper[4778]: I0312 13:11:20.230901 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:20 crc kubenswrapper[4778]: I0312 13:11:20.230959 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:20 crc kubenswrapper[4778]: I0312 13:11:20.230974 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:20 crc kubenswrapper[4778]: I0312 13:11:20.230992 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:20 crc kubenswrapper[4778]: I0312 13:11:20.231004 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:20Z","lastTransitionTime":"2026-03-12T13:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:20 crc kubenswrapper[4778]: I0312 13:11:20.253335 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rz9vw" Mar 12 13:11:20 crc kubenswrapper[4778]: I0312 13:11:20.253434 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:11:20 crc kubenswrapper[4778]: E0312 13:11:20.253497 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rz9vw" podUID="0b59b25a-3acc-4d06-b91d-575f45463520" Mar 12 13:11:20 crc kubenswrapper[4778]: I0312 13:11:20.253335 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 13:11:20 crc kubenswrapper[4778]: I0312 13:11:20.253345 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 13:11:20 crc kubenswrapper[4778]: E0312 13:11:20.253688 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 13:11:20 crc kubenswrapper[4778]: E0312 13:11:20.253777 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 13:11:20 crc kubenswrapper[4778]: E0312 13:11:20.253584 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 13:11:20 crc kubenswrapper[4778]: I0312 13:11:20.333610 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:20 crc kubenswrapper[4778]: I0312 13:11:20.333659 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:20 crc kubenswrapper[4778]: I0312 13:11:20.333673 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:20 crc kubenswrapper[4778]: I0312 13:11:20.333691 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:20 crc kubenswrapper[4778]: I0312 13:11:20.333702 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:20Z","lastTransitionTime":"2026-03-12T13:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:20 crc kubenswrapper[4778]: I0312 13:11:20.437046 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:20 crc kubenswrapper[4778]: I0312 13:11:20.437107 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:20 crc kubenswrapper[4778]: I0312 13:11:20.437130 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:20 crc kubenswrapper[4778]: I0312 13:11:20.437161 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:20 crc kubenswrapper[4778]: I0312 13:11:20.437217 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:20Z","lastTransitionTime":"2026-03-12T13:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:20 crc kubenswrapper[4778]: I0312 13:11:20.539553 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:20 crc kubenswrapper[4778]: I0312 13:11:20.539689 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:20 crc kubenswrapper[4778]: I0312 13:11:20.539709 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:20 crc kubenswrapper[4778]: I0312 13:11:20.539732 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:20 crc kubenswrapper[4778]: I0312 13:11:20.539749 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:20Z","lastTransitionTime":"2026-03-12T13:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:20 crc kubenswrapper[4778]: I0312 13:11:20.642334 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:20 crc kubenswrapper[4778]: I0312 13:11:20.642362 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:20 crc kubenswrapper[4778]: I0312 13:11:20.642369 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:20 crc kubenswrapper[4778]: I0312 13:11:20.642381 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:20 crc kubenswrapper[4778]: I0312 13:11:20.642389 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:20Z","lastTransitionTime":"2026-03-12T13:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:20 crc kubenswrapper[4778]: I0312 13:11:20.744418 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:20 crc kubenswrapper[4778]: I0312 13:11:20.744486 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:20 crc kubenswrapper[4778]: I0312 13:11:20.744524 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:20 crc kubenswrapper[4778]: I0312 13:11:20.744541 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:20 crc kubenswrapper[4778]: I0312 13:11:20.744550 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:20Z","lastTransitionTime":"2026-03-12T13:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:20 crc kubenswrapper[4778]: I0312 13:11:20.847119 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:20 crc kubenswrapper[4778]: I0312 13:11:20.847356 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:20 crc kubenswrapper[4778]: I0312 13:11:20.847397 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:20 crc kubenswrapper[4778]: I0312 13:11:20.847426 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:20 crc kubenswrapper[4778]: I0312 13:11:20.847448 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:20Z","lastTransitionTime":"2026-03-12T13:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:20 crc kubenswrapper[4778]: I0312 13:11:20.950430 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:20 crc kubenswrapper[4778]: I0312 13:11:20.950477 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:20 crc kubenswrapper[4778]: I0312 13:11:20.950488 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:20 crc kubenswrapper[4778]: I0312 13:11:20.950505 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:20 crc kubenswrapper[4778]: I0312 13:11:20.950517 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:20Z","lastTransitionTime":"2026-03-12T13:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.052426 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.052476 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.052489 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.052506 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.052518 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:21Z","lastTransitionTime":"2026-03-12T13:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.155143 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.155201 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.155214 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.155228 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.155240 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:21Z","lastTransitionTime":"2026-03-12T13:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.254825 4778 scope.go:117] "RemoveContainer" containerID="3fa32715eae6ff23b04c9b3865505ff2ed911d459033b9d6912866b5df2f8d22" Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.258076 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.258120 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.258132 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.258147 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.258160 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:21Z","lastTransitionTime":"2026-03-12T13:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.361098 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.361172 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.361286 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.361315 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.361328 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:21Z","lastTransitionTime":"2026-03-12T13:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.463536 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.463579 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.463588 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.463604 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.463614 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:21Z","lastTransitionTime":"2026-03-12T13:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.566010 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.566073 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.566091 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.566111 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.566125 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:21Z","lastTransitionTime":"2026-03-12T13:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.668850 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.668899 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.668915 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.668935 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.668950 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:21Z","lastTransitionTime":"2026-03-12T13:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.785866 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.785904 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.785917 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.785932 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.785968 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:21Z","lastTransitionTime":"2026-03-12T13:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.791498 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8bcc9_65cd795e-eb6e-4995-a4c1-9dea6f425ac5/ovnkube-controller/1.log" Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.794026 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" event={"ID":"65cd795e-eb6e-4995-a4c1-9dea6f425ac5","Type":"ContainerStarted","Data":"cc5950d10319c346220cc78cf45052b31ea05a32f6d5f2511a963110c4a17824"} Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.796728 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.808145 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:21Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.826340 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fhcz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5da98f94c85e3a8cd05c447fb097a078968eea25419a2b22f8abe956ef1dbaac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-762lp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fhcz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:21Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.841574 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sww7j" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de004a2f-3061-4aae-aa57-389219c71023\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478fb92ee4748af677ac761928a4173b506a3e56cf622279e2b2a0e322d4aef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d377b0d5d
0a854761257d7bc21a111aed96f85d302bf0c024e021f04cc555fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sww7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:21Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.852068 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dfhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfefcab6-a931-413e-8763-0f63f17911cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eecca419cc264b25f1883aac864cc545f0daf973e3b288bc8ea00a8b91e1f124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssbrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dfhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:21Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.860929 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cae155c-6ba6-44c1-9814-759fda7c3c86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7698145a8f9a3b12ca021d55f406bc6adf7e139c7e32156ced11a20de194608c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a
42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ddeb961084ae4041feb2ac05c9fdd2f5c11b4bdc5f5f33878c9ad9e83a2e1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ddeb961084ae4041feb2ac05c9fdd2f5c11b4bdc5f5f33878c9ad9e83a2e1a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:21Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:21 crc 
kubenswrapper[4778]: I0312 13:11:21.874073 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:21Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.887156 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24438fc6-dab0-4a9e-8b97-2532da76d9cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19a882eebff25a2613c68847fcf737648da24f5c8d7648edebb2cb00b6b8950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14daba92184fca91c6930d5b3e821f88408e0fd4
0a7793f2d70f82df7c9444ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2qx88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:21Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.888206 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.888333 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.888442 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:21 crc 
kubenswrapper[4778]: I0312 13:11:21.888584 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.888691 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:21Z","lastTransitionTime":"2026-03-12T13:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.902668 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4059dae21c8267dcec17364a3073a0f25addb6c308620992e9e609b5f5a32e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7ffc17b778f7bd099f0cc70b4e8bcfd77f9d45a9a47de9fedbe270a49f2826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:21Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 
13:11:21.914499 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa219bcd71a6f1ae8a889a0409c2bbf61d1efac6a57ad8a22fefe6915e9d15be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:21Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.929490 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:21Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.948433 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb7a47e7099405d73886322b00b013bedee4fb573fa60c9b92d6be3311e65c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:21Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.961446 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rz9vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b59b25a-3acc-4d06-b91d-575f45463520\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rz9vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:21Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:21 crc 
kubenswrapper[4778]: I0312 13:11:21.975783 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d15dec8c-5c3e-4103-a5b1-6ee7ff5990ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f640289dea724d5668fc009d628345ea104b2bbc9bc3471e42c3ec5f9acada1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc7259359df220c534d265305ee3ca44e7bcdce8da0d8b164132e02f7ed72e51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2d60adb329e51ce7d877de68c1386f904ef0f717c82a5bfb69ab18438a4e536a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c7f2ade3aac502f0534414554216096b45d4f78e81f8ec213064a6205efdbd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c7f2ade3aac502f0534414554216096b45d4f78e81f8ec213064a6205efdbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T13:10:43Z\\\",\\\"message\\\":\\\"file observer\\\\nW0312 13:10:42.840582 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 13:10:42.841010 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 13:10:42.843036 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-461564172/tls.crt::/tmp/serving-cert-461564172/tls.key\\\\\\\"\\\\nI0312 13:10:43.350873 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 13:10:43.364662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 13:10:43.364721 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 13:10:43.365498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 13:10:43.365555 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 13:10:43.376143 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 13:10:43.376224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 13:10:43.376255 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 13:10:43.376279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 13:10:43.376301 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 13:10:43.376324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 13:10:43.376350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 13:10:43.376614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 13:10:43.379532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfb81ab3f0178dc8064bd278e9e5cc42b3b2fda7282bb869d2f385b423e57d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e64aa9b1a15198d88b5f38b8ad0abdeef89430869b6f25c73e2f45806c539964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64aa9b1a15198d88b5f38b8ad0abdeef89430869b6f25c73e2f45806c539964\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:21Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.991493 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.991552 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.991565 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.991581 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.991593 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:21Z","lastTransitionTime":"2026-03-12T13:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:21 crc kubenswrapper[4778]: I0312 13:11:21.994050 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7250fa81a99a607a87f5f2fe4a85fcc2afbf7bfc12845bec4b5cbbed7784c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1325a4d5784843f5ef2dc629d3410d154ea07de5cd70cfae54d7111fe3e1ea3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78da788cc1d96e866afcf18edff24c064972e022b1b4c3f5bec3175da5e989fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2818aadc56c24df41309aa63fddf44dac870f041f206d59db4d8b8f88f728483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d8f2c481e6f3f8845b5e195a7fde8ae6415d25fb1701c25d93be7af1f4ef8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f94192e4eee991a699c32f338a8d452d8fc0ce5dfa1197694a237697c4500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5950d10319c346220cc78cf45052b31ea05a32f6d5f2511a963110c4a17824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fa32715eae6ff23b04c9b3865505ff2ed911d459033b9d6912866b5df2f8d22\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T13:11:09Z\\\",\\\"message\\\":\\\"nil\\\\u003e UUID: UUIDName:}]\\\\nI0312 13:11:09.567525 6707 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.92 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{73135118-cf1b-4568-bd31-2f50308bf69d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0312 13:11:09.567216 6707 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:09Z is after 2025-08-24T17:21:41Z]\\\\nI0312 13:11:09.567573 6707 model_client.go:398] Mutate operations generated as: [{Op:mutate 
Table:Lo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"n
ame\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bc4107e3fb5708a2acf2664ce876af4c682377a9b4ee393230b78d5b021552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bcc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:21Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.005304 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qdxm2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7050ddd9-aa01-4af7-9046-208f85f50a86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9af31ab4c27bb06d5a44a1c279e04f1b6f243054e271214ef771db4f0dc65e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jspwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qdxm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:22Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.026284 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rsshp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a1f8eaa-ac07-4478-be5d-0742de6b43c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f732882ddde9d0d0c1d1ef218276d4e14df3a1b36e4e956912efef4873092b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deca991c71450137550fdb82a01b81aaa63e6be64a6d7a96438f6b3d83a8bb5f\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://deca991c71450137550fdb82a01b81aaa63e6be64a6d7a96438f6b3d83a8bb5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae3ee2ab1f6fdf579b28f6ddf2010f9ac048dec5c7668dc467152185d4e1028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ae3ee2ab1f6fdf579b28f6ddf2010f9ac048dec5c7668dc467152185d4e1028\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b22a0b8a6e5c59e8195280cbe1579af847c709f8b6245df5a16df5af602f11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b22a0b8a6e5c59e8195280cbe1579af847c709f8b6245df5a16df5af602f11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://253b2ec5086a2db48bb42ae6024bab9c
a832325f9d96cd6ff6944ded362161e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://253b2ec5086a2db48bb42ae6024bab9ca832325f9d96cd6ff6944ded362161e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cb8f9537926237c4932ef2a9fb701804e03e132f2f56dd9d0e928b7340b1eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cb8f9537926237c4932ef2a9fb701804e03e132f2f56dd9d0e928b7340b1eeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-12T13:11:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rsshp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:22Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.093908 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.093980 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.093996 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.094046 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.094065 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:22Z","lastTransitionTime":"2026-03-12T13:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.195955 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.195996 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.196004 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.196022 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.196032 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:22Z","lastTransitionTime":"2026-03-12T13:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.253276 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rz9vw" Mar 12 13:11:22 crc kubenswrapper[4778]: E0312 13:11:22.253402 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rz9vw" podUID="0b59b25a-3acc-4d06-b91d-575f45463520" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.253436 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.253466 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:11:22 crc kubenswrapper[4778]: E0312 13:11:22.253541 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.253439 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 13:11:22 crc kubenswrapper[4778]: E0312 13:11:22.253612 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 13:11:22 crc kubenswrapper[4778]: E0312 13:11:22.253649 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.268613 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:22Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.289814 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb7a47e7099405d73886322b00b013bedee4fb573fa60c9b92d6be3311e65c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:22Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.299657 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.299738 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.299762 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.299793 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.299816 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:22Z","lastTransitionTime":"2026-03-12T13:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.304348 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rz9vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b59b25a-3acc-4d06-b91d-575f45463520\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rz9vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:22Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:22 crc 
kubenswrapper[4778]: I0312 13:11:22.320018 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4059dae21c8267dcec17364a3073a0f25addb6c308620992e9e609b5f5a32e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7ffc17b778f7bd099f0cc70b4e8bcfd77f9d45a9a47de9fedbe270a49f2826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:22Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.337780 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa219bcd71a6f1ae8a889a0409c2bbf61d1efac6a57ad8a22fefe6915e9d15be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-12T13:11:22Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.350884 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qdxm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7050ddd9-aa01-4af7-9046-208f85f50a86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9af31ab4c27bb06d5a44a1c279e04f1b6f243054e271214ef771db4f0dc65e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-jspwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qdxm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:22Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.370612 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rsshp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a1f8eaa-ac07-4478-be5d-0742de6b43c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f732882ddde9d0d0c1d1ef218276d4e14df3a1b3
6e4e956912efef4873092b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deca991c71450137550fdb82a01b81aaa63e6be64a6d7a96438f6b3d83a8bb5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://deca991c71450137550fdb82a01b81aaa63e6be64a6d7a96438f6b3d83a8bb5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae3ee2ab1f6fdf579b28f6ddf2010f9ac048dec5c7668dc467152185d4e1028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e49
6fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ae3ee2ab1f6fdf579b28f6ddf2010f9ac048dec5c7668dc467152185d4e1028\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b22a0b8a6e5c59e8195280cbe1579af847c709f8b6245df5a16df5af602f11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b22a0b8a6e5c59e8195280cbe1579af847c709f8b6245df5a16df5af602f11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://253b2ec5086a2db48bb42ae6024bab9ca832325f9d96cd6ff6944ded362161e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://253b2ec5086a2db48bb42ae6024bab9ca832325f9d96cd6ff6944ded362161e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cb8f9537926237c4932ef2a9fb701804e03e132f2f56dd9d0e928b7340b1eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cb8f9537926237c4932ef2a9fb701804e03e132f2f56dd9d0e928b7340b1eeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rsshp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:22Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.386822 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d15dec8c-5c3e-4103-a5b1-6ee7ff5990ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f640289dea724d5668fc009d628345ea104b2bbc9bc3471e42c3ec5f9acada1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc7259359df220c534d265305ee3ca44e7bcdce8da0d8b164132e02f7ed72e51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d60adb329e51ce7d877de68c1386f904ef0f717c82a5bfb69ab18438a4e536a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c7f2ade3aac502f0534414554216096b45d4f78e81f8ec213064a6205efdbd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c7f2ade3aac502f0534414554216096b45d4f78e81f8ec213064a6205efdbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T13:10:43Z\\\",\\\"message\\\":\\\"file observer\\\\nW0312 13:10:42.840582 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 13:10:42.841010 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 13:10:42.843036 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-461564172/tls.crt::/tmp/serving-cert-461564172/tls.key\\\\\\\"\\\\nI0312 13:10:43.350873 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 13:10:43.364662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 13:10:43.364721 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 13:10:43.365498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 13:10:43.365555 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 13:10:43.376143 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 13:10:43.376224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 13:10:43.376255 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 13:10:43.376279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 13:10:43.376301 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 13:10:43.376324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 13:10:43.376350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 13:10:43.376614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 13:10:43.379532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfb81ab3f0178dc8064bd278e9e5cc42b3b2fda7282bb869d2f385b423e57d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e64aa9b1a15198d88b5f38b8ad0abdeef89430869b6f25c73e2f45806c539964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64
aa9b1a15198d88b5f38b8ad0abdeef89430869b6f25c73e2f45806c539964\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:22Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.402217 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.402270 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.402306 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.402329 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.402345 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:22Z","lastTransitionTime":"2026-03-12T13:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.406359 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7250fa81a99a607a87f5f2fe4a85fcc2afbf7bfc12845bec4b5cbbed7784c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1325a4d5784843f5ef2dc629d3410d154ea07de5cd70cfae54d7111fe3e1ea3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78da788cc1d96e866afcf18edff24c064972e022b1b4c3f5bec3175da5e989fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2818aadc56c24df41309aa63fddf44dac870f041f206d59db4d8b8f88f728483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d8f2c481e6f3f8845b5e195a7fde8ae6415d25fb1701c25d93be7af1f4ef8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f94192e4eee991a699c32f338a8d452d8fc0ce5dfa1197694a237697c4500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5950d10319c346220cc78cf45052b31ea05a32f6d5f2511a963110c4a17824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fa32715eae6ff23b04c9b3865505ff2ed911d459033b9d6912866b5df2f8d22\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T13:11:09Z\\\",\\\"message\\\":\\\"nil\\\\u003e UUID: UUIDName:}]\\\\nI0312 13:11:09.567525 6707 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.92 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{73135118-cf1b-4568-bd31-2f50308bf69d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0312 13:11:09.567216 6707 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:09Z is after 2025-08-24T17:21:41Z]\\\\nI0312 13:11:09.567573 6707 model_client.go:398] Mutate operations generated as: [{Op:mutate 
Table:Lo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"n
ame\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bc4107e3fb5708a2acf2664ce876af4c682377a9b4ee393230b78d5b021552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bcc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:22Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.420476 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fhcz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5da98f94c85e3a8cd05c447fb097a078968eea25419a2b22f8abe956ef1dbaac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-762lp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fhcz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:22Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.434819 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sww7j" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de004a2f-3061-4aae-aa57-389219c71023\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478fb92ee4748af677ac761928a4173b506a3e56cf622279e2b2a0e322d4aef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d377b0d5d
0a854761257d7bc21a111aed96f85d302bf0c024e021f04cc555fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sww7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:22Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.447253 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dfhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfefcab6-a931-413e-8763-0f63f17911cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eecca419cc264b25f1883aac864cc545f0daf973e3b288bc8ea00a8b91e1f124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssbrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dfhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:22Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.463767 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:22Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.480392 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:22Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.498262 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24438fc6-dab0-4a9e-8b97-2532da76d9cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19a882eebff25a2613c68847fcf737648da24f5c8d7648edebb2cb00b6b8950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14daba92184fca91c6930d5b3e821f88408e0fd4
0a7793f2d70f82df7c9444ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2qx88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:22Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.504698 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.504746 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.504759 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:22 crc 
kubenswrapper[4778]: I0312 13:11:22.504779 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.504791 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:22Z","lastTransitionTime":"2026-03-12T13:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.517672 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cae155c-6ba6-44c1-9814-759fda7c3c86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7698145a8f9a3b12ca021d55f406bc6adf7e139c7e32156ced11a20de194608c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ddeb961084ae4041feb2ac05c9fdd2f5c11b4bdc5f5f33878c9ad9e83a2e1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ddeb961084ae4041feb2ac05c9fdd2f5c11b4bdc5f5f33878c9ad9e83a2e1a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-12T13:11:22Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.538290 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.538343 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.538353 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.538371 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.538388 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:22Z","lastTransitionTime":"2026-03-12T13:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:22 crc kubenswrapper[4778]: E0312 13:11:22.554207 4778 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9825271f-f529-4477-b3b1-2a00dbf9b03e\\\",\\\"systemUUID\\\":\\\"65870ff3-f0f2-4ca4-b489-075d672e37ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:22Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.559611 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.559673 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.559690 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.559712 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.559726 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:22Z","lastTransitionTime":"2026-03-12T13:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:22 crc kubenswrapper[4778]: E0312 13:11:22.574221 4778 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9825271f-f529-4477-b3b1-2a00dbf9b03e\\\",\\\"systemUUID\\\":\\\"65870ff3-f0f2-4ca4-b489-075d672e37ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:22Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.578166 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.578223 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.578241 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.578262 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.578276 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:22Z","lastTransitionTime":"2026-03-12T13:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.594827 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.594879 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.594891 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.594910 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.594924 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:22Z","lastTransitionTime":"2026-03-12T13:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.613984 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.614037 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.614051 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.614107 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.614123 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:22Z","lastTransitionTime":"2026-03-12T13:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:22 crc kubenswrapper[4778]: E0312 13:11:22.633447 4778 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9825271f-f529-4477-b3b1-2a00dbf9b03e\\\",\\\"systemUUID\\\":\\\"65870ff3-f0f2-4ca4-b489-075d672e37ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:22Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:22 crc kubenswrapper[4778]: E0312 13:11:22.633617 4778 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.635754 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.635819 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.635832 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.635849 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.635859 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:22Z","lastTransitionTime":"2026-03-12T13:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.738817 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.738882 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.738902 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.738927 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.738945 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:22Z","lastTransitionTime":"2026-03-12T13:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.801177 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8bcc9_65cd795e-eb6e-4995-a4c1-9dea6f425ac5/ovnkube-controller/2.log" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.802213 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8bcc9_65cd795e-eb6e-4995-a4c1-9dea6f425ac5/ovnkube-controller/1.log" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.805813 4778 generic.go:334] "Generic (PLEG): container finished" podID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" containerID="cc5950d10319c346220cc78cf45052b31ea05a32f6d5f2511a963110c4a17824" exitCode=1 Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.805875 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" event={"ID":"65cd795e-eb6e-4995-a4c1-9dea6f425ac5","Type":"ContainerDied","Data":"cc5950d10319c346220cc78cf45052b31ea05a32f6d5f2511a963110c4a17824"} Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.806270 4778 scope.go:117] "RemoveContainer" containerID="3fa32715eae6ff23b04c9b3865505ff2ed911d459033b9d6912866b5df2f8d22" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.806744 4778 scope.go:117] "RemoveContainer" containerID="cc5950d10319c346220cc78cf45052b31ea05a32f6d5f2511a963110c4a17824" Mar 12 13:11:22 crc kubenswrapper[4778]: E0312 13:11:22.806975 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8bcc9_openshift-ovn-kubernetes(65cd795e-eb6e-4995-a4c1-9dea6f425ac5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" podUID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.821961 4778 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cae155c-6ba6-44c1-9814-759fda7c3c86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7698145a8f9a3b12ca021d55f406bc6adf7e139c7e32156ced11a20de194608c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ddeb961084ae4041feb2ac05c9fdd2f5c11b4bdc5f5f33878c9ad9e83a2e1a2\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ddeb961084ae4041feb2ac05c9fdd2f5c11b4bdc5f5f33878c9ad9e83a2e1a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:22Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.839700 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:22Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.843228 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.843299 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.843311 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.843331 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.843381 4778 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:22Z","lastTransitionTime":"2026-03-12T13:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.854420 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24438fc6-dab0-4a9e-8b97-2532da76d9cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19a882eebff25a2613c68847fcf737648da24f5c8d7648edebb2cb00b6b8950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14daba92184fca91c6930d5b3e821f88408e0fd40a7793f2d70f82df7c9444ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2qx88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-03-12T13:11:22Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.868101 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4059dae21c8267dcec17364a3073a0f25addb6c308620992e9e609b5f5a32e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7ffc17b778f7bd099f0cc70b4e8bcfd77f9d45a9a47de9fedb
e270a49f2826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:22Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.881942 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa219bcd71a6f1ae8a889a0409c2bbf61d1efac6a57ad8a22fefe6915e9d15be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-12T13:11:22Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.898424 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:22Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.918728 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb7a47e7099405d73886322b00b013bedee4fb573fa60c9b92d6be3311e65c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:22Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.930533 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rz9vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b59b25a-3acc-4d06-b91d-575f45463520\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rz9vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:22Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:22 crc 
kubenswrapper[4778]: I0312 13:11:22.945466 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d15dec8c-5c3e-4103-a5b1-6ee7ff5990ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f640289dea724d5668fc009d628345ea104b2bbc9bc3471e42c3ec5f9acada1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc7259359df220c534d265305ee3ca44e7bcdce8da0d8b164132e02f7ed72e51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2d60adb329e51ce7d877de68c1386f904ef0f717c82a5bfb69ab18438a4e536a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c7f2ade3aac502f0534414554216096b45d4f78e81f8ec213064a6205efdbd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c7f2ade3aac502f0534414554216096b45d4f78e81f8ec213064a6205efdbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T13:10:43Z\\\",\\\"message\\\":\\\"file observer\\\\nW0312 13:10:42.840582 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 13:10:42.841010 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 13:10:42.843036 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-461564172/tls.crt::/tmp/serving-cert-461564172/tls.key\\\\\\\"\\\\nI0312 13:10:43.350873 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 13:10:43.364662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 13:10:43.364721 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 13:10:43.365498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 13:10:43.365555 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 13:10:43.376143 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 13:10:43.376224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 13:10:43.376255 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 13:10:43.376279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 13:10:43.376301 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 13:10:43.376324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 13:10:43.376350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 13:10:43.376614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 13:10:43.379532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfb81ab3f0178dc8064bd278e9e5cc42b3b2fda7282bb869d2f385b423e57d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e64aa9b1a15198d88b5f38b8ad0abdeef89430869b6f25c73e2f45806c539964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64aa9b1a15198d88b5f38b8ad0abdeef89430869b6f25c73e2f45806c539964\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:22Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.946367 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.946436 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.946457 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.946482 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.946502 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:22Z","lastTransitionTime":"2026-03-12T13:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.964913 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7250fa81a99a607a87f5f2fe4a85fcc2afbf7bfc12845bec4b5cbbed7784c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1325a4d5784843f5ef2dc629d3410d154ea07de5cd70cfae54d7111fe3e1ea3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78da788cc1d96e866afcf18edff24c064972e022b1b4c3f5bec3175da5e989fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2818aadc56c24df41309aa63fddf44dac870f041f206d59db4d8b8f88f728483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d8f2c481e6f3f8845b5e195a7fde8ae6415d25fb1701c25d93be7af1f4ef8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f94192e4eee991a699c32f338a8d452d8fc0ce5dfa1197694a237697c4500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5950d10319c346220cc78cf45052b31ea05a32f6d5f2511a963110c4a17824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fa32715eae6ff23b04c9b3865505ff2ed911d459033b9d6912866b5df2f8d22\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T13:11:09Z\\\",\\\"message\\\":\\\"nil\\\\u003e UUID: UUIDName:}]\\\\nI0312 13:11:09.567525 6707 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.92 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{73135118-cf1b-4568-bd31-2f50308bf69d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0312 13:11:09.567216 6707 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:09Z is after 2025-08-24T17:21:41Z]\\\\nI0312 13:11:09.567573 6707 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Lo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc5950d10319c346220cc78cf45052b31ea05a32f6d5f2511a963110c4a17824\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T13:11:22Z\\\",\\\"message\\\":\\\"ificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:22Z is after 2025-08-24T17:21:41Z\\\\nI0312 13:11:22.188287 6941 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nF0312 13:11:22.188260 6941 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy 
controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:22Z is after 2025-08-24T17:21:41Z]\\\\nI0312 13:11:22.188294 6941 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sww7j\\\\nI0312 13:11:22.188297 6941 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-4dfhs\\\\nI031\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\
"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bc4107e3fb5708a2acf2664ce876af4c682377a9b4ee393230b78d5b021552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bcc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:22Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.977853 
4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qdxm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7050ddd9-aa01-4af7-9046-208f85f50a86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9af31ab4c27bb06d5a44a1c279e04f1b6f243054e271214ef771db4f0dc65e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jspwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\"
,\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qdxm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:22Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:22 crc kubenswrapper[4778]: I0312 13:11:22.994717 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rsshp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a1f8eaa-ac07-4478-be5d-0742de6b43c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f732882ddde9d0d0c1d1ef218276d4e14df3a1b36e4e956912efef4873092b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a
95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deca991c71450137550fdb82a01b81aaa63e6be64a6d7a96438f6b3d83a8bb5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://deca991c71450137550fdb82a01b81aaa63e6be64a6d7a96438f6b3d83a8bb5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae3ee2ab1f6fdf579b28f6ddf2010f9ac048dec5c7668dc467152185d4e1028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://2ae3ee2ab1f6fdf579b28f6ddf2010f9ac048dec5c7668dc467152185d4e1028\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b22a0b8a6e5c59e8195280cbe1579af847c709f8b6245df5a16df5af602f11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b22a0b8a6e5c59e8195280cbe1579af847c709f8b6245df5a16df5af602f11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://253b2ec5086a2db48bb42ae6024bab9ca832325f9d96cd6ff6944ded362161e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://253b2ec5086a2db48bb42ae6024bab9ca832325f9d96cd6ff6944ded362161e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cb8f9537926237c4932ef2a9fb701804e03e132f2f56dd9d0e928b7340b1eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cb8f9537926237c4932ef2a9fb701804e03e132f2f56dd9d0e928b7
340b1eeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rsshp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:22Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.009512 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:23Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.025859 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fhcz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5da98f94c85e3a8cd05c447fb097a078968eea25419a2b22f8abe956ef1dbaac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-762lp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fhcz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:23Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.038653 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sww7j" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de004a2f-3061-4aae-aa57-389219c71023\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478fb92ee4748af677ac761928a4173b506a3e56cf622279e2b2a0e322d4aef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d377b0d5d
0a854761257d7bc21a111aed96f85d302bf0c024e021f04cc555fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sww7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:23Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.049272 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.049692 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.049786 4778 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.049823 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dfhs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfefcab6-a931-413e-8763-0f63f17911cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eecca419cc264b25f1883aac864cc545f0daf973e3b288bc8ea00a8b91e1f124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\
":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssbrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dfhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:23Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.049865 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.049913 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:23Z","lastTransitionTime":"2026-03-12T13:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.154585 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.154692 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.154710 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.154736 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.154765 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:23Z","lastTransitionTime":"2026-03-12T13:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.257620 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.257687 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.257701 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.257724 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.257739 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:23Z","lastTransitionTime":"2026-03-12T13:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.361281 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.361361 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.361376 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.361393 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.361406 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:23Z","lastTransitionTime":"2026-03-12T13:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.464161 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.464288 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.464311 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.464341 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.464363 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:23Z","lastTransitionTime":"2026-03-12T13:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.566700 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.566768 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.566778 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.566792 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.566802 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:23Z","lastTransitionTime":"2026-03-12T13:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.669140 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.669231 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.669256 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.669283 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.669308 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:23Z","lastTransitionTime":"2026-03-12T13:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.772217 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.772283 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.772309 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.772333 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.772379 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:23Z","lastTransitionTime":"2026-03-12T13:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.810510 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8bcc9_65cd795e-eb6e-4995-a4c1-9dea6f425ac5/ovnkube-controller/2.log" Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.813493 4778 scope.go:117] "RemoveContainer" containerID="cc5950d10319c346220cc78cf45052b31ea05a32f6d5f2511a963110c4a17824" Mar 12 13:11:23 crc kubenswrapper[4778]: E0312 13:11:23.813633 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8bcc9_openshift-ovn-kubernetes(65cd795e-eb6e-4995-a4c1-9dea6f425ac5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" podUID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.826469 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4059dae21c8267dcec17364a3073a0f25addb6c308620992e9e609b5f5a32e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7ffc17b778f7bd099f0cc70b4e8bcfd77f9d45a9a47de9fedbe270a49f2826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:23Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.835911 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa219bcd71a6f1ae8a889a0409c2bbf61d1efac6a57ad8a22fefe6915e9d15be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-12T13:11:23Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.845365 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:23Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.857327 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb7a47e7099405d73886322b00b013bedee4fb573fa60c9b92d6be3311e65c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:23Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.867094 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rz9vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b59b25a-3acc-4d06-b91d-575f45463520\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rz9vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:23Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:23 crc 
kubenswrapper[4778]: I0312 13:11:23.874231 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.874306 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.874320 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.874338 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.874352 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:23Z","lastTransitionTime":"2026-03-12T13:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.880990 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d15dec8c-5c3e-4103-a5b1-6ee7ff5990ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f640289dea724d5668fc009d628345ea104b2bbc9bc3471e42c3ec5f9acada1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc7259359df220c534d265305ee3ca44e7bcdce8da0d8b164132e02f7ed72e51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2d60adb329e51ce7d877de68c1386f904ef0f717c82a5bfb69ab18438a4e536a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14c7f2ade3aac502f0534414554216096b45d4f78e81f8ec213064a6205efdbd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c7f2ade3aac502f0534414554216096b45d4f78e81f8ec213064a6205efdbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T13:10:43Z\\\",\\\"message\\\":\\\"file observer\\\\nW0312 13:10:42.840582 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 13:10:42.841010 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 13:10:42.843036 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-461564172/tls.crt::/tmp/serving-cert-461564172/tls.key\\\\\\\"\\\\nI0312 13:10:43.350873 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 13:10:43.364662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 13:10:43.364721 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 13:10:43.365498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 13:10:43.365555 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 13:10:43.376143 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 13:10:43.376224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 13:10:43.376255 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 13:10:43.376279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 13:10:43.376301 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 13:10:43.376324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 13:10:43.376350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 13:10:43.376614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 13:10:43.379532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfb81ab3f0178dc8064bd278e9e5cc42b3b2fda7282bb869d2f385b423e57d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e64aa9b1a15198d88b5f38b8ad0abdeef89430869b6f25c73e2f45806c539964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64aa9b1a15198d88b5f38b8ad0abdeef89430869b6f25c73e2f45806c539964\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:23Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.898367 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7250fa81a99a607a87f5f2fe4a85fcc2afbf7bfc12845bec4b5cbbed7784c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1325a4d5784843f5ef2dc629d3410d154ea07de5cd70cfae54d7111fe3e1ea3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78da788cc1d96e866afcf18edff24c064972e022b1b4c3f5bec3175da5e989fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2818aadc56c24df41309aa63fddf44dac870f041f206d59db4d8b8f88f728483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d8f2c481e6f3f8845b5e195a7fde8ae6415d25fb1701c25d93be7af1f4ef8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f94192e4eee991a699c32f338a8d452d8fc0ce5dfa1197694a237697c4500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5950d10319c346220cc78cf45052b31ea05a32f6d5f2511a963110c4a17824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc5950d10319c346220cc78cf45052b31ea05a32f6d5f2511a963110c4a17824\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T13:11:22Z\\\",\\\"message\\\":\\\"ificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:22Z is after 2025-08-24T17:21:41Z\\\\nI0312 13:11:22.188287 6941 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nF0312 13:11:22.188260 6941 ovnkube.go:137] failed to run ovnkube: [failed to start 
network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:22Z is after 2025-08-24T17:21:41Z]\\\\nI0312 13:11:22.188294 6941 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sww7j\\\\nI0312 13:11:22.188297 6941 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-4dfhs\\\\nI031\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8bcc9_openshift-ovn-kubernetes(65cd795e-eb6e-4995-a4c1-9dea6f425ac5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bc4107e3fb5708a2acf2664ce876af4c682377a9b4ee393230b78d5b021552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce861cdc0bd23c8ce1
b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bcc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:23Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.910094 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qdxm2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7050ddd9-aa01-4af7-9046-208f85f50a86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9af31ab4c27bb06d5a44a1c279e04f1b6f243054e271214ef771db4f0dc65e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jspwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qdxm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:23Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.925580 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rsshp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a1f8eaa-ac07-4478-be5d-0742de6b43c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f732882ddde9d0d0c1d1ef218276d4e14df3a1b36e4e956912efef4873092b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deca991c71450137550fdb82a01b81aaa63e6be64a6d7a96438f6b3d83a8bb5f\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://deca991c71450137550fdb82a01b81aaa63e6be64a6d7a96438f6b3d83a8bb5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae3ee2ab1f6fdf579b28f6ddf2010f9ac048dec5c7668dc467152185d4e1028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ae3ee2ab1f6fdf579b28f6ddf2010f9ac048dec5c7668dc467152185d4e1028\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b22a0b8a6e5c59e8195280cbe1579af847c709f8b6245df5a16df5af602f11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b22a0b8a6e5c59e8195280cbe1579af847c709f8b6245df5a16df5af602f11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://253b2ec5086a2db48bb42ae6024bab9c
a832325f9d96cd6ff6944ded362161e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://253b2ec5086a2db48bb42ae6024bab9ca832325f9d96cd6ff6944ded362161e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cb8f9537926237c4932ef2a9fb701804e03e132f2f56dd9d0e928b7340b1eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cb8f9537926237c4932ef2a9fb701804e03e132f2f56dd9d0e928b7340b1eeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-12T13:11:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rsshp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:23Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.939448 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:23Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.950494 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fhcz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5da98f94c85e3a8cd05c447fb097a078968eea25419a2b22f8abe956ef1dbaac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-762lp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fhcz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:23Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.960435 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sww7j" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de004a2f-3061-4aae-aa57-389219c71023\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478fb92ee4748af677ac761928a4173b506a3e56cf622279e2b2a0e322d4aef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d377b0d5d
0a854761257d7bc21a111aed96f85d302bf0c024e021f04cc555fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sww7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:23Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.970156 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dfhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfefcab6-a931-413e-8763-0f63f17911cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eecca419cc264b25f1883aac864cc545f0daf973e3b288bc8ea00a8b91e1f124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssbrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dfhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:23Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.976989 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.977052 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.977064 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.977078 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.977087 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:23Z","lastTransitionTime":"2026-03-12T13:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.981060 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cae155c-6ba6-44c1-9814-759fda7c3c86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7698145a8f9a3b12ca021d55f406bc6adf7e139c7e32156ced11a20de194608c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ddeb961084ae4041feb2ac05c9fdd2f5c11b4bdc5f5f33878c9ad9e83a2e1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ddeb961084ae4041feb2ac05c9fdd2f5c11b4bdc5f5f33878c9ad9e83a2e1a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:23Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:23 crc kubenswrapper[4778]: I0312 13:11:23.994676 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:23Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:24 crc kubenswrapper[4778]: I0312 13:11:24.008879 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24438fc6-dab0-4a9e-8b97-2532da76d9cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19a882eebff25a2613c68847fcf737648da24f5c8d7648edebb2cb00b6b8950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14daba92184fca91c6930d5b3e821f88408e0fd4
0a7793f2d70f82df7c9444ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2qx88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:24Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:24 crc kubenswrapper[4778]: I0312 13:11:24.079407 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:24 crc kubenswrapper[4778]: I0312 13:11:24.079445 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:24 crc kubenswrapper[4778]: I0312 13:11:24.079458 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:24 crc 
kubenswrapper[4778]: I0312 13:11:24.079473 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:24 crc kubenswrapper[4778]: I0312 13:11:24.079483 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:24Z","lastTransitionTime":"2026-03-12T13:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:24 crc kubenswrapper[4778]: I0312 13:11:24.183928 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:24 crc kubenswrapper[4778]: I0312 13:11:24.183985 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:24 crc kubenswrapper[4778]: I0312 13:11:24.184002 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:24 crc kubenswrapper[4778]: I0312 13:11:24.184023 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:24 crc kubenswrapper[4778]: I0312 13:11:24.184041 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:24Z","lastTransitionTime":"2026-03-12T13:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:24 crc kubenswrapper[4778]: I0312 13:11:24.253082 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 13:11:24 crc kubenswrapper[4778]: I0312 13:11:24.253152 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:11:24 crc kubenswrapper[4778]: I0312 13:11:24.253168 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rz9vw" Mar 12 13:11:24 crc kubenswrapper[4778]: E0312 13:11:24.253282 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 13:11:24 crc kubenswrapper[4778]: I0312 13:11:24.253300 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 13:11:24 crc kubenswrapper[4778]: E0312 13:11:24.253408 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rz9vw" podUID="0b59b25a-3acc-4d06-b91d-575f45463520" Mar 12 13:11:24 crc kubenswrapper[4778]: E0312 13:11:24.253516 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 13:11:24 crc kubenswrapper[4778]: E0312 13:11:24.254066 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 13:11:24 crc kubenswrapper[4778]: I0312 13:11:24.267636 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Mar 12 13:11:24 crc kubenswrapper[4778]: I0312 13:11:24.287236 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:24 crc kubenswrapper[4778]: I0312 13:11:24.287303 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:24 crc kubenswrapper[4778]: I0312 13:11:24.287316 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:24 crc kubenswrapper[4778]: I0312 13:11:24.287333 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:24 crc kubenswrapper[4778]: I0312 13:11:24.287346 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:24Z","lastTransitionTime":"2026-03-12T13:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:24 crc kubenswrapper[4778]: I0312 13:11:24.390003 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:24 crc kubenswrapper[4778]: I0312 13:11:24.390051 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:24 crc kubenswrapper[4778]: I0312 13:11:24.390063 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:24 crc kubenswrapper[4778]: I0312 13:11:24.390077 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:24 crc kubenswrapper[4778]: I0312 13:11:24.390088 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:24Z","lastTransitionTime":"2026-03-12T13:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:24 crc kubenswrapper[4778]: I0312 13:11:24.492327 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:24 crc kubenswrapper[4778]: I0312 13:11:24.492367 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:24 crc kubenswrapper[4778]: I0312 13:11:24.492376 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:24 crc kubenswrapper[4778]: I0312 13:11:24.492389 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:24 crc kubenswrapper[4778]: I0312 13:11:24.492399 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:24Z","lastTransitionTime":"2026-03-12T13:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:24 crc kubenswrapper[4778]: I0312 13:11:24.595370 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:24 crc kubenswrapper[4778]: I0312 13:11:24.595410 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:24 crc kubenswrapper[4778]: I0312 13:11:24.595421 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:24 crc kubenswrapper[4778]: I0312 13:11:24.595436 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:24 crc kubenswrapper[4778]: I0312 13:11:24.595447 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:24Z","lastTransitionTime":"2026-03-12T13:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:24 crc kubenswrapper[4778]: I0312 13:11:24.698213 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:24 crc kubenswrapper[4778]: I0312 13:11:24.698280 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:24 crc kubenswrapper[4778]: I0312 13:11:24.698304 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:24 crc kubenswrapper[4778]: I0312 13:11:24.698340 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:24 crc kubenswrapper[4778]: I0312 13:11:24.698362 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:24Z","lastTransitionTime":"2026-03-12T13:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:24 crc kubenswrapper[4778]: I0312 13:11:24.801002 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:24 crc kubenswrapper[4778]: I0312 13:11:24.801050 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:24 crc kubenswrapper[4778]: I0312 13:11:24.801063 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:24 crc kubenswrapper[4778]: I0312 13:11:24.801078 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:24 crc kubenswrapper[4778]: I0312 13:11:24.801089 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:24Z","lastTransitionTime":"2026-03-12T13:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:24 crc kubenswrapper[4778]: I0312 13:11:24.904026 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:24 crc kubenswrapper[4778]: I0312 13:11:24.904307 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:24 crc kubenswrapper[4778]: I0312 13:11:24.904367 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:24 crc kubenswrapper[4778]: I0312 13:11:24.904435 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:24 crc kubenswrapper[4778]: I0312 13:11:24.904550 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:24Z","lastTransitionTime":"2026-03-12T13:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.008046 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.008116 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.008137 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.008160 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.008177 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:25Z","lastTransitionTime":"2026-03-12T13:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.110979 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.111324 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.111476 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.111669 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.111857 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:25Z","lastTransitionTime":"2026-03-12T13:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.214712 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.214768 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.214785 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.214807 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.214825 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:25Z","lastTransitionTime":"2026-03-12T13:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.254097 4778 scope.go:117] "RemoveContainer" containerID="14c7f2ade3aac502f0534414554216096b45d4f78e81f8ec213064a6205efdbd" Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.320523 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.320588 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.320601 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.320619 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.320631 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:25Z","lastTransitionTime":"2026-03-12T13:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.423230 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.423280 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.423291 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.423310 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.423324 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:25Z","lastTransitionTime":"2026-03-12T13:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.526637 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.526679 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.526691 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.526709 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.526721 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:25Z","lastTransitionTime":"2026-03-12T13:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.634634 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.634668 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.634682 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.634702 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.634717 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:25Z","lastTransitionTime":"2026-03-12T13:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.737984 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.738047 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.738068 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.738095 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.738117 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:25Z","lastTransitionTime":"2026-03-12T13:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.822853 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.826827 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5019c5de667abecf425384b69c58060050b28003230e410f44934c9a7ad5484c"} Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.827678 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.840381 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.840419 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.840429 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.840445 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.840456 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:25Z","lastTransitionTime":"2026-03-12T13:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.845706 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:25Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.859242 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fhcz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5da98f94c85e3a8cd05c447fb097a078968eea25419a2b22f8abe956ef1dbaac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-762lp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fhcz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:25Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.869941 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sww7j" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de004a2f-3061-4aae-aa57-389219c71023\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478fb92ee4748af677ac761928a4173b506a3e56cf622279e2b2a0e322d4aef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d377b0d5d
0a854761257d7bc21a111aed96f85d302bf0c024e021f04cc555fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sww7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:25Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.879099 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dfhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfefcab6-a931-413e-8763-0f63f17911cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eecca419cc264b25f1883aac864cc545f0daf973e3b288bc8ea00a8b91e1f124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssbrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dfhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:25Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.890852 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cae155c-6ba6-44c1-9814-759fda7c3c86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7698145a8f9a3b12ca021d55f406bc6adf7e139c7e32156ced11a20de194608c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a
42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ddeb961084ae4041feb2ac05c9fdd2f5c11b4bdc5f5f33878c9ad9e83a2e1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ddeb961084ae4041feb2ac05c9fdd2f5c11b4bdc5f5f33878c9ad9e83a2e1a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:25Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:25 crc 
kubenswrapper[4778]: I0312 13:11:25.913780 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2141104-4933-46fd-9968-0d9498779462\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e77ceb524173a1cdbf6c93b730412dcd8b6aedcee06c40fb757cc8e738e380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://3b52b689d66d254a521c980330e792ecbcce1102f39f97d6149bf48ad24c5de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc82a592c28b74aef165a164cc4fe4e2e38b6fb48e59f499476a252197e3fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f046d558bf242476327e1ee39ea82ebe104caa081df71caa51a716490d8a6b21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e2266711bd32e96e742549772474d9fa43d8f368021e8a7aba3fd1c7b0b87b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a347cade99b7bdbe676a020faf0a90b281672f16c4f580455856786ed781d3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a347cade99b7bdbe676a020faf0a90b281672f16c4f580455856786ed781d3f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3701d4b9c229934646d070a25b4bf944ac544d227ff9ba89fb1885cecfb562de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3701d4b9c229934646d070a25b4bf944ac544d227ff9ba89fb1885cecfb562de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9e0d7207d43b4b2bb79583cb1bb2f31034392eb4193b9b3b2f547f474d335250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e0d7207d43b4b2bb79583cb1bb2f31034392eb4193b9b3b2f547f474d335250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:25Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.928630 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:25Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.939279 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24438fc6-dab0-4a9e-8b97-2532da76d9cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19a882eebff25a2613c68847fcf737648da24f5c8d7648edebb2cb00b6b8950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14daba92184fca91c6930d5b3e821f88408e0fd4
0a7793f2d70f82df7c9444ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2qx88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:25Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.942643 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.942674 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.942682 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:25 crc 
kubenswrapper[4778]: I0312 13:11:25.942695 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.942704 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:25Z","lastTransitionTime":"2026-03-12T13:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.951705 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4059dae21c8267dcec17364a3073a0f25addb6c308620992e9e609b5f5a32e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7ffc17b778f7bd099f0cc70b4e8bcfd77f9d45a9a47de9fedbe270a49f2826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:25Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 
13:11:25.965559 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa219bcd71a6f1ae8a889a0409c2bbf61d1efac6a57ad8a22fefe6915e9d15be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:25Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.979490 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:25Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:25 crc kubenswrapper[4778]: I0312 13:11:25.992219 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb7a47e7099405d73886322b00b013bedee4fb573fa60c9b92d6be3311e65c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:25Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:26 crc kubenswrapper[4778]: I0312 13:11:26.005204 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rz9vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b59b25a-3acc-4d06-b91d-575f45463520\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rz9vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:26Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:26 crc 
kubenswrapper[4778]: I0312 13:11:26.019668 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d15dec8c-5c3e-4103-a5b1-6ee7ff5990ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f640289dea724d5668fc009d628345ea104b2bbc9bc3471e42c3ec5f9acada1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc7259359df220c534d265305ee3ca44e7bcdce8da0d8b164132e02f7ed72e51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2d60adb329e51ce7d877de68c1386f904ef0f717c82a5bfb69ab18438a4e536a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5019c5de667abecf425384b69c58060050b28003230e410f44934c9a7ad5484c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c7f2ade3aac502f0534414554216096b45d4f78e81f8ec213064a6205efdbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T13:10:43Z\\\",\\\"message\\\":\\\"file observer\\\\nW0312 13:10:42.840582 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 13:10:42.841010 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 13:10:42.843036 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-461564172/tls.crt::/tmp/serving-cert-461564172/tls.key\\\\\\\"\\\\nI0312 13:10:43.350873 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 13:10:43.364662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 13:10:43.364721 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 13:10:43.365498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 13:10:43.365555 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 13:10:43.376143 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 13:10:43.376224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 13:10:43.376255 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 13:10:43.376279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 13:10:43.376301 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 13:10:43.376324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 13:10:43.376350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 13:10:43.376614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 13:10:43.379532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfb81ab3f0178dc8064bd278e9e5cc42b3b2fda7282bb869d2f385b423e57d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e64aa9b1a15198d88b5f38b8ad0abdeef89430869b6f25c73e2f45806c539964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64aa9b1a15198d88b5f38b8ad0abdeef89430869b6f25c73e2f45806c539964\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:26Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:26 crc kubenswrapper[4778]: I0312 13:11:26.038740 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7250fa81a99a607a87f5f2fe4a85fcc2afbf7bfc12845bec4b5cbbed7784c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1325a4d5784843f5ef2dc629d3410d154ea07de5cd70cfae54d7111fe3e1ea3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78da788cc1d96e866afcf18edff24c064972e022b1b4c3f5bec3175da5e989fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2818aadc56c24df41309aa63fddf44dac870f041f206d59db4d8b8f88f728483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d8f2c481e6f3f8845b5e195a7fde8ae6415d25fb1701c25d93be7af1f4ef8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f94192e4eee991a699c32f338a8d452d8fc0ce5dfa1197694a237697c4500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5950d10319c346220cc78cf45052b31ea05a32f6d5f2511a963110c4a17824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc5950d10319c346220cc78cf45052b31ea05a32f6d5f2511a963110c4a17824\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T13:11:22Z\\\",\\\"message\\\":\\\"ificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:22Z is after 2025-08-24T17:21:41Z\\\\nI0312 13:11:22.188287 6941 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nF0312 13:11:22.188260 6941 ovnkube.go:137] failed to run ovnkube: [failed to start 
network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:22Z is after 2025-08-24T17:21:41Z]\\\\nI0312 13:11:22.188294 6941 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sww7j\\\\nI0312 13:11:22.188297 6941 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-4dfhs\\\\nI031\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8bcc9_openshift-ovn-kubernetes(65cd795e-eb6e-4995-a4c1-9dea6f425ac5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bc4107e3fb5708a2acf2664ce876af4c682377a9b4ee393230b78d5b021552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce861cdc0bd23c8ce1
b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bcc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:26Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:26 crc kubenswrapper[4778]: I0312 13:11:26.044737 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:26 crc kubenswrapper[4778]: I0312 13:11:26.044766 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:26 crc kubenswrapper[4778]: I0312 13:11:26.044775 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:26 crc kubenswrapper[4778]: I0312 13:11:26.044832 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:26 crc kubenswrapper[4778]: I0312 13:11:26.044842 4778 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:26Z","lastTransitionTime":"2026-03-12T13:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:26 crc kubenswrapper[4778]: I0312 13:11:26.049684 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qdxm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7050ddd9-aa01-4af7-9046-208f85f50a86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9af31ab4c27bb06d5a44a1c279e04f1b6f243054e271214ef771db4f0dc65e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jspwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qdxm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:26Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:26 crc kubenswrapper[4778]: I0312 13:11:26.062465 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rsshp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a1f8eaa-ac07-4478-be5d-0742de6b43c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f732882ddde9d0d0c1d1ef218276d4e14df3a1b36e4e956912efef4873092b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deca991c71450137550fdb82a01b81aaa63e6be64a6d7a96438f6b3d83a8bb5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://deca991c71450137550fdb82a01b81aaa63e6be64a6d7a96438f6b3d83a8bb5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae3ee2ab1f6fdf579b28f6ddf2010f9ac048dec5c7668dc467152185d4e1028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ae3ee2ab1f6fdf579b28f6ddf2010f9ac048dec5c7668dc467152185d4e1028\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b22a
0b8a6e5c59e8195280cbe1579af847c709f8b6245df5a16df5af602f11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b22a0b8a6e5c59e8195280cbe1579af847c709f8b6245df5a16df5af602f11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://253b2ec5086a2db48bb42ae6024bab9ca832325f9d96cd6ff6944ded362161e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://253b2ec5086a2db48bb42ae6024bab9ca832325f9d96cd6ff6944ded362161e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cb8f9537926237c4932ef2a9fb701804e03e132f2f56dd9d0e928b7340b1eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cb8f9537926237c4932ef2a9fb701804e03e132f2f56dd9d0e928b7340b1eeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rsshp\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:26Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:26 crc kubenswrapper[4778]: I0312 13:11:26.147920 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:26 crc kubenswrapper[4778]: I0312 13:11:26.147976 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:26 crc kubenswrapper[4778]: I0312 13:11:26.147993 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:26 crc kubenswrapper[4778]: I0312 13:11:26.148018 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:26 crc kubenswrapper[4778]: I0312 13:11:26.148032 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:26Z","lastTransitionTime":"2026-03-12T13:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:26 crc kubenswrapper[4778]: I0312 13:11:26.250242 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:26 crc kubenswrapper[4778]: I0312 13:11:26.250297 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:26 crc kubenswrapper[4778]: I0312 13:11:26.250315 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:26 crc kubenswrapper[4778]: I0312 13:11:26.250339 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:26 crc kubenswrapper[4778]: I0312 13:11:26.250355 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:26Z","lastTransitionTime":"2026-03-12T13:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:26 crc kubenswrapper[4778]: I0312 13:11:26.252806 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rz9vw" Mar 12 13:11:26 crc kubenswrapper[4778]: I0312 13:11:26.252841 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 13:11:26 crc kubenswrapper[4778]: I0312 13:11:26.252972 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:11:26 crc kubenswrapper[4778]: E0312 13:11:26.253063 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 13:11:26 crc kubenswrapper[4778]: I0312 13:11:26.253082 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 13:11:26 crc kubenswrapper[4778]: E0312 13:11:26.253245 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 13:11:26 crc kubenswrapper[4778]: E0312 13:11:26.253451 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 13:11:26 crc kubenswrapper[4778]: E0312 13:11:26.253612 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rz9vw" podUID="0b59b25a-3acc-4d06-b91d-575f45463520" Mar 12 13:11:26 crc kubenswrapper[4778]: I0312 13:11:26.352839 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:26 crc kubenswrapper[4778]: I0312 13:11:26.352874 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:26 crc kubenswrapper[4778]: I0312 13:11:26.352886 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:26 crc kubenswrapper[4778]: I0312 13:11:26.352899 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:26 crc kubenswrapper[4778]: I0312 13:11:26.352907 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:26Z","lastTransitionTime":"2026-03-12T13:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:26 crc kubenswrapper[4778]: I0312 13:11:26.455142 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:26 crc kubenswrapper[4778]: I0312 13:11:26.455170 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:26 crc kubenswrapper[4778]: I0312 13:11:26.455191 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:26 crc kubenswrapper[4778]: I0312 13:11:26.455204 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:26 crc kubenswrapper[4778]: I0312 13:11:26.455214 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:26Z","lastTransitionTime":"2026-03-12T13:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:26 crc kubenswrapper[4778]: I0312 13:11:26.558039 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:26 crc kubenswrapper[4778]: I0312 13:11:26.558087 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:26 crc kubenswrapper[4778]: I0312 13:11:26.558100 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:26 crc kubenswrapper[4778]: I0312 13:11:26.558117 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:26 crc kubenswrapper[4778]: I0312 13:11:26.558129 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:26Z","lastTransitionTime":"2026-03-12T13:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:26 crc kubenswrapper[4778]: I0312 13:11:26.660702 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:26 crc kubenswrapper[4778]: I0312 13:11:26.660754 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:26 crc kubenswrapper[4778]: I0312 13:11:26.660767 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:26 crc kubenswrapper[4778]: I0312 13:11:26.660785 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:26 crc kubenswrapper[4778]: I0312 13:11:26.660798 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:26Z","lastTransitionTime":"2026-03-12T13:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:26 crc kubenswrapper[4778]: I0312 13:11:26.763797 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:26 crc kubenswrapper[4778]: I0312 13:11:26.763850 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:26 crc kubenswrapper[4778]: I0312 13:11:26.763864 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:26 crc kubenswrapper[4778]: I0312 13:11:26.763894 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:26 crc kubenswrapper[4778]: I0312 13:11:26.763911 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:26Z","lastTransitionTime":"2026-03-12T13:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:26 crc kubenswrapper[4778]: I0312 13:11:26.866901 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:26 crc kubenswrapper[4778]: I0312 13:11:26.866943 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:26 crc kubenswrapper[4778]: I0312 13:11:26.866956 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:26 crc kubenswrapper[4778]: I0312 13:11:26.866973 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:26 crc kubenswrapper[4778]: I0312 13:11:26.866984 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:26Z","lastTransitionTime":"2026-03-12T13:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:26 crc kubenswrapper[4778]: I0312 13:11:26.969045 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:26 crc kubenswrapper[4778]: I0312 13:11:26.969086 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:26 crc kubenswrapper[4778]: I0312 13:11:26.969099 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:26 crc kubenswrapper[4778]: I0312 13:11:26.969116 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:26 crc kubenswrapper[4778]: I0312 13:11:26.969130 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:26Z","lastTransitionTime":"2026-03-12T13:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:27 crc kubenswrapper[4778]: I0312 13:11:27.071607 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:27 crc kubenswrapper[4778]: I0312 13:11:27.071648 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:27 crc kubenswrapper[4778]: I0312 13:11:27.071659 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:27 crc kubenswrapper[4778]: I0312 13:11:27.071674 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:27 crc kubenswrapper[4778]: I0312 13:11:27.071685 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:27Z","lastTransitionTime":"2026-03-12T13:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:27 crc kubenswrapper[4778]: I0312 13:11:27.174443 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:27 crc kubenswrapper[4778]: I0312 13:11:27.174497 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:27 crc kubenswrapper[4778]: I0312 13:11:27.174506 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:27 crc kubenswrapper[4778]: I0312 13:11:27.174519 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:27 crc kubenswrapper[4778]: I0312 13:11:27.174530 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:27Z","lastTransitionTime":"2026-03-12T13:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:27 crc kubenswrapper[4778]: I0312 13:11:27.277322 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:27 crc kubenswrapper[4778]: I0312 13:11:27.277824 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:27 crc kubenswrapper[4778]: I0312 13:11:27.277889 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:27 crc kubenswrapper[4778]: I0312 13:11:27.277986 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:27 crc kubenswrapper[4778]: I0312 13:11:27.278059 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:27Z","lastTransitionTime":"2026-03-12T13:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:27 crc kubenswrapper[4778]: I0312 13:11:27.380622 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:27 crc kubenswrapper[4778]: I0312 13:11:27.380831 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:27 crc kubenswrapper[4778]: I0312 13:11:27.380926 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:27 crc kubenswrapper[4778]: I0312 13:11:27.381004 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:27 crc kubenswrapper[4778]: I0312 13:11:27.381084 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:27Z","lastTransitionTime":"2026-03-12T13:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:27 crc kubenswrapper[4778]: I0312 13:11:27.483495 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:27 crc kubenswrapper[4778]: I0312 13:11:27.483547 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:27 crc kubenswrapper[4778]: I0312 13:11:27.483564 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:27 crc kubenswrapper[4778]: I0312 13:11:27.483589 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:27 crc kubenswrapper[4778]: I0312 13:11:27.483607 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:27Z","lastTransitionTime":"2026-03-12T13:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:27 crc kubenswrapper[4778]: I0312 13:11:27.586298 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:27 crc kubenswrapper[4778]: I0312 13:11:27.586373 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:27 crc kubenswrapper[4778]: I0312 13:11:27.586394 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:27 crc kubenswrapper[4778]: I0312 13:11:27.586427 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:27 crc kubenswrapper[4778]: I0312 13:11:27.586450 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:27Z","lastTransitionTime":"2026-03-12T13:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:27 crc kubenswrapper[4778]: I0312 13:11:27.689616 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:27 crc kubenswrapper[4778]: I0312 13:11:27.690035 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:27 crc kubenswrapper[4778]: I0312 13:11:27.690366 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:27 crc kubenswrapper[4778]: I0312 13:11:27.690577 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:27 crc kubenswrapper[4778]: I0312 13:11:27.690772 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:27Z","lastTransitionTime":"2026-03-12T13:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:27 crc kubenswrapper[4778]: I0312 13:11:27.793472 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:27 crc kubenswrapper[4778]: I0312 13:11:27.793744 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:27 crc kubenswrapper[4778]: I0312 13:11:27.793843 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:27 crc kubenswrapper[4778]: I0312 13:11:27.793941 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:27 crc kubenswrapper[4778]: I0312 13:11:27.794027 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:27Z","lastTransitionTime":"2026-03-12T13:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:27 crc kubenswrapper[4778]: I0312 13:11:27.897955 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:27 crc kubenswrapper[4778]: I0312 13:11:27.898022 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:27 crc kubenswrapper[4778]: I0312 13:11:27.898041 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:27 crc kubenswrapper[4778]: I0312 13:11:27.898065 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:27 crc kubenswrapper[4778]: I0312 13:11:27.898083 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:27Z","lastTransitionTime":"2026-03-12T13:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:28 crc kubenswrapper[4778]: I0312 13:11:28.000404 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:28 crc kubenswrapper[4778]: I0312 13:11:28.000453 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:28 crc kubenswrapper[4778]: I0312 13:11:28.000464 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:28 crc kubenswrapper[4778]: I0312 13:11:28.000480 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:28 crc kubenswrapper[4778]: I0312 13:11:28.000491 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:28Z","lastTransitionTime":"2026-03-12T13:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:28 crc kubenswrapper[4778]: I0312 13:11:28.103796 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:28 crc kubenswrapper[4778]: I0312 13:11:28.103853 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:28 crc kubenswrapper[4778]: I0312 13:11:28.103871 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:28 crc kubenswrapper[4778]: I0312 13:11:28.103895 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:28 crc kubenswrapper[4778]: I0312 13:11:28.103916 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:28Z","lastTransitionTime":"2026-03-12T13:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:28 crc kubenswrapper[4778]: I0312 13:11:28.206739 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:28 crc kubenswrapper[4778]: I0312 13:11:28.207099 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:28 crc kubenswrapper[4778]: I0312 13:11:28.207205 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:28 crc kubenswrapper[4778]: I0312 13:11:28.207399 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:28 crc kubenswrapper[4778]: I0312 13:11:28.207591 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:28Z","lastTransitionTime":"2026-03-12T13:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:28 crc kubenswrapper[4778]: I0312 13:11:28.252911 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 13:11:28 crc kubenswrapper[4778]: I0312 13:11:28.252912 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:11:28 crc kubenswrapper[4778]: I0312 13:11:28.252998 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 13:11:28 crc kubenswrapper[4778]: I0312 13:11:28.253063 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rz9vw" Mar 12 13:11:28 crc kubenswrapper[4778]: E0312 13:11:28.253247 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 13:11:28 crc kubenswrapper[4778]: E0312 13:11:28.253430 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 13:11:28 crc kubenswrapper[4778]: E0312 13:11:28.253573 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 13:11:28 crc kubenswrapper[4778]: E0312 13:11:28.253740 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rz9vw" podUID="0b59b25a-3acc-4d06-b91d-575f45463520" Mar 12 13:11:28 crc kubenswrapper[4778]: I0312 13:11:28.310218 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:28 crc kubenswrapper[4778]: I0312 13:11:28.310276 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:28 crc kubenswrapper[4778]: I0312 13:11:28.310294 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:28 crc kubenswrapper[4778]: I0312 13:11:28.310316 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:28 crc kubenswrapper[4778]: I0312 13:11:28.310328 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:28Z","lastTransitionTime":"2026-03-12T13:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:28 crc kubenswrapper[4778]: I0312 13:11:28.413823 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:28 crc kubenswrapper[4778]: I0312 13:11:28.413869 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:28 crc kubenswrapper[4778]: I0312 13:11:28.413883 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:28 crc kubenswrapper[4778]: I0312 13:11:28.413902 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:28 crc kubenswrapper[4778]: I0312 13:11:28.413918 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:28Z","lastTransitionTime":"2026-03-12T13:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:28 crc kubenswrapper[4778]: I0312 13:11:28.516675 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:28 crc kubenswrapper[4778]: I0312 13:11:28.516721 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:28 crc kubenswrapper[4778]: I0312 13:11:28.516733 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:28 crc kubenswrapper[4778]: I0312 13:11:28.516749 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:28 crc kubenswrapper[4778]: I0312 13:11:28.516762 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:28Z","lastTransitionTime":"2026-03-12T13:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:28 crc kubenswrapper[4778]: I0312 13:11:28.619211 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:28 crc kubenswrapper[4778]: I0312 13:11:28.619259 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:28 crc kubenswrapper[4778]: I0312 13:11:28.619273 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:28 crc kubenswrapper[4778]: I0312 13:11:28.619291 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:28 crc kubenswrapper[4778]: I0312 13:11:28.619302 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:28Z","lastTransitionTime":"2026-03-12T13:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:28 crc kubenswrapper[4778]: I0312 13:11:28.721687 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:28 crc kubenswrapper[4778]: I0312 13:11:28.721727 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:28 crc kubenswrapper[4778]: I0312 13:11:28.721739 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:28 crc kubenswrapper[4778]: I0312 13:11:28.721755 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:28 crc kubenswrapper[4778]: I0312 13:11:28.721767 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:28Z","lastTransitionTime":"2026-03-12T13:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:28 crc kubenswrapper[4778]: I0312 13:11:28.824074 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:28 crc kubenswrapper[4778]: I0312 13:11:28.824131 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:28 crc kubenswrapper[4778]: I0312 13:11:28.824147 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:28 crc kubenswrapper[4778]: I0312 13:11:28.824170 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:28 crc kubenswrapper[4778]: I0312 13:11:28.824212 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:28Z","lastTransitionTime":"2026-03-12T13:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:28 crc kubenswrapper[4778]: I0312 13:11:28.926541 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:28 crc kubenswrapper[4778]: I0312 13:11:28.926619 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:28 crc kubenswrapper[4778]: I0312 13:11:28.926634 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:28 crc kubenswrapper[4778]: I0312 13:11:28.926652 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:28 crc kubenswrapper[4778]: I0312 13:11:28.926692 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:28Z","lastTransitionTime":"2026-03-12T13:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:29 crc kubenswrapper[4778]: I0312 13:11:29.029371 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:29 crc kubenswrapper[4778]: I0312 13:11:29.029604 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:29 crc kubenswrapper[4778]: I0312 13:11:29.029704 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:29 crc kubenswrapper[4778]: I0312 13:11:29.029774 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:29 crc kubenswrapper[4778]: I0312 13:11:29.029830 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:29Z","lastTransitionTime":"2026-03-12T13:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:29 crc kubenswrapper[4778]: I0312 13:11:29.132432 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:29 crc kubenswrapper[4778]: I0312 13:11:29.132476 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:29 crc kubenswrapper[4778]: I0312 13:11:29.132488 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:29 crc kubenswrapper[4778]: I0312 13:11:29.132504 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:29 crc kubenswrapper[4778]: I0312 13:11:29.132515 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:29Z","lastTransitionTime":"2026-03-12T13:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:29 crc kubenswrapper[4778]: I0312 13:11:29.235706 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:29 crc kubenswrapper[4778]: I0312 13:11:29.235768 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:29 crc kubenswrapper[4778]: I0312 13:11:29.235780 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:29 crc kubenswrapper[4778]: I0312 13:11:29.235798 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:29 crc kubenswrapper[4778]: I0312 13:11:29.235809 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:29Z","lastTransitionTime":"2026-03-12T13:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:29 crc kubenswrapper[4778]: I0312 13:11:29.337854 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:29 crc kubenswrapper[4778]: I0312 13:11:29.337908 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:29 crc kubenswrapper[4778]: I0312 13:11:29.337919 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:29 crc kubenswrapper[4778]: I0312 13:11:29.337937 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:29 crc kubenswrapper[4778]: I0312 13:11:29.337950 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:29Z","lastTransitionTime":"2026-03-12T13:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:29 crc kubenswrapper[4778]: I0312 13:11:29.440082 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:29 crc kubenswrapper[4778]: I0312 13:11:29.440402 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:29 crc kubenswrapper[4778]: I0312 13:11:29.440474 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:29 crc kubenswrapper[4778]: I0312 13:11:29.440747 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:29 crc kubenswrapper[4778]: I0312 13:11:29.440806 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:29Z","lastTransitionTime":"2026-03-12T13:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:29 crc kubenswrapper[4778]: I0312 13:11:29.543199 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:29 crc kubenswrapper[4778]: I0312 13:11:29.543246 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:29 crc kubenswrapper[4778]: I0312 13:11:29.543258 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:29 crc kubenswrapper[4778]: I0312 13:11:29.543285 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:29 crc kubenswrapper[4778]: I0312 13:11:29.543330 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:29Z","lastTransitionTime":"2026-03-12T13:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:29 crc kubenswrapper[4778]: I0312 13:11:29.646581 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:29 crc kubenswrapper[4778]: I0312 13:11:29.646616 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:29 crc kubenswrapper[4778]: I0312 13:11:29.646628 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:29 crc kubenswrapper[4778]: I0312 13:11:29.646646 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:29 crc kubenswrapper[4778]: I0312 13:11:29.646659 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:29Z","lastTransitionTime":"2026-03-12T13:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:29 crc kubenswrapper[4778]: I0312 13:11:29.749771 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:29 crc kubenswrapper[4778]: I0312 13:11:29.749832 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:29 crc kubenswrapper[4778]: I0312 13:11:29.749850 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:29 crc kubenswrapper[4778]: I0312 13:11:29.749872 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:29 crc kubenswrapper[4778]: I0312 13:11:29.749889 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:29Z","lastTransitionTime":"2026-03-12T13:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:29 crc kubenswrapper[4778]: I0312 13:11:29.852950 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:29 crc kubenswrapper[4778]: I0312 13:11:29.853013 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:29 crc kubenswrapper[4778]: I0312 13:11:29.853025 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:29 crc kubenswrapper[4778]: I0312 13:11:29.853046 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:29 crc kubenswrapper[4778]: I0312 13:11:29.853059 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:29Z","lastTransitionTime":"2026-03-12T13:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:29 crc kubenswrapper[4778]: I0312 13:11:29.955520 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:29 crc kubenswrapper[4778]: I0312 13:11:29.955564 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:29 crc kubenswrapper[4778]: I0312 13:11:29.955575 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:29 crc kubenswrapper[4778]: I0312 13:11:29.955590 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:29 crc kubenswrapper[4778]: I0312 13:11:29.955601 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:29Z","lastTransitionTime":"2026-03-12T13:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:30 crc kubenswrapper[4778]: I0312 13:11:30.057990 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:30 crc kubenswrapper[4778]: I0312 13:11:30.058051 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:30 crc kubenswrapper[4778]: I0312 13:11:30.058067 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:30 crc kubenswrapper[4778]: I0312 13:11:30.058087 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:30 crc kubenswrapper[4778]: I0312 13:11:30.058102 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:30Z","lastTransitionTime":"2026-03-12T13:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:30 crc kubenswrapper[4778]: I0312 13:11:30.101096 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 13:11:30 crc kubenswrapper[4778]: I0312 13:11:30.101340 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 13:11:30 crc kubenswrapper[4778]: E0312 13:11:30.101392 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 13:12:02.101347647 +0000 UTC m=+140.550043073 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:11:30 crc kubenswrapper[4778]: I0312 13:11:30.101537 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:11:30 crc kubenswrapper[4778]: E0312 13:11:30.101593 4778 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 13:11:30 crc kubenswrapper[4778]: E0312 13:11:30.101623 4778 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 13:11:30 crc kubenswrapper[4778]: E0312 13:11:30.101634 4778 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 13:11:30 crc kubenswrapper[4778]: E0312 13:11:30.101657 4778 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 13:11:30 crc kubenswrapper[4778]: E0312 
13:11:30.101670 4778 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 13:11:30 crc kubenswrapper[4778]: E0312 13:11:30.101678 4778 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 13:11:30 crc kubenswrapper[4778]: E0312 13:11:30.101686 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-12 13:12:02.101670376 +0000 UTC m=+140.550365822 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 13:11:30 crc kubenswrapper[4778]: E0312 13:11:30.101693 4778 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 12 13:11:30 crc kubenswrapper[4778]: E0312 13:11:30.101709 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-12 13:12:02.101700487 +0000 UTC m=+140.550395883 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 13:11:30 crc kubenswrapper[4778]: I0312 13:11:30.101597 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 13:11:30 crc kubenswrapper[4778]: E0312 13:11:30.101750 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-12 13:12:02.101735707 +0000 UTC m=+140.550431293 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 12 13:11:30 crc kubenswrapper[4778]: I0312 13:11:30.101773 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:11:30 crc kubenswrapper[4778]: E0312 13:11:30.101837 4778 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 12 13:11:30 crc kubenswrapper[4778]: E0312 13:11:30.101861 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-12 13:12:02.101854081 +0000 UTC m=+140.550549477 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 12 13:11:30 crc kubenswrapper[4778]: I0312 13:11:30.160680 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:30 crc kubenswrapper[4778]: I0312 13:11:30.160729 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:30 crc kubenswrapper[4778]: I0312 13:11:30.160740 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:30 crc kubenswrapper[4778]: I0312 13:11:30.160758 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:30 crc kubenswrapper[4778]: I0312 13:11:30.160771 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:30Z","lastTransitionTime":"2026-03-12T13:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:30 crc kubenswrapper[4778]: I0312 13:11:30.202797 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b59b25a-3acc-4d06-b91d-575f45463520-metrics-certs\") pod \"network-metrics-daemon-rz9vw\" (UID: \"0b59b25a-3acc-4d06-b91d-575f45463520\") " pod="openshift-multus/network-metrics-daemon-rz9vw" Mar 12 13:11:30 crc kubenswrapper[4778]: E0312 13:11:30.203030 4778 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 13:11:30 crc kubenswrapper[4778]: E0312 13:11:30.203137 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b59b25a-3acc-4d06-b91d-575f45463520-metrics-certs podName:0b59b25a-3acc-4d06-b91d-575f45463520 nodeName:}" failed. No retries permitted until 2026-03-12 13:12:02.203113344 +0000 UTC m=+140.651808780 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0b59b25a-3acc-4d06-b91d-575f45463520-metrics-certs") pod "network-metrics-daemon-rz9vw" (UID: "0b59b25a-3acc-4d06-b91d-575f45463520") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 13:11:30 crc kubenswrapper[4778]: I0312 13:11:30.253803 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 13:11:30 crc kubenswrapper[4778]: I0312 13:11:30.253842 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:11:30 crc kubenswrapper[4778]: I0312 13:11:30.253873 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 13:11:30 crc kubenswrapper[4778]: E0312 13:11:30.253934 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 13:11:30 crc kubenswrapper[4778]: I0312 13:11:30.253800 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rz9vw" Mar 12 13:11:30 crc kubenswrapper[4778]: E0312 13:11:30.254119 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 13:11:30 crc kubenswrapper[4778]: E0312 13:11:30.254595 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rz9vw" podUID="0b59b25a-3acc-4d06-b91d-575f45463520" Mar 12 13:11:30 crc kubenswrapper[4778]: E0312 13:11:30.254761 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 13:11:30 crc kubenswrapper[4778]: I0312 13:11:30.262125 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:30 crc kubenswrapper[4778]: I0312 13:11:30.262228 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:30 crc kubenswrapper[4778]: I0312 13:11:30.262255 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:30 crc kubenswrapper[4778]: I0312 13:11:30.262282 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:30 crc kubenswrapper[4778]: I0312 13:11:30.262302 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:30Z","lastTransitionTime":"2026-03-12T13:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:30 crc kubenswrapper[4778]: I0312 13:11:30.364753 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:30 crc kubenswrapper[4778]: I0312 13:11:30.364805 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:30 crc kubenswrapper[4778]: I0312 13:11:30.364814 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:30 crc kubenswrapper[4778]: I0312 13:11:30.364827 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:30 crc kubenswrapper[4778]: I0312 13:11:30.364835 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:30Z","lastTransitionTime":"2026-03-12T13:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:30 crc kubenswrapper[4778]: I0312 13:11:30.466803 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:30 crc kubenswrapper[4778]: I0312 13:11:30.467160 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:30 crc kubenswrapper[4778]: I0312 13:11:30.467322 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:30 crc kubenswrapper[4778]: I0312 13:11:30.467442 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:30 crc kubenswrapper[4778]: I0312 13:11:30.467531 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:30Z","lastTransitionTime":"2026-03-12T13:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:30 crc kubenswrapper[4778]: I0312 13:11:30.570289 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:30 crc kubenswrapper[4778]: I0312 13:11:30.570353 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:30 crc kubenswrapper[4778]: I0312 13:11:30.570372 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:30 crc kubenswrapper[4778]: I0312 13:11:30.570395 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:30 crc kubenswrapper[4778]: I0312 13:11:30.570413 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:30Z","lastTransitionTime":"2026-03-12T13:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:30 crc kubenswrapper[4778]: I0312 13:11:30.673147 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:30 crc kubenswrapper[4778]: I0312 13:11:30.673217 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:30 crc kubenswrapper[4778]: I0312 13:11:30.673228 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:30 crc kubenswrapper[4778]: I0312 13:11:30.673246 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:30 crc kubenswrapper[4778]: I0312 13:11:30.673257 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:30Z","lastTransitionTime":"2026-03-12T13:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:30 crc kubenswrapper[4778]: I0312 13:11:30.775936 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:30 crc kubenswrapper[4778]: I0312 13:11:30.775981 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:30 crc kubenswrapper[4778]: I0312 13:11:30.775994 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:30 crc kubenswrapper[4778]: I0312 13:11:30.776010 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:30 crc kubenswrapper[4778]: I0312 13:11:30.776022 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:30Z","lastTransitionTime":"2026-03-12T13:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:30 crc kubenswrapper[4778]: I0312 13:11:30.877926 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:30 crc kubenswrapper[4778]: I0312 13:11:30.877962 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:30 crc kubenswrapper[4778]: I0312 13:11:30.877973 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:30 crc kubenswrapper[4778]: I0312 13:11:30.877988 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:30 crc kubenswrapper[4778]: I0312 13:11:30.878013 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:30Z","lastTransitionTime":"2026-03-12T13:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:30 crc kubenswrapper[4778]: I0312 13:11:30.981411 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:30 crc kubenswrapper[4778]: I0312 13:11:30.981455 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:30 crc kubenswrapper[4778]: I0312 13:11:30.981467 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:30 crc kubenswrapper[4778]: I0312 13:11:30.981484 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:30 crc kubenswrapper[4778]: I0312 13:11:30.981495 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:30Z","lastTransitionTime":"2026-03-12T13:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:31 crc kubenswrapper[4778]: I0312 13:11:31.083773 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:31 crc kubenswrapper[4778]: I0312 13:11:31.083821 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:31 crc kubenswrapper[4778]: I0312 13:11:31.083841 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:31 crc kubenswrapper[4778]: I0312 13:11:31.083859 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:31 crc kubenswrapper[4778]: I0312 13:11:31.083872 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:31Z","lastTransitionTime":"2026-03-12T13:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:31 crc kubenswrapper[4778]: I0312 13:11:31.186783 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:31 crc kubenswrapper[4778]: I0312 13:11:31.187098 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:31 crc kubenswrapper[4778]: I0312 13:11:31.187111 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:31 crc kubenswrapper[4778]: I0312 13:11:31.187149 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:31 crc kubenswrapper[4778]: I0312 13:11:31.187164 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:31Z","lastTransitionTime":"2026-03-12T13:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:31 crc kubenswrapper[4778]: I0312 13:11:31.289997 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:31 crc kubenswrapper[4778]: I0312 13:11:31.290043 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:31 crc kubenswrapper[4778]: I0312 13:11:31.290056 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:31 crc kubenswrapper[4778]: I0312 13:11:31.290074 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:31 crc kubenswrapper[4778]: I0312 13:11:31.290085 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:31Z","lastTransitionTime":"2026-03-12T13:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:31 crc kubenswrapper[4778]: I0312 13:11:31.392988 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:31 crc kubenswrapper[4778]: I0312 13:11:31.393021 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:31 crc kubenswrapper[4778]: I0312 13:11:31.393032 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:31 crc kubenswrapper[4778]: I0312 13:11:31.393045 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:31 crc kubenswrapper[4778]: I0312 13:11:31.393054 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:31Z","lastTransitionTime":"2026-03-12T13:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:31 crc kubenswrapper[4778]: I0312 13:11:31.495762 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:31 crc kubenswrapper[4778]: I0312 13:11:31.495806 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:31 crc kubenswrapper[4778]: I0312 13:11:31.495817 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:31 crc kubenswrapper[4778]: I0312 13:11:31.495835 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:31 crc kubenswrapper[4778]: I0312 13:11:31.495847 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:31Z","lastTransitionTime":"2026-03-12T13:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:31 crc kubenswrapper[4778]: I0312 13:11:31.598494 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:31 crc kubenswrapper[4778]: I0312 13:11:31.598548 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:31 crc kubenswrapper[4778]: I0312 13:11:31.598558 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:31 crc kubenswrapper[4778]: I0312 13:11:31.598576 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:31 crc kubenswrapper[4778]: I0312 13:11:31.598585 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:31Z","lastTransitionTime":"2026-03-12T13:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:31 crc kubenswrapper[4778]: I0312 13:11:31.701572 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:31 crc kubenswrapper[4778]: I0312 13:11:31.701626 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:31 crc kubenswrapper[4778]: I0312 13:11:31.701644 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:31 crc kubenswrapper[4778]: I0312 13:11:31.701663 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:31 crc kubenswrapper[4778]: I0312 13:11:31.701724 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:31Z","lastTransitionTime":"2026-03-12T13:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:31 crc kubenswrapper[4778]: I0312 13:11:31.807672 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:31 crc kubenswrapper[4778]: I0312 13:11:31.807716 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:31 crc kubenswrapper[4778]: I0312 13:11:31.807726 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:31 crc kubenswrapper[4778]: I0312 13:11:31.807742 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:31 crc kubenswrapper[4778]: I0312 13:11:31.807752 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:31Z","lastTransitionTime":"2026-03-12T13:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:31 crc kubenswrapper[4778]: I0312 13:11:31.910831 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:31 crc kubenswrapper[4778]: I0312 13:11:31.910896 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:31 crc kubenswrapper[4778]: I0312 13:11:31.910908 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:31 crc kubenswrapper[4778]: I0312 13:11:31.910924 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:31 crc kubenswrapper[4778]: I0312 13:11:31.910936 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:31Z","lastTransitionTime":"2026-03-12T13:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.014204 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.014251 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.014261 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.014278 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.014289 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:32Z","lastTransitionTime":"2026-03-12T13:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.117149 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.117522 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.117679 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.117853 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.118041 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:32Z","lastTransitionTime":"2026-03-12T13:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.220753 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.220796 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.220808 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.220824 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.220836 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:32Z","lastTransitionTime":"2026-03-12T13:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.253293 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 13:11:32 crc kubenswrapper[4778]: E0312 13:11:32.253435 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.253722 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 13:11:32 crc kubenswrapper[4778]: E0312 13:11:32.253822 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.253924 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rz9vw" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.254004 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:11:32 crc kubenswrapper[4778]: E0312 13:11:32.254020 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rz9vw" podUID="0b59b25a-3acc-4d06-b91d-575f45463520" Mar 12 13:11:32 crc kubenswrapper[4778]: E0312 13:11:32.254233 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.267059 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cae155c-6ba6-44c1-9814-759fda7c3c86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7698145a8f9a3b12ca021d55f406bc6adf7e139c7e32156ced11a20de194608c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"na
me\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ddeb961084ae4041feb2ac05c9fdd2f5c11b4bdc5f5f33878c9ad9e83a2e1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ddeb961084ae4041feb2ac05c9fdd2f5c11b4bdc5f5f33878c9ad9e83a2e1a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:32Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.289565 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2141104-4933-46fd-9968-0d9498779462\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e77ceb524173a1cdbf6c93b730412dcd8b6aedcee06c40fb757cc8e738e380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b52b689d66d254a521c980330e792ecbcce1102f39f97d6149bf48ad24c5de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc82a592c28b74aef165a164cc4fe4e2e38b6fb48e59f499476a252197e3fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f046d558bf242476327e1ee39ea82ebe104caa081df71caa51a716490d8a6b21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e2266711bd32e96e742549772474d9fa43d8f368021e8a7aba3fd1c7b0b87b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a347cade99b7bdbe676a020faf0a90b281672f16c4f580455856786ed781d3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a347cade99b7bdbe676a020faf0a90b281672f16c4f580455856786ed781d3f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3701d4b9c229934646d070a25b4bf944ac544d227ff9ba89fb1885cecfb562de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3701d4b9c229934646d070a25b4bf944ac544d227ff9ba89fb1885cecfb562de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9e0d7207d43b4b2bb79583cb1bb2f31034392eb4193b9b3b2f547f474d335250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e0d7207d43b4b2bb79583cb1bb2f31034392eb4193b9b3b2f547f474d335250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:32Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.302324 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:32Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.315108 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24438fc6-dab0-4a9e-8b97-2532da76d9cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19a882eebff25a2613c68847fcf737648da24f5c8d7648edebb2cb00b6b8950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14daba92184fca91c6930d5b3e821f88408e0fd4
0a7793f2d70f82df7c9444ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2qx88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:32Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.323526 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.323743 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.323859 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:32 crc 
kubenswrapper[4778]: I0312 13:11:32.323949 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.324036 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:32Z","lastTransitionTime":"2026-03-12T13:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.328741 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4059dae21c8267dcec17364a3073a0f25addb6c308620992e9e609b5f5a32e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7ffc17b778f7bd099f0cc70b4e8bcfd77f9d45a9a47de9fedbe270a49f2826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:32Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 
13:11:32.340920 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa219bcd71a6f1ae8a889a0409c2bbf61d1efac6a57ad8a22fefe6915e9d15be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:32Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.356129 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:32Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.370555 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb7a47e7099405d73886322b00b013bedee4fb573fa60c9b92d6be3311e65c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:32Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.381820 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rz9vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b59b25a-3acc-4d06-b91d-575f45463520\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rz9vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:32Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:32 crc 
kubenswrapper[4778]: I0312 13:11:32.394154 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d15dec8c-5c3e-4103-a5b1-6ee7ff5990ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f640289dea724d5668fc009d628345ea104b2bbc9bc3471e42c3ec5f9acada1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc7259359df220c534d265305ee3ca44e7bcdce8da0d8b164132e02f7ed72e51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2d60adb329e51ce7d877de68c1386f904ef0f717c82a5bfb69ab18438a4e536a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5019c5de667abecf425384b69c58060050b28003230e410f44934c9a7ad5484c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c7f2ade3aac502f0534414554216096b45d4f78e81f8ec213064a6205efdbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T13:10:43Z\\\",\\\"message\\\":\\\"file observer\\\\nW0312 13:10:42.840582 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 13:10:42.841010 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 13:10:42.843036 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-461564172/tls.crt::/tmp/serving-cert-461564172/tls.key\\\\\\\"\\\\nI0312 13:10:43.350873 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 13:10:43.364662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 13:10:43.364721 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 13:10:43.365498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 13:10:43.365555 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 13:10:43.376143 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 13:10:43.376224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 13:10:43.376255 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 13:10:43.376279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 13:10:43.376301 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 13:10:43.376324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 13:10:43.376350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 13:10:43.376614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 13:10:43.379532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfb81ab3f0178dc8064bd278e9e5cc42b3b2fda7282bb869d2f385b423e57d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e64aa9b1a15198d88b5f38b8ad0abdeef89430869b6f25c73e2f45806c539964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64aa9b1a15198d88b5f38b8ad0abdeef89430869b6f25c73e2f45806c539964\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:32Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.413499 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7250fa81a99a607a87f5f2fe4a85fcc2afbf7bfc12845bec4b5cbbed7784c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1325a4d5784843f5ef2dc629d3410d154ea07de5cd70cfae54d7111fe3e1ea3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78da788cc1d96e866afcf18edff24c064972e022b1b4c3f5bec3175da5e989fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2818aadc56c24df41309aa63fddf44dac870f041f206d59db4d8b8f88f728483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d8f2c481e6f3f8845b5e195a7fde8ae6415d25fb1701c25d93be7af1f4ef8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f94192e4eee991a699c32f338a8d452d8fc0ce5dfa1197694a237697c4500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5950d10319c346220cc78cf45052b31ea05a32f6d5f2511a963110c4a17824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc5950d10319c346220cc78cf45052b31ea05a32f6d5f2511a963110c4a17824\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T13:11:22Z\\\",\\\"message\\\":\\\"ificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:22Z is after 2025-08-24T17:21:41Z\\\\nI0312 13:11:22.188287 6941 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nF0312 13:11:22.188260 6941 ovnkube.go:137] failed to run ovnkube: [failed to start 
network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:22Z is after 2025-08-24T17:21:41Z]\\\\nI0312 13:11:22.188294 6941 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sww7j\\\\nI0312 13:11:22.188297 6941 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-4dfhs\\\\nI031\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8bcc9_openshift-ovn-kubernetes(65cd795e-eb6e-4995-a4c1-9dea6f425ac5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bc4107e3fb5708a2acf2664ce876af4c682377a9b4ee393230b78d5b021552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce861cdc0bd23c8ce1
b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bcc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:32Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.425574 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qdxm2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7050ddd9-aa01-4af7-9046-208f85f50a86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9af31ab4c27bb06d5a44a1c279e04f1b6f243054e271214ef771db4f0dc65e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jspwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qdxm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:32Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.426960 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.426977 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.426986 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.426999 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.427009 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:32Z","lastTransitionTime":"2026-03-12T13:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.447323 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rsshp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a1f8eaa-ac07-4478-be5d-0742de6b43c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f732882ddde9d0d0c1d1ef218276d4e14df3a1b36e4e956912efef4873092b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deca991c71450137550fdb82a01b81aaa63e6be64a6d7a96438f6b3d83a8bb5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://deca991c71450137550fdb82a01b81aaa63e6be64a6d7a96438f6b3d83a8bb5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae3ee2ab1f6fdf579b28f6ddf2010f9ac048dec5c7668dc467152185d4e1028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ae3ee2ab1f6fdf579b28f6ddf2010f9ac048dec5c7668dc467152185d4e1028\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b22a0b8a6e5c59e8195280cbe1579af847c709f8b6245df5a16df5af602f11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b22a0b8a6e5c59e8195280cbe1579af847c709f8b6245df5a16df5af602f11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://253b2ec5086a2db48bb42ae6024bab9ca832325f9d96cd6ff6944ded362161e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://253b2ec5086a2db48bb42ae6024bab9ca832325f9d96cd6ff6944ded362161e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cb8f9537926237c4932ef2a9fb701804e03e132f2f56dd9d0e928b7340b1eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cb8f9537926237c4932ef2a9fb701804e03e132f2f56dd9d0e928b7340b1eeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rsshp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:32Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.465145 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:32Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.480943 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fhcz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5da98f94c85e3a8cd05c447fb097a078968eea25419a2b22f8abe956ef1dbaac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-762lp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fhcz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:32Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.493324 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sww7j" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de004a2f-3061-4aae-aa57-389219c71023\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478fb92ee4748af677ac761928a4173b506a3e56cf622279e2b2a0e322d4aef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d377b0d5d
0a854761257d7bc21a111aed96f85d302bf0c024e021f04cc555fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sww7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:32Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.505460 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dfhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfefcab6-a931-413e-8763-0f63f17911cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eecca419cc264b25f1883aac864cc545f0daf973e3b288bc8ea00a8b91e1f124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssbrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dfhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:32Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.529817 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.529874 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.529884 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.529905 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.529919 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:32Z","lastTransitionTime":"2026-03-12T13:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.632524 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.632575 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.632587 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.632606 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.632639 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:32Z","lastTransitionTime":"2026-03-12T13:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.735547 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.735602 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.735614 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.735634 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.735651 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:32Z","lastTransitionTime":"2026-03-12T13:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.838446 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.838545 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.838566 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.838590 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.838607 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:32Z","lastTransitionTime":"2026-03-12T13:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.862983 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.863046 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.863064 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.863090 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.863113 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:32Z","lastTransitionTime":"2026-03-12T13:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:32 crc kubenswrapper[4778]: E0312 13:11:32.881365 4778 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9825271f-f529-4477-b3b1-2a00dbf9b03e\\\",\\\"systemUUID\\\":\\\"65870ff3-f0f2-4ca4-b489-075d672e37ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:32Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.886449 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.886514 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.886528 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.886543 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.886554 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:32Z","lastTransitionTime":"2026-03-12T13:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:32 crc kubenswrapper[4778]: E0312 13:11:32.899585 4778 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9825271f-f529-4477-b3b1-2a00dbf9b03e\\\",\\\"systemUUID\\\":\\\"65870ff3-f0f2-4ca4-b489-075d672e37ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:32Z is after 2025-08-24T17:21:41Z"
Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.904932 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.905020 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.905461 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.905545 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.905823 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:32Z","lastTransitionTime":"2026-03-12T13:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:32 crc kubenswrapper[4778]: E0312 13:11:32.925598 4778 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9825271f-f529-4477-b3b1-2a00dbf9b03e\\\",\\\"systemUUID\\\":\\\"65870ff3-f0f2-4ca4-b489-075d672e37ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:32Z is after 2025-08-24T17:21:41Z"
Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.930418 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.930483 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.930499 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.930518 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.930810 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:32Z","lastTransitionTime":"2026-03-12T13:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:32 crc kubenswrapper[4778]: E0312 13:11:32.946929 4778 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9825271f-f529-4477-b3b1-2a00dbf9b03e\\\",\\\"systemUUID\\\":\\\"65870ff3-f0f2-4ca4-b489-075d672e37ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:32Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.951653 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.951714 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.951735 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.951760 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.951779 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:32Z","lastTransitionTime":"2026-03-12T13:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:32 crc kubenswrapper[4778]: E0312 13:11:32.974267 4778 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9825271f-f529-4477-b3b1-2a00dbf9b03e\\\",\\\"systemUUID\\\":\\\"65870ff3-f0f2-4ca4-b489-075d672e37ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:32Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:32 crc kubenswrapper[4778]: E0312 13:11:32.974685 4778 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.976563 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.976616 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.976627 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.976647 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:32 crc kubenswrapper[4778]: I0312 13:11:32.976659 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:32Z","lastTransitionTime":"2026-03-12T13:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:33 crc kubenswrapper[4778]: I0312 13:11:33.079423 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:33 crc kubenswrapper[4778]: I0312 13:11:33.079503 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:33 crc kubenswrapper[4778]: I0312 13:11:33.079522 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:33 crc kubenswrapper[4778]: I0312 13:11:33.079545 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:33 crc kubenswrapper[4778]: I0312 13:11:33.079563 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:33Z","lastTransitionTime":"2026-03-12T13:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:33 crc kubenswrapper[4778]: I0312 13:11:33.183218 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:33 crc kubenswrapper[4778]: I0312 13:11:33.183288 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:33 crc kubenswrapper[4778]: I0312 13:11:33.183302 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:33 crc kubenswrapper[4778]: I0312 13:11:33.183322 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:33 crc kubenswrapper[4778]: I0312 13:11:33.183334 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:33Z","lastTransitionTime":"2026-03-12T13:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:33 crc kubenswrapper[4778]: I0312 13:11:33.286408 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:33 crc kubenswrapper[4778]: I0312 13:11:33.286457 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:33 crc kubenswrapper[4778]: I0312 13:11:33.286468 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:33 crc kubenswrapper[4778]: I0312 13:11:33.286486 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:33 crc kubenswrapper[4778]: I0312 13:11:33.286498 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:33Z","lastTransitionTime":"2026-03-12T13:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:33 crc kubenswrapper[4778]: I0312 13:11:33.389478 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:33 crc kubenswrapper[4778]: I0312 13:11:33.389754 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:33 crc kubenswrapper[4778]: I0312 13:11:33.389835 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:33 crc kubenswrapper[4778]: I0312 13:11:33.389948 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:33 crc kubenswrapper[4778]: I0312 13:11:33.390243 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:33Z","lastTransitionTime":"2026-03-12T13:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:33 crc kubenswrapper[4778]: I0312 13:11:33.493310 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:33 crc kubenswrapper[4778]: I0312 13:11:33.493347 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:33 crc kubenswrapper[4778]: I0312 13:11:33.493358 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:33 crc kubenswrapper[4778]: I0312 13:11:33.493375 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:33 crc kubenswrapper[4778]: I0312 13:11:33.493386 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:33Z","lastTransitionTime":"2026-03-12T13:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:33 crc kubenswrapper[4778]: I0312 13:11:33.596192 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:33 crc kubenswrapper[4778]: I0312 13:11:33.596447 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:33 crc kubenswrapper[4778]: I0312 13:11:33.596510 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:33 crc kubenswrapper[4778]: I0312 13:11:33.596571 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:33 crc kubenswrapper[4778]: I0312 13:11:33.596634 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:33Z","lastTransitionTime":"2026-03-12T13:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:33 crc kubenswrapper[4778]: I0312 13:11:33.699331 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:33 crc kubenswrapper[4778]: I0312 13:11:33.699371 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:33 crc kubenswrapper[4778]: I0312 13:11:33.699382 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:33 crc kubenswrapper[4778]: I0312 13:11:33.699396 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:33 crc kubenswrapper[4778]: I0312 13:11:33.699406 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:33Z","lastTransitionTime":"2026-03-12T13:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:33 crc kubenswrapper[4778]: I0312 13:11:33.801625 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:33 crc kubenswrapper[4778]: I0312 13:11:33.801671 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:33 crc kubenswrapper[4778]: I0312 13:11:33.801686 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:33 crc kubenswrapper[4778]: I0312 13:11:33.801706 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:33 crc kubenswrapper[4778]: I0312 13:11:33.801721 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:33Z","lastTransitionTime":"2026-03-12T13:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:33 crc kubenswrapper[4778]: I0312 13:11:33.905012 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:33 crc kubenswrapper[4778]: I0312 13:11:33.905246 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:33 crc kubenswrapper[4778]: I0312 13:11:33.905315 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:33 crc kubenswrapper[4778]: I0312 13:11:33.905375 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:33 crc kubenswrapper[4778]: I0312 13:11:33.905459 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:33Z","lastTransitionTime":"2026-03-12T13:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:34 crc kubenswrapper[4778]: I0312 13:11:34.008201 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:34 crc kubenswrapper[4778]: I0312 13:11:34.008473 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:34 crc kubenswrapper[4778]: I0312 13:11:34.008562 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:34 crc kubenswrapper[4778]: I0312 13:11:34.008661 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:34 crc kubenswrapper[4778]: I0312 13:11:34.008738 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:34Z","lastTransitionTime":"2026-03-12T13:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Mar 12 13:11:34 crc kubenswrapper[4778]: I0312 13:11:34.111514 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 13:11:34 crc kubenswrapper[4778]: I0312 13:11:34.111565 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 13:11:34 crc kubenswrapper[4778]: I0312 13:11:34.111582 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 13:11:34 crc kubenswrapper[4778]: I0312 13:11:34.111603 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 13:11:34 crc kubenswrapper[4778]: I0312 13:11:34.111619 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:34Z","lastTransitionTime":"2026-03-12T13:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 13:11:34 crc kubenswrapper[4778]: I0312 13:11:34.214372 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 13:11:34 crc kubenswrapper[4778]: I0312 13:11:34.214441 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 13:11:34 crc kubenswrapper[4778]: I0312 13:11:34.214462 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 13:11:34 crc kubenswrapper[4778]: I0312 13:11:34.214489 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 13:11:34 crc kubenswrapper[4778]: I0312 13:11:34.214512 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:34Z","lastTransitionTime":"2026-03-12T13:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 13:11:34 crc kubenswrapper[4778]: I0312 13:11:34.253706 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rz9vw"
Mar 12 13:11:34 crc kubenswrapper[4778]: I0312 13:11:34.253776 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 12 13:11:34 crc kubenswrapper[4778]: E0312 13:11:34.253845 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rz9vw" podUID="0b59b25a-3acc-4d06-b91d-575f45463520"
Mar 12 13:11:34 crc kubenswrapper[4778]: I0312 13:11:34.253879 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 12 13:11:34 crc kubenswrapper[4778]: I0312 13:11:34.253706 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 12 13:11:34 crc kubenswrapper[4778]: E0312 13:11:34.253926 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 12 13:11:34 crc kubenswrapper[4778]: E0312 13:11:34.254001 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 12 13:11:34 crc kubenswrapper[4778]: E0312 13:11:34.254054 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 12 13:11:34 crc kubenswrapper[4778]: I0312 13:11:34.318340 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 13:11:34 crc kubenswrapper[4778]: I0312 13:11:34.318397 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 13:11:34 crc kubenswrapper[4778]: I0312 13:11:34.318413 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 13:11:34 crc kubenswrapper[4778]: I0312 13:11:34.318435 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 13:11:34 crc kubenswrapper[4778]: I0312 13:11:34.318446 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:34Z","lastTransitionTime":"2026-03-12T13:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 13:11:34 crc kubenswrapper[4778]: I0312 13:11:34.420833 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 13:11:34 crc kubenswrapper[4778]: I0312 13:11:34.420888 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 13:11:34 crc kubenswrapper[4778]: I0312 13:11:34.420905 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 13:11:34 crc kubenswrapper[4778]: I0312 13:11:34.420928 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 13:11:34 crc kubenswrapper[4778]: I0312 13:11:34.420945 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:34Z","lastTransitionTime":"2026-03-12T13:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 13:11:34 crc kubenswrapper[4778]: I0312 13:11:34.524317 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 13:11:34 crc kubenswrapper[4778]: I0312 13:11:34.524368 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 13:11:34 crc kubenswrapper[4778]: I0312 13:11:34.524380 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 13:11:34 crc kubenswrapper[4778]: I0312 13:11:34.524399 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 13:11:34 crc kubenswrapper[4778]: I0312 13:11:34.524410 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:34Z","lastTransitionTime":"2026-03-12T13:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 13:11:34 crc kubenswrapper[4778]: I0312 13:11:34.627913 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 13:11:34 crc kubenswrapper[4778]: I0312 13:11:34.627964 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 13:11:34 crc kubenswrapper[4778]: I0312 13:11:34.627976 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 13:11:34 crc kubenswrapper[4778]: I0312 13:11:34.627992 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 13:11:34 crc kubenswrapper[4778]: I0312 13:11:34.628005 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:34Z","lastTransitionTime":"2026-03-12T13:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 13:11:34 crc kubenswrapper[4778]: I0312 13:11:34.731090 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 13:11:34 crc kubenswrapper[4778]: I0312 13:11:34.731152 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 13:11:34 crc kubenswrapper[4778]: I0312 13:11:34.731162 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 13:11:34 crc kubenswrapper[4778]: I0312 13:11:34.731177 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 13:11:34 crc kubenswrapper[4778]: I0312 13:11:34.731208 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:34Z","lastTransitionTime":"2026-03-12T13:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 13:11:34 crc kubenswrapper[4778]: I0312 13:11:34.833986 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 13:11:34 crc kubenswrapper[4778]: I0312 13:11:34.834093 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 13:11:34 crc kubenswrapper[4778]: I0312 13:11:34.834103 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 13:11:34 crc kubenswrapper[4778]: I0312 13:11:34.834117 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 13:11:34 crc kubenswrapper[4778]: I0312 13:11:34.834125 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:34Z","lastTransitionTime":"2026-03-12T13:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 13:11:34 crc kubenswrapper[4778]: I0312 13:11:34.936798 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 13:11:34 crc kubenswrapper[4778]: I0312 13:11:34.936831 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 13:11:34 crc kubenswrapper[4778]: I0312 13:11:34.936841 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 13:11:34 crc kubenswrapper[4778]: I0312 13:11:34.936854 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 13:11:34 crc kubenswrapper[4778]: I0312 13:11:34.936863 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:34Z","lastTransitionTime":"2026-03-12T13:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 13:11:35 crc kubenswrapper[4778]: I0312 13:11:35.039156 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 13:11:35 crc kubenswrapper[4778]: I0312 13:11:35.039236 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 13:11:35 crc kubenswrapper[4778]: I0312 13:11:35.039248 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 13:11:35 crc kubenswrapper[4778]: I0312 13:11:35.039263 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 13:11:35 crc kubenswrapper[4778]: I0312 13:11:35.039272 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:35Z","lastTransitionTime":"2026-03-12T13:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 13:11:35 crc kubenswrapper[4778]: I0312 13:11:35.149699 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 13:11:35 crc kubenswrapper[4778]: I0312 13:11:35.149746 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 13:11:35 crc kubenswrapper[4778]: I0312 13:11:35.149759 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 13:11:35 crc kubenswrapper[4778]: I0312 13:11:35.149776 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 13:11:35 crc kubenswrapper[4778]: I0312 13:11:35.149787 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:35Z","lastTransitionTime":"2026-03-12T13:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 13:11:35 crc kubenswrapper[4778]: I0312 13:11:35.252251 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 13:11:35 crc kubenswrapper[4778]: I0312 13:11:35.252288 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 13:11:35 crc kubenswrapper[4778]: I0312 13:11:35.252297 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 13:11:35 crc kubenswrapper[4778]: I0312 13:11:35.252310 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 13:11:35 crc kubenswrapper[4778]: I0312 13:11:35.252319 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:35Z","lastTransitionTime":"2026-03-12T13:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 13:11:35 crc kubenswrapper[4778]: I0312 13:11:35.354644 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 13:11:35 crc kubenswrapper[4778]: I0312 13:11:35.354714 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 13:11:35 crc kubenswrapper[4778]: I0312 13:11:35.354733 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 13:11:35 crc kubenswrapper[4778]: I0312 13:11:35.354759 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 13:11:35 crc kubenswrapper[4778]: I0312 13:11:35.354777 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:35Z","lastTransitionTime":"2026-03-12T13:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 13:11:35 crc kubenswrapper[4778]: I0312 13:11:35.457237 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 13:11:35 crc kubenswrapper[4778]: I0312 13:11:35.457289 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 13:11:35 crc kubenswrapper[4778]: I0312 13:11:35.457301 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 13:11:35 crc kubenswrapper[4778]: I0312 13:11:35.457318 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 13:11:35 crc kubenswrapper[4778]: I0312 13:11:35.457330 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:35Z","lastTransitionTime":"2026-03-12T13:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 13:11:35 crc kubenswrapper[4778]: I0312 13:11:35.559664 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 13:11:35 crc kubenswrapper[4778]: I0312 13:11:35.559745 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 13:11:35 crc kubenswrapper[4778]: I0312 13:11:35.559773 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 13:11:35 crc kubenswrapper[4778]: I0312 13:11:35.559803 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 13:11:35 crc kubenswrapper[4778]: I0312 13:11:35.559826 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:35Z","lastTransitionTime":"2026-03-12T13:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 13:11:35 crc kubenswrapper[4778]: I0312 13:11:35.661997 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 13:11:35 crc kubenswrapper[4778]: I0312 13:11:35.662232 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 13:11:35 crc kubenswrapper[4778]: I0312 13:11:35.662313 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 13:11:35 crc kubenswrapper[4778]: I0312 13:11:35.662429 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 13:11:35 crc kubenswrapper[4778]: I0312 13:11:35.662527 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:35Z","lastTransitionTime":"2026-03-12T13:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 13:11:35 crc kubenswrapper[4778]: I0312 13:11:35.765431 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 13:11:35 crc kubenswrapper[4778]: I0312 13:11:35.765497 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 13:11:35 crc kubenswrapper[4778]: I0312 13:11:35.765520 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 13:11:35 crc kubenswrapper[4778]: I0312 13:11:35.765548 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 13:11:35 crc kubenswrapper[4778]: I0312 13:11:35.765569 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:35Z","lastTransitionTime":"2026-03-12T13:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 13:11:35 crc kubenswrapper[4778]: I0312 13:11:35.868837 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 13:11:35 crc kubenswrapper[4778]: I0312 13:11:35.868898 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 13:11:35 crc kubenswrapper[4778]: I0312 13:11:35.868915 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 13:11:35 crc kubenswrapper[4778]: I0312 13:11:35.868934 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 13:11:35 crc kubenswrapper[4778]: I0312 13:11:35.868947 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:35Z","lastTransitionTime":"2026-03-12T13:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 13:11:35 crc kubenswrapper[4778]: I0312 13:11:35.972028 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 13:11:35 crc kubenswrapper[4778]: I0312 13:11:35.972073 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 13:11:35 crc kubenswrapper[4778]: I0312 13:11:35.972090 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 13:11:35 crc kubenswrapper[4778]: I0312 13:11:35.972105 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 13:11:35 crc kubenswrapper[4778]: I0312 13:11:35.972114 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:35Z","lastTransitionTime":"2026-03-12T13:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 13:11:36 crc kubenswrapper[4778]: I0312 13:11:36.074354 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 13:11:36 crc kubenswrapper[4778]: I0312 13:11:36.074389 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 13:11:36 crc kubenswrapper[4778]: I0312 13:11:36.074401 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 13:11:36 crc kubenswrapper[4778]: I0312 13:11:36.074419 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 13:11:36 crc kubenswrapper[4778]: I0312 13:11:36.074431 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:36Z","lastTransitionTime":"2026-03-12T13:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 13:11:36 crc kubenswrapper[4778]: I0312 13:11:36.177851 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 13:11:36 crc kubenswrapper[4778]: I0312 13:11:36.177931 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 13:11:36 crc kubenswrapper[4778]: I0312 13:11:36.177954 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 13:11:36 crc kubenswrapper[4778]: I0312 13:11:36.177986 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 13:11:36 crc kubenswrapper[4778]: I0312 13:11:36.178010 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:36Z","lastTransitionTime":"2026-03-12T13:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 13:11:36 crc kubenswrapper[4778]: I0312 13:11:36.253010 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rz9vw"
Mar 12 13:11:36 crc kubenswrapper[4778]: E0312 13:11:36.253452 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rz9vw" podUID="0b59b25a-3acc-4d06-b91d-575f45463520"
Mar 12 13:11:36 crc kubenswrapper[4778]: I0312 13:11:36.253124 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 12 13:11:36 crc kubenswrapper[4778]: E0312 13:11:36.253675 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 12 13:11:36 crc kubenswrapper[4778]: I0312 13:11:36.253080 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 12 13:11:36 crc kubenswrapper[4778]: E0312 13:11:36.253842 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 12 13:11:36 crc kubenswrapper[4778]: I0312 13:11:36.253128 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 12 13:11:36 crc kubenswrapper[4778]: E0312 13:11:36.254021 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 12 13:11:36 crc kubenswrapper[4778]: I0312 13:11:36.280518 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 13:11:36 crc kubenswrapper[4778]: I0312 13:11:36.280741 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 13:11:36 crc kubenswrapper[4778]: I0312 13:11:36.280800 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 13:11:36 crc kubenswrapper[4778]: I0312 13:11:36.280859 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 13:11:36 crc kubenswrapper[4778]: I0312 13:11:36.280911 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:36Z","lastTransitionTime":"2026-03-12T13:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 13:11:36 crc kubenswrapper[4778]: I0312 13:11:36.383164 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 13:11:36 crc kubenswrapper[4778]: I0312 13:11:36.383238 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 13:11:36 crc kubenswrapper[4778]: I0312 13:11:36.383249 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 13:11:36 crc kubenswrapper[4778]: I0312 13:11:36.383265 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 13:11:36 crc kubenswrapper[4778]: I0312 13:11:36.383277 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:36Z","lastTransitionTime":"2026-03-12T13:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 13:11:36 crc kubenswrapper[4778]: I0312 13:11:36.485836 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 13:11:36 crc kubenswrapper[4778]: I0312 13:11:36.485895 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 13:11:36 crc kubenswrapper[4778]: I0312 13:11:36.485907 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 13:11:36 crc kubenswrapper[4778]: I0312 13:11:36.485925 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 13:11:36 crc kubenswrapper[4778]: I0312 13:11:36.485937 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:36Z","lastTransitionTime":"2026-03-12T13:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 13:11:36 crc kubenswrapper[4778]: I0312 13:11:36.589905 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 13:11:36 crc kubenswrapper[4778]: I0312 13:11:36.589970 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 13:11:36 crc kubenswrapper[4778]: I0312 13:11:36.589981 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 13:11:36 crc kubenswrapper[4778]: I0312 13:11:36.589997 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 13:11:36 crc kubenswrapper[4778]: I0312 13:11:36.590028 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:36Z","lastTransitionTime":"2026-03-12T13:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 13:11:36 crc kubenswrapper[4778]: I0312 13:11:36.692802 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 13:11:36 crc kubenswrapper[4778]: I0312 13:11:36.692865 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 13:11:36 crc kubenswrapper[4778]: I0312 13:11:36.692882 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 13:11:36 crc kubenswrapper[4778]: I0312 13:11:36.692907 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 13:11:36 crc kubenswrapper[4778]: I0312 13:11:36.692924 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:36Z","lastTransitionTime":"2026-03-12T13:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 13:11:36 crc kubenswrapper[4778]: I0312 13:11:36.795652 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 13:11:36 crc kubenswrapper[4778]: I0312 13:11:36.795697 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 13:11:36 crc kubenswrapper[4778]: I0312 13:11:36.795710 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 13:11:36 crc kubenswrapper[4778]: I0312 13:11:36.795726 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 13:11:36 crc kubenswrapper[4778]: I0312 13:11:36.795735 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:36Z","lastTransitionTime":"2026-03-12T13:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 12 13:11:36 crc kubenswrapper[4778]: I0312 13:11:36.898657 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 12 13:11:36 crc kubenswrapper[4778]: I0312 13:11:36.898702 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 12 13:11:36 crc kubenswrapper[4778]: I0312 13:11:36.898720 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 12 13:11:36 crc kubenswrapper[4778]: I0312 13:11:36.898744 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 12 13:11:36 crc kubenswrapper[4778]: I0312 13:11:36.898764 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:36Z","lastTransitionTime":"2026-03-12T13:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:37 crc kubenswrapper[4778]: I0312 13:11:37.001467 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:37 crc kubenswrapper[4778]: I0312 13:11:37.001541 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:37 crc kubenswrapper[4778]: I0312 13:11:37.001553 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:37 crc kubenswrapper[4778]: I0312 13:11:37.001571 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:37 crc kubenswrapper[4778]: I0312 13:11:37.001582 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:37Z","lastTransitionTime":"2026-03-12T13:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:37 crc kubenswrapper[4778]: I0312 13:11:37.103792 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:37 crc kubenswrapper[4778]: I0312 13:11:37.103838 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:37 crc kubenswrapper[4778]: I0312 13:11:37.103848 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:37 crc kubenswrapper[4778]: I0312 13:11:37.103860 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:37 crc kubenswrapper[4778]: I0312 13:11:37.103869 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:37Z","lastTransitionTime":"2026-03-12T13:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:37 crc kubenswrapper[4778]: I0312 13:11:37.206095 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:37 crc kubenswrapper[4778]: I0312 13:11:37.206176 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:37 crc kubenswrapper[4778]: I0312 13:11:37.206224 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:37 crc kubenswrapper[4778]: I0312 13:11:37.206240 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:37 crc kubenswrapper[4778]: I0312 13:11:37.206254 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:37Z","lastTransitionTime":"2026-03-12T13:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:37 crc kubenswrapper[4778]: I0312 13:11:37.254092 4778 scope.go:117] "RemoveContainer" containerID="cc5950d10319c346220cc78cf45052b31ea05a32f6d5f2511a963110c4a17824" Mar 12 13:11:37 crc kubenswrapper[4778]: E0312 13:11:37.254297 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8bcc9_openshift-ovn-kubernetes(65cd795e-eb6e-4995-a4c1-9dea6f425ac5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" podUID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" Mar 12 13:11:37 crc kubenswrapper[4778]: I0312 13:11:37.308757 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:37 crc kubenswrapper[4778]: I0312 13:11:37.308998 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:37 crc kubenswrapper[4778]: I0312 13:11:37.309081 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:37 crc kubenswrapper[4778]: I0312 13:11:37.309148 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:37 crc kubenswrapper[4778]: I0312 13:11:37.309228 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:37Z","lastTransitionTime":"2026-03-12T13:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:37 crc kubenswrapper[4778]: I0312 13:11:37.411385 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:37 crc kubenswrapper[4778]: I0312 13:11:37.411468 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:37 crc kubenswrapper[4778]: I0312 13:11:37.411484 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:37 crc kubenswrapper[4778]: I0312 13:11:37.411507 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:37 crc kubenswrapper[4778]: I0312 13:11:37.411523 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:37Z","lastTransitionTime":"2026-03-12T13:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:37 crc kubenswrapper[4778]: I0312 13:11:37.514931 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:37 crc kubenswrapper[4778]: I0312 13:11:37.516001 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:37 crc kubenswrapper[4778]: I0312 13:11:37.516344 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:37 crc kubenswrapper[4778]: I0312 13:11:37.516454 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:37 crc kubenswrapper[4778]: I0312 13:11:37.516556 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:37Z","lastTransitionTime":"2026-03-12T13:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:37 crc kubenswrapper[4778]: I0312 13:11:37.618843 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:37 crc kubenswrapper[4778]: I0312 13:11:37.619450 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:37 crc kubenswrapper[4778]: I0312 13:11:37.619521 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:37 crc kubenswrapper[4778]: I0312 13:11:37.619630 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:37 crc kubenswrapper[4778]: I0312 13:11:37.619699 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:37Z","lastTransitionTime":"2026-03-12T13:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:37 crc kubenswrapper[4778]: I0312 13:11:37.722619 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:37 crc kubenswrapper[4778]: I0312 13:11:37.722679 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:37 crc kubenswrapper[4778]: I0312 13:11:37.722692 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:37 crc kubenswrapper[4778]: I0312 13:11:37.722710 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:37 crc kubenswrapper[4778]: I0312 13:11:37.722722 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:37Z","lastTransitionTime":"2026-03-12T13:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:37 crc kubenswrapper[4778]: I0312 13:11:37.825433 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:37 crc kubenswrapper[4778]: I0312 13:11:37.825484 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:37 crc kubenswrapper[4778]: I0312 13:11:37.825495 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:37 crc kubenswrapper[4778]: I0312 13:11:37.825511 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:37 crc kubenswrapper[4778]: I0312 13:11:37.825522 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:37Z","lastTransitionTime":"2026-03-12T13:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:37 crc kubenswrapper[4778]: I0312 13:11:37.927972 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:37 crc kubenswrapper[4778]: I0312 13:11:37.928046 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:37 crc kubenswrapper[4778]: I0312 13:11:37.928059 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:37 crc kubenswrapper[4778]: I0312 13:11:37.928100 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:37 crc kubenswrapper[4778]: I0312 13:11:37.928126 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:37Z","lastTransitionTime":"2026-03-12T13:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:38 crc kubenswrapper[4778]: I0312 13:11:38.030504 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:38 crc kubenswrapper[4778]: I0312 13:11:38.030547 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:38 crc kubenswrapper[4778]: I0312 13:11:38.030559 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:38 crc kubenswrapper[4778]: I0312 13:11:38.030574 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:38 crc kubenswrapper[4778]: I0312 13:11:38.030586 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:38Z","lastTransitionTime":"2026-03-12T13:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:38 crc kubenswrapper[4778]: I0312 13:11:38.132974 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:38 crc kubenswrapper[4778]: I0312 13:11:38.133033 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:38 crc kubenswrapper[4778]: I0312 13:11:38.133042 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:38 crc kubenswrapper[4778]: I0312 13:11:38.133057 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:38 crc kubenswrapper[4778]: I0312 13:11:38.133067 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:38Z","lastTransitionTime":"2026-03-12T13:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:38 crc kubenswrapper[4778]: I0312 13:11:38.235308 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:38 crc kubenswrapper[4778]: I0312 13:11:38.235345 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:38 crc kubenswrapper[4778]: I0312 13:11:38.235357 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:38 crc kubenswrapper[4778]: I0312 13:11:38.235371 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:38 crc kubenswrapper[4778]: I0312 13:11:38.235382 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:38Z","lastTransitionTime":"2026-03-12T13:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:38 crc kubenswrapper[4778]: I0312 13:11:38.253742 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:11:38 crc kubenswrapper[4778]: I0312 13:11:38.253771 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rz9vw" Mar 12 13:11:38 crc kubenswrapper[4778]: I0312 13:11:38.253774 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 13:11:38 crc kubenswrapper[4778]: I0312 13:11:38.253787 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 13:11:38 crc kubenswrapper[4778]: E0312 13:11:38.253859 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 13:11:38 crc kubenswrapper[4778]: E0312 13:11:38.254000 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 13:11:38 crc kubenswrapper[4778]: E0312 13:11:38.254066 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rz9vw" podUID="0b59b25a-3acc-4d06-b91d-575f45463520" Mar 12 13:11:38 crc kubenswrapper[4778]: E0312 13:11:38.254215 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 13:11:38 crc kubenswrapper[4778]: I0312 13:11:38.337328 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:38 crc kubenswrapper[4778]: I0312 13:11:38.337365 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:38 crc kubenswrapper[4778]: I0312 13:11:38.337376 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:38 crc kubenswrapper[4778]: I0312 13:11:38.337390 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:38 crc kubenswrapper[4778]: I0312 13:11:38.337401 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:38Z","lastTransitionTime":"2026-03-12T13:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:38 crc kubenswrapper[4778]: I0312 13:11:38.439663 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:38 crc kubenswrapper[4778]: I0312 13:11:38.439704 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:38 crc kubenswrapper[4778]: I0312 13:11:38.439714 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:38 crc kubenswrapper[4778]: I0312 13:11:38.439730 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:38 crc kubenswrapper[4778]: I0312 13:11:38.439745 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:38Z","lastTransitionTime":"2026-03-12T13:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:38 crc kubenswrapper[4778]: I0312 13:11:38.542687 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:38 crc kubenswrapper[4778]: I0312 13:11:38.542752 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:38 crc kubenswrapper[4778]: I0312 13:11:38.542772 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:38 crc kubenswrapper[4778]: I0312 13:11:38.542799 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:38 crc kubenswrapper[4778]: I0312 13:11:38.542820 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:38Z","lastTransitionTime":"2026-03-12T13:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:38 crc kubenswrapper[4778]: I0312 13:11:38.645035 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:38 crc kubenswrapper[4778]: I0312 13:11:38.645610 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:38 crc kubenswrapper[4778]: I0312 13:11:38.645705 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:38 crc kubenswrapper[4778]: I0312 13:11:38.645790 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:38 crc kubenswrapper[4778]: I0312 13:11:38.645884 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:38Z","lastTransitionTime":"2026-03-12T13:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:38 crc kubenswrapper[4778]: I0312 13:11:38.749348 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:38 crc kubenswrapper[4778]: I0312 13:11:38.749403 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:38 crc kubenswrapper[4778]: I0312 13:11:38.749418 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:38 crc kubenswrapper[4778]: I0312 13:11:38.749437 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:38 crc kubenswrapper[4778]: I0312 13:11:38.749457 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:38Z","lastTransitionTime":"2026-03-12T13:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:38 crc kubenswrapper[4778]: I0312 13:11:38.852403 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:38 crc kubenswrapper[4778]: I0312 13:11:38.852669 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:38 crc kubenswrapper[4778]: I0312 13:11:38.852737 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:38 crc kubenswrapper[4778]: I0312 13:11:38.852815 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:38 crc kubenswrapper[4778]: I0312 13:11:38.852875 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:38Z","lastTransitionTime":"2026-03-12T13:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:38 crc kubenswrapper[4778]: I0312 13:11:38.955649 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:38 crc kubenswrapper[4778]: I0312 13:11:38.955731 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:38 crc kubenswrapper[4778]: I0312 13:11:38.955744 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:38 crc kubenswrapper[4778]: I0312 13:11:38.955766 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:38 crc kubenswrapper[4778]: I0312 13:11:38.955780 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:38Z","lastTransitionTime":"2026-03-12T13:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:39 crc kubenswrapper[4778]: I0312 13:11:39.057755 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:39 crc kubenswrapper[4778]: I0312 13:11:39.057784 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:39 crc kubenswrapper[4778]: I0312 13:11:39.057792 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:39 crc kubenswrapper[4778]: I0312 13:11:39.057805 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:39 crc kubenswrapper[4778]: I0312 13:11:39.057816 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:39Z","lastTransitionTime":"2026-03-12T13:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:39 crc kubenswrapper[4778]: I0312 13:11:39.160809 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:39 crc kubenswrapper[4778]: I0312 13:11:39.160881 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:39 crc kubenswrapper[4778]: I0312 13:11:39.160902 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:39 crc kubenswrapper[4778]: I0312 13:11:39.160930 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:39 crc kubenswrapper[4778]: I0312 13:11:39.160953 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:39Z","lastTransitionTime":"2026-03-12T13:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:39 crc kubenswrapper[4778]: I0312 13:11:39.263824 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:39 crc kubenswrapper[4778]: I0312 13:11:39.263858 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:39 crc kubenswrapper[4778]: I0312 13:11:39.263869 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:39 crc kubenswrapper[4778]: I0312 13:11:39.263884 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:39 crc kubenswrapper[4778]: I0312 13:11:39.263895 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:39Z","lastTransitionTime":"2026-03-12T13:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:39 crc kubenswrapper[4778]: I0312 13:11:39.372342 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:39 crc kubenswrapper[4778]: I0312 13:11:39.372386 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:39 crc kubenswrapper[4778]: I0312 13:11:39.372395 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:39 crc kubenswrapper[4778]: I0312 13:11:39.372409 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:39 crc kubenswrapper[4778]: I0312 13:11:39.372419 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:39Z","lastTransitionTime":"2026-03-12T13:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:39 crc kubenswrapper[4778]: I0312 13:11:39.474477 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:39 crc kubenswrapper[4778]: I0312 13:11:39.474529 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:39 crc kubenswrapper[4778]: I0312 13:11:39.474547 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:39 crc kubenswrapper[4778]: I0312 13:11:39.474565 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:39 crc kubenswrapper[4778]: I0312 13:11:39.474578 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:39Z","lastTransitionTime":"2026-03-12T13:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:39 crc kubenswrapper[4778]: I0312 13:11:39.576866 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:39 crc kubenswrapper[4778]: I0312 13:11:39.576959 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:39 crc kubenswrapper[4778]: I0312 13:11:39.576982 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:39 crc kubenswrapper[4778]: I0312 13:11:39.577012 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:39 crc kubenswrapper[4778]: I0312 13:11:39.577033 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:39Z","lastTransitionTime":"2026-03-12T13:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:39 crc kubenswrapper[4778]: I0312 13:11:39.679537 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:39 crc kubenswrapper[4778]: I0312 13:11:39.679586 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:39 crc kubenswrapper[4778]: I0312 13:11:39.679599 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:39 crc kubenswrapper[4778]: I0312 13:11:39.679616 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:39 crc kubenswrapper[4778]: I0312 13:11:39.679631 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:39Z","lastTransitionTime":"2026-03-12T13:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:39 crc kubenswrapper[4778]: I0312 13:11:39.782048 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:39 crc kubenswrapper[4778]: I0312 13:11:39.782089 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:39 crc kubenswrapper[4778]: I0312 13:11:39.782100 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:39 crc kubenswrapper[4778]: I0312 13:11:39.782115 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:39 crc kubenswrapper[4778]: I0312 13:11:39.782127 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:39Z","lastTransitionTime":"2026-03-12T13:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:39 crc kubenswrapper[4778]: I0312 13:11:39.884591 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:39 crc kubenswrapper[4778]: I0312 13:11:39.884638 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:39 crc kubenswrapper[4778]: I0312 13:11:39.884646 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:39 crc kubenswrapper[4778]: I0312 13:11:39.884663 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:39 crc kubenswrapper[4778]: I0312 13:11:39.884673 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:39Z","lastTransitionTime":"2026-03-12T13:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:39 crc kubenswrapper[4778]: I0312 13:11:39.987749 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:39 crc kubenswrapper[4778]: I0312 13:11:39.987816 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:39 crc kubenswrapper[4778]: I0312 13:11:39.987832 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:39 crc kubenswrapper[4778]: I0312 13:11:39.987865 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:39 crc kubenswrapper[4778]: I0312 13:11:39.987879 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:39Z","lastTransitionTime":"2026-03-12T13:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:40 crc kubenswrapper[4778]: I0312 13:11:40.090386 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:40 crc kubenswrapper[4778]: I0312 13:11:40.090430 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:40 crc kubenswrapper[4778]: I0312 13:11:40.090443 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:40 crc kubenswrapper[4778]: I0312 13:11:40.090460 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:40 crc kubenswrapper[4778]: I0312 13:11:40.090472 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:40Z","lastTransitionTime":"2026-03-12T13:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:40 crc kubenswrapper[4778]: I0312 13:11:40.192869 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:40 crc kubenswrapper[4778]: I0312 13:11:40.192911 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:40 crc kubenswrapper[4778]: I0312 13:11:40.192923 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:40 crc kubenswrapper[4778]: I0312 13:11:40.192939 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:40 crc kubenswrapper[4778]: I0312 13:11:40.192948 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:40Z","lastTransitionTime":"2026-03-12T13:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:40 crc kubenswrapper[4778]: I0312 13:11:40.252929 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 13:11:40 crc kubenswrapper[4778]: I0312 13:11:40.253049 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rz9vw" Mar 12 13:11:40 crc kubenswrapper[4778]: I0312 13:11:40.253135 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:11:40 crc kubenswrapper[4778]: E0312 13:11:40.253083 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 13:11:40 crc kubenswrapper[4778]: E0312 13:11:40.253327 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rz9vw" podUID="0b59b25a-3acc-4d06-b91d-575f45463520" Mar 12 13:11:40 crc kubenswrapper[4778]: E0312 13:11:40.253445 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 13:11:40 crc kubenswrapper[4778]: I0312 13:11:40.253718 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 13:11:40 crc kubenswrapper[4778]: E0312 13:11:40.253787 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 13:11:40 crc kubenswrapper[4778]: I0312 13:11:40.295940 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:40 crc kubenswrapper[4778]: I0312 13:11:40.295976 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:40 crc kubenswrapper[4778]: I0312 13:11:40.295985 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:40 crc kubenswrapper[4778]: I0312 13:11:40.296000 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:40 crc kubenswrapper[4778]: I0312 13:11:40.296010 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:40Z","lastTransitionTime":"2026-03-12T13:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:40 crc kubenswrapper[4778]: I0312 13:11:40.399886 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:40 crc kubenswrapper[4778]: I0312 13:11:40.399953 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:40 crc kubenswrapper[4778]: I0312 13:11:40.399976 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:40 crc kubenswrapper[4778]: I0312 13:11:40.400003 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:40 crc kubenswrapper[4778]: I0312 13:11:40.400024 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:40Z","lastTransitionTime":"2026-03-12T13:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:40 crc kubenswrapper[4778]: I0312 13:11:40.502932 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:40 crc kubenswrapper[4778]: I0312 13:11:40.502958 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:40 crc kubenswrapper[4778]: I0312 13:11:40.502966 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:40 crc kubenswrapper[4778]: I0312 13:11:40.502979 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:40 crc kubenswrapper[4778]: I0312 13:11:40.502988 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:40Z","lastTransitionTime":"2026-03-12T13:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:40 crc kubenswrapper[4778]: I0312 13:11:40.606106 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:40 crc kubenswrapper[4778]: I0312 13:11:40.606170 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:40 crc kubenswrapper[4778]: I0312 13:11:40.606218 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:40 crc kubenswrapper[4778]: I0312 13:11:40.606247 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:40 crc kubenswrapper[4778]: I0312 13:11:40.606265 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:40Z","lastTransitionTime":"2026-03-12T13:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:40 crc kubenswrapper[4778]: I0312 13:11:40.709604 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:40 crc kubenswrapper[4778]: I0312 13:11:40.709641 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:40 crc kubenswrapper[4778]: I0312 13:11:40.709652 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:40 crc kubenswrapper[4778]: I0312 13:11:40.709668 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:40 crc kubenswrapper[4778]: I0312 13:11:40.709679 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:40Z","lastTransitionTime":"2026-03-12T13:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:40 crc kubenswrapper[4778]: I0312 13:11:40.812160 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:40 crc kubenswrapper[4778]: I0312 13:11:40.812223 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:40 crc kubenswrapper[4778]: I0312 13:11:40.812232 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:40 crc kubenswrapper[4778]: I0312 13:11:40.812247 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:40 crc kubenswrapper[4778]: I0312 13:11:40.812257 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:40Z","lastTransitionTime":"2026-03-12T13:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:40 crc kubenswrapper[4778]: I0312 13:11:40.915628 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:40 crc kubenswrapper[4778]: I0312 13:11:40.915660 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:40 crc kubenswrapper[4778]: I0312 13:11:40.915670 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:40 crc kubenswrapper[4778]: I0312 13:11:40.915683 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:40 crc kubenswrapper[4778]: I0312 13:11:40.915692 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:40Z","lastTransitionTime":"2026-03-12T13:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:41 crc kubenswrapper[4778]: I0312 13:11:41.018116 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:41 crc kubenswrapper[4778]: I0312 13:11:41.018144 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:41 crc kubenswrapper[4778]: I0312 13:11:41.018152 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:41 crc kubenswrapper[4778]: I0312 13:11:41.018165 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:41 crc kubenswrapper[4778]: I0312 13:11:41.018176 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:41Z","lastTransitionTime":"2026-03-12T13:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:41 crc kubenswrapper[4778]: I0312 13:11:41.120365 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:41 crc kubenswrapper[4778]: I0312 13:11:41.120412 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:41 crc kubenswrapper[4778]: I0312 13:11:41.120428 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:41 crc kubenswrapper[4778]: I0312 13:11:41.120453 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:41 crc kubenswrapper[4778]: I0312 13:11:41.120466 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:41Z","lastTransitionTime":"2026-03-12T13:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:41 crc kubenswrapper[4778]: I0312 13:11:41.222783 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:41 crc kubenswrapper[4778]: I0312 13:11:41.222823 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:41 crc kubenswrapper[4778]: I0312 13:11:41.222833 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:41 crc kubenswrapper[4778]: I0312 13:11:41.222850 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:41 crc kubenswrapper[4778]: I0312 13:11:41.222860 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:41Z","lastTransitionTime":"2026-03-12T13:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:41 crc kubenswrapper[4778]: I0312 13:11:41.325234 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:41 crc kubenswrapper[4778]: I0312 13:11:41.325294 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:41 crc kubenswrapper[4778]: I0312 13:11:41.325309 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:41 crc kubenswrapper[4778]: I0312 13:11:41.325329 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:41 crc kubenswrapper[4778]: I0312 13:11:41.325345 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:41Z","lastTransitionTime":"2026-03-12T13:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:41 crc kubenswrapper[4778]: I0312 13:11:41.429239 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:41 crc kubenswrapper[4778]: I0312 13:11:41.429290 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:41 crc kubenswrapper[4778]: I0312 13:11:41.429301 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:41 crc kubenswrapper[4778]: I0312 13:11:41.429318 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:41 crc kubenswrapper[4778]: I0312 13:11:41.429329 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:41Z","lastTransitionTime":"2026-03-12T13:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:41 crc kubenswrapper[4778]: I0312 13:11:41.532558 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:41 crc kubenswrapper[4778]: I0312 13:11:41.532622 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:41 crc kubenswrapper[4778]: I0312 13:11:41.532635 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:41 crc kubenswrapper[4778]: I0312 13:11:41.532661 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:41 crc kubenswrapper[4778]: I0312 13:11:41.532675 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:41Z","lastTransitionTime":"2026-03-12T13:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:41 crc kubenswrapper[4778]: I0312 13:11:41.637482 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:41 crc kubenswrapper[4778]: I0312 13:11:41.637541 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:41 crc kubenswrapper[4778]: I0312 13:11:41.637557 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:41 crc kubenswrapper[4778]: I0312 13:11:41.637578 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:41 crc kubenswrapper[4778]: I0312 13:11:41.637592 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:41Z","lastTransitionTime":"2026-03-12T13:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:41 crc kubenswrapper[4778]: I0312 13:11:41.739691 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:41 crc kubenswrapper[4778]: I0312 13:11:41.739736 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:41 crc kubenswrapper[4778]: I0312 13:11:41.739749 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:41 crc kubenswrapper[4778]: I0312 13:11:41.739765 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:41 crc kubenswrapper[4778]: I0312 13:11:41.739777 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:41Z","lastTransitionTime":"2026-03-12T13:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:41 crc kubenswrapper[4778]: I0312 13:11:41.842511 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:41 crc kubenswrapper[4778]: I0312 13:11:41.842556 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:41 crc kubenswrapper[4778]: I0312 13:11:41.842566 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:41 crc kubenswrapper[4778]: I0312 13:11:41.842580 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:41 crc kubenswrapper[4778]: I0312 13:11:41.842590 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:41Z","lastTransitionTime":"2026-03-12T13:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:41 crc kubenswrapper[4778]: I0312 13:11:41.944703 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:41 crc kubenswrapper[4778]: I0312 13:11:41.944744 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:41 crc kubenswrapper[4778]: I0312 13:11:41.944754 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:41 crc kubenswrapper[4778]: I0312 13:11:41.944769 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:41 crc kubenswrapper[4778]: I0312 13:11:41.944779 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:41Z","lastTransitionTime":"2026-03-12T13:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:42 crc kubenswrapper[4778]: I0312 13:11:42.047388 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:42 crc kubenswrapper[4778]: I0312 13:11:42.047426 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:42 crc kubenswrapper[4778]: I0312 13:11:42.047434 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:42 crc kubenswrapper[4778]: I0312 13:11:42.047447 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:42 crc kubenswrapper[4778]: I0312 13:11:42.047456 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:42Z","lastTransitionTime":"2026-03-12T13:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:42 crc kubenswrapper[4778]: I0312 13:11:42.149733 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:42 crc kubenswrapper[4778]: I0312 13:11:42.149774 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:42 crc kubenswrapper[4778]: I0312 13:11:42.149783 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:42 crc kubenswrapper[4778]: I0312 13:11:42.149797 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:42 crc kubenswrapper[4778]: I0312 13:11:42.149806 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:42Z","lastTransitionTime":"2026-03-12T13:11:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:42 crc kubenswrapper[4778]: E0312 13:11:42.250927 4778 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Mar 12 13:11:42 crc kubenswrapper[4778]: I0312 13:11:42.253396 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 13:11:42 crc kubenswrapper[4778]: I0312 13:11:42.253425 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 13:11:42 crc kubenswrapper[4778]: I0312 13:11:42.253455 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rz9vw" Mar 12 13:11:42 crc kubenswrapper[4778]: I0312 13:11:42.253448 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:11:42 crc kubenswrapper[4778]: E0312 13:11:42.253658 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 13:11:42 crc kubenswrapper[4778]: E0312 13:11:42.253761 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 13:11:42 crc kubenswrapper[4778]: E0312 13:11:42.253838 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rz9vw" podUID="0b59b25a-3acc-4d06-b91d-575f45463520" Mar 12 13:11:42 crc kubenswrapper[4778]: E0312 13:11:42.253890 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 13:11:42 crc kubenswrapper[4778]: I0312 13:11:42.278924 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2141104-4933-46fd-9968-0d9498779462\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e77ceb524173a1cdbf6c93b730412dcd8b6aedcee06c40fb757cc8e738e380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8
b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b52b689d66d254a521c980330e792ecbcce1102f39f97d6149bf48ad24c5de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc82a592c28b74aef165a164cc4fe4e2e38b6fb48e59f499476a252197e3fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f046d558bf242476327e1ee39ea82ebe104caa081df71caa51a716490d8a6b21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e2266711bd32e96e742549772474d9fa43d8f368021e8a7aba3fd1c7b0b87b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a347cade99b7bdbe676a020faf0a90b281672f16c4f580455856786ed781d3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a347cade99b7bdbe676a020faf0a90b281672f16c4f580455856786ed781d3f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3701d4b9c229934646d070a25b4bf944ac544d227ff9ba89fb1885cecfb562de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3701d4b9c229934646d070a25b4bf944ac544d227ff9ba89fb1885cecfb562de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9e0d7207d43b4b2bb79583cb1bb2f31034392eb4193b9b3b2f547f474d335250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4
a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e0d7207d43b4b2bb79583cb1bb2f31034392eb4193b9b3b2f547f474d335250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:42Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:42 crc kubenswrapper[4778]: I0312 13:11:42.293519 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:42Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:42 crc kubenswrapper[4778]: I0312 13:11:42.304841 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24438fc6-dab0-4a9e-8b97-2532da76d9cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19a882eebff25a2613c68847fcf737648da24f5c8d7648edebb2cb00b6b8950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14daba92184fca91c6930d5b3e821f88408e0fd4
0a7793f2d70f82df7c9444ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2qx88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:42Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:42 crc kubenswrapper[4778]: I0312 13:11:42.316880 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cae155c-6ba6-44c1-9814-759fda7c3c86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7698145a8f9a3b12ca021d55f406bc6adf7e139c7e32156ced11a20de194608c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ddeb961084ae4041feb2ac05c9fdd2f5c11b4bdc5f5f33878c9ad9e83a2e1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ddeb961084ae4041feb2ac05c9fdd2f5c11b4bdc5f5f33878c9ad9e83a2e1a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:42Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:42 crc kubenswrapper[4778]: I0312 13:11:42.332659 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:42Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:42 crc kubenswrapper[4778]: I0312 13:11:42.347454 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb7a47e7099405d73886322b00b013bedee4fb573fa60c9b92d6be3311e65c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:42Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:42 crc kubenswrapper[4778]: I0312 13:11:42.358357 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rz9vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b59b25a-3acc-4d06-b91d-575f45463520\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rz9vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:42Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:42 crc 
kubenswrapper[4778]: E0312 13:11:42.368061 4778 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 13:11:42 crc kubenswrapper[4778]: I0312 13:11:42.373980 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4059dae21c8267dcec17364a3073a0f25addb6c308620992e9e609b5f5a32e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7ffc17b778f7bd099f0cc70b4e8bcfd77f9d45a9a47de9fedbe270a49f2826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:42Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:42 crc kubenswrapper[4778]: I0312 13:11:42.385952 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa219bcd71a6f1ae8a889a0409c2bbf61d1efac6a57ad8a22fefe6915e9d15be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-12T13:11:42Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:42 crc kubenswrapper[4778]: I0312 13:11:42.394444 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qdxm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7050ddd9-aa01-4af7-9046-208f85f50a86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9af31ab4c27bb06d5a44a1c279e04f1b6f243054e271214ef771db4f0dc65e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-jspwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qdxm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:42Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:42 crc kubenswrapper[4778]: I0312 13:11:42.408298 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rsshp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a1f8eaa-ac07-4478-be5d-0742de6b43c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f732882ddde9d0d0c1d1ef218276d4e14df3a1b3
6e4e956912efef4873092b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deca991c71450137550fdb82a01b81aaa63e6be64a6d7a96438f6b3d83a8bb5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://deca991c71450137550fdb82a01b81aaa63e6be64a6d7a96438f6b3d83a8bb5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae3ee2ab1f6fdf579b28f6ddf2010f9ac048dec5c7668dc467152185d4e1028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e49
6fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ae3ee2ab1f6fdf579b28f6ddf2010f9ac048dec5c7668dc467152185d4e1028\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b22a0b8a6e5c59e8195280cbe1579af847c709f8b6245df5a16df5af602f11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b22a0b8a6e5c59e8195280cbe1579af847c709f8b6245df5a16df5af602f11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://253b2ec5086a2db48bb42ae6024bab9ca832325f9d96cd6ff6944ded362161e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://253b2ec5086a2db48bb42ae6024bab9ca832325f9d96cd6ff6944ded362161e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cb8f9537926237c4932ef2a9fb701804e03e132f2f56dd9d0e928b7340b1eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cb8f9537926237c4932ef2a9fb701804e03e132f2f56dd9d0e928b7340b1eeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rsshp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:42Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:42 crc kubenswrapper[4778]: I0312 13:11:42.423492 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d15dec8c-5c3e-4103-a5b1-6ee7ff5990ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f640289dea724d5668fc009d628345ea104b2bbc9bc3471e42c3ec5f9acada1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc7259359df220c534d265305ee3ca44e7bcdce8da0d8b164132e02f7ed72e51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d60adb329e51ce7d877de68c1386f904ef0f717c82a5bfb69ab18438a4e536a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5019c5de667abecf425384b69c58060050b28003230e410f44934c9a7ad5484c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c7f2ade3aac502f0534414554216096b45d4f78e81f8ec213064a6205efdbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T13:10:43Z\\\",\\\"message\\\":\\\"file observer\\\\nW0312 13:10:42.840582 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 13:10:42.841010 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 13:10:42.843036 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-461564172/tls.crt::/tmp/serving-cert-461564172/tls.key\\\\\\\"\\\\nI0312 13:10:43.350873 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 13:10:43.364662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 13:10:43.364721 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 13:10:43.365498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 13:10:43.365555 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 13:10:43.376143 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 13:10:43.376224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 13:10:43.376255 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 13:10:43.376279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 13:10:43.376301 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 13:10:43.376324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 13:10:43.376350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 13:10:43.376614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 13:10:43.379532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfb81ab3f0178dc8064bd278e9e5cc42b3b2fda7282bb869d2f385b423e57d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e64aa9b1a15198d88b5f38b8ad0abdeef89430869b6f25c73e2f45806c539964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64aa9b1a15198d88b5f38b8ad0abdeef89430869b6f25c73e2f45806c539964\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:42Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:42 crc kubenswrapper[4778]: I0312 13:11:42.443572 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7250fa81a99a607a87f5f2fe4a85fcc2afbf7bfc12845bec4b5cbbed7784c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1325a4d5784843f5ef2dc629d3410d154ea07de5cd70cfae54d7111fe3e1ea3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78da788cc1d96e866afcf18edff24c064972e022b1b4c3f5bec3175da5e989fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2818aadc56c24df41309aa63fddf44dac870f041f206d59db4d8b8f88f728483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d8f2c481e6f3f8845b5e195a7fde8ae6415d25fb1701c25d93be7af1f4ef8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f94192e4eee991a699c32f338a8d452d8fc0ce5dfa1197694a237697c4500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5950d10319c346220cc78cf45052b31ea05a32f6d5f2511a963110c4a17824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc5950d10319c346220cc78cf45052b31ea05a32f6d5f2511a963110c4a17824\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T13:11:22Z\\\",\\\"message\\\":\\\"ificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:22Z is after 2025-08-24T17:21:41Z\\\\nI0312 13:11:22.188287 6941 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nF0312 13:11:22.188260 6941 ovnkube.go:137] failed to run ovnkube: [failed to start 
network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:22Z is after 2025-08-24T17:21:41Z]\\\\nI0312 13:11:22.188294 6941 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sww7j\\\\nI0312 13:11:22.188297 6941 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-4dfhs\\\\nI031\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8bcc9_openshift-ovn-kubernetes(65cd795e-eb6e-4995-a4c1-9dea6f425ac5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bc4107e3fb5708a2acf2664ce876af4c682377a9b4ee393230b78d5b021552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce861cdc0bd23c8ce1
b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bcc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:42Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:42 crc kubenswrapper[4778]: I0312 13:11:42.457627 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fhcz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5da98f94c85e3a8cd05c447fb097a078968eea25419a2b22f8abe956ef1dbaac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-762lp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fhcz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:42Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:42 crc kubenswrapper[4778]: I0312 13:11:42.468048 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sww7j" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de004a2f-3061-4aae-aa57-389219c71023\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478fb92ee4748af677ac761928a4173b506a3e56cf622279e2b2a0e322d4aef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d377b0d5d
0a854761257d7bc21a111aed96f85d302bf0c024e021f04cc555fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sww7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:42Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:42 crc kubenswrapper[4778]: I0312 13:11:42.477155 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dfhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfefcab6-a931-413e-8763-0f63f17911cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eecca419cc264b25f1883aac864cc545f0daf973e3b288bc8ea00a8b91e1f124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssbrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dfhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:42Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:42 crc kubenswrapper[4778]: I0312 13:11:42.488229 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:42Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:43 crc kubenswrapper[4778]: I0312 13:11:43.323661 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:43 crc kubenswrapper[4778]: I0312 13:11:43.323742 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:43 crc kubenswrapper[4778]: I0312 13:11:43.323756 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:43 crc kubenswrapper[4778]: I0312 13:11:43.323779 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:43 crc kubenswrapper[4778]: I0312 13:11:43.323801 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:43Z","lastTransitionTime":"2026-03-12T13:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 12 13:11:43 crc kubenswrapper[4778]: E0312 13:11:43.339597 4778 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9825271f-f529-4477-b3b1-2a00dbf9b03e\\\",\\\"systemUUID\\\":\\\"65870ff3-f0f2-4ca4-b489-075d672e37ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:43Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:43 crc kubenswrapper[4778]: I0312 13:11:43.343241 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:43 crc kubenswrapper[4778]: I0312 13:11:43.343369 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:43 crc kubenswrapper[4778]: I0312 13:11:43.343401 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:43 crc kubenswrapper[4778]: I0312 13:11:43.343426 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:43 crc kubenswrapper[4778]: I0312 13:11:43.343444 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:43Z","lastTransitionTime":"2026-03-12T13:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:43 crc kubenswrapper[4778]: E0312 13:11:43.360701 4778 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9825271f-f529-4477-b3b1-2a00dbf9b03e\\\",\\\"systemUUID\\\":\\\"65870ff3-f0f2-4ca4-b489-075d672e37ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:43Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:43 crc kubenswrapper[4778]: I0312 13:11:43.365508 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:43 crc kubenswrapper[4778]: I0312 13:11:43.365563 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:43 crc kubenswrapper[4778]: I0312 13:11:43.365583 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:43 crc kubenswrapper[4778]: I0312 13:11:43.365606 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:43 crc kubenswrapper[4778]: I0312 13:11:43.365636 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:43Z","lastTransitionTime":"2026-03-12T13:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:43 crc kubenswrapper[4778]: E0312 13:11:43.384968 4778 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [status patch payload identical to the preceding attempt at 13:11:43.360701] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:43Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:43 crc kubenswrapper[4778]: I0312 13:11:43.396962 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:43 crc kubenswrapper[4778]: I0312 13:11:43.397020 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:43 crc kubenswrapper[4778]: I0312 13:11:43.397032 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:43 crc kubenswrapper[4778]: I0312 13:11:43.397050 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:43 crc kubenswrapper[4778]: I0312 13:11:43.397063 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:43Z","lastTransitionTime":"2026-03-12T13:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:43 crc kubenswrapper[4778]: E0312 13:11:43.415530 4778 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9825271f-f529-4477-b3b1-2a00dbf9b03e\\\",\\\"systemUUID\\\":\\\"65870ff3-f0f2-4ca4-b489-075d672e37ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:43Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:43 crc kubenswrapper[4778]: I0312 13:11:43.419338 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:43 crc kubenswrapper[4778]: I0312 13:11:43.419379 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:43 crc kubenswrapper[4778]: I0312 13:11:43.419389 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:43 crc kubenswrapper[4778]: I0312 13:11:43.419417 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:43 crc kubenswrapper[4778]: I0312 13:11:43.419426 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:43Z","lastTransitionTime":"2026-03-12T13:11:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:43 crc kubenswrapper[4778]: E0312 13:11:43.432964 4778 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9825271f-f529-4477-b3b1-2a00dbf9b03e\\\",\\\"systemUUID\\\":\\\"65870ff3-f0f2-4ca4-b489-075d672e37ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:43Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:43 crc kubenswrapper[4778]: E0312 13:11:43.433173 4778 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 12 13:11:43 crc kubenswrapper[4778]: I0312 13:11:43.542534 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 13:11:43 crc kubenswrapper[4778]: I0312 13:11:43.559985 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb7a47e7099405d73886322b00b013bedee4fb573fa60c9b92d6be3311e65c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:43Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:43 crc kubenswrapper[4778]: I0312 13:11:43.572887 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rz9vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b59b25a-3acc-4d06-b91d-575f45463520\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rz9vw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:43Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:43 crc kubenswrapper[4778]: I0312 13:11:43.587937 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4059dae21c8267dcec17364a3073a0f25addb6c308620992e9e609b5f5a32e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7ffc17b778f7bd099f0cc70b4e8bcfd77f9d45a9a47de9fedbe270a49f2826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:43Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:43 crc kubenswrapper[4778]: I0312 13:11:43.601465 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa219bcd71a6f1ae8a889a0409c2bbf61d1efac6a57ad8a22fefe6915e9d15be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-12T13:11:43Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:43 crc kubenswrapper[4778]: I0312 13:11:43.613149 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:43Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:43 crc kubenswrapper[4778]: I0312 13:11:43.627172 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rsshp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a1f8eaa-ac07-4478-be5d-0742de6b43c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f732882ddde9d0d0c1d1ef218276d4e14df3a1b36e4e956912efef4873092b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deca991c71450137550fdb82a01b81aaa63e6be64a6d7a96438f6b3d83a8bb5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://deca991c71450137550fdb82a01b81aaa63e6be64a6d7a96438f6b3d83a8bb5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae3ee2ab1f6fdf579b28f6ddf2010f9ac048dec5c7668dc467152185d4e1028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ae3ee2ab1f6fdf579b28f6ddf2010f9ac048dec5c7668dc467152185d4e1028\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b22a
0b8a6e5c59e8195280cbe1579af847c709f8b6245df5a16df5af602f11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b22a0b8a6e5c59e8195280cbe1579af847c709f8b6245df5a16df5af602f11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://253b2ec5086a2db48bb42ae6024bab9ca832325f9d96cd6ff6944ded362161e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://253b2ec5086a2db48bb42ae6024bab9ca832325f9d96cd6ff6944ded362161e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cb8f9537926237c4932ef2a9fb701804e03e132f2f56dd9d0e928b7340b1eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cb8f9537926237c4932ef2a9fb701804e03e132f2f56dd9d0e928b7340b1eeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rsshp\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:43Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:43 crc kubenswrapper[4778]: I0312 13:11:43.639178 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d15dec8c-5c3e-4103-a5b1-6ee7ff5990ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f640289dea724d5668fc009d628345ea104b2bbc9bc3471e42c3ec5f9acada1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc7259359df220c534d265305ee3ca44e7bcdce8da0d8b164132e02f7ed72e51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d60adb329e51ce7d877de68c1386f904ef0f717c82a5bfb69ab18438a4e536a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5019c5de667abecf425384b69c58060050b28003230e410f44934c9a7ad5484c\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c7f2ade3aac502f0534414554216096b45d4f78e81f8ec213064a6205efdbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T13:10:43Z\\\",\\\"message\\\":\\\"file observer\\\\nW0312 13:10:42.840582 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 13:10:42.841010 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 13:10:42.843036 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-461564172/tls.crt::/tmp/serving-cert-461564172/tls.key\\\\\\\"\\\\nI0312 13:10:43.350873 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 13:10:43.364662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 13:10:43.364721 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 13:10:43.365498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 13:10:43.365555 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 13:10:43.376143 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 13:10:43.376224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 13:10:43.376255 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 13:10:43.376279 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 13:10:43.376301 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 13:10:43.376324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 13:10:43.376350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 13:10:43.376614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 13:10:43.379532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfb81ab3f0178dc8064bd278e9e5cc42b3b2fda7282bb869d2f385b423e57d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e64aa9b1a15198d88b5f38b8ad0abdeef89430869b6f25c73e2f45806c539964\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64aa9b1a15198d88b5f38b8ad0abdeef89430869b6f25c73e2f45806c539964\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:43Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:43 crc kubenswrapper[4778]: I0312 13:11:43.661685 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7250fa81a99a607a87f5f2fe4a85fcc2afbf7bfc12845bec4b5cbbed7784c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1325a4d5784843f5ef2dc629d3410d154ea07de5cd70cfae54d7111fe3e1ea3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78da788cc1d96e866afcf18edff24c064972e022b1b4c3f5bec3175da5e989fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2818aadc56c24df41309aa63fddf44dac870f041f206d59db4d8b8f88f728483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d8f2c481e6f3f8845b5e195a7fde8ae6415d25fb1701c25d93be7af1f4ef8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f94192e4eee991a699c32f338a8d452d8fc0ce5dfa1197694a237697c4500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5950d10319c346220cc78cf45052b31ea05a32f6d5f2511a963110c4a17824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc5950d10319c346220cc78cf45052b31ea05a32f6d5f2511a963110c4a17824\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T13:11:22Z\\\",\\\"message\\\":\\\"ificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:22Z is after 2025-08-24T17:21:41Z\\\\nI0312 13:11:22.188287 6941 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nF0312 13:11:22.188260 6941 ovnkube.go:137] failed to run ovnkube: [failed to start 
network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:22Z is after 2025-08-24T17:21:41Z]\\\\nI0312 13:11:22.188294 6941 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sww7j\\\\nI0312 13:11:22.188297 6941 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-4dfhs\\\\nI031\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8bcc9_openshift-ovn-kubernetes(65cd795e-eb6e-4995-a4c1-9dea6f425ac5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bc4107e3fb5708a2acf2664ce876af4c682377a9b4ee393230b78d5b021552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce861cdc0bd23c8ce1
b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bcc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:43Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:43 crc kubenswrapper[4778]: I0312 13:11:43.673590 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qdxm2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7050ddd9-aa01-4af7-9046-208f85f50a86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9af31ab4c27bb06d5a44a1c279e04f1b6f243054e271214ef771db4f0dc65e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jspwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qdxm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:43Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:43 crc kubenswrapper[4778]: I0312 13:11:43.685324 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sww7j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de004a2f-3061-4aae-aa57-389219c71023\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478fb92ee4748af677ac761928a4173b506a3e56cf622279e2b2a0e322d4aef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d377b0d5d0a854761257d7bc21a111aed96f85d302bf0c024e021f04cc555fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sww7j\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:43Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:43 crc kubenswrapper[4778]: I0312 13:11:43.697215 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dfhs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfefcab6-a931-413e-8763-0f63f17911cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eecca419cc264b25f1883aac864cc545f0daf973e3b288bc8ea00a8b91e1f124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10
:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssbrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dfhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:43Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:43 crc kubenswrapper[4778]: I0312 13:11:43.709179 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:43Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:43 crc kubenswrapper[4778]: I0312 13:11:43.721923 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fhcz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5da98f94c85e3a8cd05c447fb097a078968eea25419a2b22f8abe956ef1dbaac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-762lp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fhcz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:43Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:43 crc kubenswrapper[4778]: I0312 13:11:43.734367 4778 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:43Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:43 crc kubenswrapper[4778]: I0312 13:11:43.745600 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24438fc6-dab0-4a9e-8b97-2532da76d9cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19a882eebff25a2613c68847fcf737648da24f5c8d7648edebb2cb00b6b8950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14daba92184fca91c6930d5b3e821f88408e0fd4
0a7793f2d70f82df7c9444ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2qx88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:43Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:43 crc kubenswrapper[4778]: I0312 13:11:43.755545 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cae155c-6ba6-44c1-9814-759fda7c3c86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7698145a8f9a3b12ca021d55f406bc6adf7e139c7e32156ced11a20de194608c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ddeb961084ae4041feb2ac05c9fdd2f5c11b4bdc5f5f33878c9ad9e83a2e1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ddeb961084ae4041feb2ac05c9fdd2f5c11b4bdc5f5f33878c9ad9e83a2e1a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:43Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:43 crc kubenswrapper[4778]: I0312 13:11:43.774294 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2141104-4933-46fd-9968-0d9498779462\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e77ceb524173a1cdbf6c93b730412dcd8b6aedcee06c40fb757cc8e738e380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b52b689d66d254a521c980330e792ecbcce1102f39f97d6149bf48ad24c5de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc82a592c28b74aef165a164cc4fe4e2e38b6fb48e59f499476a252197e3fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f046d558bf242476327e1ee39ea82ebe104caa081df71caa51a716490d8a6b21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e2266711bd32e96e742549772474d9fa43d8f368021e8a7aba3fd1c7b0b87b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a347cade99b7bdbe676a020faf0a90b281672f16c4f580455856786ed781d3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a347cade99b7bdbe676a020faf0a90b281672f16c4f580455856786ed781d3f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3701d4b9c229934646d070a25b4bf944ac544d227ff9ba89fb1885cecfb562de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3701d4b9c229934646d070a25b4bf944ac544d227ff9ba89fb1885cecfb562de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9e0d7207d43b4b2bb79583cb1bb2f31034392eb4193b9b3b2f547f474d335250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e0d7207d43b4b2bb79583cb1bb2f31034392eb4193b9b3b2f547f474d335250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:43Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:44 crc kubenswrapper[4778]: I0312 13:11:44.252993 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rz9vw" Mar 12 13:11:44 crc kubenswrapper[4778]: I0312 13:11:44.253104 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:11:44 crc kubenswrapper[4778]: E0312 13:11:44.253177 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rz9vw" podUID="0b59b25a-3acc-4d06-b91d-575f45463520" Mar 12 13:11:44 crc kubenswrapper[4778]: E0312 13:11:44.253385 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 13:11:44 crc kubenswrapper[4778]: I0312 13:11:44.253456 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 13:11:44 crc kubenswrapper[4778]: I0312 13:11:44.253483 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 13:11:44 crc kubenswrapper[4778]: E0312 13:11:44.253544 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 13:11:44 crc kubenswrapper[4778]: E0312 13:11:44.253634 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 13:11:46 crc kubenswrapper[4778]: I0312 13:11:46.253264 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:11:46 crc kubenswrapper[4778]: I0312 13:11:46.253369 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rz9vw" Mar 12 13:11:46 crc kubenswrapper[4778]: I0312 13:11:46.253436 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 13:11:46 crc kubenswrapper[4778]: I0312 13:11:46.253489 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 13:11:46 crc kubenswrapper[4778]: E0312 13:11:46.253894 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 13:11:46 crc kubenswrapper[4778]: E0312 13:11:46.254312 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rz9vw" podUID="0b59b25a-3acc-4d06-b91d-575f45463520" Mar 12 13:11:46 crc kubenswrapper[4778]: E0312 13:11:46.254431 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 13:11:46 crc kubenswrapper[4778]: E0312 13:11:46.254530 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 13:11:47 crc kubenswrapper[4778]: E0312 13:11:47.369378 4778 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 13:11:47 crc kubenswrapper[4778]: I0312 13:11:47.898990 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fhcz6_1e7037a8-a966-4df0-9f94-fe2dd3e2de6e/kube-multus/0.log" Mar 12 13:11:47 crc kubenswrapper[4778]: I0312 13:11:47.899050 4778 generic.go:334] "Generic (PLEG): container finished" podID="1e7037a8-a966-4df0-9f94-fe2dd3e2de6e" containerID="5da98f94c85e3a8cd05c447fb097a078968eea25419a2b22f8abe956ef1dbaac" exitCode=1 Mar 12 13:11:47 crc kubenswrapper[4778]: I0312 13:11:47.899083 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fhcz6" event={"ID":"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e","Type":"ContainerDied","Data":"5da98f94c85e3a8cd05c447fb097a078968eea25419a2b22f8abe956ef1dbaac"} Mar 12 13:11:47 crc kubenswrapper[4778]: I0312 13:11:47.899479 4778 scope.go:117] "RemoveContainer" containerID="5da98f94c85e3a8cd05c447fb097a078968eea25419a2b22f8abe956ef1dbaac" Mar 12 13:11:47 crc kubenswrapper[4778]: I0312 13:11:47.921984 4778 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4059dae21c8267dcec17364a3073a0f25addb6c308620992e9e609b5f5a32e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7ffc17b778f7bd099f0cc70b4e8bcfd77f9d45a9a47de9fedbe270a49f2826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:47Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:47 crc kubenswrapper[4778]: I0312 13:11:47.938341 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa219bcd71a6f1ae8a889a0409c2bbf61d1efac6a57ad8a22fefe6915e9d15be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-12T13:11:47Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:47 crc kubenswrapper[4778]: I0312 13:11:47.952004 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:47Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:47 crc kubenswrapper[4778]: I0312 13:11:47.966947 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb7a47e7099405d73886322b00b013bedee4fb573fa60c9b92d6be3311e65c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:47Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:47 crc kubenswrapper[4778]: I0312 13:11:47.977430 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rz9vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b59b25a-3acc-4d06-b91d-575f45463520\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rz9vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:47Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:47 crc 
kubenswrapper[4778]: I0312 13:11:47.990837 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d15dec8c-5c3e-4103-a5b1-6ee7ff5990ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f640289dea724d5668fc009d628345ea104b2bbc9bc3471e42c3ec5f9acada1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc7259359df220
c534d265305ee3ca44e7bcdce8da0d8b164132e02f7ed72e51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d60adb329e51ce7d877de68c1386f904ef0f717c82a5bfb69ab18438a4e536a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5019c5de667abecf425384b69c58060050b28003230e410f44934c9a7ad5484c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://14c7f2ade3aac502f0534414554216096b45d4f78e81f8ec213064a6205efdbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T13:10:43Z\\\",\\\"message\\\":\\\"file observer\\\\nW0312 13:10:42.840582 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 13:10:42.841010 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 13:10:42.843036 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-461564172/tls.crt::/tmp/serving-cert-461564172/tls.key\\\\\\\"\\\\nI0312 13:10:43.350873 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 13:10:43.364662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 13:10:43.364721 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 13:10:43.365498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 13:10:43.365555 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 13:10:43.376143 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 13:10:43.376224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 13:10:43.376255 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 13:10:43.376279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 13:10:43.376301 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 13:10:43.376324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 
13:10:43.376350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 13:10:43.376614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 13:10:43.379532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfb81ab3f0178dc8064bd278e9e5cc42b3b2fda7282bb869d2f385b423e57d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e64aa9b1a15198d88b5f38b8ad0abdeef89430869b6f25c73e2f45806c539964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64aa9b1a15198d88b5f38b8ad0abdeef89430869b6f25c73e2f45806c539964\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:47Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:48 crc kubenswrapper[4778]: I0312 13:11:48.009620 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7250fa81a99a607a87f5f2fe4a85fcc2afbf7bfc12845bec4b5cbbed7784c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1325a4d5784843f5ef2dc629d3410d154ea07de5cd70cfae54d7111fe3e1ea3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78da788cc1d96e866afcf18edff24c064972e022b1b4c3f5bec3175da5e989fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2818aadc56c24df41309aa63fddf44dac870f041f206d59db4d8b8f88f728483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd
47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d8f2c481e6f3f8845b5e195a7fde8ae6415d25fb1701c25d93be7af1f4ef8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f94192e4eee991a699c32f338a8d452d8fc0ce5dfa1197694a237697c4500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5950d10319c346220cc78cf45052b31ea05a32f6d5f2511a963110c4a17824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc5950d10319c346220cc78cf45052b31ea05a32f6d5f2511a963110c4a17824\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T13:11:22Z\\\",\\\"message\\\":\\\"ificate: x509: certificate has expired or is not yet valid: current time 
2026-03-12T13:11:22Z is after 2025-08-24T17:21:41Z\\\\nI0312 13:11:22.188287 6941 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nF0312 13:11:22.188260 6941 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:22Z is after 2025-08-24T17:21:41Z]\\\\nI0312 13:11:22.188294 6941 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sww7j\\\\nI0312 13:11:22.188297 6941 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-4dfhs\\\\nI031\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8bcc9_openshift-ovn-kubernetes(65cd795e-eb6e-4995-a4c1-9dea6f425ac5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bc4107e3fb5708a2acf2664ce876af4c682377a9b4ee393230b78d5b021552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce861cdc0bd23c8ce1
b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bcc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:48Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:48 crc kubenswrapper[4778]: I0312 13:11:48.021098 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qdxm2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7050ddd9-aa01-4af7-9046-208f85f50a86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9af31ab4c27bb06d5a44a1c279e04f1b6f243054e271214ef771db4f0dc65e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jspwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qdxm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:48Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:48 crc kubenswrapper[4778]: I0312 13:11:48.035961 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rsshp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a1f8eaa-ac07-4478-be5d-0742de6b43c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f732882ddde9d0d0c1d1ef218276d4e14df3a1b36e4e956912efef4873092b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deca991c71450137550fdb82a01b81aaa63e6be64a6d7a96438f6b3d83a8bb5f\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://deca991c71450137550fdb82a01b81aaa63e6be64a6d7a96438f6b3d83a8bb5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae3ee2ab1f6fdf579b28f6ddf2010f9ac048dec5c7668dc467152185d4e1028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ae3ee2ab1f6fdf579b28f6ddf2010f9ac048dec5c7668dc467152185d4e1028\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b22a0b8a6e5c59e8195280cbe1579af847c709f8b6245df5a16df5af602f11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b22a0b8a6e5c59e8195280cbe1579af847c709f8b6245df5a16df5af602f11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://253b2ec5086a2db48bb42ae6024bab9c
a832325f9d96cd6ff6944ded362161e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://253b2ec5086a2db48bb42ae6024bab9ca832325f9d96cd6ff6944ded362161e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cb8f9537926237c4932ef2a9fb701804e03e132f2f56dd9d0e928b7340b1eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cb8f9537926237c4932ef2a9fb701804e03e132f2f56dd9d0e928b7340b1eeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-12T13:11:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rsshp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:48Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:48 crc kubenswrapper[4778]: I0312 13:11:48.050431 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:48Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:48 crc kubenswrapper[4778]: I0312 13:11:48.066766 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fhcz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5da98f94c85e3a8cd05c447fb097a078968eea25419a2b22f8abe956ef1dbaac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5da98f94c85e3a8cd05c447fb097a078968eea25419a2b22f8abe956ef1dbaac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T13:11:46Z\\\",\\\"message\\\":\\\"2026-03-12T13:11:00+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_415a65bb-5a20-4f1d-953d-70a2be5bd972\\\\n2026-03-12T13:11:00+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_415a65bb-5a20-4f1d-953d-70a2be5bd972 to /host/opt/cni/bin/\\\\n2026-03-12T13:11:01Z [verbose] multus-daemon started\\\\n2026-03-12T13:11:01Z [verbose] Readiness Indicator file check\\\\n2026-03-12T13:11:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-762lp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fhcz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:48Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:48 crc kubenswrapper[4778]: I0312 13:11:48.080663 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sww7j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de004a2f-3061-4aae-aa57-389219c71023\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478fb92ee4748af677ac761928a4173b506a3e56cf622279e2b2a0e322d4aef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d377b0d5d0a854761257d7bc21a111aed96f8
5d302bf0c024e021f04cc555fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sww7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:48Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:48 crc kubenswrapper[4778]: I0312 13:11:48.095628 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dfhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfefcab6-a931-413e-8763-0f63f17911cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eecca419cc264b25f1883aac864cc545f0daf973e3b288bc8ea00a8b91e1f124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssbrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dfhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:48Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:48 crc kubenswrapper[4778]: I0312 13:11:48.108313 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cae155c-6ba6-44c1-9814-759fda7c3c86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7698145a8f9a3b12ca021d55f406bc6adf7e139c7e32156ced11a20de194608c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a
42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ddeb961084ae4041feb2ac05c9fdd2f5c11b4bdc5f5f33878c9ad9e83a2e1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ddeb961084ae4041feb2ac05c9fdd2f5c11b4bdc5f5f33878c9ad9e83a2e1a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:48Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:48 crc 
kubenswrapper[4778]: I0312 13:11:48.132728 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2141104-4933-46fd-9968-0d9498779462\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e77ceb524173a1cdbf6c93b730412dcd8b6aedcee06c40fb757cc8e738e380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://3b52b689d66d254a521c980330e792ecbcce1102f39f97d6149bf48ad24c5de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc82a592c28b74aef165a164cc4fe4e2e38b6fb48e59f499476a252197e3fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f046d558bf242476327e1ee39ea82ebe104caa081df71caa51a716490d8a6b21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e2266711bd32e96e742549772474d9fa43d8f368021e8a7aba3fd1c7b0b87b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a347cade99b7bdbe676a020faf0a90b281672f16c4f580455856786ed781d3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a347cade99b7bdbe676a020faf0a90b281672f16c4f580455856786ed781d3f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3701d4b9c229934646d070a25b4bf944ac544d227ff9ba89fb1885cecfb562de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3701d4b9c229934646d070a25b4bf944ac544d227ff9ba89fb1885cecfb562de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9e0d7207d43b4b2bb79583cb1bb2f31034392eb4193b9b3b2f547f474d335250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e0d7207d43b4b2bb79583cb1bb2f31034392eb4193b9b3b2f547f474d335250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:48Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:48 crc kubenswrapper[4778]: I0312 13:11:48.149083 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:48Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:48 crc kubenswrapper[4778]: I0312 13:11:48.162721 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24438fc6-dab0-4a9e-8b97-2532da76d9cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19a882eebff25a2613c68847fcf737648da24f5c8d7648edebb2cb00b6b8950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14daba92184fca91c6930d5b3e821f88408e0fd4
0a7793f2d70f82df7c9444ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2qx88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:48Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:48 crc kubenswrapper[4778]: I0312 13:11:48.256569 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:11:48 crc kubenswrapper[4778]: E0312 13:11:48.256742 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 13:11:48 crc kubenswrapper[4778]: I0312 13:11:48.256996 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rz9vw" Mar 12 13:11:48 crc kubenswrapper[4778]: E0312 13:11:48.257117 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rz9vw" podUID="0b59b25a-3acc-4d06-b91d-575f45463520" Mar 12 13:11:48 crc kubenswrapper[4778]: I0312 13:11:48.257945 4778 scope.go:117] "RemoveContainer" containerID="cc5950d10319c346220cc78cf45052b31ea05a32f6d5f2511a963110c4a17824" Mar 12 13:11:48 crc kubenswrapper[4778]: I0312 13:11:48.258315 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 13:11:48 crc kubenswrapper[4778]: E0312 13:11:48.258380 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 13:11:48 crc kubenswrapper[4778]: I0312 13:11:48.258514 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 13:11:48 crc kubenswrapper[4778]: E0312 13:11:48.258588 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 13:11:48 crc kubenswrapper[4778]: I0312 13:11:48.904740 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8bcc9_65cd795e-eb6e-4995-a4c1-9dea6f425ac5/ovnkube-controller/2.log" Mar 12 13:11:48 crc kubenswrapper[4778]: I0312 13:11:48.907993 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" event={"ID":"65cd795e-eb6e-4995-a4c1-9dea6f425ac5","Type":"ContainerStarted","Data":"5d6da6dba0e8cadf9b1073620c4856adeb6b776ae3757d420c016d25b4f98001"} Mar 12 13:11:48 crc kubenswrapper[4778]: I0312 13:11:48.908949 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" Mar 12 13:11:48 crc kubenswrapper[4778]: I0312 13:11:48.912493 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fhcz6_1e7037a8-a966-4df0-9f94-fe2dd3e2de6e/kube-multus/0.log" Mar 12 13:11:48 crc kubenswrapper[4778]: I0312 13:11:48.912529 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fhcz6" event={"ID":"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e","Type":"ContainerStarted","Data":"44a3c76b2249ac9c24848e6b3a9fc08aef2d2bca3d170ce28b0f9384e3a8271e"} Mar 12 13:11:48 crc kubenswrapper[4778]: I0312 13:11:48.924399 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d15dec8c-5c3e-4103-a5b1-6ee7ff5990ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f640289dea724d5668fc009d628345ea104b2bbc9bc3471e42c3ec5f9acada1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc7259359df220c534d265305ee3ca44e7bcdce8da0d8b164132e02f7ed72e51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2775
3fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d60adb329e51ce7d877de68c1386f904ef0f717c82a5bfb69ab18438a4e536a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5019c5de667abecf425384b69c58060050b28003230e410f44934c9a7ad5484c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c7f2ade3aac502f0534414554216096b45d4f78e81f8ec213064a6205efdbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":
\\\"2026-03-12T13:10:43Z\\\",\\\"message\\\":\\\"file observer\\\\nW0312 13:10:42.840582 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 13:10:42.841010 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 13:10:42.843036 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-461564172/tls.crt::/tmp/serving-cert-461564172/tls.key\\\\\\\"\\\\nI0312 13:10:43.350873 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 13:10:43.364662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 13:10:43.364721 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 13:10:43.365498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 13:10:43.365555 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 13:10:43.376143 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 13:10:43.376224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 13:10:43.376255 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 13:10:43.376279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 13:10:43.376301 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 13:10:43.376324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 13:10:43.376350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 13:10:43.376614 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 13:10:43.379532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfb81ab3f0178dc8064bd278e9e5cc42b3b2fda7282bb869d2f385b423e57d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e64aa9b1a15198d88b5f38b8ad0abdeef89430869b6f25c73e2f45806c539964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e
64aa9b1a15198d88b5f38b8ad0abdeef89430869b6f25c73e2f45806c539964\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:48Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:48 crc kubenswrapper[4778]: I0312 13:11:48.943123 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7250fa81a99a607a87f5f2fe4a85fcc2afbf7bfc12845bec4b5cbbed7784c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1325a4d5784843f5ef2dc629d3410d154ea07de5cd70cfae54d7111fe3e1ea3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78da788cc1d96e866afcf18edff24c064972e022b1b4c3f5bec3175da5e989fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2818aadc56c24df41309aa63fddf44dac870f041f206d59db4d8b8f88f728483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d8f2c481e6f3f8845b5e195a7fde8ae6415d25fb1701c25d93be7af1f4ef8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f94192e4eee991a699c32f338a8d452d8fc0ce5dfa1197694a237697c4500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d6da6dba0e8cadf9b1073620c4856adeb6b776ae3757d420c016d25b4f98001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc5950d10319c346220cc78cf45052b31ea05a32f6d5f2511a963110c4a17824\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T13:11:22Z\\\",\\\"message\\\":\\\"ificate: x509: certificate has expired or is not yet valid: current time 
2026-03-12T13:11:22Z is after 2025-08-24T17:21:41Z\\\\nI0312 13:11:22.188287 6941 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nF0312 13:11:22.188260 6941 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:22Z is after 2025-08-24T17:21:41Z]\\\\nI0312 13:11:22.188294 6941 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sww7j\\\\nI0312 13:11:22.188297 6941 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-image-registry/node-ca-4dfhs\\\\nI031\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath
\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bc4107e3fb5708a2acf2664ce876af4c682377a9b4ee393230b78d5b021552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bcc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:48Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:48 crc kubenswrapper[4778]: I0312 13:11:48.954120 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qdxm2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7050ddd9-aa01-4af7-9046-208f85f50a86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9af31ab4c27bb06d5a44a1c279e04f1b6f243054e271214ef771db4f0dc65e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jspwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qdxm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:48Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:48 crc kubenswrapper[4778]: I0312 13:11:48.971176 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rsshp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a1f8eaa-ac07-4478-be5d-0742de6b43c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f732882ddde9d0d0c1d1ef218276d4e14df3a1b36e4e956912efef4873092b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deca991c71450137550fdb82a01b81aaa63e6be64a6d7a96438f6b3d83a8bb5f\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://deca991c71450137550fdb82a01b81aaa63e6be64a6d7a96438f6b3d83a8bb5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae3ee2ab1f6fdf579b28f6ddf2010f9ac048dec5c7668dc467152185d4e1028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ae3ee2ab1f6fdf579b28f6ddf2010f9ac048dec5c7668dc467152185d4e1028\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b22a0b8a6e5c59e8195280cbe1579af847c709f8b6245df5a16df5af602f11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b22a0b8a6e5c59e8195280cbe1579af847c709f8b6245df5a16df5af602f11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://253b2ec5086a2db48bb42ae6024bab9c
a832325f9d96cd6ff6944ded362161e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://253b2ec5086a2db48bb42ae6024bab9ca832325f9d96cd6ff6944ded362161e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cb8f9537926237c4932ef2a9fb701804e03e132f2f56dd9d0e928b7340b1eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cb8f9537926237c4932ef2a9fb701804e03e132f2f56dd9d0e928b7340b1eeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-12T13:11:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rsshp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:48Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:48 crc kubenswrapper[4778]: I0312 13:11:48.985957 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:48Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:49 crc kubenswrapper[4778]: I0312 13:11:49.001658 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fhcz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5da98f94c85e3a8cd05c447fb097a078968eea25419a2b22f8abe956ef1dbaac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5da98f94c85e3a8cd05c447fb097a078968eea25419a2b22f8abe956ef1dbaac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T13:11:46Z\\\",\\\"message\\\":\\\"2026-03-12T13:11:00+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_415a65bb-5a20-4f1d-953d-70a2be5bd972\\\\n2026-03-12T13:11:00+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_415a65bb-5a20-4f1d-953d-70a2be5bd972 to /host/opt/cni/bin/\\\\n2026-03-12T13:11:01Z [verbose] multus-daemon started\\\\n2026-03-12T13:11:01Z [verbose] Readiness Indicator file check\\\\n2026-03-12T13:11:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-762lp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fhcz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:48Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:49 crc kubenswrapper[4778]: I0312 13:11:49.014841 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sww7j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de004a2f-3061-4aae-aa57-389219c71023\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478fb92ee4748af677ac761928a4173b506a3e56cf622279e2b2a0e322d4aef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d377b0d5d0a854761257d7bc21a111aed96f8
5d302bf0c024e021f04cc555fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sww7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:49Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:49 crc kubenswrapper[4778]: I0312 13:11:49.030084 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dfhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfefcab6-a931-413e-8763-0f63f17911cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eecca419cc264b25f1883aac864cc545f0daf973e3b288bc8ea00a8b91e1f124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssbrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dfhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:49Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:49 crc kubenswrapper[4778]: I0312 13:11:49.044408 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cae155c-6ba6-44c1-9814-759fda7c3c86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7698145a8f9a3b12ca021d55f406bc6adf7e139c7e32156ced11a20de194608c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a
42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ddeb961084ae4041feb2ac05c9fdd2f5c11b4bdc5f5f33878c9ad9e83a2e1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ddeb961084ae4041feb2ac05c9fdd2f5c11b4bdc5f5f33878c9ad9e83a2e1a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:49Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:49 crc 
kubenswrapper[4778]: I0312 13:11:49.068762 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2141104-4933-46fd-9968-0d9498779462\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e77ceb524173a1cdbf6c93b730412dcd8b6aedcee06c40fb757cc8e738e380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://3b52b689d66d254a521c980330e792ecbcce1102f39f97d6149bf48ad24c5de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc82a592c28b74aef165a164cc4fe4e2e38b6fb48e59f499476a252197e3fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f046d558bf242476327e1ee39ea82ebe104caa081df71caa51a716490d8a6b21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e2266711bd32e96e742549772474d9fa43d8f368021e8a7aba3fd1c7b0b87b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a347cade99b7bdbe676a020faf0a90b281672f16c4f580455856786ed781d3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a347cade99b7bdbe676a020faf0a90b281672f16c4f580455856786ed781d3f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3701d4b9c229934646d070a25b4bf944ac544d227ff9ba89fb1885cecfb562de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3701d4b9c229934646d070a25b4bf944ac544d227ff9ba89fb1885cecfb562de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9e0d7207d43b4b2bb79583cb1bb2f31034392eb4193b9b3b2f547f474d335250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e0d7207d43b4b2bb79583cb1bb2f31034392eb4193b9b3b2f547f474d335250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:49Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:49 crc kubenswrapper[4778]: I0312 13:11:49.086644 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:49Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:49 crc kubenswrapper[4778]: I0312 13:11:49.098246 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24438fc6-dab0-4a9e-8b97-2532da76d9cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19a882eebff25a2613c68847fcf737648da24f5c8d7648edebb2cb00b6b8950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14daba92184fca91c6930d5b3e821f88408e0fd4
0a7793f2d70f82df7c9444ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2qx88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:49Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:49 crc kubenswrapper[4778]: I0312 13:11:49.114429 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4059dae21c8267dcec17364a3073a0f25addb6c308620992e9e609b5f5a32e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7ffc17b778f7bd099f0cc70b4e8bcfd77f9d45a9a47de9fedbe270a49f2826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:49Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:49 crc kubenswrapper[4778]: I0312 13:11:49.129047 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa219bcd71a6f1ae8a889a0409c2bbf61d1efac6a57ad8a22fefe6915e9d15be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-12T13:11:49Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:49 crc kubenswrapper[4778]: I0312 13:11:49.144840 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:49Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:49 crc kubenswrapper[4778]: I0312 13:11:49.158936 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb7a47e7099405d73886322b00b013bedee4fb573fa60c9b92d6be3311e65c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:49Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:49 crc kubenswrapper[4778]: I0312 13:11:49.171816 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rz9vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b59b25a-3acc-4d06-b91d-575f45463520\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rz9vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:49Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:49 crc 
kubenswrapper[4778]: I0312 13:11:49.203133 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2141104-4933-46fd-9968-0d9498779462\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e77ceb524173a1cdbf6c93b730412dcd8b6aedcee06c40fb757cc8e738e380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://3b52b689d66d254a521c980330e792ecbcce1102f39f97d6149bf48ad24c5de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc82a592c28b74aef165a164cc4fe4e2e38b6fb48e59f499476a252197e3fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f046d558bf242476327e1ee39ea82ebe104caa081df71caa51a716490d8a6b21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e2266711bd32e96e742549772474d9fa43d8f368021e8a7aba3fd1c7b0b87b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a347cade99b7bdbe676a020faf0a90b281672f16c4f580455856786ed781d3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a347cade99b7bdbe676a020faf0a90b281672f16c4f580455856786ed781d3f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3701d4b9c229934646d070a25b4bf944ac544d227ff9ba89fb1885cecfb562de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3701d4b9c229934646d070a25b4bf944ac544d227ff9ba89fb1885cecfb562de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9e0d7207d43b4b2bb79583cb1bb2f31034392eb4193b9b3b2f547f474d335250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e0d7207d43b4b2bb79583cb1bb2f31034392eb4193b9b3b2f547f474d335250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:49Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:49 crc kubenswrapper[4778]: I0312 13:11:49.224727 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:49Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:49 crc kubenswrapper[4778]: I0312 13:11:49.240864 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24438fc6-dab0-4a9e-8b97-2532da76d9cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19a882eebff25a2613c68847fcf737648da24f5c8d7648edebb2cb00b6b8950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14daba92184fca91c6930d5b3e821f88408e0fd4
0a7793f2d70f82df7c9444ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2qx88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:49Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:49 crc kubenswrapper[4778]: I0312 13:11:49.255216 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cae155c-6ba6-44c1-9814-759fda7c3c86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7698145a8f9a3b12ca021d55f406bc6adf7e139c7e32156ced11a20de194608c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ddeb961084ae4041feb2ac05c9fdd2f5c11b4bdc5f5f33878c9ad9e83a2e1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ddeb961084ae4041feb2ac05c9fdd2f5c11b4bdc5f5f33878c9ad9e83a2e1a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:49Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:49 crc kubenswrapper[4778]: I0312 13:11:49.271884 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:49Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:49 crc kubenswrapper[4778]: I0312 13:11:49.286918 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb7a47e7099405d73886322b00b013bedee4fb573fa60c9b92d6be3311e65c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:49Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:49 crc kubenswrapper[4778]: I0312 13:11:49.301254 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rz9vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b59b25a-3acc-4d06-b91d-575f45463520\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rz9vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:49Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:49 crc 
kubenswrapper[4778]: I0312 13:11:49.313942 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4059dae21c8267dcec17364a3073a0f25addb6c308620992e9e609b5f5a32e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7ffc17b778f7bd099f0cc70b4e8bcfd77f9d45a9a47de9fedbe270a49f2826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:49Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:49 crc kubenswrapper[4778]: I0312 13:11:49.323647 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa219bcd71a6f1ae8a889a0409c2bbf61d1efac6a57ad8a22fefe6915e9d15be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-12T13:11:49Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:49 crc kubenswrapper[4778]: I0312 13:11:49.333372 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qdxm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7050ddd9-aa01-4af7-9046-208f85f50a86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9af31ab4c27bb06d5a44a1c279e04f1b6f243054e271214ef771db4f0dc65e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-jspwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qdxm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:49Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:49 crc kubenswrapper[4778]: I0312 13:11:49.345761 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rsshp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a1f8eaa-ac07-4478-be5d-0742de6b43c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f732882ddde9d0d0c1d1ef218276d4e14df3a1b3
6e4e956912efef4873092b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deca991c71450137550fdb82a01b81aaa63e6be64a6d7a96438f6b3d83a8bb5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://deca991c71450137550fdb82a01b81aaa63e6be64a6d7a96438f6b3d83a8bb5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae3ee2ab1f6fdf579b28f6ddf2010f9ac048dec5c7668dc467152185d4e1028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e49
6fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ae3ee2ab1f6fdf579b28f6ddf2010f9ac048dec5c7668dc467152185d4e1028\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b22a0b8a6e5c59e8195280cbe1579af847c709f8b6245df5a16df5af602f11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b22a0b8a6e5c59e8195280cbe1579af847c709f8b6245df5a16df5af602f11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://253b2ec5086a2db48bb42ae6024bab9ca832325f9d96cd6ff6944ded362161e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://253b2ec5086a2db48bb42ae6024bab9ca832325f9d96cd6ff6944ded362161e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cb8f9537926237c4932ef2a9fb701804e03e132f2f56dd9d0e928b7340b1eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cb8f9537926237c4932ef2a9fb701804e03e132f2f56dd9d0e928b7340b1eeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rsshp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:49Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:49 crc kubenswrapper[4778]: I0312 13:11:49.364401 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d15dec8c-5c3e-4103-a5b1-6ee7ff5990ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f640289dea724d5668fc009d628345ea104b2bbc9bc3471e42c3ec5f9acada1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc7259359df220c534d265305ee3ca44e7bcdce8da0d8b164132e02f7ed72e51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d60adb329e51ce7d877de68c1386f904ef0f717c82a5bfb69ab18438a4e536a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5019c5de667abecf425384b69c58060050b28003230e410f44934c9a7ad5484c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c7f2ade3aac502f0534414554216096b45d4f78e81f8ec213064a6205efdbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T13:10:43Z\\\"
,\\\"message\\\":\\\"file observer\\\\nW0312 13:10:42.840582 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 13:10:42.841010 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 13:10:42.843036 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-461564172/tls.crt::/tmp/serving-cert-461564172/tls.key\\\\\\\"\\\\nI0312 13:10:43.350873 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 13:10:43.364662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 13:10:43.364721 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 13:10:43.365498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 13:10:43.365555 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 13:10:43.376143 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 13:10:43.376224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 13:10:43.376255 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 13:10:43.376279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 13:10:43.376301 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 13:10:43.376324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 13:10:43.376350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 13:10:43.376614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0312 13:10:43.379532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfb81ab3f0178dc8064bd278e9e5cc42b3b2fda7282bb869d2f385b423e57d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e64aa9b1a15198d88b5f38b8ad0abdeef89430869b6f25c73e2f45806c539964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64aa9b1a15198d88b5f38b8ad0abdeef
89430869b6f25c73e2f45806c539964\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:49Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:49 crc kubenswrapper[4778]: I0312 13:11:49.388692 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7250fa81a99a607a87f5f2fe4a85fcc2afbf7bfc12845bec4b5cbbed7784c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1325a4d5784843f5ef2dc629d3410d154ea07de5cd70cfae54d7111fe3e1ea3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78da788cc1d96e866afcf18edff24c064972e022b1b4c3f5bec3175da5e989fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2818aadc56c24df41309aa63fddf44dac870f041f206d59db4d8b8f88f728483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d8f2c481e6f3f8845b5e195a7fde8ae6415d25fb1701c25d93be7af1f4ef8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f94192e4eee991a699c32f338a8d452d8fc0ce5dfa1197694a237697c4500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d6da6dba0e8cadf9b1073620c4856adeb6b776ae3757d420c016d25b4f98001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc5950d10319c346220cc78cf45052b31ea05a32f6d5f2511a963110c4a17824\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T13:11:22Z\\\",\\\"message\\\":\\\"ificate: x509: certificate has expired or is not yet valid: current time 
2026-03-12T13:11:22Z is after 2025-08-24T17:21:41Z\\\\nI0312 13:11:22.188287 6941 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nF0312 13:11:22.188260 6941 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:22Z is after 2025-08-24T17:21:41Z]\\\\nI0312 13:11:22.188294 6941 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sww7j\\\\nI0312 13:11:22.188297 6941 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-image-registry/node-ca-4dfhs\\\\nI031\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath
\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bc4107e3fb5708a2acf2664ce876af4c682377a9b4ee393230b78d5b021552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bcc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:49Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:49 crc kubenswrapper[4778]: I0312 13:11:49.402409 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fhcz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44a3c76b2249ac9c24848e6b3a9fc08aef2d2bca3d170ce28b0f9384e3a8271e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5da98f94c85e3a8cd05c447fb097a078968eea25419a2b22f8abe956ef1dbaac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T13:11:46Z\\\",\\\"message\\\":\\\"2026-03-12T13:11:00+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_415a65bb-5a20-4f1d-953d-70a2be5bd972\\\\n2026-03-12T13:11:00+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_415a65bb-5a20-4f1d-953d-70a2be5bd972 to /host/opt/cni/bin/\\\\n2026-03-12T13:11:01Z [verbose] multus-daemon started\\\\n2026-03-12T13:11:01Z [verbose] 
Readiness Indicator file check\\\\n2026-03-12T13:11:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-762lp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fhcz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:49Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:49 crc kubenswrapper[4778]: I0312 13:11:49.416221 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sww7j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de004a2f-3061-4aae-aa57-389219c71023\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478fb92ee4748af677ac761928a4173b506a3e56cf622279e2b2a0e322d4aef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d377b0d5d0a854761257d7bc21a111aed96f8
5d302bf0c024e021f04cc555fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sww7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:49Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:49 crc kubenswrapper[4778]: I0312 13:11:49.430854 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dfhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfefcab6-a931-413e-8763-0f63f17911cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eecca419cc264b25f1883aac864cc545f0daf973e3b288bc8ea00a8b91e1f124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssbrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dfhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:49Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:49 crc kubenswrapper[4778]: I0312 13:11:49.445801 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:49Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:49 crc kubenswrapper[4778]: I0312 13:11:49.918713 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8bcc9_65cd795e-eb6e-4995-a4c1-9dea6f425ac5/ovnkube-controller/3.log" Mar 12 13:11:49 crc kubenswrapper[4778]: I0312 13:11:49.919359 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8bcc9_65cd795e-eb6e-4995-a4c1-9dea6f425ac5/ovnkube-controller/2.log" Mar 12 13:11:49 crc kubenswrapper[4778]: I0312 13:11:49.923041 4778 generic.go:334] "Generic (PLEG): container finished" podID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" containerID="5d6da6dba0e8cadf9b1073620c4856adeb6b776ae3757d420c016d25b4f98001" exitCode=1 Mar 12 13:11:49 crc kubenswrapper[4778]: I0312 13:11:49.923125 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" event={"ID":"65cd795e-eb6e-4995-a4c1-9dea6f425ac5","Type":"ContainerDied","Data":"5d6da6dba0e8cadf9b1073620c4856adeb6b776ae3757d420c016d25b4f98001"} Mar 12 13:11:49 crc 
kubenswrapper[4778]: I0312 13:11:49.923231 4778 scope.go:117] "RemoveContainer" containerID="cc5950d10319c346220cc78cf45052b31ea05a32f6d5f2511a963110c4a17824" Mar 12 13:11:49 crc kubenswrapper[4778]: I0312 13:11:49.924061 4778 scope.go:117] "RemoveContainer" containerID="5d6da6dba0e8cadf9b1073620c4856adeb6b776ae3757d420c016d25b4f98001" Mar 12 13:11:49 crc kubenswrapper[4778]: E0312 13:11:49.924341 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8bcc9_openshift-ovn-kubernetes(65cd795e-eb6e-4995-a4c1-9dea6f425ac5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" podUID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" Mar 12 13:11:49 crc kubenswrapper[4778]: I0312 13:11:49.941371 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:49Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:49 crc kubenswrapper[4778]: I0312 13:11:49.953820 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fhcz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44a3c76b2249ac9c24848e6b3a9fc08aef2d2bca3d170ce28b0f9384e3a8271e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5da98f94c85e3a8cd05c447fb097a078968eea25419a2b22f8abe956ef1dbaac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T13:11:46Z\\\",\\\"message\\\":\\\"2026-03-12T13:11:00+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_415a65bb-5a20-4f1d-953d-70a2be5bd972\\\\n2026-03-12T13:11:00+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_415a65bb-5a20-4f1d-953d-70a2be5bd972 to /host/opt/cni/bin/\\\\n2026-03-12T13:11:01Z [verbose] multus-daemon started\\\\n2026-03-12T13:11:01Z [verbose] 
Readiness Indicator file check\\\\n2026-03-12T13:11:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-762lp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fhcz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:49Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:49 crc kubenswrapper[4778]: I0312 13:11:49.964574 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sww7j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de004a2f-3061-4aae-aa57-389219c71023\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478fb92ee4748af677ac761928a4173b506a3e56cf622279e2b2a0e322d4aef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d377b0d5d0a854761257d7bc21a111aed96f8
5d302bf0c024e021f04cc555fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sww7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:49Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:49 crc kubenswrapper[4778]: I0312 13:11:49.976778 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dfhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfefcab6-a931-413e-8763-0f63f17911cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eecca419cc264b25f1883aac864cc545f0daf973e3b288bc8ea00a8b91e1f124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssbrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dfhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:49Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:49 crc kubenswrapper[4778]: I0312 13:11:49.986251 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cae155c-6ba6-44c1-9814-759fda7c3c86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7698145a8f9a3b12ca021d55f406bc6adf7e139c7e32156ced11a20de194608c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a
42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ddeb961084ae4041feb2ac05c9fdd2f5c11b4bdc5f5f33878c9ad9e83a2e1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ddeb961084ae4041feb2ac05c9fdd2f5c11b4bdc5f5f33878c9ad9e83a2e1a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:49Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:50 crc 
kubenswrapper[4778]: I0312 13:11:50.006973 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2141104-4933-46fd-9968-0d9498779462\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e77ceb524173a1cdbf6c93b730412dcd8b6aedcee06c40fb757cc8e738e380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://3b52b689d66d254a521c980330e792ecbcce1102f39f97d6149bf48ad24c5de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc82a592c28b74aef165a164cc4fe4e2e38b6fb48e59f499476a252197e3fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f046d558bf242476327e1ee39ea82ebe104caa081df71caa51a716490d8a6b21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e2266711bd32e96e742549772474d9fa43d8f368021e8a7aba3fd1c7b0b87b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a347cade99b7bdbe676a020faf0a90b281672f16c4f580455856786ed781d3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a347cade99b7bdbe676a020faf0a90b281672f16c4f580455856786ed781d3f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3701d4b9c229934646d070a25b4bf944ac544d227ff9ba89fb1885cecfb562de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3701d4b9c229934646d070a25b4bf944ac544d227ff9ba89fb1885cecfb562de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9e0d7207d43b4b2bb79583cb1bb2f31034392eb4193b9b3b2f547f474d335250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e0d7207d43b4b2bb79583cb1bb2f31034392eb4193b9b3b2f547f474d335250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:50Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:50 crc kubenswrapper[4778]: I0312 13:11:50.018429 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:50Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:50 crc kubenswrapper[4778]: I0312 13:11:50.032383 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24438fc6-dab0-4a9e-8b97-2532da76d9cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19a882eebff25a2613c68847fcf737648da24f5c8d7648edebb2cb00b6b8950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14daba92184fca91c6930d5b3e821f88408e0fd4
0a7793f2d70f82df7c9444ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2qx88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:50Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:50 crc kubenswrapper[4778]: I0312 13:11:50.048687 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4059dae21c8267dcec17364a3073a0f25addb6c308620992e9e609b5f5a32e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7ffc17b778f7bd099f0cc70b4e8bcfd77f9d45a9a47de9fedbe270a49f2826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:50Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:50 crc kubenswrapper[4778]: I0312 13:11:50.059858 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa219bcd71a6f1ae8a889a0409c2bbf61d1efac6a57ad8a22fefe6915e9d15be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-12T13:11:50Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:50 crc kubenswrapper[4778]: I0312 13:11:50.070639 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:50Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:50 crc kubenswrapper[4778]: I0312 13:11:50.083428 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb7a47e7099405d73886322b00b013bedee4fb573fa60c9b92d6be3311e65c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:50Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:50 crc kubenswrapper[4778]: I0312 13:11:50.095520 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rz9vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b59b25a-3acc-4d06-b91d-575f45463520\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rz9vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:50Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:50 crc 
kubenswrapper[4778]: I0312 13:11:50.109724 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d15dec8c-5c3e-4103-a5b1-6ee7ff5990ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f640289dea724d5668fc009d628345ea104b2bbc9bc3471e42c3ec5f9acada1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc7259359df220
c534d265305ee3ca44e7bcdce8da0d8b164132e02f7ed72e51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d60adb329e51ce7d877de68c1386f904ef0f717c82a5bfb69ab18438a4e536a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5019c5de667abecf425384b69c58060050b28003230e410f44934c9a7ad5484c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://14c7f2ade3aac502f0534414554216096b45d4f78e81f8ec213064a6205efdbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T13:10:43Z\\\",\\\"message\\\":\\\"file observer\\\\nW0312 13:10:42.840582 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 13:10:42.841010 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 13:10:42.843036 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-461564172/tls.crt::/tmp/serving-cert-461564172/tls.key\\\\\\\"\\\\nI0312 13:10:43.350873 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 13:10:43.364662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 13:10:43.364721 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 13:10:43.365498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 13:10:43.365555 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 13:10:43.376143 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 13:10:43.376224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 13:10:43.376255 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 13:10:43.376279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 13:10:43.376301 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 13:10:43.376324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 
13:10:43.376350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 13:10:43.376614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 13:10:43.379532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfb81ab3f0178dc8064bd278e9e5cc42b3b2fda7282bb869d2f385b423e57d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e64aa9b1a15198d88b5f38b8ad0abdeef89430869b6f25c73e2f45806c539964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64aa9b1a15198d88b5f38b8ad0abdeef89430869b6f25c73e2f45806c539964\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:50Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:50 crc kubenswrapper[4778]: I0312 13:11:50.132310 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7250fa81a99a607a87f5f2fe4a85fcc2afbf7bfc12845bec4b5cbbed7784c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1325a4d5784843f5ef2dc629d3410d154ea07de5cd70cfae54d7111fe3e1ea3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78da788cc1d96e866afcf18edff24c064972e022b1b4c3f5bec3175da5e989fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2818aadc56c24df41309aa63fddf44dac870f041f206d59db4d8b8f88f728483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd
47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d8f2c481e6f3f8845b5e195a7fde8ae6415d25fb1701c25d93be7af1f4ef8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f94192e4eee991a699c32f338a8d452d8fc0ce5dfa1197694a237697c4500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d6da6dba0e8cadf9b1073620c4856adeb6b776ae3757d420c016d25b4f98001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc5950d10319c346220cc78cf45052b31ea05a32f6d5f2511a963110c4a17824\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T13:11:22Z\\\",\\\"message\\\":\\\"ificate: x509: certificate has expired or is not yet valid: current time 
2026-03-12T13:11:22Z is after 2025-08-24T17:21:41Z\\\\nI0312 13:11:22.188287 6941 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nF0312 13:11:22.188260 6941 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:22Z is after 2025-08-24T17:21:41Z]\\\\nI0312 13:11:22.188294 6941 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sww7j\\\\nI0312 13:11:22.188297 6941 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-4dfhs\\\\nI031\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d6da6dba0e8cadf9b1073620c4856adeb6b776ae3757d420c016d25b4f98001\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T13:11:49Z\\\",\\\"message\\\":\\\" network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} 
was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:49Z is after 2025-08-24T17:21:41Z]\\\\nI0312 13:11:49.218951 7264 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-config-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"f32857b5-f652-4313-a0d7-455c3156dd99\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-config-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, 
Routers:[]s\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bc4107e3fb5708a2acf2664ce876af4c682377a9b4ee393230b78d5b021552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7
f58c2b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bcc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:50Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:50 crc kubenswrapper[4778]: I0312 13:11:50.146550 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qdxm2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7050ddd9-aa01-4af7-9046-208f85f50a86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9af31ab4c27bb06d5a44a1c279e04f1b6f243054e271214ef771db4f0dc65e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jspwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qdxm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:50Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:50 crc kubenswrapper[4778]: I0312 13:11:50.161767 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rsshp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a1f8eaa-ac07-4478-be5d-0742de6b43c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f732882ddde9d0d0c1d1ef218276d4e14df3a1b36e4e956912efef4873092b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deca991c71450137550fdb82a01b81aaa63e6be64a6d7a96438f6b3d83a8bb5f\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://deca991c71450137550fdb82a01b81aaa63e6be64a6d7a96438f6b3d83a8bb5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae3ee2ab1f6fdf579b28f6ddf2010f9ac048dec5c7668dc467152185d4e1028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ae3ee2ab1f6fdf579b28f6ddf2010f9ac048dec5c7668dc467152185d4e1028\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b22a0b8a6e5c59e8195280cbe1579af847c709f8b6245df5a16df5af602f11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b22a0b8a6e5c59e8195280cbe1579af847c709f8b6245df5a16df5af602f11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://253b2ec5086a2db48bb42ae6024bab9c
a832325f9d96cd6ff6944ded362161e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://253b2ec5086a2db48bb42ae6024bab9ca832325f9d96cd6ff6944ded362161e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cb8f9537926237c4932ef2a9fb701804e03e132f2f56dd9d0e928b7340b1eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cb8f9537926237c4932ef2a9fb701804e03e132f2f56dd9d0e928b7340b1eeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-12T13:11:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rsshp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:50Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:50 crc kubenswrapper[4778]: I0312 13:11:50.253733 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 13:11:50 crc kubenswrapper[4778]: I0312 13:11:50.253804 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rz9vw" Mar 12 13:11:50 crc kubenswrapper[4778]: I0312 13:11:50.253819 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:11:50 crc kubenswrapper[4778]: I0312 13:11:50.253805 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 13:11:50 crc kubenswrapper[4778]: E0312 13:11:50.253978 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 13:11:50 crc kubenswrapper[4778]: E0312 13:11:50.254032 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rz9vw" podUID="0b59b25a-3acc-4d06-b91d-575f45463520" Mar 12 13:11:50 crc kubenswrapper[4778]: E0312 13:11:50.254092 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 13:11:50 crc kubenswrapper[4778]: E0312 13:11:50.254146 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 13:11:50 crc kubenswrapper[4778]: I0312 13:11:50.929593 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8bcc9_65cd795e-eb6e-4995-a4c1-9dea6f425ac5/ovnkube-controller/3.log" Mar 12 13:11:50 crc kubenswrapper[4778]: I0312 13:11:50.934550 4778 scope.go:117] "RemoveContainer" containerID="5d6da6dba0e8cadf9b1073620c4856adeb6b776ae3757d420c016d25b4f98001" Mar 12 13:11:50 crc kubenswrapper[4778]: E0312 13:11:50.934806 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8bcc9_openshift-ovn-kubernetes(65cd795e-eb6e-4995-a4c1-9dea6f425ac5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" podUID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" Mar 12 13:11:50 crc kubenswrapper[4778]: I0312 13:11:50.949984 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qdxm2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7050ddd9-aa01-4af7-9046-208f85f50a86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9af31ab4c27bb06d5a44a1c279e04f1b6f243054e271214ef771db4f0dc65e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jspwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qdxm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:50Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:50 crc kubenswrapper[4778]: I0312 13:11:50.966948 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rsshp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a1f8eaa-ac07-4478-be5d-0742de6b43c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f732882ddde9d0d0c1d1ef218276d4e14df3a1b36e4e956912efef4873092b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deca991c71450137550fdb82a01b81aaa63e6be64a6d7a96438f6b3d83a8bb5f\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://deca991c71450137550fdb82a01b81aaa63e6be64a6d7a96438f6b3d83a8bb5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae3ee2ab1f6fdf579b28f6ddf2010f9ac048dec5c7668dc467152185d4e1028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ae3ee2ab1f6fdf579b28f6ddf2010f9ac048dec5c7668dc467152185d4e1028\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b22a0b8a6e5c59e8195280cbe1579af847c709f8b6245df5a16df5af602f11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b22a0b8a6e5c59e8195280cbe1579af847c709f8b6245df5a16df5af602f11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://253b2ec5086a2db48bb42ae6024bab9c
a832325f9d96cd6ff6944ded362161e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://253b2ec5086a2db48bb42ae6024bab9ca832325f9d96cd6ff6944ded362161e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cb8f9537926237c4932ef2a9fb701804e03e132f2f56dd9d0e928b7340b1eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cb8f9537926237c4932ef2a9fb701804e03e132f2f56dd9d0e928b7340b1eeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-12T13:11:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rsshp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:50Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:50 crc kubenswrapper[4778]: I0312 13:11:50.989626 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d15dec8c-5c3e-4103-a5b1-6ee7ff5990ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f640289dea724d5668fc009d628345ea104b2bbc9bc3471e42c3ec5f9acada1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc7259359df220c534d265305ee3ca44e7bcdce8da0d8b164132e02f7ed72e51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d60adb329e51ce7d877de68c1386f904ef0f717c82a5bfb69ab18438a4e536a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5019c5de667abecf425384b69c58060050b28003230e410f44934c9a7ad5484c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c7f2ade3aac502f0534414554216096b45d4f78e81f8ec213064a6205efdbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T13:10:43Z\\\"
,\\\"message\\\":\\\"file observer\\\\nW0312 13:10:42.840582 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 13:10:42.841010 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 13:10:42.843036 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-461564172/tls.crt::/tmp/serving-cert-461564172/tls.key\\\\\\\"\\\\nI0312 13:10:43.350873 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 13:10:43.364662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 13:10:43.364721 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 13:10:43.365498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 13:10:43.365555 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 13:10:43.376143 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 13:10:43.376224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 13:10:43.376255 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 13:10:43.376279 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 13:10:43.376301 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 13:10:43.376324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 13:10:43.376350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 13:10:43.376614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0312 13:10:43.379532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfb81ab3f0178dc8064bd278e9e5cc42b3b2fda7282bb869d2f385b423e57d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e64aa9b1a15198d88b5f38b8ad0abdeef89430869b6f25c73e2f45806c539964\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64aa9b1a15198d88b5f38b8ad0abdeef
89430869b6f25c73e2f45806c539964\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:50Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:51 crc kubenswrapper[4778]: I0312 13:11:51.020209 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7250fa81a99a607a87f5f2fe4a85fcc2afbf7bfc12845bec4b5cbbed7784c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1325a4d5784843f5ef2dc629d3410d154ea07de5cd70cfae54d7111fe3e1ea3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78da788cc1d96e866afcf18edff24c064972e022b1b4c3f5bec3175da5e989fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2818aadc56c24df41309aa63fddf44dac870f041f206d59db4d8b8f88f728483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d8f2c481e6f3f8845b5e195a7fde8ae6415d25fb1701c25d93be7af1f4ef8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f94192e4eee991a699c32f338a8d452d8fc0ce5dfa1197694a237697c4500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d6da6dba0e8cadf9b1073620c4856adeb6b776ae3757d420c016d25b4f98001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d6da6dba0e8cadf9b1073620c4856adeb6b776ae3757d420c016d25b4f98001\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T13:11:49Z\\\",\\\"message\\\":\\\" network controller: failed to start default network controller: unable to create admin 
network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:49Z is after 2025-08-24T17:21:41Z]\\\\nI0312 13:11:49.218951 7264 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-config-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"f32857b5-f652-4313-a0d7-455c3156dd99\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-config-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]s\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8bcc9_openshift-ovn-kubernetes(65cd795e-eb6e-4995-a4c1-9dea6f425ac5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bc4107e3fb5708a2acf2664ce876af4c682377a9b4ee393230b78d5b021552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce861cdc0bd23c8ce1
b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bcc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:51Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:51 crc kubenswrapper[4778]: I0312 13:11:51.033297 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fhcz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44a3c76b2249ac9c24848e6b3a9fc08aef2d2bca3d170ce28b0f9384e3a8271e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5da98f94c85e3a8cd05c447fb097a078968eea25419a2b22f8abe956ef1dbaac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T13:11:46Z\\\",\\\"message\\\":\\\"2026-03-12T13:11:00+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_415a65bb-5a20-4f1d-953d-70a2be5bd972\\\\n2026-03-12T13:11:00+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_415a65bb-5a20-4f1d-953d-70a2be5bd972 to /host/opt/cni/bin/\\\\n2026-03-12T13:11:01Z [verbose] multus-daemon started\\\\n2026-03-12T13:11:01Z [verbose] 
Readiness Indicator file check\\\\n2026-03-12T13:11:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-762lp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fhcz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:51Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:51 crc kubenswrapper[4778]: I0312 13:11:51.043874 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sww7j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de004a2f-3061-4aae-aa57-389219c71023\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478fb92ee4748af677ac761928a4173b506a3e56cf622279e2b2a0e322d4aef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d377b0d5d0a854761257d7bc21a111aed96f8
5d302bf0c024e021f04cc555fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sww7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:51Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:51 crc kubenswrapper[4778]: I0312 13:11:51.053387 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dfhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfefcab6-a931-413e-8763-0f63f17911cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eecca419cc264b25f1883aac864cc545f0daf973e3b288bc8ea00a8b91e1f124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssbrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dfhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:51Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:51 crc kubenswrapper[4778]: I0312 13:11:51.064387 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:51Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:51 crc kubenswrapper[4778]: I0312 13:11:51.080815 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2141104-4933-46fd-9968-0d9498779462\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e77ceb524173a1cdbf6c93b730412dcd8b6aedcee06c40fb757cc8e738e380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b52b689d66d254a521c980330e792ecbcce1102f39f97d6149bf48ad24c5de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc82a592c28b74aef165a164cc4fe4e2e38b6fb48e59f499476a252197e3fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f046d558bf242476327e1ee39ea82ebe104caa081df71caa51a716490d8a6b21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e2266711bd32e96e742549772474d9fa43d8f368021e8a7aba3fd1c7b0b87b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a347cade99b7bdbe676a020faf0a90b281672f16c4f580455856786ed781d3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a347cade99b7bdbe676a020faf0a90b281672f16c4f580455856786ed781d3f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3701d4b9c229934646d070a25b4bf944ac544d227ff9ba89fb1885cecfb562de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3701d4b9c229934646d070a25b4bf944ac544d227ff9ba89fb1885cecfb562de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9e0d7207d43b4b2bb79583cb1bb2f31034392eb4193b9b3b2f547f474d335250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e0d7207d43b4b2bb79583cb1bb2f31034392eb4193b9b3b2f547f474d335250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:51Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:51 crc kubenswrapper[4778]: I0312 13:11:51.091303 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:51Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:51 crc kubenswrapper[4778]: I0312 13:11:51.101263 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24438fc6-dab0-4a9e-8b97-2532da76d9cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19a882eebff25a2613c68847fcf737648da24f5c8d7648edebb2cb00b6b8950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14daba92184fca91c6930d5b3e821f88408e0fd4
0a7793f2d70f82df7c9444ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2qx88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:51Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:51 crc kubenswrapper[4778]: I0312 13:11:51.109948 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cae155c-6ba6-44c1-9814-759fda7c3c86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7698145a8f9a3b12ca021d55f406bc6adf7e139c7e32156ced11a20de194608c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ddeb961084ae4041feb2ac05c9fdd2f5c11b4bdc5f5f33878c9ad9e83a2e1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ddeb961084ae4041feb2ac05c9fdd2f5c11b4bdc5f5f33878c9ad9e83a2e1a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:51Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:51 crc kubenswrapper[4778]: I0312 13:11:51.120135 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:51Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:51 crc kubenswrapper[4778]: I0312 13:11:51.131996 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb7a47e7099405d73886322b00b013bedee4fb573fa60c9b92d6be3311e65c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:51Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:51 crc kubenswrapper[4778]: I0312 13:11:51.143670 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rz9vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b59b25a-3acc-4d06-b91d-575f45463520\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rz9vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:51Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:51 crc 
kubenswrapper[4778]: I0312 13:11:51.154360 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4059dae21c8267dcec17364a3073a0f25addb6c308620992e9e609b5f5a32e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7ffc17b778f7bd099f0cc70b4e8bcfd77f9d45a9a47de9fedbe270a49f2826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:51Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:51 crc kubenswrapper[4778]: I0312 13:11:51.163246 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa219bcd71a6f1ae8a889a0409c2bbf61d1efac6a57ad8a22fefe6915e9d15be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-12T13:11:51Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:52 crc kubenswrapper[4778]: I0312 13:11:52.253964 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 13:11:52 crc kubenswrapper[4778]: E0312 13:11:52.254091 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 13:11:52 crc kubenswrapper[4778]: I0312 13:11:52.254360 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 13:11:52 crc kubenswrapper[4778]: E0312 13:11:52.254417 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 13:11:52 crc kubenswrapper[4778]: I0312 13:11:52.254642 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rz9vw" Mar 12 13:11:52 crc kubenswrapper[4778]: I0312 13:11:52.254694 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:11:52 crc kubenswrapper[4778]: E0312 13:11:52.254744 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rz9vw" podUID="0b59b25a-3acc-4d06-b91d-575f45463520" Mar 12 13:11:52 crc kubenswrapper[4778]: E0312 13:11:52.254905 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 13:11:52 crc kubenswrapper[4778]: I0312 13:11:52.266311 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Mar 12 13:11:52 crc kubenswrapper[4778]: I0312 13:11:52.268418 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Mar 12 13:11:52 crc kubenswrapper[4778]: I0312 13:11:52.270738 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sww7j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de004a2f-3061-4aae-aa57-389219c71023\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478fb92ee4748af677ac761928a4173b506a3e56cf622279e2b2a0e322d4aef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d377b0d5d0a854761257d7bc21a111aed96f8
5d302bf0c024e021f04cc555fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sww7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:52Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:52 crc kubenswrapper[4778]: I0312 13:11:52.280747 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dfhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfefcab6-a931-413e-8763-0f63f17911cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eecca419cc264b25f1883aac864cc545f0daf973e3b288bc8ea00a8b91e1f124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssbrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dfhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:52Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:52 crc kubenswrapper[4778]: I0312 13:11:52.292675 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:52Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:52 crc kubenswrapper[4778]: I0312 13:11:52.307581 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fhcz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44a3c76b2249ac9c24848e6b3a9fc08aef2d2bca3d170ce28b0f9384e3a8271e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5da98f94c85e3a8cd05c447fb097a078968eea25419a2b22f8abe956ef1dbaac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T13:11:46Z\\\",\\\"message\\\":\\\"2026-03-12T13:11:00+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_415a65bb-5a20-4f1d-953d-70a2be5bd972\\\\n2026-03-12T13:11:00+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_415a65bb-5a20-4f1d-953d-70a2be5bd972 to /host/opt/cni/bin/\\\\n2026-03-12T13:11:01Z [verbose] multus-daemon started\\\\n2026-03-12T13:11:01Z [verbose] 
Readiness Indicator file check\\\\n2026-03-12T13:11:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-762lp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fhcz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:52Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:52 crc kubenswrapper[4778]: I0312 13:11:52.324467 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:52Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:52 crc kubenswrapper[4778]: I0312 13:11:52.357785 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24438fc6-dab0-4a9e-8b97-2532da76d9cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19a882eebff25a2613c68847fcf737648da24f5c8d7648edebb2cb00b6b8950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14daba92184fca91c6930d5b3e821f88408e0fd4
0a7793f2d70f82df7c9444ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2qx88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:52Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:52 crc kubenswrapper[4778]: E0312 13:11:52.369865 4778 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 12 13:11:52 crc kubenswrapper[4778]: I0312 13:11:52.372557 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cae155c-6ba6-44c1-9814-759fda7c3c86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7698145a8f9a3b12ca021d55f406bc6adf7e139c7e32156ced11a20de194608c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://9ddeb961084ae4041feb2ac05c9fdd2f5c11b4bdc5f5f33878c9ad9e83a2e1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ddeb961084ae4041feb2ac05c9fdd2f5c11b4bdc5f5f33878c9ad9e83a2e1a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:52Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:52 crc kubenswrapper[4778]: I0312 13:11:52.393029 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2141104-4933-46fd-9968-0d9498779462\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e77ceb524173a1cdbf6c93b730412dcd8b6aedcee06c40fb757cc8e738e380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b52b689d66d254a521c980330e792ecbcce1102f39f97d6149bf48ad24c5de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc82a592c28b74aef165a164cc4fe4e2e38b6fb48e59f499476a252197e3fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f046d558bf242476327e1ee39ea82ebe104caa081df71caa51a716490d8a6b21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e2266711bd32e96e742549772474d9fa43d8f368021e8a7aba3fd1c7b0b87b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a347cade99b7bdbe676a020faf0a90b281672f16c4f580455856786ed781d3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a347cade99b7bdbe676a020faf0a90b281672f16c4f580455856786ed781d3f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3701d4b9c229934646d070a25b4bf944ac544d227ff9ba89fb1885cecfb562de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3701d4b9c229934646d070a25b4bf944ac544d227ff9ba89fb1885cecfb562de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9e0d7207d43b4b2bb79583cb1bb2f31034392eb4193b9b3b2f547f474d335250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e0d7207d43b4b2bb79583cb1bb2f31034392eb4193b9b3b2f547f474d335250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:52Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:52 crc kubenswrapper[4778]: I0312 13:11:52.405457 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb7a47e7099405d73886322b00b013bedee4fb573fa60c9b92d6be3311e65c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:52Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:52 crc kubenswrapper[4778]: I0312 13:11:52.414587 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rz9vw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b59b25a-3acc-4d06-b91d-575f45463520\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rz9vw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:52Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:52 crc kubenswrapper[4778]: I0312 13:11:52.427717 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4059dae21c8267dcec17364a3073a0f25addb6c308620992e9e609b5f5a32e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7ffc17b778f7bd099f0cc70b4e8bcfd77f9d45a9a47de9fedbe270a49f2826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:52Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:52 crc kubenswrapper[4778]: I0312 13:11:52.439940 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa219bcd71a6f1ae8a889a0409c2bbf61d1efac6a57ad8a22fefe6915e9d15be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-12T13:11:52Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:52 crc kubenswrapper[4778]: I0312 13:11:52.452766 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:52Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:52 crc kubenswrapper[4778]: I0312 13:11:52.469678 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rsshp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a1f8eaa-ac07-4478-be5d-0742de6b43c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f732882ddde9d0d0c1d1ef218276d4e14df3a1b36e4e956912efef4873092b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deca991c71450137550fdb82a01b81aaa63e6be64a6d7a96438f6b3d83a8bb5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://deca991c71450137550fdb82a01b81aaa63e6be64a6d7a96438f6b3d83a8bb5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae3ee2ab1f6fdf579b28f6ddf2010f9ac048dec5c7668dc467152185d4e1028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ae3ee2ab1f6fdf579b28f6ddf2010f9ac048dec5c7668dc467152185d4e1028\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b22a
0b8a6e5c59e8195280cbe1579af847c709f8b6245df5a16df5af602f11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b22a0b8a6e5c59e8195280cbe1579af847c709f8b6245df5a16df5af602f11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://253b2ec5086a2db48bb42ae6024bab9ca832325f9d96cd6ff6944ded362161e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://253b2ec5086a2db48bb42ae6024bab9ca832325f9d96cd6ff6944ded362161e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cb8f9537926237c4932ef2a9fb701804e03e132f2f56dd9d0e928b7340b1eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cb8f9537926237c4932ef2a9fb701804e03e132f2f56dd9d0e928b7340b1eeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rsshp\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:52Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:52 crc kubenswrapper[4778]: I0312 13:11:52.484869 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d15dec8c-5c3e-4103-a5b1-6ee7ff5990ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f640289dea724d5668fc009d628345ea104b2bbc9bc3471e42c3ec5f9acada1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc7259359df220c534d265305ee3ca44e7bcdce8da0d8b164132e02f7ed72e51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d60adb329e51ce7d877de68c1386f904ef0f717c82a5bfb69ab18438a4e536a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5019c5de667abecf425384b69c58060050b28003230e410f44934c9a7ad5484c\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c7f2ade3aac502f0534414554216096b45d4f78e81f8ec213064a6205efdbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T13:10:43Z\\\",\\\"message\\\":\\\"file observer\\\\nW0312 13:10:42.840582 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 13:10:42.841010 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 13:10:42.843036 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-461564172/tls.crt::/tmp/serving-cert-461564172/tls.key\\\\\\\"\\\\nI0312 13:10:43.350873 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 13:10:43.364662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 13:10:43.364721 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 13:10:43.365498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 13:10:43.365555 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 13:10:43.376143 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 13:10:43.376224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 13:10:43.376255 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 13:10:43.376279 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 13:10:43.376301 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 13:10:43.376324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 13:10:43.376350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 13:10:43.376614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 13:10:43.379532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfb81ab3f0178dc8064bd278e9e5cc42b3b2fda7282bb869d2f385b423e57d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e64aa9b1a15198d88b5f38b8ad0abdeef89430869b6f25c73e2f45806c539964\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64aa9b1a15198d88b5f38b8ad0abdeef89430869b6f25c73e2f45806c539964\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:52Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:52 crc kubenswrapper[4778]: I0312 13:11:52.504154 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7250fa81a99a607a87f5f2fe4a85fcc2afbf7bfc12845bec4b5cbbed7784c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1325a4d5784843f5ef2dc629d3410d154ea07de5cd70cfae54d7111fe3e1ea3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78da788cc1d96e866afcf18edff24c064972e022b1b4c3f5bec3175da5e989fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2818aadc56c24df41309aa63fddf44dac870f041f206d59db4d8b8f88f728483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d8f2c481e6f3f8845b5e195a7fde8ae6415d25fb1701c25d93be7af1f4ef8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f94192e4eee991a699c32f338a8d452d8fc0ce5dfa1197694a237697c4500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d6da6dba0e8cadf9b1073620c4856adeb6b776ae3757d420c016d25b4f98001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d6da6dba0e8cadf9b1073620c4856adeb6b776ae3757d420c016d25b4f98001\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T13:11:49Z\\\",\\\"message\\\":\\\" network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node 
network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:49Z is after 2025-08-24T17:21:41Z]\\\\nI0312 13:11:49.218951 7264 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-config-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"f32857b5-f652-4313-a0d7-455c3156dd99\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-config-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]s\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8bcc9_openshift-ovn-kubernetes(65cd795e-eb6e-4995-a4c1-9dea6f425ac5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bc4107e3fb5708a2acf2664ce876af4c682377a9b4ee393230b78d5b021552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce861cdc0bd23c8ce1
b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bcc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:52Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:52 crc kubenswrapper[4778]: I0312 13:11:52.515914 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qdxm2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7050ddd9-aa01-4af7-9046-208f85f50a86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9af31ab4c27bb06d5a44a1c279e04f1b6f243054e271214ef771db4f0dc65e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jspwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qdxm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:52Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:53 crc kubenswrapper[4778]: I0312 13:11:53.512108 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:53 crc kubenswrapper[4778]: I0312 13:11:53.512191 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:53 crc kubenswrapper[4778]: I0312 13:11:53.512227 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:53 crc kubenswrapper[4778]: I0312 13:11:53.512244 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:53 crc kubenswrapper[4778]: I0312 13:11:53.512258 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:53Z","lastTransitionTime":"2026-03-12T13:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:53 crc kubenswrapper[4778]: E0312 13:11:53.528684 4778 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9825271f-f529-4477-b3b1-2a00dbf9b03e\\\",\\\"systemUUID\\\":\\\"65870ff3-f0f2-4ca4-b489-075d672e37ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:53Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:53 crc kubenswrapper[4778]: I0312 13:11:53.533651 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:53 crc kubenswrapper[4778]: I0312 13:11:53.533706 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:53 crc kubenswrapper[4778]: I0312 13:11:53.533720 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:53 crc kubenswrapper[4778]: I0312 13:11:53.533739 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:53 crc kubenswrapper[4778]: I0312 13:11:53.533751 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:53Z","lastTransitionTime":"2026-03-12T13:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:53 crc kubenswrapper[4778]: E0312 13:11:53.559658 4778 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9825271f-f529-4477-b3b1-2a00dbf9b03e\\\",\\\"systemUUID\\\":\\\"65870ff3-f0f2-4ca4-b489-075d672e37ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:53Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:53 crc kubenswrapper[4778]: I0312 13:11:53.564351 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:53 crc kubenswrapper[4778]: I0312 13:11:53.564440 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:53 crc kubenswrapper[4778]: I0312 13:11:53.564452 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:53 crc kubenswrapper[4778]: I0312 13:11:53.564469 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:53 crc kubenswrapper[4778]: I0312 13:11:53.564481 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:53Z","lastTransitionTime":"2026-03-12T13:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:53 crc kubenswrapper[4778]: E0312 13:11:53.577795 4778 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9825271f-f529-4477-b3b1-2a00dbf9b03e\\\",\\\"systemUUID\\\":\\\"65870ff3-f0f2-4ca4-b489-075d672e37ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:53Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:53 crc kubenswrapper[4778]: I0312 13:11:53.582825 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:53 crc kubenswrapper[4778]: I0312 13:11:53.582883 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:53 crc kubenswrapper[4778]: I0312 13:11:53.582893 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:53 crc kubenswrapper[4778]: I0312 13:11:53.582909 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:53 crc kubenswrapper[4778]: I0312 13:11:53.582920 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:53Z","lastTransitionTime":"2026-03-12T13:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:53 crc kubenswrapper[4778]: E0312 13:11:53.596233 4778 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9825271f-f529-4477-b3b1-2a00dbf9b03e\\\",\\\"systemUUID\\\":\\\"65870ff3-f0f2-4ca4-b489-075d672e37ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:53Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:53 crc kubenswrapper[4778]: I0312 13:11:53.600456 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:11:53 crc kubenswrapper[4778]: I0312 13:11:53.600502 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:11:53 crc kubenswrapper[4778]: I0312 13:11:53.600513 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:11:53 crc kubenswrapper[4778]: I0312 13:11:53.600534 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:11:53 crc kubenswrapper[4778]: I0312 13:11:53.600548 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:11:53Z","lastTransitionTime":"2026-03-12T13:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:11:53 crc kubenswrapper[4778]: E0312 13:11:53.613449 4778 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:11:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9825271f-f529-4477-b3b1-2a00dbf9b03e\\\",\\\"systemUUID\\\":\\\"65870ff3-f0f2-4ca4-b489-075d672e37ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:53Z is after 2025-08-24T17:21:41Z" Mar 12 13:11:53 crc kubenswrapper[4778]: E0312 13:11:53.613611 4778 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 12 13:11:54 crc kubenswrapper[4778]: I0312 13:11:54.253754 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 13:11:54 crc kubenswrapper[4778]: E0312 13:11:54.253974 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 13:11:54 crc kubenswrapper[4778]: I0312 13:11:54.254333 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 13:11:54 crc kubenswrapper[4778]: E0312 13:11:54.254414 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 13:11:54 crc kubenswrapper[4778]: I0312 13:11:54.254604 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:11:54 crc kubenswrapper[4778]: E0312 13:11:54.254719 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 13:11:54 crc kubenswrapper[4778]: I0312 13:11:54.254598 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rz9vw" Mar 12 13:11:54 crc kubenswrapper[4778]: E0312 13:11:54.254850 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rz9vw" podUID="0b59b25a-3acc-4d06-b91d-575f45463520" Mar 12 13:11:56 crc kubenswrapper[4778]: I0312 13:11:56.253721 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 13:11:56 crc kubenswrapper[4778]: I0312 13:11:56.253726 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:11:56 crc kubenswrapper[4778]: I0312 13:11:56.253866 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 13:11:56 crc kubenswrapper[4778]: E0312 13:11:56.254097 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 13:11:56 crc kubenswrapper[4778]: E0312 13:11:56.254132 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 13:11:56 crc kubenswrapper[4778]: E0312 13:11:56.253863 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 13:11:56 crc kubenswrapper[4778]: I0312 13:11:56.254928 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rz9vw" Mar 12 13:11:56 crc kubenswrapper[4778]: E0312 13:11:56.255169 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rz9vw" podUID="0b59b25a-3acc-4d06-b91d-575f45463520" Mar 12 13:11:57 crc kubenswrapper[4778]: E0312 13:11:57.376984 4778 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 13:11:58 crc kubenswrapper[4778]: I0312 13:11:58.253153 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 13:11:58 crc kubenswrapper[4778]: I0312 13:11:58.253252 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 13:11:58 crc kubenswrapper[4778]: I0312 13:11:58.253342 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:11:58 crc kubenswrapper[4778]: E0312 13:11:58.253493 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 13:11:58 crc kubenswrapper[4778]: I0312 13:11:58.253567 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rz9vw" Mar 12 13:11:58 crc kubenswrapper[4778]: E0312 13:11:58.253687 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 13:11:58 crc kubenswrapper[4778]: E0312 13:11:58.253839 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rz9vw" podUID="0b59b25a-3acc-4d06-b91d-575f45463520" Mar 12 13:11:58 crc kubenswrapper[4778]: E0312 13:11:58.253965 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 13:12:00 crc kubenswrapper[4778]: I0312 13:12:00.253799 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 13:12:00 crc kubenswrapper[4778]: I0312 13:12:00.253895 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:12:00 crc kubenswrapper[4778]: E0312 13:12:00.253936 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 13:12:00 crc kubenswrapper[4778]: I0312 13:12:00.253970 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rz9vw" Mar 12 13:12:00 crc kubenswrapper[4778]: E0312 13:12:00.254057 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 13:12:00 crc kubenswrapper[4778]: I0312 13:12:00.254156 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 13:12:00 crc kubenswrapper[4778]: E0312 13:12:00.254345 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rz9vw" podUID="0b59b25a-3acc-4d06-b91d-575f45463520" Mar 12 13:12:00 crc kubenswrapper[4778]: E0312 13:12:00.254455 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 13:12:02 crc kubenswrapper[4778]: I0312 13:12:02.164061 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 13:12:02 crc kubenswrapper[4778]: I0312 13:12:02.164314 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:12:02 crc kubenswrapper[4778]: E0312 13:12:02.164376 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 13:13:06.16433742 +0000 UTC m=+204.613032826 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:02 crc kubenswrapper[4778]: E0312 13:12:02.164442 4778 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 12 13:12:02 crc kubenswrapper[4778]: I0312 13:12:02.164447 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 13:12:02 crc kubenswrapper[4778]: E0312 13:12:02.164551 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-12 13:13:06.164521515 +0000 UTC m=+204.613216931 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 12 13:12:02 crc kubenswrapper[4778]: E0312 13:12:02.164705 4778 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 13:12:02 crc kubenswrapper[4778]: E0312 13:12:02.164725 4778 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 13:12:02 crc kubenswrapper[4778]: E0312 13:12:02.164739 4778 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 13:12:02 crc kubenswrapper[4778]: E0312 13:12:02.164798 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-12 13:13:06.164788242 +0000 UTC m=+204.613483648 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 13:12:02 crc kubenswrapper[4778]: I0312 13:12:02.164807 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:12:02 crc kubenswrapper[4778]: I0312 13:12:02.164870 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 13:12:02 crc kubenswrapper[4778]: E0312 13:12:02.164972 4778 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 12 13:12:02 crc kubenswrapper[4778]: E0312 13:12:02.165034 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-12 13:13:06.165022479 +0000 UTC m=+204.613717895 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 12 13:12:02 crc kubenswrapper[4778]: E0312 13:12:02.165125 4778 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 12 13:12:02 crc kubenswrapper[4778]: E0312 13:12:02.165152 4778 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 12 13:12:02 crc kubenswrapper[4778]: E0312 13:12:02.165171 4778 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 13:12:02 crc kubenswrapper[4778]: E0312 13:12:02.165277 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-12 13:13:06.165253075 +0000 UTC m=+204.613948511 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 12 13:12:02 crc kubenswrapper[4778]: I0312 13:12:02.252922 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 13:12:02 crc kubenswrapper[4778]: I0312 13:12:02.252955 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:12:02 crc kubenswrapper[4778]: I0312 13:12:02.253276 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rz9vw" Mar 12 13:12:02 crc kubenswrapper[4778]: E0312 13:12:02.253250 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 13:12:02 crc kubenswrapper[4778]: I0312 13:12:02.253322 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 13:12:02 crc kubenswrapper[4778]: E0312 13:12:02.253624 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 13:12:02 crc kubenswrapper[4778]: E0312 13:12:02.253688 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rz9vw" podUID="0b59b25a-3acc-4d06-b91d-575f45463520" Mar 12 13:12:02 crc kubenswrapper[4778]: E0312 13:12:02.253818 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 13:12:02 crc kubenswrapper[4778]: I0312 13:12:02.266718 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b59b25a-3acc-4d06-b91d-575f45463520-metrics-certs\") pod \"network-metrics-daemon-rz9vw\" (UID: \"0b59b25a-3acc-4d06-b91d-575f45463520\") " pod="openshift-multus/network-metrics-daemon-rz9vw" Mar 12 13:12:02 crc kubenswrapper[4778]: E0312 13:12:02.266919 4778 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 13:12:02 crc kubenswrapper[4778]: E0312 13:12:02.267026 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b59b25a-3acc-4d06-b91d-575f45463520-metrics-certs podName:0b59b25a-3acc-4d06-b91d-575f45463520 nodeName:}" failed. No retries permitted until 2026-03-12 13:13:06.266996119 +0000 UTC m=+204.715691555 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0b59b25a-3acc-4d06-b91d-575f45463520-metrics-certs") pod "network-metrics-daemon-rz9vw" (UID: "0b59b25a-3acc-4d06-b91d-575f45463520") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 12 13:12:02 crc kubenswrapper[4778]: I0312 13:12:02.275861 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d15dec8c-5c3e-4103-a5b1-6ee7ff5990ef\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f640289dea724d5668fc009d628345ea104b2bbc9bc3471e42c3ec5f9acada1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc7259359df220c534d265305ee3ca44e7bcdce8da0d8b164132e02f7ed72e51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d60adb329e51ce7d877de68c1386f904ef0f717c82a5bfb69ab18438a4e536a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5019c5de667abecf425384b69c58060050b28003230e410f44934c9a7ad5484c\\\",\\\"image\\\":\\\"quay
.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14c7f2ade3aac502f0534414554216096b45d4f78e81f8ec213064a6205efdbd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T13:10:43Z\\\",\\\"message\\\":\\\"file observer\\\\nW0312 13:10:42.840582 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0312 13:10:42.841010 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0312 13:10:42.843036 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-461564172/tls.crt::/tmp/serving-cert-461564172/tls.key\\\\\\\"\\\\nI0312 13:10:43.350873 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0312 13:10:43.364662 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0312 13:10:43.364721 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0312 13:10:43.365498 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0312 13:10:43.365555 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0312 13:10:43.376143 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0312 13:10:43.376224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 13:10:43.376255 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0312 13:10:43.376279 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0312 13:10:43.376301 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0312 13:10:43.376324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0312 13:10:43.376350 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0312 13:10:43.376614 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0312 13:10:43.379532 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfb81ab3f0178dc8064bd278e9e5cc42b3b2fda7282bb869d2f385b423e57d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e64aa9b1a15198d88b5f38b8ad0abdeef89430869b6f25c73e2f45806c539964\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e64aa9b1a15198d88b5f38b8ad0abdeef89430869b6f25c73e2f45806c539964\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:12:02Z is after 2025-08-24T17:21:41Z" Mar 12 13:12:02 crc kubenswrapper[4778]: I0312 13:12:02.303700 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f7250fa81a99a607a87f5f2fe4a85fcc2afbf7bfc12845bec4b5cbbed7784c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1325a4d5784843f5ef2dc629d3410d154ea07de5cd70cfae54d7111fe3e1ea3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78da788cc1d96e866afcf18edff24c064972e022b1b4c3f5bec3175da5e989fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2818aadc56c24df41309aa63fddf44dac870f041f206d59db4d8b8f88f728483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d8f2c481e6f3f8845b5e195a7fde8ae6415d25fb1701c25d93be7af1f4ef8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f94192e4eee991a699c32f338a8d452d8fc0ce5dfa1197694a237697c4500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d6da6dba0e8cadf9b1073620c4856adeb6b776ae3757d420c016d25b4f98001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d6da6dba0e8cadf9b1073620c4856adeb6b776ae3757d420c016d25b4f98001\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T13:11:49Z\\\",\\\"message\\\":\\\" network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node 
network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:11:49Z is after 2025-08-24T17:21:41Z]\\\\nI0312 13:11:49.218951 7264 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-config-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"f32857b5-f652-4313-a0d7-455c3156dd99\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-config-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]s\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8bcc9_openshift-ovn-kubernetes(65cd795e-eb6e-4995-a4c1-9dea6f425ac5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bc4107e3fb5708a2acf2664ce876af4c682377a9b4ee393230b78d5b021552d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce861cdc0bd23c8ce1
b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-schvw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bcc9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:12:02Z is after 2025-08-24T17:21:41Z" Mar 12 13:12:02 crc kubenswrapper[4778]: I0312 13:12:02.319722 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qdxm2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7050ddd9-aa01-4af7-9046-208f85f50a86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9af31ab4c27bb06d5a44a1c279e04f1b6f243054e271214ef771db4f0dc65e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jspwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qdxm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:12:02Z is after 2025-08-24T17:21:41Z" Mar 12 13:12:02 crc kubenswrapper[4778]: I0312 13:12:02.341338 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rsshp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a1f8eaa-ac07-4478-be5d-0742de6b43c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f732882ddde9d0d0c1d1ef218276d4e14df3a1b36e4e956912efef4873092b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2a965ab43fc04beab1d08a4b626d0e54db69963e6ca5c498f502f4df90a8057\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://deca991c71450137550fdb82a01b81aaa63e6be64a6d7a96438f6b3d83a8bb5f\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://deca991c71450137550fdb82a01b81aaa63e6be64a6d7a96438f6b3d83a8bb5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae3ee2ab1f6fdf579b28f6ddf2010f9ac048dec5c7668dc467152185d4e1028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ae3ee2ab1f6fdf579b28f6ddf2010f9ac048dec5c7668dc467152185d4e1028\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b22a0b8a6e5c59e8195280cbe1579af847c709f8b6245df5a16df5af602f11d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b22a0b8a6e5c59e8195280cbe1579af847c709f8b6245df5a16df5af602f11d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://253b2ec5086a2db48bb42ae6024bab9c
a832325f9d96cd6ff6944ded362161e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://253b2ec5086a2db48bb42ae6024bab9ca832325f9d96cd6ff6944ded362161e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cb8f9537926237c4932ef2a9fb701804e03e132f2f56dd9d0e928b7340b1eeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cb8f9537926237c4932ef2a9fb701804e03e132f2f56dd9d0e928b7340b1eeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:11:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-12T13:11:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4hrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rsshp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:12:02Z is after 2025-08-24T17:21:41Z" Mar 12 13:12:02 crc kubenswrapper[4778]: I0312 13:12:02.356425 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4dfhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfefcab6-a931-413e-8763-0f63f17911cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eecca419cc264b25f1883aac864cc545f0daf973e3b288bc8ea00a8b91e1f124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ssbrl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4dfhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:12:02Z is after 2025-08-24T17:21:41Z" Mar 12 13:12:02 crc kubenswrapper[4778]: I0312 13:12:02.374820 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7db5517e-3b54-4509-a2e4-fd8fd83f3b79\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7b2a12646c299c75286fc95cf2a8fa35fd83031ce3daebec42030d966274ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cf827947c686099ca3c6afad51d866f4ee1d557bc64cc1c70f6213fd4198df2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-12T13:10:15Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0312 13:09:44.726810 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0312 13:09:44.729454 1 observer_polling.go:159] Starting file observer\\\\nI0312 13:09:44.783273 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0312 13:09:44.796968 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0312 13:10:15.116783 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:10:14Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62d772ee1ff9d986b4311494a08c8763bd91704fda6cd9c6f067c98205a4067d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8141466f1a3447b31eeaeb92f1b2ac9e8ddef4ba3e9a60f2ce9a775c3cce0a5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5918a46253f4d68b9bc62ba4357dd2ae6baff245e6b4ca06e44eb7e9b7af9df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:12:02Z is after 2025-08-24T17:21:41Z" Mar 12 13:12:02 crc kubenswrapper[4778]: E0312 13:12:02.378018 4778 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 13:12:02 crc kubenswrapper[4778]: I0312 13:12:02.394049 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:12:02Z is after 2025-08-24T17:21:41Z" Mar 12 13:12:02 crc kubenswrapper[4778]: I0312 13:12:02.411244 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fhcz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44a3c76b2249ac9c24848e6b3a9fc08aef2d2bca3d170ce28b0f9384e3a8271e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5da98f94c85e3a8cd05c447fb097a078968eea25419a2b22f8abe956ef1dbaac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-12T13:11:46Z\\\",\\\"message\\\":\\\"2026-03-12T13:11:00+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_415a65bb-5a20-4f1d-953d-70a2be5bd972\\\\n2026-03-12T13:11:00+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_415a65bb-5a20-4f1d-953d-70a2be5bd972 to /host/opt/cni/bin/\\\\n2026-03-12T13:11:01Z [verbose] multus-daemon started\\\\n2026-03-12T13:11:01Z [verbose] 
Readiness Indicator file check\\\\n2026-03-12T13:11:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-762lp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fhcz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:12:02Z is after 2025-08-24T17:21:41Z" Mar 12 13:12:02 crc kubenswrapper[4778]: I0312 13:12:02.428178 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sww7j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de004a2f-3061-4aae-aa57-389219c71023\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://478fb92ee4748af677ac761928a4173b506a3e56cf622279e2b2a0e322d4aef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d377b0d5d0a854761257d7bc21a111aed96f8
5d302bf0c024e021f04cc555fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g92p9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sww7j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:12:02Z is after 2025-08-24T17:21:41Z" Mar 12 13:12:02 crc kubenswrapper[4778]: I0312 13:12:02.441055 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"24438fc6-dab0-4a9e-8b97-2532da76d9cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19a882eebff25a2613c68847fcf737648da24f5c8d7648edebb2cb00b6b8950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14daba92184fca91c6930d5b3e821f88408e0fd4
0a7793f2d70f82df7c9444ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rhn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2qx88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:12:02Z is after 2025-08-24T17:21:41Z" Mar 12 13:12:02 crc kubenswrapper[4778]: I0312 13:12:02.457497 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f179525-0f37-4bc0-b853-cdc965ca7af8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfcd0839d0f910ecfd92ecc2db64e4ef06fd90bfda52f24a751f8bf1cf112d8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c2656063b4947b28fa0ac1759e349c80fc039346869b1c1d6daad75e93ad407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://981038910a82f8dc9ffff22e601a748571a56541b59c187d9ce4f5d500febd58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cf73fb2fb0de0ce76c16b7db59c94484062b1f4fc5b6df9633c4740f5bbbc0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://7cf73fb2fb0de0ce76c16b7db59c94484062b1f4fc5b6df9633c4740f5bbbc0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:12:02Z is after 2025-08-24T17:21:41Z" Mar 12 13:12:02 crc kubenswrapper[4778]: I0312 13:12:02.469721 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cae155c-6ba6-44c1-9814-759fda7c3c86\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7698145a8f9a3b12ca021d55f406bc6adf7e139c7e32156ced11a20de194608c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ddeb961084ae4041feb2ac05c9fdd2f5c11b4bdc5f5f33878c9ad9e83a2e1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ddeb961084ae4041feb2ac05c9fdd2f5c11b4bdc5f5f33878c9ad9e83a2e1a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:12:02Z is after 2025-08-24T17:21:41Z" Mar 12 13:12:02 crc kubenswrapper[4778]: I0312 13:12:02.488712 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2141104-4933-46fd-9968-0d9498779462\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78e77ceb524173a1cdbf6c93b730412dcd8b6aedcee06c40fb757cc8e738e380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b52b689d66d254a521c980330e792ecbcce1102f39f97d6149bf48ad24c5de2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dc82a592c28b74aef165a164cc4fe4e2e38b6fb48e59f499476a252197e3fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f046d558bf242476327e1ee39ea82ebe104caa081df71caa51a716490d8a6b21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e2266711bd32e96e742549772474d9fa43d8f368021e8a7aba3fd1c7b0b87b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a347cade99b7bdbe676a020faf0a90b281672f16c4f580455856786ed781d3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a347cade99b7bdbe676a020faf0a90b281672f16c4f580455856786ed781d3f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-12T13:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3701d4b9c229934646d070a25b4bf944ac544d227ff9ba89fb1885cecfb562de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3701d4b9c229934646d070a25b4bf944ac544d227ff9ba89fb1885cecfb562de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9e0d7207d43b4b2bb79583cb1bb2f31034392eb4193b9b3b2f547f474d335250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e0d7207d43b4b2bb79583cb1bb2f31034392eb4193b9b3b2f547f474d335250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-12T13:09:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-12T13:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:09:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:12:02Z is after 2025-08-24T17:21:41Z" Mar 12 13:12:02 crc kubenswrapper[4778]: I0312 13:12:02.499559 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:12:02Z is after 2025-08-24T17:21:41Z" Mar 12 13:12:02 crc kubenswrapper[4778]: I0312 13:12:02.510761 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rz9vw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b59b25a-3acc-4d06-b91d-575f45463520\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdj5w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-12T13:10:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rz9vw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:12:02Z is after 2025-08-24T17:21:41Z" Mar 12 13:12:02 crc 
kubenswrapper[4778]: I0312 13:12:02.544638 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4059dae21c8267dcec17364a3073a0f25addb6c308620992e9e609b5f5a32e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c7ffc17b778f7bd099f0cc70b4e8bcfd77f9d45a9a47de9fedbe270a49f2826\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:12:02Z is after 2025-08-24T17:21:41Z" Mar 12 13:12:02 crc kubenswrapper[4778]: I0312 13:12:02.560864 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:11:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa219bcd71a6f1ae8a889a0409c2bbf61d1efac6a57ad8a22fefe6915e9d15be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-12T13:12:02Z is after 2025-08-24T17:21:41Z" Mar 12 13:12:02 crc kubenswrapper[4778]: I0312 13:12:02.573460 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:12:02Z is after 2025-08-24T17:21:41Z" Mar 12 13:12:02 crc kubenswrapper[4778]: I0312 13:12:02.593228 4778 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-12T13:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb7a47e7099405d73886322b00b013bedee4fb573fa60c9b92d6be3311e65c17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-12T13:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-12T13:12:02Z is after 2025-08-24T17:21:41Z" Mar 12 13:12:03 crc kubenswrapper[4778]: I0312 13:12:03.254558 4778 scope.go:117] "RemoveContainer" containerID="5d6da6dba0e8cadf9b1073620c4856adeb6b776ae3757d420c016d25b4f98001" Mar 12 13:12:03 crc kubenswrapper[4778]: E0312 13:12:03.254766 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8bcc9_openshift-ovn-kubernetes(65cd795e-eb6e-4995-a4c1-9dea6f425ac5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" podUID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" Mar 12 13:12:03 crc kubenswrapper[4778]: I0312 13:12:03.816433 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:12:03 crc kubenswrapper[4778]: I0312 13:12:03.816475 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:12:03 crc kubenswrapper[4778]: I0312 13:12:03.816484 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:12:03 crc kubenswrapper[4778]: I0312 13:12:03.816499 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:12:03 crc kubenswrapper[4778]: I0312 13:12:03.816512 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:12:03Z","lastTransitionTime":"2026-03-12T13:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:12:03 crc kubenswrapper[4778]: E0312 13:12:03.833721 4778 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:12:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:12:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:12:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:12:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:12:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:12:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:12:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:12:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9825271f-f529-4477-b3b1-2a00dbf9b03e\\\",\\\"systemUUID\\\":\\\"65870ff3-f0f2-4ca4-b489-075d672e37ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:12:03Z is after 2025-08-24T17:21:41Z" Mar 12 13:12:03 crc kubenswrapper[4778]: I0312 13:12:03.839138 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:12:03 crc kubenswrapper[4778]: I0312 13:12:03.839237 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:12:03 crc kubenswrapper[4778]: I0312 13:12:03.839291 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:12:03 crc kubenswrapper[4778]: I0312 13:12:03.839320 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:12:03 crc kubenswrapper[4778]: I0312 13:12:03.839356 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:12:03Z","lastTransitionTime":"2026-03-12T13:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:12:03 crc kubenswrapper[4778]: E0312 13:12:03.858719 4778 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:12:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:12:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:12:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:12:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:12:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:12:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:12:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:12:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9825271f-f529-4477-b3b1-2a00dbf9b03e\\\",\\\"systemUUID\\\":\\\"65870ff3-f0f2-4ca4-b489-075d672e37ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:12:03Z is after 2025-08-24T17:21:41Z" Mar 12 13:12:03 crc kubenswrapper[4778]: I0312 13:12:03.863369 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:12:03 crc kubenswrapper[4778]: I0312 13:12:03.863449 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:12:03 crc kubenswrapper[4778]: I0312 13:12:03.863470 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:12:03 crc kubenswrapper[4778]: I0312 13:12:03.863498 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:12:03 crc kubenswrapper[4778]: I0312 13:12:03.863513 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:12:03Z","lastTransitionTime":"2026-03-12T13:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:12:03 crc kubenswrapper[4778]: E0312 13:12:03.880889 4778 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:12:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:12:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:12:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:12:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:12:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:12:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:12:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:12:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9825271f-f529-4477-b3b1-2a00dbf9b03e\\\",\\\"systemUUID\\\":\\\"65870ff3-f0f2-4ca4-b489-075d672e37ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:12:03Z is after 2025-08-24T17:21:41Z" Mar 12 13:12:03 crc kubenswrapper[4778]: I0312 13:12:03.886650 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:12:03 crc kubenswrapper[4778]: I0312 13:12:03.886710 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:12:03 crc kubenswrapper[4778]: I0312 13:12:03.886733 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:12:03 crc kubenswrapper[4778]: I0312 13:12:03.886757 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:12:03 crc kubenswrapper[4778]: I0312 13:12:03.886774 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:12:03Z","lastTransitionTime":"2026-03-12T13:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:12:03 crc kubenswrapper[4778]: E0312 13:12:03.906237 4778 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:12:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:12:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:12:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:12:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:12:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:12:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:12:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:12:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9825271f-f529-4477-b3b1-2a00dbf9b03e\\\",\\\"systemUUID\\\":\\\"65870ff3-f0f2-4ca4-b489-075d672e37ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:12:03Z is after 2025-08-24T17:21:41Z" Mar 12 13:12:03 crc kubenswrapper[4778]: I0312 13:12:03.910932 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:12:03 crc kubenswrapper[4778]: I0312 13:12:03.910980 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:12:03 crc kubenswrapper[4778]: I0312 13:12:03.910994 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:12:03 crc kubenswrapper[4778]: I0312 13:12:03.911017 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:12:03 crc kubenswrapper[4778]: I0312 13:12:03.911035 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:12:03Z","lastTransitionTime":"2026-03-12T13:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:12:03 crc kubenswrapper[4778]: E0312 13:12:03.927936 4778 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:12:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:12:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:12:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:12:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:12:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:12:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:12:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-12T13:12:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9825271f-f529-4477-b3b1-2a00dbf9b03e\\\",\\\"systemUUID\\\":\\\"65870ff3-f0f2-4ca4-b489-075d672e37ad\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-12T13:12:03Z is after 2025-08-24T17:21:41Z" Mar 12 13:12:03 crc kubenswrapper[4778]: E0312 13:12:03.928108 4778 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 12 13:12:04 crc kubenswrapper[4778]: I0312 13:12:04.253483 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rz9vw" Mar 12 13:12:04 crc kubenswrapper[4778]: I0312 13:12:04.253542 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 13:12:04 crc kubenswrapper[4778]: I0312 13:12:04.253732 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 13:12:04 crc kubenswrapper[4778]: E0312 13:12:04.253732 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rz9vw" podUID="0b59b25a-3acc-4d06-b91d-575f45463520" Mar 12 13:12:04 crc kubenswrapper[4778]: I0312 13:12:04.253801 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:12:04 crc kubenswrapper[4778]: E0312 13:12:04.253935 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 13:12:04 crc kubenswrapper[4778]: E0312 13:12:04.254085 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 13:12:04 crc kubenswrapper[4778]: E0312 13:12:04.254251 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 13:12:06 crc kubenswrapper[4778]: I0312 13:12:06.253395 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 13:12:06 crc kubenswrapper[4778]: I0312 13:12:06.253499 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rz9vw" Mar 12 13:12:06 crc kubenswrapper[4778]: I0312 13:12:06.253406 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 13:12:06 crc kubenswrapper[4778]: I0312 13:12:06.253401 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:12:06 crc kubenswrapper[4778]: E0312 13:12:06.253669 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 13:12:06 crc kubenswrapper[4778]: E0312 13:12:06.253813 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 13:12:06 crc kubenswrapper[4778]: E0312 13:12:06.254079 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rz9vw" podUID="0b59b25a-3acc-4d06-b91d-575f45463520" Mar 12 13:12:06 crc kubenswrapper[4778]: E0312 13:12:06.254145 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 13:12:07 crc kubenswrapper[4778]: E0312 13:12:07.379608 4778 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 13:12:08 crc kubenswrapper[4778]: I0312 13:12:08.252771 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:12:08 crc kubenswrapper[4778]: I0312 13:12:08.252850 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rz9vw" Mar 12 13:12:08 crc kubenswrapper[4778]: E0312 13:12:08.252933 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 13:12:08 crc kubenswrapper[4778]: I0312 13:12:08.252951 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 13:12:08 crc kubenswrapper[4778]: I0312 13:12:08.253016 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 13:12:08 crc kubenswrapper[4778]: E0312 13:12:08.253058 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rz9vw" podUID="0b59b25a-3acc-4d06-b91d-575f45463520" Mar 12 13:12:08 crc kubenswrapper[4778]: E0312 13:12:08.253145 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 13:12:08 crc kubenswrapper[4778]: E0312 13:12:08.253352 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 13:12:09 crc kubenswrapper[4778]: I0312 13:12:09.270781 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 13:12:09 crc kubenswrapper[4778]: I0312 13:12:09.270750 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:12:09 crc kubenswrapper[4778]: E0312 13:12:09.271074 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 13:12:09 crc kubenswrapper[4778]: E0312 13:12:09.271200 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 13:12:10 crc kubenswrapper[4778]: I0312 13:12:10.253491 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rz9vw" Mar 12 13:12:10 crc kubenswrapper[4778]: E0312 13:12:10.253627 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rz9vw" podUID="0b59b25a-3acc-4d06-b91d-575f45463520" Mar 12 13:12:10 crc kubenswrapper[4778]: I0312 13:12:10.253760 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 13:12:10 crc kubenswrapper[4778]: E0312 13:12:10.254236 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 13:12:11 crc kubenswrapper[4778]: I0312 13:12:11.253711 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 13:12:11 crc kubenswrapper[4778]: I0312 13:12:11.253776 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:12:11 crc kubenswrapper[4778]: E0312 13:12:11.254880 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 13:12:11 crc kubenswrapper[4778]: E0312 13:12:11.254933 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 13:12:12 crc kubenswrapper[4778]: I0312 13:12:12.253680 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 13:12:12 crc kubenswrapper[4778]: E0312 13:12:12.253883 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 13:12:12 crc kubenswrapper[4778]: I0312 13:12:12.254442 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rz9vw" Mar 12 13:12:12 crc kubenswrapper[4778]: E0312 13:12:12.254652 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rz9vw" podUID="0b59b25a-3acc-4d06-b91d-575f45463520" Mar 12 13:12:12 crc kubenswrapper[4778]: I0312 13:12:12.337378 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=72.33735782 podStartE2EDuration="1m12.33735782s" podCreationTimestamp="2026-03-12 13:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:12:12.293634319 +0000 UTC m=+150.742329785" watchObservedRunningTime="2026-03-12 13:12:12.33735782 +0000 UTC m=+150.786053216" Mar 12 13:12:12 crc kubenswrapper[4778]: I0312 13:12:12.365908 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-qdxm2" podStartSLOduration=101.365877553 podStartE2EDuration="1m41.365877553s" podCreationTimestamp="2026-03-12 13:10:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:12:12.350262843 +0000 UTC m=+150.798958239" watchObservedRunningTime="2026-03-12 13:12:12.365877553 +0000 UTC m=+150.814572949" Mar 12 13:12:12 crc kubenswrapper[4778]: E0312 13:12:12.380118 4778 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 12 13:12:12 crc kubenswrapper[4778]: I0312 13:12:12.389524 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=20.389507858 podStartE2EDuration="20.389507858s" podCreationTimestamp="2026-03-12 13:11:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:12:12.387894773 +0000 UTC m=+150.836590189" watchObservedRunningTime="2026-03-12 13:12:12.389507858 +0000 UTC m=+150.838203254" Mar 12 13:12:12 crc kubenswrapper[4778]: I0312 13:12:12.389811 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-rsshp" podStartSLOduration=101.389805836 podStartE2EDuration="1m41.389805836s" podCreationTimestamp="2026-03-12 13:10:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:12:12.36649883 +0000 UTC m=+150.815194226" watchObservedRunningTime="2026-03-12 13:12:12.389805836 +0000 UTC m=+150.838501232" Mar 12 13:12:12 crc kubenswrapper[4778]: I0312 13:12:12.433081 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-fhcz6" podStartSLOduration=101.433058174 podStartE2EDuration="1m41.433058174s" podCreationTimestamp="2026-03-12 13:10:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:12:12.42155769 +0000 UTC m=+150.870253096" watchObservedRunningTime="2026-03-12 13:12:12.433058174 +0000 UTC m=+150.881753570" Mar 12 13:12:12 crc kubenswrapper[4778]: I0312 13:12:12.433775 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sww7j" podStartSLOduration=101.433769354 
podStartE2EDuration="1m41.433769354s" podCreationTimestamp="2026-03-12 13:10:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:12:12.433060524 +0000 UTC m=+150.881755920" watchObservedRunningTime="2026-03-12 13:12:12.433769354 +0000 UTC m=+150.882464750" Mar 12 13:12:12 crc kubenswrapper[4778]: I0312 13:12:12.461853 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-4dfhs" podStartSLOduration=101.461835304 podStartE2EDuration="1m41.461835304s" podCreationTimestamp="2026-03-12 13:10:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:12:12.4450112 +0000 UTC m=+150.893706606" watchObservedRunningTime="2026-03-12 13:12:12.461835304 +0000 UTC m=+150.910530700" Mar 12 13:12:12 crc kubenswrapper[4778]: I0312 13:12:12.482061 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=20.482030203 podStartE2EDuration="20.482030203s" podCreationTimestamp="2026-03-12 13:11:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:12:12.470276032 +0000 UTC m=+150.918971448" watchObservedRunningTime="2026-03-12 13:12:12.482030203 +0000 UTC m=+150.930725619" Mar 12 13:12:12 crc kubenswrapper[4778]: I0312 13:12:12.482390 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=68.482383702 podStartE2EDuration="1m8.482383702s" podCreationTimestamp="2026-03-12 13:11:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:12:12.481515198 +0000 
UTC m=+150.930210624" watchObservedRunningTime="2026-03-12 13:12:12.482383702 +0000 UTC m=+150.931079098" Mar 12 13:12:12 crc kubenswrapper[4778]: I0312 13:12:12.507551 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=48.5075301 podStartE2EDuration="48.5075301s" podCreationTimestamp="2026-03-12 13:11:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:12:12.506285555 +0000 UTC m=+150.954980981" watchObservedRunningTime="2026-03-12 13:12:12.5075301 +0000 UTC m=+150.956225496" Mar 12 13:12:12 crc kubenswrapper[4778]: I0312 13:12:12.535001 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podStartSLOduration=101.534978373 podStartE2EDuration="1m41.534978373s" podCreationTimestamp="2026-03-12 13:10:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:12:12.53487349 +0000 UTC m=+150.983568886" watchObservedRunningTime="2026-03-12 13:12:12.534978373 +0000 UTC m=+150.983673779" Mar 12 13:12:13 crc kubenswrapper[4778]: I0312 13:12:13.253059 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:12:13 crc kubenswrapper[4778]: I0312 13:12:13.253202 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 13:12:13 crc kubenswrapper[4778]: E0312 13:12:13.253288 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 13:12:13 crc kubenswrapper[4778]: E0312 13:12:13.253419 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 13:12:14 crc kubenswrapper[4778]: I0312 13:12:14.136581 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 12 13:12:14 crc kubenswrapper[4778]: I0312 13:12:14.136646 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 12 13:12:14 crc kubenswrapper[4778]: I0312 13:12:14.136664 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 12 13:12:14 crc kubenswrapper[4778]: I0312 13:12:14.136692 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 12 13:12:14 crc kubenswrapper[4778]: I0312 13:12:14.136711 4778 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T13:12:14Z","lastTransitionTime":"2026-03-12T13:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 12 13:12:14 crc kubenswrapper[4778]: I0312 13:12:14.197724 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-g7sbv"] Mar 12 13:12:14 crc kubenswrapper[4778]: I0312 13:12:14.198557 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g7sbv" Mar 12 13:12:14 crc kubenswrapper[4778]: I0312 13:12:14.200817 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 12 13:12:14 crc kubenswrapper[4778]: I0312 13:12:14.202281 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Mar 12 13:12:14 crc kubenswrapper[4778]: I0312 13:12:14.202959 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 12 13:12:14 crc kubenswrapper[4778]: I0312 13:12:14.203713 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 12 13:12:14 crc kubenswrapper[4778]: I0312 13:12:14.239153 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7d96bbda-3294-4e24-a2d9-c7dd7eef5d9b-service-ca\") pod \"cluster-version-operator-5c965bbfc6-g7sbv\" (UID: \"7d96bbda-3294-4e24-a2d9-c7dd7eef5d9b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g7sbv" Mar 12 13:12:14 crc kubenswrapper[4778]: I0312 13:12:14.239281 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7d96bbda-3294-4e24-a2d9-c7dd7eef5d9b-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-g7sbv\" (UID: 
\"7d96bbda-3294-4e24-a2d9-c7dd7eef5d9b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g7sbv" Mar 12 13:12:14 crc kubenswrapper[4778]: I0312 13:12:14.239405 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/7d96bbda-3294-4e24-a2d9-c7dd7eef5d9b-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-g7sbv\" (UID: \"7d96bbda-3294-4e24-a2d9-c7dd7eef5d9b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g7sbv" Mar 12 13:12:14 crc kubenswrapper[4778]: I0312 13:12:14.239490 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d96bbda-3294-4e24-a2d9-c7dd7eef5d9b-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-g7sbv\" (UID: \"7d96bbda-3294-4e24-a2d9-c7dd7eef5d9b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g7sbv" Mar 12 13:12:14 crc kubenswrapper[4778]: I0312 13:12:14.239659 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/7d96bbda-3294-4e24-a2d9-c7dd7eef5d9b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-g7sbv\" (UID: \"7d96bbda-3294-4e24-a2d9-c7dd7eef5d9b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g7sbv" Mar 12 13:12:14 crc kubenswrapper[4778]: I0312 13:12:14.253385 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rz9vw" Mar 12 13:12:14 crc kubenswrapper[4778]: I0312 13:12:14.253440 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 13:12:14 crc kubenswrapper[4778]: E0312 13:12:14.253688 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 13:12:14 crc kubenswrapper[4778]: E0312 13:12:14.253919 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rz9vw" podUID="0b59b25a-3acc-4d06-b91d-575f45463520" Mar 12 13:12:14 crc kubenswrapper[4778]: I0312 13:12:14.315667 4778 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Mar 12 13:12:14 crc kubenswrapper[4778]: I0312 13:12:14.337741 4778 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 12 13:12:14 crc kubenswrapper[4778]: I0312 13:12:14.340879 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/7d96bbda-3294-4e24-a2d9-c7dd7eef5d9b-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-g7sbv\" (UID: \"7d96bbda-3294-4e24-a2d9-c7dd7eef5d9b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g7sbv" Mar 12 13:12:14 crc kubenswrapper[4778]: I0312 13:12:14.340940 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7d96bbda-3294-4e24-a2d9-c7dd7eef5d9b-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-g7sbv\" (UID: \"7d96bbda-3294-4e24-a2d9-c7dd7eef5d9b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g7sbv" Mar 12 13:12:14 crc kubenswrapper[4778]: I0312 13:12:14.341013 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/7d96bbda-3294-4e24-a2d9-c7dd7eef5d9b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-g7sbv\" (UID: \"7d96bbda-3294-4e24-a2d9-c7dd7eef5d9b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g7sbv" Mar 12 13:12:14 crc kubenswrapper[4778]: I0312 13:12:14.341039 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7d96bbda-3294-4e24-a2d9-c7dd7eef5d9b-service-ca\") pod \"cluster-version-operator-5c965bbfc6-g7sbv\" (UID: \"7d96bbda-3294-4e24-a2d9-c7dd7eef5d9b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g7sbv" Mar 12 13:12:14 crc kubenswrapper[4778]: I0312 13:12:14.341060 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7d96bbda-3294-4e24-a2d9-c7dd7eef5d9b-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-g7sbv\" (UID: \"7d96bbda-3294-4e24-a2d9-c7dd7eef5d9b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g7sbv" Mar 12 13:12:14 crc kubenswrapper[4778]: I0312 13:12:14.341053 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/7d96bbda-3294-4e24-a2d9-c7dd7eef5d9b-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-g7sbv\" (UID: \"7d96bbda-3294-4e24-a2d9-c7dd7eef5d9b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g7sbv" Mar 12 13:12:14 crc 
kubenswrapper[4778]: I0312 13:12:14.341256 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/7d96bbda-3294-4e24-a2d9-c7dd7eef5d9b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-g7sbv\" (UID: \"7d96bbda-3294-4e24-a2d9-c7dd7eef5d9b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g7sbv" Mar 12 13:12:14 crc kubenswrapper[4778]: I0312 13:12:14.342123 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7d96bbda-3294-4e24-a2d9-c7dd7eef5d9b-service-ca\") pod \"cluster-version-operator-5c965bbfc6-g7sbv\" (UID: \"7d96bbda-3294-4e24-a2d9-c7dd7eef5d9b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g7sbv" Mar 12 13:12:14 crc kubenswrapper[4778]: I0312 13:12:14.348938 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d96bbda-3294-4e24-a2d9-c7dd7eef5d9b-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-g7sbv\" (UID: \"7d96bbda-3294-4e24-a2d9-c7dd7eef5d9b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g7sbv" Mar 12 13:12:14 crc kubenswrapper[4778]: I0312 13:12:14.364165 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7d96bbda-3294-4e24-a2d9-c7dd7eef5d9b-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-g7sbv\" (UID: \"7d96bbda-3294-4e24-a2d9-c7dd7eef5d9b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g7sbv" Mar 12 13:12:14 crc kubenswrapper[4778]: I0312 13:12:14.518687 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g7sbv" Mar 12 13:12:14 crc kubenswrapper[4778]: W0312 13:12:14.542503 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d96bbda_3294_4e24_a2d9_c7dd7eef5d9b.slice/crio-1e50805f4f07646798c1339f11688835c3bb1fe857559cf2e9ac127fdcd33eb5 WatchSource:0}: Error finding container 1e50805f4f07646798c1339f11688835c3bb1fe857559cf2e9ac127fdcd33eb5: Status 404 returned error can't find the container with id 1e50805f4f07646798c1339f11688835c3bb1fe857559cf2e9ac127fdcd33eb5 Mar 12 13:12:15 crc kubenswrapper[4778]: I0312 13:12:15.020678 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g7sbv" event={"ID":"7d96bbda-3294-4e24-a2d9-c7dd7eef5d9b","Type":"ContainerStarted","Data":"9b676c552f719540de88e78b618b86242e3d091cf2c5597d9b0b64928d91e299"} Mar 12 13:12:15 crc kubenswrapper[4778]: I0312 13:12:15.020733 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g7sbv" event={"ID":"7d96bbda-3294-4e24-a2d9-c7dd7eef5d9b","Type":"ContainerStarted","Data":"1e50805f4f07646798c1339f11688835c3bb1fe857559cf2e9ac127fdcd33eb5"} Mar 12 13:12:15 crc kubenswrapper[4778]: I0312 13:12:15.252756 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:12:15 crc kubenswrapper[4778]: E0312 13:12:15.252865 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 13:12:15 crc kubenswrapper[4778]: I0312 13:12:15.253095 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 13:12:15 crc kubenswrapper[4778]: E0312 13:12:15.253703 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 13:12:16 crc kubenswrapper[4778]: I0312 13:12:16.253698 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 13:12:16 crc kubenswrapper[4778]: I0312 13:12:16.254363 4778 scope.go:117] "RemoveContainer" containerID="5d6da6dba0e8cadf9b1073620c4856adeb6b776ae3757d420c016d25b4f98001" Mar 12 13:12:16 crc kubenswrapper[4778]: E0312 13:12:16.254648 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8bcc9_openshift-ovn-kubernetes(65cd795e-eb6e-4995-a4c1-9dea6f425ac5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" podUID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" Mar 12 13:12:16 crc kubenswrapper[4778]: E0312 13:12:16.254680 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 13:12:16 crc kubenswrapper[4778]: I0312 13:12:16.255289 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rz9vw" Mar 12 13:12:16 crc kubenswrapper[4778]: E0312 13:12:16.255646 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rz9vw" podUID="0b59b25a-3acc-4d06-b91d-575f45463520" Mar 12 13:12:17 crc kubenswrapper[4778]: I0312 13:12:17.253852 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:12:17 crc kubenswrapper[4778]: E0312 13:12:17.254173 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 13:12:17 crc kubenswrapper[4778]: I0312 13:12:17.254774 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 13:12:17 crc kubenswrapper[4778]: E0312 13:12:17.256250 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 13:12:17 crc kubenswrapper[4778]: E0312 13:12:17.381777 4778 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 13:12:18 crc kubenswrapper[4778]: I0312 13:12:18.252861 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 13:12:18 crc kubenswrapper[4778]: I0312 13:12:18.252904 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rz9vw" Mar 12 13:12:18 crc kubenswrapper[4778]: E0312 13:12:18.253105 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 13:12:18 crc kubenswrapper[4778]: E0312 13:12:18.253320 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rz9vw" podUID="0b59b25a-3acc-4d06-b91d-575f45463520" Mar 12 13:12:19 crc kubenswrapper[4778]: I0312 13:12:19.252852 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:12:19 crc kubenswrapper[4778]: I0312 13:12:19.252887 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 13:12:19 crc kubenswrapper[4778]: E0312 13:12:19.253034 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 13:12:19 crc kubenswrapper[4778]: E0312 13:12:19.253171 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 13:12:20 crc kubenswrapper[4778]: I0312 13:12:20.253382 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rz9vw" Mar 12 13:12:20 crc kubenswrapper[4778]: E0312 13:12:20.253588 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rz9vw" podUID="0b59b25a-3acc-4d06-b91d-575f45463520" Mar 12 13:12:20 crc kubenswrapper[4778]: I0312 13:12:20.254029 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 13:12:20 crc kubenswrapper[4778]: E0312 13:12:20.254247 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 13:12:21 crc kubenswrapper[4778]: I0312 13:12:21.252759 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 13:12:21 crc kubenswrapper[4778]: E0312 13:12:21.252886 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 13:12:21 crc kubenswrapper[4778]: I0312 13:12:21.252978 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:12:21 crc kubenswrapper[4778]: E0312 13:12:21.253171 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 13:12:22 crc kubenswrapper[4778]: I0312 13:12:22.253037 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rz9vw" Mar 12 13:12:22 crc kubenswrapper[4778]: I0312 13:12:22.256325 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 13:12:22 crc kubenswrapper[4778]: E0312 13:12:22.256315 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rz9vw" podUID="0b59b25a-3acc-4d06-b91d-575f45463520" Mar 12 13:12:22 crc kubenswrapper[4778]: E0312 13:12:22.256480 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 13:12:22 crc kubenswrapper[4778]: E0312 13:12:22.382207 4778 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 13:12:23 crc kubenswrapper[4778]: I0312 13:12:23.253645 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:12:23 crc kubenswrapper[4778]: I0312 13:12:23.253666 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 13:12:23 crc kubenswrapper[4778]: E0312 13:12:23.254528 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 13:12:23 crc kubenswrapper[4778]: E0312 13:12:23.254612 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 13:12:24 crc kubenswrapper[4778]: I0312 13:12:24.253339 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rz9vw" Mar 12 13:12:24 crc kubenswrapper[4778]: E0312 13:12:24.253471 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rz9vw" podUID="0b59b25a-3acc-4d06-b91d-575f45463520" Mar 12 13:12:24 crc kubenswrapper[4778]: I0312 13:12:24.253524 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 13:12:24 crc kubenswrapper[4778]: E0312 13:12:24.253666 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 13:12:25 crc kubenswrapper[4778]: I0312 13:12:25.253845 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 13:12:25 crc kubenswrapper[4778]: E0312 13:12:25.253983 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 13:12:25 crc kubenswrapper[4778]: I0312 13:12:25.253866 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:12:25 crc kubenswrapper[4778]: E0312 13:12:25.255022 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 13:12:26 crc kubenswrapper[4778]: I0312 13:12:26.253901 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rz9vw" Mar 12 13:12:26 crc kubenswrapper[4778]: I0312 13:12:26.254634 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 13:12:26 crc kubenswrapper[4778]: E0312 13:12:26.254813 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rz9vw" podUID="0b59b25a-3acc-4d06-b91d-575f45463520" Mar 12 13:12:26 crc kubenswrapper[4778]: E0312 13:12:26.255005 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 13:12:27 crc kubenswrapper[4778]: I0312 13:12:27.253767 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 13:12:27 crc kubenswrapper[4778]: I0312 13:12:27.253805 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:12:27 crc kubenswrapper[4778]: E0312 13:12:27.255468 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 13:12:27 crc kubenswrapper[4778]: E0312 13:12:27.255596 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 13:12:27 crc kubenswrapper[4778]: E0312 13:12:27.384026 4778 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 13:12:28 crc kubenswrapper[4778]: I0312 13:12:28.253390 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rz9vw" Mar 12 13:12:28 crc kubenswrapper[4778]: I0312 13:12:28.253446 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 13:12:28 crc kubenswrapper[4778]: E0312 13:12:28.253614 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rz9vw" podUID="0b59b25a-3acc-4d06-b91d-575f45463520" Mar 12 13:12:28 crc kubenswrapper[4778]: E0312 13:12:28.253731 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 13:12:29 crc kubenswrapper[4778]: I0312 13:12:29.253638 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:12:29 crc kubenswrapper[4778]: I0312 13:12:29.253702 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 13:12:29 crc kubenswrapper[4778]: E0312 13:12:29.253760 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 13:12:29 crc kubenswrapper[4778]: E0312 13:12:29.253906 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 13:12:30 crc kubenswrapper[4778]: I0312 13:12:30.253581 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rz9vw" Mar 12 13:12:30 crc kubenswrapper[4778]: I0312 13:12:30.253715 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 13:12:30 crc kubenswrapper[4778]: E0312 13:12:30.255268 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rz9vw" podUID="0b59b25a-3acc-4d06-b91d-575f45463520" Mar 12 13:12:30 crc kubenswrapper[4778]: E0312 13:12:30.255393 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 13:12:30 crc kubenswrapper[4778]: I0312 13:12:30.255760 4778 scope.go:117] "RemoveContainer" containerID="5d6da6dba0e8cadf9b1073620c4856adeb6b776ae3757d420c016d25b4f98001" Mar 12 13:12:31 crc kubenswrapper[4778]: I0312 13:12:31.110020 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8bcc9_65cd795e-eb6e-4995-a4c1-9dea6f425ac5/ovnkube-controller/3.log" Mar 12 13:12:31 crc kubenswrapper[4778]: I0312 13:12:31.113610 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" event={"ID":"65cd795e-eb6e-4995-a4c1-9dea6f425ac5","Type":"ContainerStarted","Data":"9afb5c8d21c64a6b41dbded768a82ec790fb6f2f6a21efa119251504eb0c3a8d"} Mar 12 13:12:31 crc kubenswrapper[4778]: I0312 13:12:31.114091 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" Mar 12 13:12:31 crc kubenswrapper[4778]: I0312 13:12:31.139441 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g7sbv" podStartSLOduration=120.13942483 podStartE2EDuration="2m0.13942483s" podCreationTimestamp="2026-03-12 13:10:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:12:15.050499707 +0000 UTC m=+153.499195163" watchObservedRunningTime="2026-03-12 13:12:31.13942483 +0000 UTC m=+169.588120226" Mar 12 13:12:31 crc kubenswrapper[4778]: I0312 13:12:31.253682 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:12:31 crc kubenswrapper[4778]: I0312 13:12:31.253762 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 13:12:31 crc kubenswrapper[4778]: E0312 13:12:31.253864 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 13:12:31 crc kubenswrapper[4778]: E0312 13:12:31.253952 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 13:12:31 crc kubenswrapper[4778]: I0312 13:12:31.336229 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" podStartSLOduration=120.336194089 podStartE2EDuration="2m0.336194089s" podCreationTimestamp="2026-03-12 13:10:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:12:31.144558564 +0000 UTC m=+169.593253960" watchObservedRunningTime="2026-03-12 13:12:31.336194089 +0000 UTC m=+169.784889505" Mar 12 13:12:31 crc kubenswrapper[4778]: I0312 13:12:31.336964 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-rz9vw"] Mar 12 13:12:31 crc kubenswrapper[4778]: I0312 13:12:31.337069 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rz9vw" Mar 12 13:12:31 crc kubenswrapper[4778]: E0312 13:12:31.337172 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rz9vw" podUID="0b59b25a-3acc-4d06-b91d-575f45463520" Mar 12 13:12:32 crc kubenswrapper[4778]: I0312 13:12:32.254590 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 13:12:32 crc kubenswrapper[4778]: E0312 13:12:32.256453 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 13:12:32 crc kubenswrapper[4778]: E0312 13:12:32.384619 4778 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 12 13:12:33 crc kubenswrapper[4778]: I0312 13:12:33.253773 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 13:12:33 crc kubenswrapper[4778]: I0312 13:12:33.253838 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:12:33 crc kubenswrapper[4778]: I0312 13:12:33.253788 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rz9vw" Mar 12 13:12:33 crc kubenswrapper[4778]: E0312 13:12:33.253994 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 13:12:33 crc kubenswrapper[4778]: E0312 13:12:33.254102 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rz9vw" podUID="0b59b25a-3acc-4d06-b91d-575f45463520" Mar 12 13:12:33 crc kubenswrapper[4778]: E0312 13:12:33.254231 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 13:12:34 crc kubenswrapper[4778]: I0312 13:12:34.253485 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 13:12:34 crc kubenswrapper[4778]: E0312 13:12:34.253685 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 13:12:35 crc kubenswrapper[4778]: I0312 13:12:35.254050 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 13:12:35 crc kubenswrapper[4778]: I0312 13:12:35.254172 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:12:35 crc kubenswrapper[4778]: I0312 13:12:35.254229 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rz9vw" Mar 12 13:12:35 crc kubenswrapper[4778]: E0312 13:12:35.254303 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 13:12:35 crc kubenswrapper[4778]: E0312 13:12:35.254537 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rz9vw" podUID="0b59b25a-3acc-4d06-b91d-575f45463520" Mar 12 13:12:35 crc kubenswrapper[4778]: E0312 13:12:35.254442 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 13:12:36 crc kubenswrapper[4778]: I0312 13:12:36.253329 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 13:12:36 crc kubenswrapper[4778]: E0312 13:12:36.253516 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 12 13:12:37 crc kubenswrapper[4778]: I0312 13:12:37.253581 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rz9vw" Mar 12 13:12:37 crc kubenswrapper[4778]: I0312 13:12:37.253581 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:12:37 crc kubenswrapper[4778]: E0312 13:12:37.254586 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rz9vw" podUID="0b59b25a-3acc-4d06-b91d-575f45463520" Mar 12 13:12:37 crc kubenswrapper[4778]: I0312 13:12:37.253598 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 13:12:37 crc kubenswrapper[4778]: E0312 13:12:37.254688 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 12 13:12:37 crc kubenswrapper[4778]: E0312 13:12:37.254843 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 12 13:12:38 crc kubenswrapper[4778]: I0312 13:12:38.253296 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 12 13:12:38 crc kubenswrapper[4778]: I0312 13:12:38.255707 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Mar 12 13:12:38 crc kubenswrapper[4778]: I0312 13:12:38.256036 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Mar 12 13:12:39 crc kubenswrapper[4778]: I0312 13:12:39.252915 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 12 13:12:39 crc kubenswrapper[4778]: I0312 13:12:39.252940 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 12 13:12:39 crc kubenswrapper[4778]: I0312 13:12:39.252940 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rz9vw"
Mar 12 13:12:39 crc kubenswrapper[4778]: I0312 13:12:39.255007 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Mar 12 13:12:39 crc kubenswrapper[4778]: I0312 13:12:39.255317 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Mar 12 13:12:39 crc kubenswrapper[4778]: I0312 13:12:39.255537 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Mar 12 13:12:39 crc kubenswrapper[4778]: I0312 13:12:39.256995 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.495524 4778 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.545457 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dh8l6"]
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.546324 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dh8l6"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.553051 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xz42x"]
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.554173 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-pgrb5"]
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.554719 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-xz42x"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.555021 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.555357 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-pgrb5"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.555434 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.555469 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-8dkpx"]
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.555611 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.555666 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.555903 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.556086 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-8dkpx"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.571079 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.571581 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.571791 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.572456 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.572657 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.572890 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.573059 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.573325 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.573445 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.573491 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5kw4v"]
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.573552 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.573597 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.573708 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.573758 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.573792 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.573892 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.573930 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.574077 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-qxqsb"]
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.574131 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.574300 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5kw4v"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.574318 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.574465 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.574757 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.580084 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qxqsb"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.582891 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-x4bxj"]
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.585139 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.590389 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.603104 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x4bxj"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.616502 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.616839 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.617291 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.617560 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.617788 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.616890 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.616949 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.618277 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.618435 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.618573 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.618595 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.618847 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.618933 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-242cb"]
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.619299 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.619903 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.620677 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tplzm"]
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.621133 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.621283 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.621359 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dh8l6"]
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.621448 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tplzm"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.621721 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-242cb"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.631075 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.631430 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.632074 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpgxh"]
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.632617 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpgxh"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.635926 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.636098 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.636500 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.636742 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.636998 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.637220 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.640067 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.644940 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.645498 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.645750 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.646005 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.646144 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.646223 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.646382 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-vnndl"]
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.646404 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.646467 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.646999 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-mx6kn"]
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.647014 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.647145 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.647295 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.647406 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-mx6kn"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.647658 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.647715 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vnndl"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.647769 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.647872 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.647932 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.648282 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.655932 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-pgrb5"]
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.656971 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.678605 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.680251 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.680388 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.681727 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.682314 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.710070 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.710100 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tplzm"]
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.710143 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-vnndl"]
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.710262 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.710332 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.710773 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.711160 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.711530 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.711810 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.712161 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.712324 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.713827 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-x4bxj"]
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.715096 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mfjpc"]
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.715671 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mfjpc"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.715734 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-ms5xq"]
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.716268 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-ms5xq"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.716768 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mtlvl"]
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.717075 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mtlvl"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.721069 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.721328 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.721554 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.721657 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.721662 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.721613 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.721822 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.721909 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.722055 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.722207 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.722331 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.722429 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.722514 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.722683 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.722925 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.723031 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.723179 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.729001 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dqxml"]
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.729575 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dqxml"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.730874 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5kw4v"]
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.731259 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.731398 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-sskj6"]
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.731791 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.732019 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-sskj6"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.732402 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e2967620-e2ce-4763-8a6c-e5a37f3a1f98-images\") pod \"machine-api-operator-5694c8668f-242cb\" (UID: \"e2967620-e2ce-4763-8a6c-e5a37f3a1f98\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-242cb"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.732528 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/de34cf46-4b6a-4f7a-8225-eb77bec57450-node-pullsecrets\") pod \"apiserver-76f77b778f-xz42x\" (UID: \"de34cf46-4b6a-4f7a-8225-eb77bec57450\") " pod="openshift-apiserver/apiserver-76f77b778f-xz42x"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.732625 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t5m5\" (UniqueName: \"kubernetes.io/projected/53a87d9e-095f-4669-b121-0b2c88e5fabb-kube-api-access-2t5m5\") pod \"openshift-apiserver-operator-796bbdcf4f-dh8l6\" (UID: \"53a87d9e-095f-4669-b121-0b2c88e5fabb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dh8l6"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.732712 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k56zt\" (UniqueName: \"kubernetes.io/projected/06ffdff1-2f10-4f38-b7fd-b98e883bbc63-kube-api-access-k56zt\") pod \"route-controller-manager-6576b87f9c-zpgxh\" (UID: \"06ffdff1-2f10-4f38-b7fd-b98e883bbc63\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpgxh"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.732820 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-5kw4v\" (UID: \"f36ec67c-df24-46ce-94b9-10619822c15a\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kw4v"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.733626 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de34cf46-4b6a-4f7a-8225-eb77bec57450-serving-cert\") pod \"apiserver-76f77b778f-xz42x\" (UID: \"de34cf46-4b6a-4f7a-8225-eb77bec57450\") " pod="openshift-apiserver/apiserver-76f77b778f-xz42x"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.733778 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-5kw4v\" (UID: \"f36ec67c-df24-46ce-94b9-10619822c15a\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kw4v"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.733873 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de34cf46-4b6a-4f7a-8225-eb77bec57450-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xz42x\" (UID: \"de34cf46-4b6a-4f7a-8225-eb77bec57450\") " pod="openshift-apiserver/apiserver-76f77b778f-xz42x"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.733961 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06bbf7b7-3e40-4aa0-a3db-a56897f5488c-serving-cert\") pod \"controller-manager-879f6c89f-pgrb5\" (UID: \"06bbf7b7-3e40-4aa0-a3db-a56897f5488c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pgrb5"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.734041 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-5kw4v\" (UID: \"f36ec67c-df24-46ce-94b9-10619822c15a\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kw4v"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.734123 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/de34cf46-4b6a-4f7a-8225-eb77bec57450-etcd-serving-ca\") pod \"apiserver-76f77b778f-xz42x\" (UID: \"de34cf46-4b6a-4f7a-8225-eb77bec57450\") " pod="openshift-apiserver/apiserver-76f77b778f-xz42x"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.734412 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/06ffdff1-2f10-4f38-b7fd-b98e883bbc63-client-ca\") pod \"route-controller-manager-6576b87f9c-zpgxh\" (UID: \"06ffdff1-2f10-4f38-b7fd-b98e883bbc63\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpgxh"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.734505 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7xw2\" (UniqueName: \"kubernetes.io/projected/f56ab022-7fcd-406c-b308-b8d5f93a8b55-kube-api-access-b7xw2\") pod \"openshift-config-operator-7777fb866f-x4bxj\" (UID: \"f56ab022-7fcd-406c-b308-b8d5f93a8b55\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x4bxj"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.734593 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/06bbf7b7-3e40-4aa0-a3db-a56897f5488c-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-pgrb5\" (UID: \"06bbf7b7-3e40-4aa0-a3db-a56897f5488c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pgrb5"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.734726 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f36ec67c-df24-46ce-94b9-10619822c15a-audit-dir\") pod \"oauth-openshift-558db77b4-5kw4v\" (UID: \"f36ec67c-df24-46ce-94b9-10619822c15a\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kw4v"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.734797 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrlvl\" (UniqueName: \"kubernetes.io/projected/f36ec67c-df24-46ce-94b9-10619822c15a-kube-api-access-xrlvl\") pod \"oauth-openshift-558db77b4-5kw4v\" (UID: \"f36ec67c-df24-46ce-94b9-10619822c15a\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kw4v"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.732546 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-xwwxp"]
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.732596 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.734973 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0a4c9a9-348c-4271-b466-4b94f11b2c7c-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-8dkpx\" (UID: \"f0a4c9a9-348c-4271-b466-4b94f11b2c7c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8dkpx"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.735833 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06ffdff1-2f10-4f38-b7fd-b98e883bbc63-config\") pod \"route-controller-manager-6576b87f9c-zpgxh\" (UID: \"06ffdff1-2f10-4f38-b7fd-b98e883bbc63\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpgxh"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.735854 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06bbf7b7-3e40-4aa0-a3db-a56897f5488c-config\") pod \"controller-manager-879f6c89f-pgrb5\" (UID: \"06bbf7b7-3e40-4aa0-a3db-a56897f5488c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pgrb5"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.735891 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f56ab022-7fcd-406c-b308-b8d5f93a8b55-serving-cert\") pod \"openshift-config-operator-7777fb866f-x4bxj\" (UID: \"f56ab022-7fcd-406c-b308-b8d5f93a8b55\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x4bxj"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.735910 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/de34cf46-4b6a-4f7a-8225-eb77bec57450-etcd-client\") pod \"apiserver-76f77b778f-xz42x\" (UID: \"de34cf46-4b6a-4f7a-8225-eb77bec57450\") " pod="openshift-apiserver/apiserver-76f77b778f-xz42x"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.735926 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName:
\"kubernetes.io/secret/06ffdff1-2f10-4f38-b7fd-b98e883bbc63-serving-cert\") pod \"route-controller-manager-6576b87f9c-zpgxh\" (UID: \"06ffdff1-2f10-4f38-b7fd-b98e883bbc63\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpgxh" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.735960 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/4c56cc09-5f03-4bcc-a4b1-8fed0dcc49bd-machine-approver-tls\") pod \"machine-approver-56656f9798-qxqsb\" (UID: \"4c56cc09-5f03-4bcc-a4b1-8fed0dcc49bd\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qxqsb" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.735978 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/06bbf7b7-3e40-4aa0-a3db-a56897f5488c-client-ca\") pod \"controller-manager-879f6c89f-pgrb5\" (UID: \"06bbf7b7-3e40-4aa0-a3db-a56897f5488c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pgrb5" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.735998 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-5kw4v\" (UID: \"f36ec67c-df24-46ce-94b9-10619822c15a\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kw4v" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.736014 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c56cc09-5f03-4bcc-a4b1-8fed0dcc49bd-config\") pod \"machine-approver-56656f9798-qxqsb\" (UID: \"4c56cc09-5f03-4bcc-a4b1-8fed0dcc49bd\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qxqsb" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.736050 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-5kw4v\" (UID: \"f36ec67c-df24-46ce-94b9-10619822c15a\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kw4v" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.736066 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/de34cf46-4b6a-4f7a-8225-eb77bec57450-encryption-config\") pod \"apiserver-76f77b778f-xz42x\" (UID: \"de34cf46-4b6a-4f7a-8225-eb77bec57450\") " pod="openshift-apiserver/apiserver-76f77b778f-xz42x" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.736081 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/de34cf46-4b6a-4f7a-8225-eb77bec57450-audit\") pod \"apiserver-76f77b778f-xz42x\" (UID: \"de34cf46-4b6a-4f7a-8225-eb77bec57450\") " pod="openshift-apiserver/apiserver-76f77b778f-xz42x" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.736095 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0a4c9a9-348c-4271-b466-4b94f11b2c7c-service-ca-bundle\") pod \"authentication-operator-69f744f599-8dkpx\" (UID: \"f0a4c9a9-348c-4271-b466-4b94f11b2c7c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8dkpx" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.736110 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-5kw4v\" (UID: \"f36ec67c-df24-46ce-94b9-10619822c15a\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kw4v" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.736128 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njgc8\" (UniqueName: \"kubernetes.io/projected/4c56cc09-5f03-4bcc-a4b1-8fed0dcc49bd-kube-api-access-njgc8\") pod \"machine-approver-56656f9798-qxqsb\" (UID: \"4c56cc09-5f03-4bcc-a4b1-8fed0dcc49bd\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qxqsb" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.736150 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9bl9\" (UniqueName: \"kubernetes.io/projected/06bbf7b7-3e40-4aa0-a3db-a56897f5488c-kube-api-access-c9bl9\") pod \"controller-manager-879f6c89f-pgrb5\" (UID: \"06bbf7b7-3e40-4aa0-a3db-a56897f5488c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pgrb5" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.736167 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7cpc\" (UniqueName: \"kubernetes.io/projected/f0a4c9a9-348c-4271-b466-4b94f11b2c7c-kube-api-access-p7cpc\") pod \"authentication-operator-69f744f599-8dkpx\" (UID: \"f0a4c9a9-348c-4271-b466-4b94f11b2c7c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8dkpx" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.736212 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e2967620-e2ce-4763-8a6c-e5a37f3a1f98-machine-api-operator-tls\") pod 
\"machine-api-operator-5694c8668f-242cb\" (UID: \"e2967620-e2ce-4763-8a6c-e5a37f3a1f98\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-242cb" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.736233 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-5kw4v\" (UID: \"f36ec67c-df24-46ce-94b9-10619822c15a\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kw4v" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.736250 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53a87d9e-095f-4669-b121-0b2c88e5fabb-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-dh8l6\" (UID: \"53a87d9e-095f-4669-b121-0b2c88e5fabb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dh8l6" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.736298 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-5kw4v\" (UID: \"f36ec67c-df24-46ce-94b9-10619822c15a\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kw4v" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.736322 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/de34cf46-4b6a-4f7a-8225-eb77bec57450-image-import-ca\") pod \"apiserver-76f77b778f-xz42x\" (UID: \"de34cf46-4b6a-4f7a-8225-eb77bec57450\") " pod="openshift-apiserver/apiserver-76f77b778f-xz42x" Mar 12 13:12:44 crc 
kubenswrapper[4778]: I0312 13:12:44.736337 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4c56cc09-5f03-4bcc-a4b1-8fed0dcc49bd-auth-proxy-config\") pod \"machine-approver-56656f9798-qxqsb\" (UID: \"4c56cc09-5f03-4bcc-a4b1-8fed0dcc49bd\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qxqsb" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.736384 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-5kw4v\" (UID: \"f36ec67c-df24-46ce-94b9-10619822c15a\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kw4v" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.736402 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de34cf46-4b6a-4f7a-8225-eb77bec57450-config\") pod \"apiserver-76f77b778f-xz42x\" (UID: \"de34cf46-4b6a-4f7a-8225-eb77bec57450\") " pod="openshift-apiserver/apiserver-76f77b778f-xz42x" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.736418 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhhjj\" (UniqueName: \"kubernetes.io/projected/de34cf46-4b6a-4f7a-8225-eb77bec57450-kube-api-access-rhhjj\") pod \"apiserver-76f77b778f-xz42x\" (UID: \"de34cf46-4b6a-4f7a-8225-eb77bec57450\") " pod="openshift-apiserver/apiserver-76f77b778f-xz42x" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.736454 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-5kw4v\" (UID: \"f36ec67c-df24-46ce-94b9-10619822c15a\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kw4v" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.736474 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e9b292b2-1928-45d2-ad7f-8d510ebaa771-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-tplzm\" (UID: \"e9b292b2-1928-45d2-ad7f-8d510ebaa771\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tplzm" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.736493 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0a4c9a9-348c-4271-b466-4b94f11b2c7c-serving-cert\") pod \"authentication-operator-69f744f599-8dkpx\" (UID: \"f0a4c9a9-348c-4271-b466-4b94f11b2c7c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8dkpx" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.736528 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f56ab022-7fcd-406c-b308-b8d5f93a8b55-available-featuregates\") pod \"openshift-config-operator-7777fb866f-x4bxj\" (UID: \"f56ab022-7fcd-406c-b308-b8d5f93a8b55\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x4bxj" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.736548 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-5kw4v\" (UID: 
\"f36ec67c-df24-46ce-94b9-10619822c15a\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kw4v" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.736565 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/de34cf46-4b6a-4f7a-8225-eb77bec57450-audit-dir\") pod \"apiserver-76f77b778f-xz42x\" (UID: \"de34cf46-4b6a-4f7a-8225-eb77bec57450\") " pod="openshift-apiserver/apiserver-76f77b778f-xz42x" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.736570 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-k6dcl"] Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.736580 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0a4c9a9-348c-4271-b466-4b94f11b2c7c-config\") pod \"authentication-operator-69f744f599-8dkpx\" (UID: \"f0a4c9a9-348c-4271-b466-4b94f11b2c7c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8dkpx" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.736619 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smq82\" (UniqueName: \"kubernetes.io/projected/e9b292b2-1928-45d2-ad7f-8d510ebaa771-kube-api-access-smq82\") pod \"cluster-samples-operator-665b6dd947-tplzm\" (UID: \"e9b292b2-1928-45d2-ad7f-8d510ebaa771\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tplzm" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.736635 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f36ec67c-df24-46ce-94b9-10619822c15a-audit-policies\") pod \"oauth-openshift-558db77b4-5kw4v\" (UID: \"f36ec67c-df24-46ce-94b9-10619822c15a\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-5kw4v" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.736650 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2967620-e2ce-4763-8a6c-e5a37f3a1f98-config\") pod \"machine-api-operator-5694c8668f-242cb\" (UID: \"e2967620-e2ce-4763-8a6c-e5a37f3a1f98\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-242cb" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.736700 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53a87d9e-095f-4669-b121-0b2c88e5fabb-config\") pod \"openshift-apiserver-operator-796bbdcf4f-dh8l6\" (UID: \"53a87d9e-095f-4669-b121-0b2c88e5fabb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dh8l6" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.736716 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkkl7\" (UniqueName: \"kubernetes.io/projected/e2967620-e2ce-4763-8a6c-e5a37f3a1f98-kube-api-access-pkkl7\") pod \"machine-api-operator-5694c8668f-242cb\" (UID: \"e2967620-e2ce-4763-8a6c-e5a37f3a1f98\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-242cb" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.737014 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-ww8lt"] Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.737067 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-xwwxp" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.737282 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k6dcl" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.737447 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q677m"] Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.737642 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-ww8lt" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.737763 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q677m" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.737926 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zkrqr"] Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.738406 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zkrqr" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.739631 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2wqm5"] Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.739966 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qtkq6"] Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.740177 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2wqm5" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.740309 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qtkq6" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.743949 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kc7s7"] Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.744461 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kc7s7" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.745844 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-98lbj"] Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.746395 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-98lbj" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.746712 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5m8sg"] Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.747497 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5m8sg" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.748552 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.749933 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-vpp8t"] Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.750311 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-vpp8t" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.751038 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wxkb2"] Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.752359 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wxkb2" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.753763 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xz42x"] Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.755276 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-r2r62"] Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.755915 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-x26ck"] Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.756441 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x26ck" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.756468 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r2r62" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.756627 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fxrx4"] Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.756985 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.757524 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2z5gg"] Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.758405 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-2z5gg" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.758636 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-bdcvl"] Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.759368 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-bdcvl" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.759697 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555340-7tvjm"] Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.760199 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555340-7tvjm" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.760752 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-srhvx"] Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.761317 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-srhvx" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.761854 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xcfg6"] Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.763438 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-qf4nv"] Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.763565 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xcfg6" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.764924 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555352-q7fvr"] Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.765069 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-qf4nv" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.765375 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555352-q7fvr" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.770631 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.771611 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-pg48j"] Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.772938 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-pg48j" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.781917 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mfjpc"] Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.785003 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2wqm5"] Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.788480 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-xwwxp"] Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.789292 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.790056 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dqxml"] Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.791111 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-8dkpx"] Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.792105 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-ww8lt"] Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.793139 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-k6dcl"] Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.794145 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mtlvl"] Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.795204 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-api/machine-api-operator-5694c8668f-242cb"] Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.796224 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpgxh"] Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.797236 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5m8sg"] Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.798263 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-vpp8t"] Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.799303 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2z5gg"] Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.800606 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-mx6kn"] Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.801706 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-98lbj"] Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.803598 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-8zmxq"] Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.804510 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-8zmxq" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.804564 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-d562t"] Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.805235 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-d562t" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.805568 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xcfg6"] Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.806837 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-sskj6"] Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.807862 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-8zmxq"] Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.808864 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555340-7tvjm"] Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.809422 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.809955 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-bdcvl"] Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.811246 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zkrqr"] Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.812135 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q677m"] Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.813217 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-d562t"] Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.814494 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fxrx4"] Mar 12 13:12:44 crc kubenswrapper[4778]: 
I0312 13:12:44.815804 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-pg48j"] Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.817235 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-r2r62"] Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.818289 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-x26ck"] Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.819507 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kc7s7"] Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.820602 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qtkq6"] Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.822089 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wxkb2"] Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.823444 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-srhvx"] Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.824630 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555352-q7fvr"] Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.827057 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-cp2lw"] Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.828118 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-cp2lw" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.828328 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-cp2lw"] Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.828463 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.838233 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9bl9\" (UniqueName: \"kubernetes.io/projected/06bbf7b7-3e40-4aa0-a3db-a56897f5488c-kube-api-access-c9bl9\") pod \"controller-manager-879f6c89f-pgrb5\" (UID: \"06bbf7b7-3e40-4aa0-a3db-a56897f5488c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pgrb5" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.838277 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7cpc\" (UniqueName: \"kubernetes.io/projected/f0a4c9a9-348c-4271-b466-4b94f11b2c7c-kube-api-access-p7cpc\") pod \"authentication-operator-69f744f599-8dkpx\" (UID: \"f0a4c9a9-348c-4271-b466-4b94f11b2c7c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8dkpx" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.838306 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98vx9\" (UniqueName: \"kubernetes.io/projected/84bb574a-c91e-4720-83c6-6c47c9344ad2-kube-api-access-98vx9\") pod \"openshift-controller-manager-operator-756b6f6bc6-mtlvl\" (UID: \"84bb574a-c91e-4720-83c6-6c47c9344ad2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mtlvl" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.838328 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e2967620-e2ce-4763-8a6c-e5a37f3a1f98-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-242cb\" (UID: \"e2967620-e2ce-4763-8a6c-e5a37f3a1f98\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-242cb" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.838347 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqbjv\" (UniqueName: \"kubernetes.io/projected/c825022c-79bc-44ae-bc64-ee9614aafe25-kube-api-access-rqbjv\") pod \"console-f9d7485db-xwwxp\" (UID: \"c825022c-79bc-44ae-bc64-ee9614aafe25\") " pod="openshift-console/console-f9d7485db-xwwxp" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.838372 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-5kw4v\" (UID: \"f36ec67c-df24-46ce-94b9-10619822c15a\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kw4v" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.838400 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/24f4aaf5-c17b-4cd8-9284-6df37f1c2f2d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2wqm5\" (UID: \"24f4aaf5-c17b-4cd8-9284-6df37f1c2f2d\") " pod="openshift-marketplace/marketplace-operator-79b997595-2wqm5" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.838434 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53a87d9e-095f-4669-b121-0b2c88e5fabb-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-dh8l6\" (UID: \"53a87d9e-095f-4669-b121-0b2c88e5fabb\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dh8l6" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.838462 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/78cc82c7-719e-43ad-926f-a387e0845219-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-q677m\" (UID: \"78cc82c7-719e-43ad-926f-a387e0845219\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q677m" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.838503 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-5kw4v\" (UID: \"f36ec67c-df24-46ce-94b9-10619822c15a\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kw4v" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.838527 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bb00a46-7425-4d14-a10c-779a5036bba6-audit-policies\") pod \"apiserver-7bbb656c7d-vnndl\" (UID: \"5bb00a46-7425-4d14-a10c-779a5036bba6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vnndl" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.838565 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/de34cf46-4b6a-4f7a-8225-eb77bec57450-image-import-ca\") pod \"apiserver-76f77b778f-xz42x\" (UID: \"de34cf46-4b6a-4f7a-8225-eb77bec57450\") " pod="openshift-apiserver/apiserver-76f77b778f-xz42x" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.838590 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" 
(UniqueName: \"kubernetes.io/configmap/4c56cc09-5f03-4bcc-a4b1-8fed0dcc49bd-auth-proxy-config\") pod \"machine-approver-56656f9798-qxqsb\" (UID: \"4c56cc09-5f03-4bcc-a4b1-8fed0dcc49bd\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qxqsb" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.838624 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-5kw4v\" (UID: \"f36ec67c-df24-46ce-94b9-10619822c15a\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kw4v" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.838648 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c825022c-79bc-44ae-bc64-ee9614aafe25-console-serving-cert\") pod \"console-f9d7485db-xwwxp\" (UID: \"c825022c-79bc-44ae-bc64-ee9614aafe25\") " pod="openshift-console/console-f9d7485db-xwwxp" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.838675 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c825022c-79bc-44ae-bc64-ee9614aafe25-service-ca\") pod \"console-f9d7485db-xwwxp\" (UID: \"c825022c-79bc-44ae-bc64-ee9614aafe25\") " pod="openshift-console/console-f9d7485db-xwwxp" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.838702 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de34cf46-4b6a-4f7a-8225-eb77bec57450-config\") pod \"apiserver-76f77b778f-xz42x\" (UID: \"de34cf46-4b6a-4f7a-8225-eb77bec57450\") " pod="openshift-apiserver/apiserver-76f77b778f-xz42x" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.838726 4778 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-rhhjj\" (UniqueName: \"kubernetes.io/projected/de34cf46-4b6a-4f7a-8225-eb77bec57450-kube-api-access-rhhjj\") pod \"apiserver-76f77b778f-xz42x\" (UID: \"de34cf46-4b6a-4f7a-8225-eb77bec57450\") " pod="openshift-apiserver/apiserver-76f77b778f-xz42x" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.838747 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-5kw4v\" (UID: \"f36ec67c-df24-46ce-94b9-10619822c15a\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kw4v" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.838771 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bb00a46-7425-4d14-a10c-779a5036bba6-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-vnndl\" (UID: \"5bb00a46-7425-4d14-a10c-779a5036bba6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vnndl" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.838793 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/78cc82c7-719e-43ad-926f-a387e0845219-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-q677m\" (UID: \"78cc82c7-719e-43ad-926f-a387e0845219\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q677m" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.838822 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e9b292b2-1928-45d2-ad7f-8d510ebaa771-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-tplzm\" (UID: 
\"e9b292b2-1928-45d2-ad7f-8d510ebaa771\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tplzm" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.838852 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0a4c9a9-348c-4271-b466-4b94f11b2c7c-serving-cert\") pod \"authentication-operator-69f744f599-8dkpx\" (UID: \"f0a4c9a9-348c-4271-b466-4b94f11b2c7c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8dkpx" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.838878 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f56ab022-7fcd-406c-b308-b8d5f93a8b55-available-featuregates\") pod \"openshift-config-operator-7777fb866f-x4bxj\" (UID: \"f56ab022-7fcd-406c-b308-b8d5f93a8b55\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x4bxj" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.838902 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-5kw4v\" (UID: \"f36ec67c-df24-46ce-94b9-10619822c15a\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kw4v" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.838928 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c825022c-79bc-44ae-bc64-ee9614aafe25-console-config\") pod \"console-f9d7485db-xwwxp\" (UID: \"c825022c-79bc-44ae-bc64-ee9614aafe25\") " pod="openshift-console/console-f9d7485db-xwwxp" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.838949 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84bb574a-c91e-4720-83c6-6c47c9344ad2-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-mtlvl\" (UID: \"84bb574a-c91e-4720-83c6-6c47c9344ad2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mtlvl" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.838968 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk6h6\" (UniqueName: \"kubernetes.io/projected/5bb00a46-7425-4d14-a10c-779a5036bba6-kube-api-access-rk6h6\") pod \"apiserver-7bbb656c7d-vnndl\" (UID: \"5bb00a46-7425-4d14-a10c-779a5036bba6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vnndl" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.838985 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/78cc82c7-719e-43ad-926f-a387e0845219-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-q677m\" (UID: \"78cc82c7-719e-43ad-926f-a387e0845219\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q677m" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.839005 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/de34cf46-4b6a-4f7a-8225-eb77bec57450-audit-dir\") pod \"apiserver-76f77b778f-xz42x\" (UID: \"de34cf46-4b6a-4f7a-8225-eb77bec57450\") " pod="openshift-apiserver/apiserver-76f77b778f-xz42x" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.839025 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5c8d947a-b62b-4eb9-81d7-94530285e8dc-stats-auth\") pod \"router-default-5444994796-ms5xq\" (UID: \"5c8d947a-b62b-4eb9-81d7-94530285e8dc\") " 
pod="openshift-ingress/router-default-5444994796-ms5xq" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.839045 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22194d8c-315e-46b9-a23b-daab9d020ce4-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-sskj6\" (UID: \"22194d8c-315e-46b9-a23b-daab9d020ce4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-sskj6" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.839063 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f57a417a-5175-4210-98a0-69e579c22e14-config\") pod \"console-operator-58897d9998-ww8lt\" (UID: \"f57a417a-5175-4210-98a0-69e579c22e14\") " pod="openshift-console-operator/console-operator-58897d9998-ww8lt" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.839083 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0a4c9a9-348c-4271-b466-4b94f11b2c7c-config\") pod \"authentication-operator-69f744f599-8dkpx\" (UID: \"f0a4c9a9-348c-4271-b466-4b94f11b2c7c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8dkpx" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.839114 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c320d1aa-c376-41f2-ac5a-8432120b68e0-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-kc7s7\" (UID: \"c320d1aa-c376-41f2-ac5a-8432120b68e0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kc7s7" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.839133 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-2b9j4\" (UniqueName: \"kubernetes.io/projected/8af48f77-25f7-49ca-8bcb-2481aa72ee66-kube-api-access-2b9j4\") pod \"downloads-7954f5f757-mx6kn\" (UID: \"8af48f77-25f7-49ca-8bcb-2481aa72ee66\") " pod="openshift-console/downloads-7954f5f757-mx6kn" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.839152 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22194d8c-315e-46b9-a23b-daab9d020ce4-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-sskj6\" (UID: \"22194d8c-315e-46b9-a23b-daab9d020ce4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-sskj6" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.839169 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/24f4aaf5-c17b-4cd8-9284-6df37f1c2f2d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2wqm5\" (UID: \"24f4aaf5-c17b-4cd8-9284-6df37f1c2f2d\") " pod="openshift-marketplace/marketplace-operator-79b997595-2wqm5" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.839215 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smq82\" (UniqueName: \"kubernetes.io/projected/e9b292b2-1928-45d2-ad7f-8d510ebaa771-kube-api-access-smq82\") pod \"cluster-samples-operator-665b6dd947-tplzm\" (UID: \"e9b292b2-1928-45d2-ad7f-8d510ebaa771\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tplzm" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.839236 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f36ec67c-df24-46ce-94b9-10619822c15a-audit-policies\") pod \"oauth-openshift-558db77b4-5kw4v\" (UID: \"f36ec67c-df24-46ce-94b9-10619822c15a\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-5kw4v" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.839258 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f57a417a-5175-4210-98a0-69e579c22e14-serving-cert\") pod \"console-operator-58897d9998-ww8lt\" (UID: \"f57a417a-5175-4210-98a0-69e579c22e14\") " pod="openshift-console-operator/console-operator-58897d9998-ww8lt" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.839278 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2967620-e2ce-4763-8a6c-e5a37f3a1f98-config\") pod \"machine-api-operator-5694c8668f-242cb\" (UID: \"e2967620-e2ce-4763-8a6c-e5a37f3a1f98\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-242cb" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.839296 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3373fbdf-245c-4e98-8bd7-7ad30eb98d76-profile-collector-cert\") pod \"olm-operator-6b444d44fb-mfjpc\" (UID: \"3373fbdf-245c-4e98-8bd7-7ad30eb98d76\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mfjpc" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.839315 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f57a417a-5175-4210-98a0-69e579c22e14-trusted-ca\") pod \"console-operator-58897d9998-ww8lt\" (UID: \"f57a417a-5175-4210-98a0-69e579c22e14\") " pod="openshift-console-operator/console-operator-58897d9998-ww8lt" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.839342 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/8f4e3ccc-83e5-40ae-bac2-a5bb1362a531-proxy-tls\") pod \"machine-config-operator-74547568cd-k6dcl\" (UID: \"8f4e3ccc-83e5-40ae-bac2-a5bb1362a531\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k6dcl" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.839360 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfpd8\" (UniqueName: \"kubernetes.io/projected/c320d1aa-c376-41f2-ac5a-8432120b68e0-kube-api-access-lfpd8\") pod \"package-server-manager-789f6589d5-kc7s7\" (UID: \"c320d1aa-c376-41f2-ac5a-8432120b68e0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kc7s7" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.839386 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53a87d9e-095f-4669-b121-0b2c88e5fabb-config\") pod \"openshift-apiserver-operator-796bbdcf4f-dh8l6\" (UID: \"53a87d9e-095f-4669-b121-0b2c88e5fabb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dh8l6" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.839405 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkkl7\" (UniqueName: \"kubernetes.io/projected/e2967620-e2ce-4763-8a6c-e5a37f3a1f98-kube-api-access-pkkl7\") pod \"machine-api-operator-5694c8668f-242cb\" (UID: \"e2967620-e2ce-4763-8a6c-e5a37f3a1f98\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-242cb" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.839411 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-5kw4v\" (UID: \"f36ec67c-df24-46ce-94b9-10619822c15a\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-5kw4v" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.839423 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx5wn\" (UniqueName: \"kubernetes.io/projected/5c8d947a-b62b-4eb9-81d7-94530285e8dc-kube-api-access-bx5wn\") pod \"router-default-5444994796-ms5xq\" (UID: \"5c8d947a-b62b-4eb9-81d7-94530285e8dc\") " pod="openshift-ingress/router-default-5444994796-ms5xq" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.839472 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/de34cf46-4b6a-4f7a-8225-eb77bec57450-audit-dir\") pod \"apiserver-76f77b778f-xz42x\" (UID: \"de34cf46-4b6a-4f7a-8225-eb77bec57450\") " pod="openshift-apiserver/apiserver-76f77b778f-xz42x" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.839511 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84bb574a-c91e-4720-83c6-6c47c9344ad2-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-mtlvl\" (UID: \"84bb574a-c91e-4720-83c6-6c47c9344ad2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mtlvl" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.839535 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/de34cf46-4b6a-4f7a-8225-eb77bec57450-image-import-ca\") pod \"apiserver-76f77b778f-xz42x\" (UID: \"de34cf46-4b6a-4f7a-8225-eb77bec57450\") " pod="openshift-apiserver/apiserver-76f77b778f-xz42x" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.839584 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-5kw4v\" (UID: \"f36ec67c-df24-46ce-94b9-10619822c15a\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kw4v" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.839614 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e2967620-e2ce-4763-8a6c-e5a37f3a1f98-images\") pod \"machine-api-operator-5694c8668f-242cb\" (UID: \"e2967620-e2ce-4763-8a6c-e5a37f3a1f98\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-242cb" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.839684 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/de34cf46-4b6a-4f7a-8225-eb77bec57450-node-pullsecrets\") pod \"apiserver-76f77b778f-xz42x\" (UID: \"de34cf46-4b6a-4f7a-8225-eb77bec57450\") " pod="openshift-apiserver/apiserver-76f77b778f-xz42x" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.839703 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2t5m5\" (UniqueName: \"kubernetes.io/projected/53a87d9e-095f-4669-b121-0b2c88e5fabb-kube-api-access-2t5m5\") pod \"openshift-apiserver-operator-796bbdcf4f-dh8l6\" (UID: \"53a87d9e-095f-4669-b121-0b2c88e5fabb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dh8l6" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.839726 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5gts\" (UniqueName: \"kubernetes.io/projected/78cc82c7-719e-43ad-926f-a387e0845219-kube-api-access-n5gts\") pod \"cluster-image-registry-operator-dc59b4c8b-q677m\" (UID: \"78cc82c7-719e-43ad-926f-a387e0845219\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q677m" Mar 12 
13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.840288 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4c56cc09-5f03-4bcc-a4b1-8fed0dcc49bd-auth-proxy-config\") pod \"machine-approver-56656f9798-qxqsb\" (UID: \"4c56cc09-5f03-4bcc-a4b1-8fed0dcc49bd\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qxqsb" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.840459 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53a87d9e-095f-4669-b121-0b2c88e5fabb-config\") pod \"openshift-apiserver-operator-796bbdcf4f-dh8l6\" (UID: \"53a87d9e-095f-4669-b121-0b2c88e5fabb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dh8l6" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.840606 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0a4c9a9-348c-4271-b466-4b94f11b2c7c-config\") pod \"authentication-operator-69f744f599-8dkpx\" (UID: \"f0a4c9a9-348c-4271-b466-4b94f11b2c7c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8dkpx" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.840745 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de34cf46-4b6a-4f7a-8225-eb77bec57450-config\") pod \"apiserver-76f77b778f-xz42x\" (UID: \"de34cf46-4b6a-4f7a-8225-eb77bec57450\") " pod="openshift-apiserver/apiserver-76f77b778f-xz42x" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.840762 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-5kw4v\" (UID: \"f36ec67c-df24-46ce-94b9-10619822c15a\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-5kw4v" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.840789 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/de34cf46-4b6a-4f7a-8225-eb77bec57450-node-pullsecrets\") pod \"apiserver-76f77b778f-xz42x\" (UID: \"de34cf46-4b6a-4f7a-8225-eb77bec57450\") " pod="openshift-apiserver/apiserver-76f77b778f-xz42x" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.840854 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmb4r\" (UniqueName: \"kubernetes.io/projected/24f4aaf5-c17b-4cd8-9284-6df37f1c2f2d-kube-api-access-pmb4r\") pod \"marketplace-operator-79b997595-2wqm5\" (UID: \"24f4aaf5-c17b-4cd8-9284-6df37f1c2f2d\") " pod="openshift-marketplace/marketplace-operator-79b997595-2wqm5" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.840897 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k56zt\" (UniqueName: \"kubernetes.io/projected/06ffdff1-2f10-4f38-b7fd-b98e883bbc63-kube-api-access-k56zt\") pod \"route-controller-manager-6576b87f9c-zpgxh\" (UID: \"06ffdff1-2f10-4f38-b7fd-b98e883bbc63\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpgxh" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.840952 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-5kw4v\" (UID: \"f36ec67c-df24-46ce-94b9-10619822c15a\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kw4v" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.841020 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/e2967620-e2ce-4763-8a6c-e5a37f3a1f98-images\") pod \"machine-api-operator-5694c8668f-242cb\" (UID: \"e2967620-e2ce-4763-8a6c-e5a37f3a1f98\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-242cb" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.841096 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f36ec67c-df24-46ce-94b9-10619822c15a-audit-policies\") pod \"oauth-openshift-558db77b4-5kw4v\" (UID: \"f36ec67c-df24-46ce-94b9-10619822c15a\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kw4v" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.841352 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de34cf46-4b6a-4f7a-8225-eb77bec57450-serving-cert\") pod \"apiserver-76f77b778f-xz42x\" (UID: \"de34cf46-4b6a-4f7a-8225-eb77bec57450\") " pod="openshift-apiserver/apiserver-76f77b778f-xz42x" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.841441 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-5kw4v\" (UID: \"f36ec67c-df24-46ce-94b9-10619822c15a\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kw4v" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.841605 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2967620-e2ce-4763-8a6c-e5a37f3a1f98-config\") pod \"machine-api-operator-5694c8668f-242cb\" (UID: \"e2967620-e2ce-4763-8a6c-e5a37f3a1f98\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-242cb" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.841654 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f56ab022-7fcd-406c-b308-b8d5f93a8b55-available-featuregates\") pod \"openshift-config-operator-7777fb866f-x4bxj\" (UID: \"f56ab022-7fcd-406c-b308-b8d5f93a8b55\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x4bxj" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.841805 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bb00a46-7425-4d14-a10c-779a5036bba6-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-vnndl\" (UID: \"5bb00a46-7425-4d14-a10c-779a5036bba6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vnndl" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.841944 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c825022c-79bc-44ae-bc64-ee9614aafe25-console-oauth-config\") pod \"console-f9d7485db-xwwxp\" (UID: \"c825022c-79bc-44ae-bc64-ee9614aafe25\") " pod="openshift-console/console-f9d7485db-xwwxp" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.842038 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c8d947a-b62b-4eb9-81d7-94530285e8dc-service-ca-bundle\") pod \"router-default-5444994796-ms5xq\" (UID: \"5c8d947a-b62b-4eb9-81d7-94530285e8dc\") " pod="openshift-ingress/router-default-5444994796-ms5xq" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.842143 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de34cf46-4b6a-4f7a-8225-eb77bec57450-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xz42x\" (UID: \"de34cf46-4b6a-4f7a-8225-eb77bec57450\") " pod="openshift-apiserver/apiserver-76f77b778f-xz42x" Mar 12 13:12:44 crc 
kubenswrapper[4778]: I0312 13:12:44.842244 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06bbf7b7-3e40-4aa0-a3db-a56897f5488c-serving-cert\") pod \"controller-manager-879f6c89f-pgrb5\" (UID: \"06bbf7b7-3e40-4aa0-a3db-a56897f5488c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pgrb5" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.842378 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ac3e8bc-e165-45d4-8c32-1ccda9769857-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qtkq6\" (UID: \"0ac3e8bc-e165-45d4-8c32-1ccda9769857\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qtkq6" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.842473 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5c8d947a-b62b-4eb9-81d7-94530285e8dc-metrics-certs\") pod \"router-default-5444994796-ms5xq\" (UID: \"5c8d947a-b62b-4eb9-81d7-94530285e8dc\") " pod="openshift-ingress/router-default-5444994796-ms5xq" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.842553 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-5kw4v\" (UID: \"f36ec67c-df24-46ce-94b9-10619822c15a\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kw4v" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.842635 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bb00a46-7425-4d14-a10c-779a5036bba6-etcd-client\") pod 
\"apiserver-7bbb656c7d-vnndl\" (UID: \"5bb00a46-7425-4d14-a10c-779a5036bba6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vnndl" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.842718 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8f4e3ccc-83e5-40ae-bac2-a5bb1362a531-auth-proxy-config\") pod \"machine-config-operator-74547568cd-k6dcl\" (UID: \"8f4e3ccc-83e5-40ae-bac2-a5bb1362a531\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k6dcl" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.842804 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/de34cf46-4b6a-4f7a-8225-eb77bec57450-etcd-serving-ca\") pod \"apiserver-76f77b778f-xz42x\" (UID: \"de34cf46-4b6a-4f7a-8225-eb77bec57450\") " pod="openshift-apiserver/apiserver-76f77b778f-xz42x" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.842877 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/06ffdff1-2f10-4f38-b7fd-b98e883bbc63-client-ca\") pod \"route-controller-manager-6576b87f9c-zpgxh\" (UID: \"06ffdff1-2f10-4f38-b7fd-b98e883bbc63\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpgxh" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.842959 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7xw2\" (UniqueName: \"kubernetes.io/projected/f56ab022-7fcd-406c-b308-b8d5f93a8b55-kube-api-access-b7xw2\") pod \"openshift-config-operator-7777fb866f-x4bxj\" (UID: \"f56ab022-7fcd-406c-b308-b8d5f93a8b55\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x4bxj" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.843034 4778 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/06bbf7b7-3e40-4aa0-a3db-a56897f5488c-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-pgrb5\" (UID: \"06bbf7b7-3e40-4aa0-a3db-a56897f5488c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pgrb5" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.843145 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9vx5\" (UniqueName: \"kubernetes.io/projected/8f4e3ccc-83e5-40ae-bac2-a5bb1362a531-kube-api-access-t9vx5\") pod \"machine-config-operator-74547568cd-k6dcl\" (UID: \"8f4e3ccc-83e5-40ae-bac2-a5bb1362a531\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k6dcl" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.843280 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de34cf46-4b6a-4f7a-8225-eb77bec57450-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xz42x\" (UID: \"de34cf46-4b6a-4f7a-8225-eb77bec57450\") " pod="openshift-apiserver/apiserver-76f77b778f-xz42x" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.843360 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3373fbdf-245c-4e98-8bd7-7ad30eb98d76-srv-cert\") pod \"olm-operator-6b444d44fb-mfjpc\" (UID: \"3373fbdf-245c-4e98-8bd7-7ad30eb98d76\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mfjpc" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.843422 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/de34cf46-4b6a-4f7a-8225-eb77bec57450-etcd-serving-ca\") pod \"apiserver-76f77b778f-xz42x\" (UID: \"de34cf46-4b6a-4f7a-8225-eb77bec57450\") " 
pod="openshift-apiserver/apiserver-76f77b778f-xz42x" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.843510 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22194d8c-315e-46b9-a23b-daab9d020ce4-config\") pod \"kube-apiserver-operator-766d6c64bb-sskj6\" (UID: \"22194d8c-315e-46b9-a23b-daab9d020ce4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-sskj6" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.843594 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsrcs\" (UniqueName: \"kubernetes.io/projected/f57a417a-5175-4210-98a0-69e579c22e14-kube-api-access-hsrcs\") pod \"console-operator-58897d9998-ww8lt\" (UID: \"f57a417a-5175-4210-98a0-69e579c22e14\") " pod="openshift-console-operator/console-operator-58897d9998-ww8lt" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.843700 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f36ec67c-df24-46ce-94b9-10619822c15a-audit-dir\") pod \"oauth-openshift-558db77b4-5kw4v\" (UID: \"f36ec67c-df24-46ce-94b9-10619822c15a\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kw4v" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.843766 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/06ffdff1-2f10-4f38-b7fd-b98e883bbc63-client-ca\") pod \"route-controller-manager-6576b87f9c-zpgxh\" (UID: \"06ffdff1-2f10-4f38-b7fd-b98e883bbc63\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpgxh" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.843778 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrlvl\" (UniqueName: 
\"kubernetes.io/projected/f36ec67c-df24-46ce-94b9-10619822c15a-kube-api-access-xrlvl\") pod \"oauth-openshift-558db77b4-5kw4v\" (UID: \"f36ec67c-df24-46ce-94b9-10619822c15a\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kw4v" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.844145 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0a4c9a9-348c-4271-b466-4b94f11b2c7c-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-8dkpx\" (UID: \"f0a4c9a9-348c-4271-b466-4b94f11b2c7c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8dkpx" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.844179 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06ffdff1-2f10-4f38-b7fd-b98e883bbc63-config\") pod \"route-controller-manager-6576b87f9c-zpgxh\" (UID: \"06ffdff1-2f10-4f38-b7fd-b98e883bbc63\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpgxh" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.844231 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bb00a46-7425-4d14-a10c-779a5036bba6-serving-cert\") pod \"apiserver-7bbb656c7d-vnndl\" (UID: \"5bb00a46-7425-4d14-a10c-779a5036bba6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vnndl" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.844255 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c825022c-79bc-44ae-bc64-ee9614aafe25-trusted-ca-bundle\") pod \"console-f9d7485db-xwwxp\" (UID: \"c825022c-79bc-44ae-bc64-ee9614aafe25\") " pod="openshift-console/console-f9d7485db-xwwxp" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.844281 
4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0d33ee6-3a31-4464-b401-7469bf04d240-config\") pod \"kube-controller-manager-operator-78b949d7b-dqxml\" (UID: \"a0d33ee6-3a31-4464-b401-7469bf04d240\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dqxml" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.844285 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/06bbf7b7-3e40-4aa0-a3db-a56897f5488c-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-pgrb5\" (UID: \"06bbf7b7-3e40-4aa0-a3db-a56897f5488c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pgrb5" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.844311 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06bbf7b7-3e40-4aa0-a3db-a56897f5488c-config\") pod \"controller-manager-879f6c89f-pgrb5\" (UID: \"06bbf7b7-3e40-4aa0-a3db-a56897f5488c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pgrb5" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.844337 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a0d33ee6-3a31-4464-b401-7469bf04d240-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-dqxml\" (UID: \"a0d33ee6-3a31-4464-b401-7469bf04d240\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dqxml" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.844368 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f56ab022-7fcd-406c-b308-b8d5f93a8b55-serving-cert\") pod 
\"openshift-config-operator-7777fb866f-x4bxj\" (UID: \"f56ab022-7fcd-406c-b308-b8d5f93a8b55\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x4bxj" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.844393 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8f4e3ccc-83e5-40ae-bac2-a5bb1362a531-images\") pod \"machine-config-operator-74547568cd-k6dcl\" (UID: \"8f4e3ccc-83e5-40ae-bac2-a5bb1362a531\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k6dcl" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.844421 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/de34cf46-4b6a-4f7a-8225-eb77bec57450-etcd-client\") pod \"apiserver-76f77b778f-xz42x\" (UID: \"de34cf46-4b6a-4f7a-8225-eb77bec57450\") " pod="openshift-apiserver/apiserver-76f77b778f-xz42x" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.844445 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06ffdff1-2f10-4f38-b7fd-b98e883bbc63-serving-cert\") pod \"route-controller-manager-6576b87f9c-zpgxh\" (UID: \"06ffdff1-2f10-4f38-b7fd-b98e883bbc63\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpgxh" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.844469 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/4c56cc09-5f03-4bcc-a4b1-8fed0dcc49bd-machine-approver-tls\") pod \"machine-approver-56656f9798-qxqsb\" (UID: \"4c56cc09-5f03-4bcc-a4b1-8fed0dcc49bd\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qxqsb" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.844496 4778 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/06bbf7b7-3e40-4aa0-a3db-a56897f5488c-client-ca\") pod \"controller-manager-879f6c89f-pgrb5\" (UID: \"06bbf7b7-3e40-4aa0-a3db-a56897f5488c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pgrb5" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.844523 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5c8d947a-b62b-4eb9-81d7-94530285e8dc-default-certificate\") pod \"router-default-5444994796-ms5xq\" (UID: \"5c8d947a-b62b-4eb9-81d7-94530285e8dc\") " pod="openshift-ingress/router-default-5444994796-ms5xq" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.844553 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-5kw4v\" (UID: \"f36ec67c-df24-46ce-94b9-10619822c15a\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kw4v" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.844581 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c56cc09-5f03-4bcc-a4b1-8fed0dcc49bd-config\") pod \"machine-approver-56656f9798-qxqsb\" (UID: \"4c56cc09-5f03-4bcc-a4b1-8fed0dcc49bd\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qxqsb" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.844607 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bb00a46-7425-4d14-a10c-779a5036bba6-encryption-config\") pod \"apiserver-7bbb656c7d-vnndl\" (UID: \"5bb00a46-7425-4d14-a10c-779a5036bba6\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vnndl" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.844634 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cl6l\" (UniqueName: \"kubernetes.io/projected/f799c7e9-1c31-40bc-9ece-06a086683a98-kube-api-access-6cl6l\") pod \"control-plane-machine-set-operator-78cbb6b69f-zkrqr\" (UID: \"f799c7e9-1c31-40bc-9ece-06a086683a98\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zkrqr" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.844668 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-5kw4v\" (UID: \"f36ec67c-df24-46ce-94b9-10619822c15a\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kw4v" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.844695 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ac3e8bc-e165-45d4-8c32-1ccda9769857-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qtkq6\" (UID: \"0ac3e8bc-e165-45d4-8c32-1ccda9769857\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qtkq6" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.844722 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/de34cf46-4b6a-4f7a-8225-eb77bec57450-encryption-config\") pod \"apiserver-76f77b778f-xz42x\" (UID: \"de34cf46-4b6a-4f7a-8225-eb77bec57450\") " pod="openshift-apiserver/apiserver-76f77b778f-xz42x" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.844749 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ac3e8bc-e165-45d4-8c32-1ccda9769857-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qtkq6\" (UID: \"0ac3e8bc-e165-45d4-8c32-1ccda9769857\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qtkq6" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.844773 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c825022c-79bc-44ae-bc64-ee9614aafe25-oauth-serving-cert\") pod \"console-f9d7485db-xwwxp\" (UID: \"c825022c-79bc-44ae-bc64-ee9614aafe25\") " pod="openshift-console/console-f9d7485db-xwwxp" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.844807 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/de34cf46-4b6a-4f7a-8225-eb77bec57450-audit\") pod \"apiserver-76f77b778f-xz42x\" (UID: \"de34cf46-4b6a-4f7a-8225-eb77bec57450\") " pod="openshift-apiserver/apiserver-76f77b778f-xz42x" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.844832 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f799c7e9-1c31-40bc-9ece-06a086683a98-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-zkrqr\" (UID: \"f799c7e9-1c31-40bc-9ece-06a086683a98\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zkrqr" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.844860 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0d33ee6-3a31-4464-b401-7469bf04d240-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-dqxml\" (UID: 
\"a0d33ee6-3a31-4464-b401-7469bf04d240\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dqxml" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.844883 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0a4c9a9-348c-4271-b466-4b94f11b2c7c-service-ca-bundle\") pod \"authentication-operator-69f744f599-8dkpx\" (UID: \"f0a4c9a9-348c-4271-b466-4b94f11b2c7c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8dkpx" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.844910 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-5kw4v\" (UID: \"f36ec67c-df24-46ce-94b9-10619822c15a\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kw4v" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.844936 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njgc8\" (UniqueName: \"kubernetes.io/projected/4c56cc09-5f03-4bcc-a4b1-8fed0dcc49bd-kube-api-access-njgc8\") pod \"machine-approver-56656f9798-qxqsb\" (UID: \"4c56cc09-5f03-4bcc-a4b1-8fed0dcc49bd\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qxqsb" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.844962 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5bb00a46-7425-4d14-a10c-779a5036bba6-audit-dir\") pod \"apiserver-7bbb656c7d-vnndl\" (UID: \"5bb00a46-7425-4d14-a10c-779a5036bba6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vnndl" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.844992 4778 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5rtt\" (UniqueName: \"kubernetes.io/projected/3373fbdf-245c-4e98-8bd7-7ad30eb98d76-kube-api-access-z5rtt\") pod \"olm-operator-6b444d44fb-mfjpc\" (UID: \"3373fbdf-245c-4e98-8bd7-7ad30eb98d76\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mfjpc" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.843990 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f36ec67c-df24-46ce-94b9-10619822c15a-audit-dir\") pod \"oauth-openshift-558db77b4-5kw4v\" (UID: \"f36ec67c-df24-46ce-94b9-10619822c15a\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kw4v" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.845434 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53a87d9e-095f-4669-b121-0b2c88e5fabb-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-dh8l6\" (UID: \"53a87d9e-095f-4669-b121-0b2c88e5fabb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dh8l6" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.846035 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e2967620-e2ce-4763-8a6c-e5a37f3a1f98-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-242cb\" (UID: \"e2967620-e2ce-4763-8a6c-e5a37f3a1f98\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-242cb" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.846288 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06ffdff1-2f10-4f38-b7fd-b98e883bbc63-config\") pod \"route-controller-manager-6576b87f9c-zpgxh\" (UID: \"06ffdff1-2f10-4f38-b7fd-b98e883bbc63\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpgxh" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.846342 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0a4c9a9-348c-4271-b466-4b94f11b2c7c-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-8dkpx\" (UID: \"f0a4c9a9-348c-4271-b466-4b94f11b2c7c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8dkpx" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.846419 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-5kw4v\" (UID: \"f36ec67c-df24-46ce-94b9-10619822c15a\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kw4v" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.846604 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-5kw4v\" (UID: \"f36ec67c-df24-46ce-94b9-10619822c15a\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kw4v" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.846720 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c56cc09-5f03-4bcc-a4b1-8fed0dcc49bd-config\") pod \"machine-approver-56656f9798-qxqsb\" (UID: \"4c56cc09-5f03-4bcc-a4b1-8fed0dcc49bd\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qxqsb" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.846897 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/f0a4c9a9-348c-4271-b466-4b94f11b2c7c-service-ca-bundle\") pod \"authentication-operator-69f744f599-8dkpx\" (UID: \"f0a4c9a9-348c-4271-b466-4b94f11b2c7c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8dkpx"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.847394 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06bbf7b7-3e40-4aa0-a3db-a56897f5488c-config\") pod \"controller-manager-879f6c89f-pgrb5\" (UID: \"06bbf7b7-3e40-4aa0-a3db-a56897f5488c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pgrb5"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.847658 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/de34cf46-4b6a-4f7a-8225-eb77bec57450-audit\") pod \"apiserver-76f77b778f-xz42x\" (UID: \"de34cf46-4b6a-4f7a-8225-eb77bec57450\") " pod="openshift-apiserver/apiserver-76f77b778f-xz42x"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.847813 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e9b292b2-1928-45d2-ad7f-8d510ebaa771-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-tplzm\" (UID: \"e9b292b2-1928-45d2-ad7f-8d510ebaa771\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tplzm"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.848050 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06bbf7b7-3e40-4aa0-a3db-a56897f5488c-serving-cert\") pod \"controller-manager-879f6c89f-pgrb5\" (UID: \"06bbf7b7-3e40-4aa0-a3db-a56897f5488c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pgrb5"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.848101 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-5kw4v\" (UID: \"f36ec67c-df24-46ce-94b9-10619822c15a\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kw4v"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.848234 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/06bbf7b7-3e40-4aa0-a3db-a56897f5488c-client-ca\") pod \"controller-manager-879f6c89f-pgrb5\" (UID: \"06bbf7b7-3e40-4aa0-a3db-a56897f5488c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pgrb5"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.848429 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-5kw4v\" (UID: \"f36ec67c-df24-46ce-94b9-10619822c15a\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kw4v"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.848617 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f56ab022-7fcd-406c-b308-b8d5f93a8b55-serving-cert\") pod \"openshift-config-operator-7777fb866f-x4bxj\" (UID: \"f56ab022-7fcd-406c-b308-b8d5f93a8b55\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x4bxj"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.849396 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-5kw4v\" (UID: \"f36ec67c-df24-46ce-94b9-10619822c15a\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kw4v"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.849399 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0a4c9a9-348c-4271-b466-4b94f11b2c7c-serving-cert\") pod \"authentication-operator-69f744f599-8dkpx\" (UID: \"f0a4c9a9-348c-4271-b466-4b94f11b2c7c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8dkpx"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.849807 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-5kw4v\" (UID: \"f36ec67c-df24-46ce-94b9-10619822c15a\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kw4v"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.850239 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/de34cf46-4b6a-4f7a-8225-eb77bec57450-encryption-config\") pod \"apiserver-76f77b778f-xz42x\" (UID: \"de34cf46-4b6a-4f7a-8225-eb77bec57450\") " pod="openshift-apiserver/apiserver-76f77b778f-xz42x"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.850600 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/de34cf46-4b6a-4f7a-8225-eb77bec57450-etcd-client\") pod \"apiserver-76f77b778f-xz42x\" (UID: \"de34cf46-4b6a-4f7a-8225-eb77bec57450\") " pod="openshift-apiserver/apiserver-76f77b778f-xz42x"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.851043 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/4c56cc09-5f03-4bcc-a4b1-8fed0dcc49bd-machine-approver-tls\") pod \"machine-approver-56656f9798-qxqsb\" (UID: \"4c56cc09-5f03-4bcc-a4b1-8fed0dcc49bd\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qxqsb"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.851944 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-5kw4v\" (UID: \"f36ec67c-df24-46ce-94b9-10619822c15a\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kw4v"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.854719 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06ffdff1-2f10-4f38-b7fd-b98e883bbc63-serving-cert\") pod \"route-controller-manager-6576b87f9c-zpgxh\" (UID: \"06ffdff1-2f10-4f38-b7fd-b98e883bbc63\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpgxh"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.855850 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-5kw4v\" (UID: \"f36ec67c-df24-46ce-94b9-10619822c15a\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kw4v"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.863060 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de34cf46-4b6a-4f7a-8225-eb77bec57450-serving-cert\") pod \"apiserver-76f77b778f-xz42x\" (UID: \"de34cf46-4b6a-4f7a-8225-eb77bec57450\") " pod="openshift-apiserver/apiserver-76f77b778f-xz42x"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.869079 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.888996 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.909318 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.929468 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.945581 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmb4r\" (UniqueName: \"kubernetes.io/projected/24f4aaf5-c17b-4cd8-9284-6df37f1c2f2d-kube-api-access-pmb4r\") pod \"marketplace-operator-79b997595-2wqm5\" (UID: \"24f4aaf5-c17b-4cd8-9284-6df37f1c2f2d\") " pod="openshift-marketplace/marketplace-operator-79b997595-2wqm5"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.945640 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/486a990d-7a56-4eea-a44d-d05a412718c2-metrics-tls\") pod \"dns-operator-744455d44c-2z5gg\" (UID: \"486a990d-7a56-4eea-a44d-d05a412718c2\") " pod="openshift-dns-operator/dns-operator-744455d44c-2z5gg"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.945668 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/2be5b8df-aaff-4a2b-9b54-78a7e58bc420-csi-data-dir\") pod \"csi-hostpathplugin-cp2lw\" (UID: \"2be5b8df-aaff-4a2b-9b54-78a7e58bc420\") " pod="hostpath-provisioner/csi-hostpathplugin-cp2lw"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.945693 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ac3e8bc-e165-45d4-8c32-1ccda9769857-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qtkq6\" (UID: \"0ac3e8bc-e165-45d4-8c32-1ccda9769857\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qtkq6"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.945716 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5c8d947a-b62b-4eb9-81d7-94530285e8dc-metrics-certs\") pod \"router-default-5444994796-ms5xq\" (UID: \"5c8d947a-b62b-4eb9-81d7-94530285e8dc\") " pod="openshift-ingress/router-default-5444994796-ms5xq"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.945740 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8f4e3ccc-83e5-40ae-bac2-a5bb1362a531-auth-proxy-config\") pod \"machine-config-operator-74547568cd-k6dcl\" (UID: \"8f4e3ccc-83e5-40ae-bac2-a5bb1362a531\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k6dcl"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.946682 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8f4e3ccc-83e5-40ae-bac2-a5bb1362a531-auth-proxy-config\") pod \"machine-config-operator-74547568cd-k6dcl\" (UID: \"8f4e3ccc-83e5-40ae-bac2-a5bb1362a531\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k6dcl"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.946778 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22194d8c-315e-46b9-a23b-daab9d020ce4-config\") pod \"kube-apiserver-operator-766d6c64bb-sskj6\" (UID: \"22194d8c-315e-46b9-a23b-daab9d020ce4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-sskj6"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.946810 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsrcs\" (UniqueName: \"kubernetes.io/projected/f57a417a-5175-4210-98a0-69e579c22e14-kube-api-access-hsrcs\") pod \"console-operator-58897d9998-ww8lt\" (UID: \"f57a417a-5175-4210-98a0-69e579c22e14\") " pod="openshift-console-operator/console-operator-58897d9998-ww8lt"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.946848 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwz6k\" (UniqueName: \"kubernetes.io/projected/12abcb2c-895a-46af-9c26-66e358259ce9-kube-api-access-nwz6k\") pod \"catalog-operator-68c6474976-xcfg6\" (UID: \"12abcb2c-895a-46af-9c26-66e358259ce9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xcfg6"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.946889 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c825022c-79bc-44ae-bc64-ee9614aafe25-trusted-ca-bundle\") pod \"console-f9d7485db-xwwxp\" (UID: \"c825022c-79bc-44ae-bc64-ee9614aafe25\") " pod="openshift-console/console-f9d7485db-xwwxp"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.946916 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a0d33ee6-3a31-4464-b401-7469bf04d240-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-dqxml\" (UID: \"a0d33ee6-3a31-4464-b401-7469bf04d240\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dqxml"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.946950 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/12abcb2c-895a-46af-9c26-66e358259ce9-profile-collector-cert\") pod \"catalog-operator-68c6474976-xcfg6\" (UID: \"12abcb2c-895a-46af-9c26-66e358259ce9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xcfg6"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.946979 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8f4e3ccc-83e5-40ae-bac2-a5bb1362a531-images\") pod \"machine-config-operator-74547568cd-k6dcl\" (UID: \"8f4e3ccc-83e5-40ae-bac2-a5bb1362a531\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k6dcl"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.947010 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bb00a46-7425-4d14-a10c-779a5036bba6-encryption-config\") pod \"apiserver-7bbb656c7d-vnndl\" (UID: \"5bb00a46-7425-4d14-a10c-779a5036bba6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vnndl"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.947038 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cl6l\" (UniqueName: \"kubernetes.io/projected/f799c7e9-1c31-40bc-9ece-06a086683a98-kube-api-access-6cl6l\") pod \"control-plane-machine-set-operator-78cbb6b69f-zkrqr\" (UID: \"f799c7e9-1c31-40bc-9ece-06a086683a98\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zkrqr"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.947070 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ac3e8bc-e165-45d4-8c32-1ccda9769857-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qtkq6\" (UID: \"0ac3e8bc-e165-45d4-8c32-1ccda9769857\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qtkq6"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.947333 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0d33ee6-3a31-4464-b401-7469bf04d240-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-dqxml\" (UID: \"a0d33ee6-3a31-4464-b401-7469bf04d240\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dqxml"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.947364 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af58c501-1c93-4f7a-bdf9-1255879aea5a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-wxkb2\" (UID: \"af58c501-1c93-4f7a-bdf9-1255879aea5a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wxkb2"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.947393 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5bb00a46-7425-4d14-a10c-779a5036bba6-audit-dir\") pod \"apiserver-7bbb656c7d-vnndl\" (UID: \"5bb00a46-7425-4d14-a10c-779a5036bba6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vnndl"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.947428 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqbjv\" (UniqueName: \"kubernetes.io/projected/c825022c-79bc-44ae-bc64-ee9614aafe25-kube-api-access-rqbjv\") pod \"console-f9d7485db-xwwxp\" (UID: \"c825022c-79bc-44ae-bc64-ee9614aafe25\") " pod="openshift-console/console-f9d7485db-xwwxp"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.947472 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bb00a46-7425-4d14-a10c-779a5036bba6-audit-policies\") pod \"apiserver-7bbb656c7d-vnndl\" (UID: \"5bb00a46-7425-4d14-a10c-779a5036bba6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vnndl"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.947502 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/78cc82c7-719e-43ad-926f-a387e0845219-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-q677m\" (UID: \"78cc82c7-719e-43ad-926f-a387e0845219\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q677m"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.947540 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c825022c-79bc-44ae-bc64-ee9614aafe25-console-serving-cert\") pod \"console-f9d7485db-xwwxp\" (UID: \"c825022c-79bc-44ae-bc64-ee9614aafe25\") " pod="openshift-console/console-f9d7485db-xwwxp"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.947566 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c825022c-79bc-44ae-bc64-ee9614aafe25-service-ca\") pod \"console-f9d7485db-xwwxp\" (UID: \"c825022c-79bc-44ae-bc64-ee9614aafe25\") " pod="openshift-console/console-f9d7485db-xwwxp"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.947605 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a240fd7b-5854-4548-a847-e5590111964b-config-volume\") pod \"collect-profiles-29555340-7tvjm\" (UID: \"a240fd7b-5854-4548-a847-e5590111964b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555340-7tvjm"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.947635 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/78cc82c7-719e-43ad-926f-a387e0845219-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-q677m\" (UID: \"78cc82c7-719e-43ad-926f-a387e0845219\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q677m"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.947662 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84bb574a-c91e-4720-83c6-6c47c9344ad2-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-mtlvl\" (UID: \"84bb574a-c91e-4720-83c6-6c47c9344ad2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mtlvl"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.947687 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rk6h6\" (UniqueName: \"kubernetes.io/projected/5bb00a46-7425-4d14-a10c-779a5036bba6-kube-api-access-rk6h6\") pod \"apiserver-7bbb656c7d-vnndl\" (UID: \"5bb00a46-7425-4d14-a10c-779a5036bba6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vnndl"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.947712 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/78cc82c7-719e-43ad-926f-a387e0845219-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-q677m\" (UID: \"78cc82c7-719e-43ad-926f-a387e0845219\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q677m"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.947739 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f57a417a-5175-4210-98a0-69e579c22e14-config\") pod \"console-operator-58897d9998-ww8lt\" (UID: \"f57a417a-5175-4210-98a0-69e579c22e14\") " pod="openshift-console-operator/console-operator-58897d9998-ww8lt"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.947771 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6h7h\" (UniqueName: \"kubernetes.io/projected/9f210efd-2ac0-4b67-89c5-fcd9f52f6e01-kube-api-access-b6h7h\") pod \"auto-csr-approver-29555352-q7fvr\" (UID: \"9f210efd-2ac0-4b67-89c5-fcd9f52f6e01\") " pod="openshift-infra/auto-csr-approver-29555352-q7fvr"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.947808 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22194d8c-315e-46b9-a23b-daab9d020ce4-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-sskj6\" (UID: \"22194d8c-315e-46b9-a23b-daab9d020ce4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-sskj6"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.947832 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f57a417a-5175-4210-98a0-69e579c22e14-serving-cert\") pod \"console-operator-58897d9998-ww8lt\" (UID: \"f57a417a-5175-4210-98a0-69e579c22e14\") " pod="openshift-console-operator/console-operator-58897d9998-ww8lt"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.947857 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f57a417a-5175-4210-98a0-69e579c22e14-trusted-ca\") pod \"console-operator-58897d9998-ww8lt\" (UID: \"f57a417a-5175-4210-98a0-69e579c22e14\") " pod="openshift-console-operator/console-operator-58897d9998-ww8lt"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.947883 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a240fd7b-5854-4548-a847-e5590111964b-secret-volume\") pod \"collect-profiles-29555340-7tvjm\" (UID: \"a240fd7b-5854-4548-a847-e5590111964b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555340-7tvjm"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.947924 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8f4e3ccc-83e5-40ae-bac2-a5bb1362a531-proxy-tls\") pod \"machine-config-operator-74547568cd-k6dcl\" (UID: \"8f4e3ccc-83e5-40ae-bac2-a5bb1362a531\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k6dcl"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.947950 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfpd8\" (UniqueName: \"kubernetes.io/projected/c320d1aa-c376-41f2-ac5a-8432120b68e0-kube-api-access-lfpd8\") pod \"package-server-manager-789f6589d5-kc7s7\" (UID: \"c320d1aa-c376-41f2-ac5a-8432120b68e0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kc7s7"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.947978 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88x7r\" (UniqueName: \"kubernetes.io/projected/a240fd7b-5854-4548-a847-e5590111964b-kube-api-access-88x7r\") pod \"collect-profiles-29555340-7tvjm\" (UID: \"a240fd7b-5854-4548-a847-e5590111964b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555340-7tvjm"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.948017 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5gts\" (UniqueName: \"kubernetes.io/projected/78cc82c7-719e-43ad-926f-a387e0845219-kube-api-access-n5gts\") pod \"cluster-image-registry-operator-dc59b4c8b-q677m\" (UID: \"78cc82c7-719e-43ad-926f-a387e0845219\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q677m"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.948042 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22194d8c-315e-46b9-a23b-daab9d020ce4-config\") pod \"kube-apiserver-operator-766d6c64bb-sskj6\" (UID: \"22194d8c-315e-46b9-a23b-daab9d020ce4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-sskj6"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.948075 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlpsj\" (UniqueName: \"kubernetes.io/projected/30ff941c-3c4b-4229-af5a-78bb244a385b-kube-api-access-hlpsj\") pod \"machine-config-controller-84d6567774-x26ck\" (UID: \"30ff941c-3c4b-4229-af5a-78bb244a385b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x26ck"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.948126 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9grsg\" (UniqueName: \"kubernetes.io/projected/f1f25dae-f3e4-481d-8451-4851b60b2ec4-kube-api-access-9grsg\") pod \"multus-admission-controller-857f4d67dd-98lbj\" (UID: \"f1f25dae-f3e4-481d-8451-4851b60b2ec4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-98lbj"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.948561 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c8d947a-b62b-4eb9-81d7-94530285e8dc-service-ca-bundle\") pod \"router-default-5444994796-ms5xq\" (UID: \"5c8d947a-b62b-4eb9-81d7-94530285e8dc\") " pod="openshift-ingress/router-default-5444994796-ms5xq"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.948570 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5bb00a46-7425-4d14-a10c-779a5036bba6-audit-dir\") pod \"apiserver-7bbb656c7d-vnndl\" (UID: \"5bb00a46-7425-4d14-a10c-779a5036bba6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vnndl"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.948796 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bb00a46-7425-4d14-a10c-779a5036bba6-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-vnndl\" (UID: \"5bb00a46-7425-4d14-a10c-779a5036bba6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vnndl"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.948831 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c825022c-79bc-44ae-bc64-ee9614aafe25-console-oauth-config\") pod \"console-f9d7485db-xwwxp\" (UID: \"c825022c-79bc-44ae-bc64-ee9614aafe25\") " pod="openshift-console/console-f9d7485db-xwwxp"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.948859 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bb00a46-7425-4d14-a10c-779a5036bba6-etcd-client\") pod \"apiserver-7bbb656c7d-vnndl\" (UID: \"5bb00a46-7425-4d14-a10c-779a5036bba6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vnndl"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.948878 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9vx5\" (UniqueName: \"kubernetes.io/projected/8f4e3ccc-83e5-40ae-bac2-a5bb1362a531-kube-api-access-t9vx5\") pod \"machine-config-operator-74547568cd-k6dcl\" (UID: \"8f4e3ccc-83e5-40ae-bac2-a5bb1362a531\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k6dcl"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.948900 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3373fbdf-245c-4e98-8bd7-7ad30eb98d76-srv-cert\") pod \"olm-operator-6b444d44fb-mfjpc\" (UID: \"3373fbdf-245c-4e98-8bd7-7ad30eb98d76\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mfjpc"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.948920 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/2be5b8df-aaff-4a2b-9b54-78a7e58bc420-plugins-dir\") pod \"csi-hostpathplugin-cp2lw\" (UID: \"2be5b8df-aaff-4a2b-9b54-78a7e58bc420\") " pod="hostpath-provisioner/csi-hostpathplugin-cp2lw"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.948960 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bb00a46-7425-4d14-a10c-779a5036bba6-serving-cert\") pod \"apiserver-7bbb656c7d-vnndl\" (UID: \"5bb00a46-7425-4d14-a10c-779a5036bba6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vnndl"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.948978 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0d33ee6-3a31-4464-b401-7469bf04d240-config\") pod \"kube-controller-manager-operator-78b949d7b-dqxml\" (UID: \"a0d33ee6-3a31-4464-b401-7469bf04d240\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dqxml"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.948996 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9k6q\" (UniqueName: \"kubernetes.io/projected/32bf6158-393f-4423-9255-345581ec5bf1-kube-api-access-p9k6q\") pod \"service-ca-operator-777779d784-pg48j\" (UID: \"32bf6158-393f-4423-9255-345581ec5bf1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pg48j"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.949020 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af58c501-1c93-4f7a-bdf9-1255879aea5a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-wxkb2\" (UID: \"af58c501-1c93-4f7a-bdf9-1255879aea5a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wxkb2"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.949042 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5c8d947a-b62b-4eb9-81d7-94530285e8dc-default-certificate\") pod \"router-default-5444994796-ms5xq\" (UID: \"5c8d947a-b62b-4eb9-81d7-94530285e8dc\") " pod="openshift-ingress/router-default-5444994796-ms5xq"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.949063 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2be5b8df-aaff-4a2b-9b54-78a7e58bc420-registration-dir\") pod \"csi-hostpathplugin-cp2lw\" (UID: \"2be5b8df-aaff-4a2b-9b54-78a7e58bc420\") " pod="hostpath-provisioner/csi-hostpathplugin-cp2lw"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.949086 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ac3e8bc-e165-45d4-8c32-1ccda9769857-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qtkq6\" (UID: \"0ac3e8bc-e165-45d4-8c32-1ccda9769857\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qtkq6"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.949102 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/12abcb2c-895a-46af-9c26-66e358259ce9-srv-cert\") pod \"catalog-operator-68c6474976-xcfg6\" (UID: \"12abcb2c-895a-46af-9c26-66e358259ce9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xcfg6"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.949119 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfg2m\" (UniqueName: \"kubernetes.io/projected/486a990d-7a56-4eea-a44d-d05a412718c2-kube-api-access-nfg2m\") pod \"dns-operator-744455d44c-2z5gg\" (UID: \"486a990d-7a56-4eea-a44d-d05a412718c2\") " pod="openshift-dns-operator/dns-operator-744455d44c-2z5gg"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.949141 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c825022c-79bc-44ae-bc64-ee9614aafe25-oauth-serving-cert\") pod \"console-f9d7485db-xwwxp\" (UID: \"c825022c-79bc-44ae-bc64-ee9614aafe25\") " pod="openshift-console/console-f9d7485db-xwwxp"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.949160 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f799c7e9-1c31-40bc-9ece-06a086683a98-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-zkrqr\" (UID: \"f799c7e9-1c31-40bc-9ece-06a086683a98\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zkrqr"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.949180 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32bf6158-393f-4423-9255-345581ec5bf1-serving-cert\") pod \"service-ca-operator-777779d784-pg48j\" (UID: \"32bf6158-393f-4423-9255-345581ec5bf1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pg48j"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.949224 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5rtt\" (UniqueName: \"kubernetes.io/projected/3373fbdf-245c-4e98-8bd7-7ad30eb98d76-kube-api-access-z5rtt\") pod \"olm-operator-6b444d44fb-mfjpc\" (UID: \"3373fbdf-245c-4e98-8bd7-7ad30eb98d76\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mfjpc"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.949243 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/30ff941c-3c4b-4229-af5a-78bb244a385b-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-x26ck\" (UID: \"30ff941c-3c4b-4229-af5a-78bb244a385b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x26ck"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.949259 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32bf6158-393f-4423-9255-345581ec5bf1-config\") pod \"service-ca-operator-777779d784-pg48j\" (UID: \"32bf6158-393f-4423-9255-345581ec5bf1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pg48j"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.949296 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98vx9\" (UniqueName: \"kubernetes.io/projected/84bb574a-c91e-4720-83c6-6c47c9344ad2-kube-api-access-98vx9\") pod \"openshift-controller-manager-operator-756b6f6bc6-mtlvl\" (UID: \"84bb574a-c91e-4720-83c6-6c47c9344ad2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mtlvl"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.949315 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/30ff941c-3c4b-4229-af5a-78bb244a385b-proxy-tls\") pod \"machine-config-controller-84d6567774-x26ck\" (UID: \"30ff941c-3c4b-4229-af5a-78bb244a385b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x26ck"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.949333 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6t2k\" (UniqueName: \"kubernetes.io/projected/af58c501-1c93-4f7a-bdf9-1255879aea5a-kube-api-access-n6t2k\") pod \"kube-storage-version-migrator-operator-b67b599dd-wxkb2\" (UID: \"af58c501-1c93-4f7a-bdf9-1255879aea5a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wxkb2"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.949364 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/24f4aaf5-c17b-4cd8-9284-6df37f1c2f2d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2wqm5\" (UID: \"24f4aaf5-c17b-4cd8-9284-6df37f1c2f2d\") " pod="openshift-marketplace/marketplace-operator-79b997595-2wqm5"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.949404 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58p6m\" (UniqueName: \"kubernetes.io/projected/2be5b8df-aaff-4a2b-9b54-78a7e58bc420-kube-api-access-58p6m\") pod \"csi-hostpathplugin-cp2lw\" (UID: \"2be5b8df-aaff-4a2b-9b54-78a7e58bc420\") " pod="hostpath-provisioner/csi-hostpathplugin-cp2lw"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.949432 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bb00a46-7425-4d14-a10c-779a5036bba6-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-vnndl\" (UID: \"5bb00a46-7425-4d14-a10c-779a5036bba6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vnndl"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.949453 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c825022c-79bc-44ae-bc64-ee9614aafe25-console-config\") pod \"console-f9d7485db-xwwxp\" (UID: \"c825022c-79bc-44ae-bc64-ee9614aafe25\") " pod="openshift-console/console-f9d7485db-xwwxp"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.949452 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bb00a46-7425-4d14-a10c-779a5036bba6-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-vnndl\" (UID: \"5bb00a46-7425-4d14-a10c-779a5036bba6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vnndl"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.949472 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2be5b8df-aaff-4a2b-9b54-78a7e58bc420-socket-dir\") pod \"csi-hostpathplugin-cp2lw\" (UID: \"2be5b8df-aaff-4a2b-9b54-78a7e58bc420\") " pod="hostpath-provisioner/csi-hostpathplugin-cp2lw"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.949491 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5c8d947a-b62b-4eb9-81d7-94530285e8dc-stats-auth\") pod \"router-default-5444994796-ms5xq\" (UID: \"5c8d947a-b62b-4eb9-81d7-94530285e8dc\") " pod="openshift-ingress/router-default-5444994796-ms5xq"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.949506 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22194d8c-315e-46b9-a23b-daab9d020ce4-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-sskj6\" (UID: \"22194d8c-315e-46b9-a23b-daab9d020ce4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-sskj6"
Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.949526 4778 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c320d1aa-c376-41f2-ac5a-8432120b68e0-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-kc7s7\" (UID: \"c320d1aa-c376-41f2-ac5a-8432120b68e0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kc7s7" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.949525 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5bb00a46-7425-4d14-a10c-779a5036bba6-audit-policies\") pod \"apiserver-7bbb656c7d-vnndl\" (UID: \"5bb00a46-7425-4d14-a10c-779a5036bba6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vnndl" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.949543 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/2be5b8df-aaff-4a2b-9b54-78a7e58bc420-mountpoint-dir\") pod \"csi-hostpathplugin-cp2lw\" (UID: \"2be5b8df-aaff-4a2b-9b54-78a7e58bc420\") " pod="hostpath-provisioner/csi-hostpathplugin-cp2lw" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.949623 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2b9j4\" (UniqueName: \"kubernetes.io/projected/8af48f77-25f7-49ca-8bcb-2481aa72ee66-kube-api-access-2b9j4\") pod \"downloads-7954f5f757-mx6kn\" (UID: \"8af48f77-25f7-49ca-8bcb-2481aa72ee66\") " pod="openshift-console/downloads-7954f5f757-mx6kn" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.949662 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/24f4aaf5-c17b-4cd8-9284-6df37f1c2f2d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2wqm5\" (UID: \"24f4aaf5-c17b-4cd8-9284-6df37f1c2f2d\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-2wqm5" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.949721 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3373fbdf-245c-4e98-8bd7-7ad30eb98d76-profile-collector-cert\") pod \"olm-operator-6b444d44fb-mfjpc\" (UID: \"3373fbdf-245c-4e98-8bd7-7ad30eb98d76\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mfjpc" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.949758 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f1f25dae-f3e4-481d-8451-4851b60b2ec4-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-98lbj\" (UID: \"f1f25dae-f3e4-481d-8451-4851b60b2ec4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-98lbj" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.949807 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dnkc\" (UniqueName: \"kubernetes.io/projected/a875bbd5-0126-4d1c-8b7e-97ac32863981-kube-api-access-4dnkc\") pod \"migrator-59844c95c7-r2r62\" (UID: \"a875bbd5-0126-4d1c-8b7e-97ac32863981\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r2r62" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.949844 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84bb574a-c91e-4720-83c6-6c47c9344ad2-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-mtlvl\" (UID: \"84bb574a-c91e-4720-83c6-6c47c9344ad2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mtlvl" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.949972 4778 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console"/"console-config" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.949898 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bx5wn\" (UniqueName: \"kubernetes.io/projected/5c8d947a-b62b-4eb9-81d7-94530285e8dc-kube-api-access-bx5wn\") pod \"router-default-5444994796-ms5xq\" (UID: \"5c8d947a-b62b-4eb9-81d7-94530285e8dc\") " pod="openshift-ingress/router-default-5444994796-ms5xq" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.950054 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c8d947a-b62b-4eb9-81d7-94530285e8dc-service-ca-bundle\") pod \"router-default-5444994796-ms5xq\" (UID: \"5c8d947a-b62b-4eb9-81d7-94530285e8dc\") " pod="openshift-ingress/router-default-5444994796-ms5xq" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.950165 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c825022c-79bc-44ae-bc64-ee9614aafe25-oauth-serving-cert\") pod \"console-f9d7485db-xwwxp\" (UID: \"c825022c-79bc-44ae-bc64-ee9614aafe25\") " pod="openshift-console/console-f9d7485db-xwwxp" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.951230 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5bb00a46-7425-4d14-a10c-779a5036bba6-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-vnndl\" (UID: \"5bb00a46-7425-4d14-a10c-779a5036bba6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vnndl" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.951679 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5c8d947a-b62b-4eb9-81d7-94530285e8dc-metrics-certs\") pod \"router-default-5444994796-ms5xq\" (UID: \"5c8d947a-b62b-4eb9-81d7-94530285e8dc\") " 
pod="openshift-ingress/router-default-5444994796-ms5xq" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.952307 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0d33ee6-3a31-4464-b401-7469bf04d240-config\") pod \"kube-controller-manager-operator-78b949d7b-dqxml\" (UID: \"a0d33ee6-3a31-4464-b401-7469bf04d240\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dqxml" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.952468 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c825022c-79bc-44ae-bc64-ee9614aafe25-console-oauth-config\") pod \"console-f9d7485db-xwwxp\" (UID: \"c825022c-79bc-44ae-bc64-ee9614aafe25\") " pod="openshift-console/console-f9d7485db-xwwxp" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.952506 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c825022c-79bc-44ae-bc64-ee9614aafe25-console-serving-cert\") pod \"console-f9d7485db-xwwxp\" (UID: \"c825022c-79bc-44ae-bc64-ee9614aafe25\") " pod="openshift-console/console-f9d7485db-xwwxp" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.952514 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84bb574a-c91e-4720-83c6-6c47c9344ad2-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-mtlvl\" (UID: \"84bb574a-c91e-4720-83c6-6c47c9344ad2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mtlvl" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.953017 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c825022c-79bc-44ae-bc64-ee9614aafe25-console-config\") pod \"console-f9d7485db-xwwxp\" 
(UID: \"c825022c-79bc-44ae-bc64-ee9614aafe25\") " pod="openshift-console/console-f9d7485db-xwwxp" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.953666 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0d33ee6-3a31-4464-b401-7469bf04d240-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-dqxml\" (UID: \"a0d33ee6-3a31-4464-b401-7469bf04d240\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dqxml" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.953976 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5bb00a46-7425-4d14-a10c-779a5036bba6-encryption-config\") pod \"apiserver-7bbb656c7d-vnndl\" (UID: \"5bb00a46-7425-4d14-a10c-779a5036bba6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vnndl" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.954158 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3373fbdf-245c-4e98-8bd7-7ad30eb98d76-profile-collector-cert\") pod \"olm-operator-6b444d44fb-mfjpc\" (UID: \"3373fbdf-245c-4e98-8bd7-7ad30eb98d76\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mfjpc" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.954846 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5c8d947a-b62b-4eb9-81d7-94530285e8dc-stats-auth\") pod \"router-default-5444994796-ms5xq\" (UID: \"5c8d947a-b62b-4eb9-81d7-94530285e8dc\") " pod="openshift-ingress/router-default-5444994796-ms5xq" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.955265 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3373fbdf-245c-4e98-8bd7-7ad30eb98d76-srv-cert\") pod 
\"olm-operator-6b444d44fb-mfjpc\" (UID: \"3373fbdf-245c-4e98-8bd7-7ad30eb98d76\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mfjpc" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.955633 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bb00a46-7425-4d14-a10c-779a5036bba6-serving-cert\") pod \"apiserver-7bbb656c7d-vnndl\" (UID: \"5bb00a46-7425-4d14-a10c-779a5036bba6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vnndl" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.955949 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5bb00a46-7425-4d14-a10c-779a5036bba6-etcd-client\") pod \"apiserver-7bbb656c7d-vnndl\" (UID: \"5bb00a46-7425-4d14-a10c-779a5036bba6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vnndl" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.956421 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5c8d947a-b62b-4eb9-81d7-94530285e8dc-default-certificate\") pod \"router-default-5444994796-ms5xq\" (UID: \"5c8d947a-b62b-4eb9-81d7-94530285e8dc\") " pod="openshift-ingress/router-default-5444994796-ms5xq" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.959748 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84bb574a-c91e-4720-83c6-6c47c9344ad2-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-mtlvl\" (UID: \"84bb574a-c91e-4720-83c6-6c47c9344ad2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mtlvl" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.963507 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/22194d8c-315e-46b9-a23b-daab9d020ce4-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-sskj6\" (UID: \"22194d8c-315e-46b9-a23b-daab9d020ce4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-sskj6" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.976387 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.979335 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c825022c-79bc-44ae-bc64-ee9614aafe25-trusted-ca-bundle\") pod \"console-f9d7485db-xwwxp\" (UID: \"c825022c-79bc-44ae-bc64-ee9614aafe25\") " pod="openshift-console/console-f9d7485db-xwwxp" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.989243 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Mar 12 13:12:44 crc kubenswrapper[4778]: I0312 13:12:44.999208 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c825022c-79bc-44ae-bc64-ee9614aafe25-service-ca\") pod \"console-f9d7485db-xwwxp\" (UID: \"c825022c-79bc-44ae-bc64-ee9614aafe25\") " pod="openshift-console/console-f9d7485db-xwwxp" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.009237 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.023111 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8f4e3ccc-83e5-40ae-bac2-a5bb1362a531-proxy-tls\") pod \"machine-config-operator-74547568cd-k6dcl\" (UID: \"8f4e3ccc-83e5-40ae-bac2-a5bb1362a531\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k6dcl" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 
13:12:45.029991 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.048895 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.050855 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwz6k\" (UniqueName: \"kubernetes.io/projected/12abcb2c-895a-46af-9c26-66e358259ce9-kube-api-access-nwz6k\") pod \"catalog-operator-68c6474976-xcfg6\" (UID: \"12abcb2c-895a-46af-9c26-66e358259ce9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xcfg6" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.050935 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/12abcb2c-895a-46af-9c26-66e358259ce9-profile-collector-cert\") pod \"catalog-operator-68c6474976-xcfg6\" (UID: \"12abcb2c-895a-46af-9c26-66e358259ce9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xcfg6" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.050991 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af58c501-1c93-4f7a-bdf9-1255879aea5a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-wxkb2\" (UID: \"af58c501-1c93-4f7a-bdf9-1255879aea5a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wxkb2" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.051122 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a240fd7b-5854-4548-a847-e5590111964b-config-volume\") pod 
\"collect-profiles-29555340-7tvjm\" (UID: \"a240fd7b-5854-4548-a847-e5590111964b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555340-7tvjm" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.051262 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6h7h\" (UniqueName: \"kubernetes.io/projected/9f210efd-2ac0-4b67-89c5-fcd9f52f6e01-kube-api-access-b6h7h\") pod \"auto-csr-approver-29555352-q7fvr\" (UID: \"9f210efd-2ac0-4b67-89c5-fcd9f52f6e01\") " pod="openshift-infra/auto-csr-approver-29555352-q7fvr" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.051351 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a240fd7b-5854-4548-a847-e5590111964b-secret-volume\") pod \"collect-profiles-29555340-7tvjm\" (UID: \"a240fd7b-5854-4548-a847-e5590111964b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555340-7tvjm" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.051451 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88x7r\" (UniqueName: \"kubernetes.io/projected/a240fd7b-5854-4548-a847-e5590111964b-kube-api-access-88x7r\") pod \"collect-profiles-29555340-7tvjm\" (UID: \"a240fd7b-5854-4548-a847-e5590111964b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555340-7tvjm" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.051521 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlpsj\" (UniqueName: \"kubernetes.io/projected/30ff941c-3c4b-4229-af5a-78bb244a385b-kube-api-access-hlpsj\") pod \"machine-config-controller-84d6567774-x26ck\" (UID: \"30ff941c-3c4b-4229-af5a-78bb244a385b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x26ck" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.051553 4778 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-9grsg\" (UniqueName: \"kubernetes.io/projected/f1f25dae-f3e4-481d-8451-4851b60b2ec4-kube-api-access-9grsg\") pod \"multus-admission-controller-857f4d67dd-98lbj\" (UID: \"f1f25dae-f3e4-481d-8451-4851b60b2ec4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-98lbj" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.051618 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/2be5b8df-aaff-4a2b-9b54-78a7e58bc420-plugins-dir\") pod \"csi-hostpathplugin-cp2lw\" (UID: \"2be5b8df-aaff-4a2b-9b54-78a7e58bc420\") " pod="hostpath-provisioner/csi-hostpathplugin-cp2lw" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.051651 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af58c501-1c93-4f7a-bdf9-1255879aea5a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-wxkb2\" (UID: \"af58c501-1c93-4f7a-bdf9-1255879aea5a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wxkb2" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.051677 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9k6q\" (UniqueName: \"kubernetes.io/projected/32bf6158-393f-4423-9255-345581ec5bf1-kube-api-access-p9k6q\") pod \"service-ca-operator-777779d784-pg48j\" (UID: \"32bf6158-393f-4423-9255-345581ec5bf1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pg48j" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.051702 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2be5b8df-aaff-4a2b-9b54-78a7e58bc420-registration-dir\") pod \"csi-hostpathplugin-cp2lw\" (UID: \"2be5b8df-aaff-4a2b-9b54-78a7e58bc420\") " 
pod="hostpath-provisioner/csi-hostpathplugin-cp2lw" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.051730 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/12abcb2c-895a-46af-9c26-66e358259ce9-srv-cert\") pod \"catalog-operator-68c6474976-xcfg6\" (UID: \"12abcb2c-895a-46af-9c26-66e358259ce9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xcfg6" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.051763 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfg2m\" (UniqueName: \"kubernetes.io/projected/486a990d-7a56-4eea-a44d-d05a412718c2-kube-api-access-nfg2m\") pod \"dns-operator-744455d44c-2z5gg\" (UID: \"486a990d-7a56-4eea-a44d-d05a412718c2\") " pod="openshift-dns-operator/dns-operator-744455d44c-2z5gg" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.051794 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32bf6158-393f-4423-9255-345581ec5bf1-serving-cert\") pod \"service-ca-operator-777779d784-pg48j\" (UID: \"32bf6158-393f-4423-9255-345581ec5bf1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pg48j" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.051826 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/30ff941c-3c4b-4229-af5a-78bb244a385b-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-x26ck\" (UID: \"30ff941c-3c4b-4229-af5a-78bb244a385b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x26ck" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.051871 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/30ff941c-3c4b-4229-af5a-78bb244a385b-proxy-tls\") 
pod \"machine-config-controller-84d6567774-x26ck\" (UID: \"30ff941c-3c4b-4229-af5a-78bb244a385b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x26ck" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.051891 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/2be5b8df-aaff-4a2b-9b54-78a7e58bc420-plugins-dir\") pod \"csi-hostpathplugin-cp2lw\" (UID: \"2be5b8df-aaff-4a2b-9b54-78a7e58bc420\") " pod="hostpath-provisioner/csi-hostpathplugin-cp2lw" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.051899 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32bf6158-393f-4423-9255-345581ec5bf1-config\") pod \"service-ca-operator-777779d784-pg48j\" (UID: \"32bf6158-393f-4423-9255-345581ec5bf1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pg48j" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.051959 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2be5b8df-aaff-4a2b-9b54-78a7e58bc420-registration-dir\") pod \"csi-hostpathplugin-cp2lw\" (UID: \"2be5b8df-aaff-4a2b-9b54-78a7e58bc420\") " pod="hostpath-provisioner/csi-hostpathplugin-cp2lw" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.051980 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6t2k\" (UniqueName: \"kubernetes.io/projected/af58c501-1c93-4f7a-bdf9-1255879aea5a-kube-api-access-n6t2k\") pod \"kube-storage-version-migrator-operator-b67b599dd-wxkb2\" (UID: \"af58c501-1c93-4f7a-bdf9-1255879aea5a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wxkb2" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.052050 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-58p6m\" (UniqueName: \"kubernetes.io/projected/2be5b8df-aaff-4a2b-9b54-78a7e58bc420-kube-api-access-58p6m\") pod \"csi-hostpathplugin-cp2lw\" (UID: \"2be5b8df-aaff-4a2b-9b54-78a7e58bc420\") " pod="hostpath-provisioner/csi-hostpathplugin-cp2lw" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.052075 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2be5b8df-aaff-4a2b-9b54-78a7e58bc420-socket-dir\") pod \"csi-hostpathplugin-cp2lw\" (UID: \"2be5b8df-aaff-4a2b-9b54-78a7e58bc420\") " pod="hostpath-provisioner/csi-hostpathplugin-cp2lw" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.052124 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/2be5b8df-aaff-4a2b-9b54-78a7e58bc420-mountpoint-dir\") pod \"csi-hostpathplugin-cp2lw\" (UID: \"2be5b8df-aaff-4a2b-9b54-78a7e58bc420\") " pod="hostpath-provisioner/csi-hostpathplugin-cp2lw" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.052199 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f1f25dae-f3e4-481d-8451-4851b60b2ec4-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-98lbj\" (UID: \"f1f25dae-f3e4-481d-8451-4851b60b2ec4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-98lbj" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.052226 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dnkc\" (UniqueName: \"kubernetes.io/projected/a875bbd5-0126-4d1c-8b7e-97ac32863981-kube-api-access-4dnkc\") pod \"migrator-59844c95c7-r2r62\" (UID: \"a875bbd5-0126-4d1c-8b7e-97ac32863981\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r2r62" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.052243 4778 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/2be5b8df-aaff-4a2b-9b54-78a7e58bc420-mountpoint-dir\") pod \"csi-hostpathplugin-cp2lw\" (UID: \"2be5b8df-aaff-4a2b-9b54-78a7e58bc420\") " pod="hostpath-provisioner/csi-hostpathplugin-cp2lw" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.052241 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2be5b8df-aaff-4a2b-9b54-78a7e58bc420-socket-dir\") pod \"csi-hostpathplugin-cp2lw\" (UID: \"2be5b8df-aaff-4a2b-9b54-78a7e58bc420\") " pod="hostpath-provisioner/csi-hostpathplugin-cp2lw" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.052287 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/486a990d-7a56-4eea-a44d-d05a412718c2-metrics-tls\") pod \"dns-operator-744455d44c-2z5gg\" (UID: \"486a990d-7a56-4eea-a44d-d05a412718c2\") " pod="openshift-dns-operator/dns-operator-744455d44c-2z5gg" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.052309 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/2be5b8df-aaff-4a2b-9b54-78a7e58bc420-csi-data-dir\") pod \"csi-hostpathplugin-cp2lw\" (UID: \"2be5b8df-aaff-4a2b-9b54-78a7e58bc420\") " pod="hostpath-provisioner/csi-hostpathplugin-cp2lw" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.052476 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/2be5b8df-aaff-4a2b-9b54-78a7e58bc420-csi-data-dir\") pod \"csi-hostpathplugin-cp2lw\" (UID: \"2be5b8df-aaff-4a2b-9b54-78a7e58bc420\") " pod="hostpath-provisioner/csi-hostpathplugin-cp2lw" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.052981 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" 
(UniqueName: \"kubernetes.io/configmap/30ff941c-3c4b-4229-af5a-78bb244a385b-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-x26ck\" (UID: \"30ff941c-3c4b-4229-af5a-78bb244a385b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x26ck" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.055497 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/12abcb2c-895a-46af-9c26-66e358259ce9-profile-collector-cert\") pod \"catalog-operator-68c6474976-xcfg6\" (UID: \"12abcb2c-895a-46af-9c26-66e358259ce9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xcfg6" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.056299 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a240fd7b-5854-4548-a847-e5590111964b-secret-volume\") pod \"collect-profiles-29555340-7tvjm\" (UID: \"a240fd7b-5854-4548-a847-e5590111964b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555340-7tvjm" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.069959 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.089207 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.109296 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.122535 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f57a417a-5175-4210-98a0-69e579c22e14-serving-cert\") pod \"console-operator-58897d9998-ww8lt\" (UID: 
\"f57a417a-5175-4210-98a0-69e579c22e14\") " pod="openshift-console-operator/console-operator-58897d9998-ww8lt" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.129735 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.140096 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f57a417a-5175-4210-98a0-69e579c22e14-config\") pod \"console-operator-58897d9998-ww8lt\" (UID: \"f57a417a-5175-4210-98a0-69e579c22e14\") " pod="openshift-console-operator/console-operator-58897d9998-ww8lt" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.160238 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.169269 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.171342 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f57a417a-5175-4210-98a0-69e579c22e14-trusted-ca\") pod \"console-operator-58897d9998-ww8lt\" (UID: \"f57a417a-5175-4210-98a0-69e579c22e14\") " pod="openshift-console-operator/console-operator-58897d9998-ww8lt" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.179105 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8f4e3ccc-83e5-40ae-bac2-a5bb1362a531-images\") pod \"machine-config-operator-74547568cd-k6dcl\" (UID: \"8f4e3ccc-83e5-40ae-bac2-a5bb1362a531\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k6dcl" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.190138 4778 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.209016 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.223509 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/78cc82c7-719e-43ad-926f-a387e0845219-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-q677m\" (UID: \"78cc82c7-719e-43ad-926f-a387e0845219\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q677m" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.241141 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.249664 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.249726 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/78cc82c7-719e-43ad-926f-a387e0845219-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-q677m\" (UID: \"78cc82c7-719e-43ad-926f-a387e0845219\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q677m" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.253530 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f799c7e9-1c31-40bc-9ece-06a086683a98-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-zkrqr\" (UID: \"f799c7e9-1c31-40bc-9ece-06a086683a98\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zkrqr" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.269370 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.298400 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.302595 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/24f4aaf5-c17b-4cd8-9284-6df37f1c2f2d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2wqm5\" (UID: \"24f4aaf5-c17b-4cd8-9284-6df37f1c2f2d\") " pod="openshift-marketplace/marketplace-operator-79b997595-2wqm5" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.309572 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.330470 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.335874 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/24f4aaf5-c17b-4cd8-9284-6df37f1c2f2d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2wqm5\" (UID: \"24f4aaf5-c17b-4cd8-9284-6df37f1c2f2d\") " pod="openshift-marketplace/marketplace-operator-79b997595-2wqm5" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.354631 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.370155 4778 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.378561 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ac3e8bc-e165-45d4-8c32-1ccda9769857-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qtkq6\" (UID: \"0ac3e8bc-e165-45d4-8c32-1ccda9769857\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qtkq6" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.390497 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.409934 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.417682 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ac3e8bc-e165-45d4-8c32-1ccda9769857-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qtkq6\" (UID: \"0ac3e8bc-e165-45d4-8c32-1ccda9769857\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qtkq6" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.429663 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.450306 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.469434 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 12 13:12:45 crc 
kubenswrapper[4778]: I0312 13:12:45.478771 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c320d1aa-c376-41f2-ac5a-8432120b68e0-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-kc7s7\" (UID: \"c320d1aa-c376-41f2-ac5a-8432120b68e0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kc7s7" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.489711 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.496163 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f1f25dae-f3e4-481d-8451-4851b60b2ec4-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-98lbj\" (UID: \"f1f25dae-f3e4-481d-8451-4851b60b2ec4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-98lbj" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.509457 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.529735 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.549254 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.569139 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.588914 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 12 13:12:45 crc kubenswrapper[4778]: 
I0312 13:12:45.609743 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.630450 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.650461 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.670175 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.676983 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af58c501-1c93-4f7a-bdf9-1255879aea5a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-wxkb2\" (UID: \"af58c501-1c93-4f7a-bdf9-1255879aea5a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wxkb2" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.688731 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.693927 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af58c501-1c93-4f7a-bdf9-1255879aea5a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-wxkb2\" (UID: \"af58c501-1c93-4f7a-bdf9-1255879aea5a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wxkb2" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.708852 4778 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.729689 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.749824 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.756519 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/30ff941c-3c4b-4229-af5a-78bb244a385b-proxy-tls\") pod \"machine-config-controller-84d6567774-x26ck\" (UID: \"30ff941c-3c4b-4229-af5a-78bb244a385b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x26ck" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.769986 4778 request.go:700] Waited for 1.010967119s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.772466 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.789638 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.809610 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.829236 4778 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.849785 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.870413 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.889642 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.909620 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.929295 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.937976 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/486a990d-7a56-4eea-a44d-d05a412718c2-metrics-tls\") pod \"dns-operator-744455d44c-2z5gg\" (UID: \"486a990d-7a56-4eea-a44d-d05a412718c2\") " pod="openshift-dns-operator/dns-operator-744455d44c-2z5gg" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.949093 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.970368 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 12 13:12:45 crc kubenswrapper[4778]: I0312 13:12:45.989256 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Mar 12 13:12:46 crc kubenswrapper[4778]: I0312 13:12:46.008906 4778 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 12 13:12:46 crc kubenswrapper[4778]: I0312 13:12:46.029386 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 12 13:12:46 crc kubenswrapper[4778]: I0312 13:12:46.049683 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 12 13:12:46 crc kubenswrapper[4778]: E0312 13:12:46.052499 4778 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: failed to sync configmap cache: timed out waiting for the condition Mar 12 13:12:46 crc kubenswrapper[4778]: E0312 13:12:46.052509 4778 configmap.go:193] Couldn't get configMap openshift-operator-lifecycle-manager/collect-profiles-config: failed to sync configmap cache: timed out waiting for the condition Mar 12 13:12:46 crc kubenswrapper[4778]: E0312 13:12:46.052569 4778 secret.go:188] Couldn't get secret openshift-service-ca-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 12 13:12:46 crc kubenswrapper[4778]: E0312 13:12:46.052617 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/32bf6158-393f-4423-9255-345581ec5bf1-config podName:32bf6158-393f-4423-9255-345581ec5bf1 nodeName:}" failed. No retries permitted until 2026-03-12 13:12:46.552577335 +0000 UTC m=+185.001272771 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/32bf6158-393f-4423-9255-345581ec5bf1-config") pod "service-ca-operator-777779d784-pg48j" (UID: "32bf6158-393f-4423-9255-345581ec5bf1") : failed to sync configmap cache: timed out waiting for the condition Mar 12 13:12:46 crc kubenswrapper[4778]: E0312 13:12:46.052530 4778 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 12 13:12:46 crc kubenswrapper[4778]: E0312 13:12:46.052689 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a240fd7b-5854-4548-a847-e5590111964b-config-volume podName:a240fd7b-5854-4548-a847-e5590111964b nodeName:}" failed. No retries permitted until 2026-03-12 13:12:46.552658938 +0000 UTC m=+185.001354364 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a240fd7b-5854-4548-a847-e5590111964b-config-volume") pod "collect-profiles-29555340-7tvjm" (UID: "a240fd7b-5854-4548-a847-e5590111964b") : failed to sync configmap cache: timed out waiting for the condition Mar 12 13:12:46 crc kubenswrapper[4778]: E0312 13:12:46.052753 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/32bf6158-393f-4423-9255-345581ec5bf1-serving-cert podName:32bf6158-393f-4423-9255-345581ec5bf1 nodeName:}" failed. No retries permitted until 2026-03-12 13:12:46.552707169 +0000 UTC m=+185.001402705 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/32bf6158-393f-4423-9255-345581ec5bf1-serving-cert") pod "service-ca-operator-777779d784-pg48j" (UID: "32bf6158-393f-4423-9255-345581ec5bf1") : failed to sync secret cache: timed out waiting for the condition Mar 12 13:12:46 crc kubenswrapper[4778]: E0312 13:12:46.052788 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12abcb2c-895a-46af-9c26-66e358259ce9-srv-cert podName:12abcb2c-895a-46af-9c26-66e358259ce9 nodeName:}" failed. No retries permitted until 2026-03-12 13:12:46.552776751 +0000 UTC m=+185.001472187 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/12abcb2c-895a-46af-9c26-66e358259ce9-srv-cert") pod "catalog-operator-68c6474976-xcfg6" (UID: "12abcb2c-895a-46af-9c26-66e358259ce9") : failed to sync secret cache: timed out waiting for the condition Mar 12 13:12:46 crc kubenswrapper[4778]: I0312 13:12:46.069042 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 12 13:12:46 crc kubenswrapper[4778]: I0312 13:12:46.089772 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 12 13:12:46 crc kubenswrapper[4778]: I0312 13:12:46.109314 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 12 13:12:46 crc kubenswrapper[4778]: I0312 13:12:46.129273 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 12 13:12:46 crc kubenswrapper[4778]: I0312 13:12:46.150031 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 12 13:12:46 crc kubenswrapper[4778]: I0312 13:12:46.169283 4778 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 12 13:12:46 crc kubenswrapper[4778]: I0312 13:12:46.200849 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 12 13:12:46 crc kubenswrapper[4778]: I0312 13:12:46.209596 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 12 13:12:46 crc kubenswrapper[4778]: I0312 13:12:46.230475 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 12 13:12:46 crc kubenswrapper[4778]: I0312 13:12:46.250161 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 12 13:12:46 crc kubenswrapper[4778]: I0312 13:12:46.271799 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Mar 12 13:12:46 crc kubenswrapper[4778]: I0312 13:12:46.289996 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 12 13:12:46 crc kubenswrapper[4778]: I0312 13:12:46.309570 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Mar 12 13:12:46 crc kubenswrapper[4778]: I0312 13:12:46.328981 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 12 13:12:46 crc kubenswrapper[4778]: I0312 13:12:46.349665 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 12 13:12:46 crc kubenswrapper[4778]: I0312 13:12:46.368768 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 13:12:46 crc 
kubenswrapper[4778]: I0312 13:12:46.388972 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 13:12:46 crc kubenswrapper[4778]: I0312 13:12:46.409157 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 12 13:12:46 crc kubenswrapper[4778]: I0312 13:12:46.428930 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Mar 12 13:12:46 crc kubenswrapper[4778]: I0312 13:12:46.449480 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 12 13:12:46 crc kubenswrapper[4778]: I0312 13:12:46.468971 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 12 13:12:46 crc kubenswrapper[4778]: I0312 13:12:46.488895 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 12 13:12:46 crc kubenswrapper[4778]: I0312 13:12:46.509101 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 12 13:12:46 crc kubenswrapper[4778]: I0312 13:12:46.529492 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Mar 12 13:12:46 crc kubenswrapper[4778]: I0312 13:12:46.549452 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 12 13:12:46 crc kubenswrapper[4778]: I0312 13:12:46.569103 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 12 13:12:46 crc kubenswrapper[4778]: I0312 13:12:46.586790 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/12abcb2c-895a-46af-9c26-66e358259ce9-srv-cert\") pod \"catalog-operator-68c6474976-xcfg6\" (UID: \"12abcb2c-895a-46af-9c26-66e358259ce9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xcfg6" Mar 12 13:12:46 crc kubenswrapper[4778]: I0312 13:12:46.586872 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32bf6158-393f-4423-9255-345581ec5bf1-serving-cert\") pod \"service-ca-operator-777779d784-pg48j\" (UID: \"32bf6158-393f-4423-9255-345581ec5bf1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pg48j" Mar 12 13:12:46 crc kubenswrapper[4778]: I0312 13:12:46.586942 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32bf6158-393f-4423-9255-345581ec5bf1-config\") pod \"service-ca-operator-777779d784-pg48j\" (UID: \"32bf6158-393f-4423-9255-345581ec5bf1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pg48j" Mar 12 13:12:46 crc kubenswrapper[4778]: I0312 13:12:46.587328 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a240fd7b-5854-4548-a847-e5590111964b-config-volume\") pod \"collect-profiles-29555340-7tvjm\" (UID: \"a240fd7b-5854-4548-a847-e5590111964b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555340-7tvjm" Mar 12 13:12:46 crc kubenswrapper[4778]: I0312 13:12:46.588857 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a240fd7b-5854-4548-a847-e5590111964b-config-volume\") pod \"collect-profiles-29555340-7tvjm\" (UID: \"a240fd7b-5854-4548-a847-e5590111964b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555340-7tvjm" Mar 12 13:12:46 crc kubenswrapper[4778]: I0312 13:12:46.589042 4778 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32bf6158-393f-4423-9255-345581ec5bf1-config\") pod \"service-ca-operator-777779d784-pg48j\" (UID: \"32bf6158-393f-4423-9255-345581ec5bf1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pg48j" Mar 12 13:12:46 crc kubenswrapper[4778]: I0312 13:12:46.589834 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 12 13:12:46 crc kubenswrapper[4778]: I0312 13:12:46.592299 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/12abcb2c-895a-46af-9c26-66e358259ce9-srv-cert\") pod \"catalog-operator-68c6474976-xcfg6\" (UID: \"12abcb2c-895a-46af-9c26-66e358259ce9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xcfg6" Mar 12 13:12:46 crc kubenswrapper[4778]: I0312 13:12:46.593681 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32bf6158-393f-4423-9255-345581ec5bf1-serving-cert\") pod \"service-ca-operator-777779d784-pg48j\" (UID: \"32bf6158-393f-4423-9255-345581ec5bf1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pg48j" Mar 12 13:12:46 crc kubenswrapper[4778]: I0312 13:12:46.610215 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Mar 12 13:12:46 crc kubenswrapper[4778]: I0312 13:12:46.629068 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 12 13:12:46 crc kubenswrapper[4778]: I0312 13:12:46.648648 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Mar 12 13:12:46 crc kubenswrapper[4778]: I0312 13:12:46.675001 4778 reflector.go:368] Caches populated for *v1.Secret from 
object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Mar 12 13:12:46 crc kubenswrapper[4778]: I0312 13:12:46.689459 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Mar 12 13:12:46 crc kubenswrapper[4778]: I0312 13:12:46.738223 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9bl9\" (UniqueName: \"kubernetes.io/projected/06bbf7b7-3e40-4aa0-a3db-a56897f5488c-kube-api-access-c9bl9\") pod \"controller-manager-879f6c89f-pgrb5\" (UID: \"06bbf7b7-3e40-4aa0-a3db-a56897f5488c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pgrb5" Mar 12 13:12:46 crc kubenswrapper[4778]: I0312 13:12:46.759621 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7cpc\" (UniqueName: \"kubernetes.io/projected/f0a4c9a9-348c-4271-b466-4b94f11b2c7c-kube-api-access-p7cpc\") pod \"authentication-operator-69f744f599-8dkpx\" (UID: \"f0a4c9a9-348c-4271-b466-4b94f11b2c7c\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8dkpx" Mar 12 13:12:46 crc kubenswrapper[4778]: I0312 13:12:46.768441 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhhjj\" (UniqueName: \"kubernetes.io/projected/de34cf46-4b6a-4f7a-8225-eb77bec57450-kube-api-access-rhhjj\") pod \"apiserver-76f77b778f-xz42x\" (UID: \"de34cf46-4b6a-4f7a-8225-eb77bec57450\") " pod="openshift-apiserver/apiserver-76f77b778f-xz42x" Mar 12 13:12:46 crc kubenswrapper[4778]: I0312 13:12:46.787472 4778 request.go:700] Waited for 1.94666237s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/serviceaccounts/openshift-apiserver-operator/token Mar 12 13:12:46 crc kubenswrapper[4778]: I0312 13:12:46.789006 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkkl7\" 
(UniqueName: \"kubernetes.io/projected/e2967620-e2ce-4763-8a6c-e5a37f3a1f98-kube-api-access-pkkl7\") pod \"machine-api-operator-5694c8668f-242cb\" (UID: \"e2967620-e2ce-4763-8a6c-e5a37f3a1f98\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-242cb" Mar 12 13:12:46 crc kubenswrapper[4778]: I0312 13:12:46.809470 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2t5m5\" (UniqueName: \"kubernetes.io/projected/53a87d9e-095f-4669-b121-0b2c88e5fabb-kube-api-access-2t5m5\") pod \"openshift-apiserver-operator-796bbdcf4f-dh8l6\" (UID: \"53a87d9e-095f-4669-b121-0b2c88e5fabb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dh8l6" Mar 12 13:12:46 crc kubenswrapper[4778]: I0312 13:12:46.817498 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-8dkpx" Mar 12 13:12:46 crc kubenswrapper[4778]: I0312 13:12:46.828384 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smq82\" (UniqueName: \"kubernetes.io/projected/e9b292b2-1928-45d2-ad7f-8d510ebaa771-kube-api-access-smq82\") pod \"cluster-samples-operator-665b6dd947-tplzm\" (UID: \"e9b292b2-1928-45d2-ad7f-8d510ebaa771\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tplzm" Mar 12 13:12:46 crc kubenswrapper[4778]: I0312 13:12:46.848912 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k56zt\" (UniqueName: \"kubernetes.io/projected/06ffdff1-2f10-4f38-b7fd-b98e883bbc63-kube-api-access-k56zt\") pod \"route-controller-manager-6576b87f9c-zpgxh\" (UID: \"06ffdff1-2f10-4f38-b7fd-b98e883bbc63\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpgxh" Mar 12 13:12:46 crc kubenswrapper[4778]: I0312 13:12:46.853059 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tplzm" Mar 12 13:12:46 crc kubenswrapper[4778]: I0312 13:12:46.860515 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-242cb" Mar 12 13:12:46 crc kubenswrapper[4778]: I0312 13:12:46.862861 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7xw2\" (UniqueName: \"kubernetes.io/projected/f56ab022-7fcd-406c-b308-b8d5f93a8b55-kube-api-access-b7xw2\") pod \"openshift-config-operator-7777fb866f-x4bxj\" (UID: \"f56ab022-7fcd-406c-b308-b8d5f93a8b55\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x4bxj" Mar 12 13:12:46 crc kubenswrapper[4778]: I0312 13:12:46.871085 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpgxh" Mar 12 13:12:46 crc kubenswrapper[4778]: I0312 13:12:46.899730 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrlvl\" (UniqueName: \"kubernetes.io/projected/f36ec67c-df24-46ce-94b9-10619822c15a-kube-api-access-xrlvl\") pod \"oauth-openshift-558db77b4-5kw4v\" (UID: \"f36ec67c-df24-46ce-94b9-10619822c15a\") " pod="openshift-authentication/oauth-openshift-558db77b4-5kw4v" Mar 12 13:12:46 crc kubenswrapper[4778]: I0312 13:12:46.919339 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njgc8\" (UniqueName: \"kubernetes.io/projected/4c56cc09-5f03-4bcc-a4b1-8fed0dcc49bd-kube-api-access-njgc8\") pod \"machine-approver-56656f9798-qxqsb\" (UID: \"4c56cc09-5f03-4bcc-a4b1-8fed0dcc49bd\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qxqsb" Mar 12 13:12:46 crc kubenswrapper[4778]: I0312 13:12:46.967895 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmb4r\" (UniqueName: 
\"kubernetes.io/projected/24f4aaf5-c17b-4cd8-9284-6df37f1c2f2d-kube-api-access-pmb4r\") pod \"marketplace-operator-79b997595-2wqm5\" (UID: \"24f4aaf5-c17b-4cd8-9284-6df37f1c2f2d\") " pod="openshift-marketplace/marketplace-operator-79b997595-2wqm5" Mar 12 13:12:46 crc kubenswrapper[4778]: I0312 13:12:46.997572 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dh8l6" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.007256 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a0d33ee6-3a31-4464-b401-7469bf04d240-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-dqxml\" (UID: \"a0d33ee6-3a31-4464-b401-7469bf04d240\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dqxml" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.011610 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2wqm5" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.022037 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsrcs\" (UniqueName: \"kubernetes.io/projected/f57a417a-5175-4210-98a0-69e579c22e14-kube-api-access-hsrcs\") pod \"console-operator-58897d9998-ww8lt\" (UID: \"f57a417a-5175-4210-98a0-69e579c22e14\") " pod="openshift-console-operator/console-operator-58897d9998-ww8lt" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.031079 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cl6l\" (UniqueName: \"kubernetes.io/projected/f799c7e9-1c31-40bc-9ece-06a086683a98-kube-api-access-6cl6l\") pod \"control-plane-machine-set-operator-78cbb6b69f-zkrqr\" (UID: \"f799c7e9-1c31-40bc-9ece-06a086683a98\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zkrqr" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.038310 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-pgrb5" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.046528 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-xz42x" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.050651 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqbjv\" (UniqueName: \"kubernetes.io/projected/c825022c-79bc-44ae-bc64-ee9614aafe25-kube-api-access-rqbjv\") pod \"console-f9d7485db-xwwxp\" (UID: \"c825022c-79bc-44ae-bc64-ee9614aafe25\") " pod="openshift-console/console-f9d7485db-xwwxp" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.065402 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ac3e8bc-e165-45d4-8c32-1ccda9769857-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qtkq6\" (UID: \"0ac3e8bc-e165-45d4-8c32-1ccda9769857\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qtkq6" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.079657 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-242cb"] Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.084805 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5gts\" (UniqueName: \"kubernetes.io/projected/78cc82c7-719e-43ad-926f-a387e0845219-kube-api-access-n5gts\") pod \"cluster-image-registry-operator-dc59b4c8b-q677m\" (UID: \"78cc82c7-719e-43ad-926f-a387e0845219\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q677m" Mar 12 13:12:47 crc kubenswrapper[4778]: W0312 13:12:47.091731 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2967620_e2ce_4763_8a6c_e5a37f3a1f98.slice/crio-ebee3cbb87a7a15df3d9290e795eb64729bc3bba990be43c53ba64e5f73c2ce6 WatchSource:0}: Error finding container ebee3cbb87a7a15df3d9290e795eb64729bc3bba990be43c53ba64e5f73c2ce6: Status 404 returned error 
can't find the container with id ebee3cbb87a7a15df3d9290e795eb64729bc3bba990be43c53ba64e5f73c2ce6 Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.103741 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/78cc82c7-719e-43ad-926f-a387e0845219-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-q677m\" (UID: \"78cc82c7-719e-43ad-926f-a387e0845219\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q677m" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.122972 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rk6h6\" (UniqueName: \"kubernetes.io/projected/5bb00a46-7425-4d14-a10c-779a5036bba6-kube-api-access-rk6h6\") pod \"apiserver-7bbb656c7d-vnndl\" (UID: \"5bb00a46-7425-4d14-a10c-779a5036bba6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vnndl" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.128037 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5kw4v" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.134758 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qxqsb" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.144779 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfpd8\" (UniqueName: \"kubernetes.io/projected/c320d1aa-c376-41f2-ac5a-8432120b68e0-kube-api-access-lfpd8\") pod \"package-server-manager-789f6589d5-kc7s7\" (UID: \"c320d1aa-c376-41f2-ac5a-8432120b68e0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kc7s7" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.145112 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x4bxj" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.167893 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98vx9\" (UniqueName: \"kubernetes.io/projected/84bb574a-c91e-4720-83c6-6c47c9344ad2-kube-api-access-98vx9\") pod \"openshift-controller-manager-operator-756b6f6bc6-mtlvl\" (UID: \"84bb574a-c91e-4720-83c6-6c47c9344ad2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mtlvl" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.179430 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qxqsb" event={"ID":"4c56cc09-5f03-4bcc-a4b1-8fed0dcc49bd","Type":"ContainerStarted","Data":"52a4dad447678979841752bb0c254c6980c88948a0ddfa89288cb5ac62331582"} Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.187240 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vnndl" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.187661 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-242cb" event={"ID":"e2967620-e2ce-4763-8a6c-e5a37f3a1f98","Type":"ContainerStarted","Data":"ebee3cbb87a7a15df3d9290e795eb64729bc3bba990be43c53ba64e5f73c2ce6"} Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.191853 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2b9j4\" (UniqueName: \"kubernetes.io/projected/8af48f77-25f7-49ca-8bcb-2481aa72ee66-kube-api-access-2b9j4\") pod \"downloads-7954f5f757-mx6kn\" (UID: \"8af48f77-25f7-49ca-8bcb-2481aa72ee66\") " pod="openshift-console/downloads-7954f5f757-mx6kn" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.208004 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bx5wn\" (UniqueName: \"kubernetes.io/projected/5c8d947a-b62b-4eb9-81d7-94530285e8dc-kube-api-access-bx5wn\") pod \"router-default-5444994796-ms5xq\" (UID: \"5c8d947a-b62b-4eb9-81d7-94530285e8dc\") " pod="openshift-ingress/router-default-5444994796-ms5xq" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.211458 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mtlvl" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.217025 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dqxml" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.227979 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5rtt\" (UniqueName: \"kubernetes.io/projected/3373fbdf-245c-4e98-8bd7-7ad30eb98d76-kube-api-access-z5rtt\") pod \"olm-operator-6b444d44fb-mfjpc\" (UID: \"3373fbdf-245c-4e98-8bd7-7ad30eb98d76\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mfjpc" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.239746 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-xwwxp" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.243929 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2wqm5"] Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.244576 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9vx5\" (UniqueName: \"kubernetes.io/projected/8f4e3ccc-83e5-40ae-bac2-a5bb1362a531-kube-api-access-t9vx5\") pod \"machine-config-operator-74547568cd-k6dcl\" (UID: \"8f4e3ccc-83e5-40ae-bac2-a5bb1362a531\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k6dcl" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.259613 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-pgrb5"] Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.262532 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22194d8c-315e-46b9-a23b-daab9d020ce4-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-sskj6\" (UID: \"22194d8c-315e-46b9-a23b-daab9d020ce4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-sskj6" Mar 12 13:12:47 
crc kubenswrapper[4778]: W0312 13:12:47.274450 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24f4aaf5_c17b_4cd8_9284_6df37f1c2f2d.slice/crio-9ea9ce91a5458d09f7e543bf678a01cfeb2e8462d6860a8c5523bea49359f807 WatchSource:0}: Error finding container 9ea9ce91a5458d09f7e543bf678a01cfeb2e8462d6860a8c5523bea49359f807: Status 404 returned error can't find the container with id 9ea9ce91a5458d09f7e543bf678a01cfeb2e8462d6860a8c5523bea49359f807 Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.280890 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k6dcl" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.286483 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-ww8lt" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.290427 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwz6k\" (UniqueName: \"kubernetes.io/projected/12abcb2c-895a-46af-9c26-66e358259ce9-kube-api-access-nwz6k\") pod \"catalog-operator-68c6474976-xcfg6\" (UID: \"12abcb2c-895a-46af-9c26-66e358259ce9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xcfg6" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.292659 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q677m" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.298875 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zkrqr" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.303913 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6h7h\" (UniqueName: \"kubernetes.io/projected/9f210efd-2ac0-4b67-89c5-fcd9f52f6e01-kube-api-access-b6h7h\") pod \"auto-csr-approver-29555352-q7fvr\" (UID: \"9f210efd-2ac0-4b67-89c5-fcd9f52f6e01\") " pod="openshift-infra/auto-csr-approver-29555352-q7fvr" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.304271 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dh8l6"] Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.320159 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qtkq6" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.326159 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kc7s7" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.328606 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88x7r\" (UniqueName: \"kubernetes.io/projected/a240fd7b-5854-4548-a847-e5590111964b-kube-api-access-88x7r\") pod \"collect-profiles-29555340-7tvjm\" (UID: \"a240fd7b-5854-4548-a847-e5590111964b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555340-7tvjm" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.341512 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tplzm"] Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.343581 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-8dkpx"] Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.347663 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlpsj\" (UniqueName: \"kubernetes.io/projected/30ff941c-3c4b-4229-af5a-78bb244a385b-kube-api-access-hlpsj\") pod \"machine-config-controller-84d6567774-x26ck\" (UID: \"30ff941c-3c4b-4229-af5a-78bb244a385b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x26ck" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.369764 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpgxh"] Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.372873 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9grsg\" (UniqueName: \"kubernetes.io/projected/f1f25dae-f3e4-481d-8451-4851b60b2ec4-kube-api-access-9grsg\") pod \"multus-admission-controller-857f4d67dd-98lbj\" (UID: \"f1f25dae-f3e4-481d-8451-4851b60b2ec4\") " 
pod="openshift-multus/multus-admission-controller-857f4d67dd-98lbj" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.390029 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5kw4v"] Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.393569 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9k6q\" (UniqueName: \"kubernetes.io/projected/32bf6158-393f-4423-9255-345581ec5bf1-kube-api-access-p9k6q\") pod \"service-ca-operator-777779d784-pg48j\" (UID: \"32bf6158-393f-4423-9255-345581ec5bf1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pg48j" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.395440 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x26ck" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.410470 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfg2m\" (UniqueName: \"kubernetes.io/projected/486a990d-7a56-4eea-a44d-d05a412718c2-kube-api-access-nfg2m\") pod \"dns-operator-744455d44c-2z5gg\" (UID: \"486a990d-7a56-4eea-a44d-d05a412718c2\") " pod="openshift-dns-operator/dns-operator-744455d44c-2z5gg" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.423378 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-2z5gg" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.424562 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58p6m\" (UniqueName: \"kubernetes.io/projected/2be5b8df-aaff-4a2b-9b54-78a7e58bc420-kube-api-access-58p6m\") pod \"csi-hostpathplugin-cp2lw\" (UID: \"2be5b8df-aaff-4a2b-9b54-78a7e58bc420\") " pod="hostpath-provisioner/csi-hostpathplugin-cp2lw" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.445062 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555340-7tvjm" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.452092 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6t2k\" (UniqueName: \"kubernetes.io/projected/af58c501-1c93-4f7a-bdf9-1255879aea5a-kube-api-access-n6t2k\") pod \"kube-storage-version-migrator-operator-b67b599dd-wxkb2\" (UID: \"af58c501-1c93-4f7a-bdf9-1255879aea5a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wxkb2" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.460384 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xz42x"] Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.460690 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xcfg6" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.478218 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dnkc\" (UniqueName: \"kubernetes.io/projected/a875bbd5-0126-4d1c-8b7e-97ac32863981-kube-api-access-4dnkc\") pod \"migrator-59844c95c7-r2r62\" (UID: \"a875bbd5-0126-4d1c-8b7e-97ac32863981\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r2r62" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.478403 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-mx6kn" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.481550 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-vnndl"] Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.497062 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mfjpc" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.498657 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555352-q7fvr" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.503714 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-ms5xq" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.512422 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-pg48j" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.513490 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqmqc\" (UniqueName: \"kubernetes.io/projected/138bb189-6182-4210-91a7-140f93f36f81-kube-api-access-bqmqc\") pod \"dns-default-8zmxq\" (UID: \"138bb189-6182-4210-91a7-140f93f36f81\") " pod="openshift-dns/dns-default-8zmxq" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.513555 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/51ee714f-fb23-4420-9e70-1b3134eea18e-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.513601 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/51ee714f-fb23-4420-9e70-1b3134eea18e-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.513625 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfp8b\" (UniqueName: \"kubernetes.io/projected/5b3e2f12-fdec-46e9-82b4-6777c07281c6-kube-api-access-qfp8b\") pod \"packageserver-d55dfcdfc-5m8sg\" (UID: \"5b3e2f12-fdec-46e9-82b4-6777c07281c6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5m8sg" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.513666 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5b3e2f12-fdec-46e9-82b4-6777c07281c6-tmpfs\") pod \"packageserver-d55dfcdfc-5m8sg\" (UID: \"5b3e2f12-fdec-46e9-82b4-6777c07281c6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5m8sg" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.513868 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8qmh\" (UniqueName: \"kubernetes.io/projected/d6b107a5-befb-4e43-9aa6-6b66ff686bf0-kube-api-access-h8qmh\") pod \"machine-config-server-qf4nv\" (UID: \"d6b107a5-befb-4e43-9aa6-6b66ff686bf0\") " pod="openshift-machine-config-operator/machine-config-server-qf4nv" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.513956 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/51ee714f-fb23-4420-9e70-1b3134eea18e-registry-tls\") pod \"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.514082 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/30697403-66e5-4f68-8e2f-804017bd9d71-signing-key\") pod \"service-ca-9c57cc56f-vpp8t\" (UID: \"30697403-66e5-4f68-8e2f-804017bd9d71\") " pod="openshift-service-ca/service-ca-9c57cc56f-vpp8t" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.514114 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/658a38d7-a172-432e-a612-6e8cf83f17a2-serving-cert\") pod \"etcd-operator-b45778765-bdcvl\" (UID: \"658a38d7-a172-432e-a612-6e8cf83f17a2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bdcvl" Mar 12 13:12:47 crc kubenswrapper[4778]: 
I0312 13:12:47.514153 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbz7v\" (UniqueName: \"kubernetes.io/projected/658a38d7-a172-432e-a612-6e8cf83f17a2-kube-api-access-lbz7v\") pod \"etcd-operator-b45778765-bdcvl\" (UID: \"658a38d7-a172-432e-a612-6e8cf83f17a2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bdcvl" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.514753 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d6b107a5-befb-4e43-9aa6-6b66ff686bf0-certs\") pod \"machine-config-server-qf4nv\" (UID: \"d6b107a5-befb-4e43-9aa6-6b66ff686bf0\") " pod="openshift-machine-config-operator/machine-config-server-qf4nv" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.514880 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnjq5\" (UniqueName: \"kubernetes.io/projected/30697403-66e5-4f68-8e2f-804017bd9d71-kube-api-access-qnjq5\") pod \"service-ca-9c57cc56f-vpp8t\" (UID: \"30697403-66e5-4f68-8e2f-804017bd9d71\") " pod="openshift-service-ca/service-ca-9c57cc56f-vpp8t" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.515331 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/51ee714f-fb23-4420-9e70-1b3134eea18e-trusted-ca\") pod \"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.515377 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/30697403-66e5-4f68-8e2f-804017bd9d71-signing-cabundle\") pod \"service-ca-9c57cc56f-vpp8t\" (UID: 
\"30697403-66e5-4f68-8e2f-804017bd9d71\") " pod="openshift-service-ca/service-ca-9c57cc56f-vpp8t" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.515422 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/658a38d7-a172-432e-a612-6e8cf83f17a2-etcd-client\") pod \"etcd-operator-b45778765-bdcvl\" (UID: \"658a38d7-a172-432e-a612-6e8cf83f17a2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bdcvl" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.515466 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d6b107a5-befb-4e43-9aa6-6b66ff686bf0-node-bootstrap-token\") pod \"machine-config-server-qf4nv\" (UID: \"d6b107a5-befb-4e43-9aa6-6b66ff686bf0\") " pod="openshift-machine-config-operator/machine-config-server-qf4nv" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.515496 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.515523 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5b3e2f12-fdec-46e9-82b4-6777c07281c6-webhook-cert\") pod \"packageserver-d55dfcdfc-5m8sg\" (UID: \"5b3e2f12-fdec-46e9-82b4-6777c07281c6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5m8sg" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.515543 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/658a38d7-a172-432e-a612-6e8cf83f17a2-etcd-service-ca\") pod \"etcd-operator-b45778765-bdcvl\" (UID: \"658a38d7-a172-432e-a612-6e8cf83f17a2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bdcvl" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.515567 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbvrg\" (UniqueName: \"kubernetes.io/projected/4d54f13d-85d8-4c95-acef-fcf9f197769a-kube-api-access-qbvrg\") pod \"ingress-operator-5b745b69d9-srhvx\" (UID: \"4d54f13d-85d8-4c95-acef-fcf9f197769a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-srhvx" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.515613 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/658a38d7-a172-432e-a612-6e8cf83f17a2-config\") pod \"etcd-operator-b45778765-bdcvl\" (UID: \"658a38d7-a172-432e-a612-6e8cf83f17a2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bdcvl" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.515646 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b7a887dd-1794-4d66-90a6-299512f32bd1-cert\") pod \"ingress-canary-d562t\" (UID: \"b7a887dd-1794-4d66-90a6-299512f32bd1\") " pod="openshift-ingress-canary/ingress-canary-d562t" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.515669 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kb4k\" (UniqueName: \"kubernetes.io/projected/b7a887dd-1794-4d66-90a6-299512f32bd1-kube-api-access-4kb4k\") pod \"ingress-canary-d562t\" (UID: \"b7a887dd-1794-4d66-90a6-299512f32bd1\") " pod="openshift-ingress-canary/ingress-canary-d562t" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.515691 4778 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4d54f13d-85d8-4c95-acef-fcf9f197769a-trusted-ca\") pod \"ingress-operator-5b745b69d9-srhvx\" (UID: \"4d54f13d-85d8-4c95-acef-fcf9f197769a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-srhvx" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.515748 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtgbd\" (UniqueName: \"kubernetes.io/projected/51ee714f-fb23-4420-9e70-1b3134eea18e-kube-api-access-mtgbd\") pod \"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.515783 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/51ee714f-fb23-4420-9e70-1b3134eea18e-bound-sa-token\") pod \"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.515803 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/138bb189-6182-4210-91a7-140f93f36f81-config-volume\") pod \"dns-default-8zmxq\" (UID: \"138bb189-6182-4210-91a7-140f93f36f81\") " pod="openshift-dns/dns-default-8zmxq" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.515836 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/138bb189-6182-4210-91a7-140f93f36f81-metrics-tls\") pod \"dns-default-8zmxq\" (UID: \"138bb189-6182-4210-91a7-140f93f36f81\") " pod="openshift-dns/dns-default-8zmxq" 
Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.515852 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/658a38d7-a172-432e-a612-6e8cf83f17a2-etcd-ca\") pod \"etcd-operator-b45778765-bdcvl\" (UID: \"658a38d7-a172-432e-a612-6e8cf83f17a2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bdcvl" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.515917 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/51ee714f-fb23-4420-9e70-1b3134eea18e-registry-certificates\") pod \"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.515938 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5b3e2f12-fdec-46e9-82b4-6777c07281c6-apiservice-cert\") pod \"packageserver-d55dfcdfc-5m8sg\" (UID: \"5b3e2f12-fdec-46e9-82b4-6777c07281c6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5m8sg" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.515960 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4d54f13d-85d8-4c95-acef-fcf9f197769a-metrics-tls\") pod \"ingress-operator-5b745b69d9-srhvx\" (UID: \"4d54f13d-85d8-4c95-acef-fcf9f197769a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-srhvx" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.515979 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4d54f13d-85d8-4c95-acef-fcf9f197769a-bound-sa-token\") pod 
\"ingress-operator-5b745b69d9-srhvx\" (UID: \"4d54f13d-85d8-4c95-acef-fcf9f197769a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-srhvx" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.521665 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dqxml"] Mar 12 13:12:47 crc kubenswrapper[4778]: E0312 13:12:47.524423 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 13:12:48.024396438 +0000 UTC m=+186.473091834 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxrx4" (UID: "51ee714f-fb23-4420-9e70-1b3134eea18e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.526530 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-sskj6" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.535703 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-cp2lw" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.624577 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 13:12:47 crc kubenswrapper[4778]: E0312 13:12:47.624739 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 13:12:48.124713792 +0000 UTC m=+186.573409188 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.625116 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/51ee714f-fb23-4420-9e70-1b3134eea18e-bound-sa-token\") pod \"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.625176 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/138bb189-6182-4210-91a7-140f93f36f81-config-volume\") pod 
\"dns-default-8zmxq\" (UID: \"138bb189-6182-4210-91a7-140f93f36f81\") " pod="openshift-dns/dns-default-8zmxq" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.625230 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/138bb189-6182-4210-91a7-140f93f36f81-metrics-tls\") pod \"dns-default-8zmxq\" (UID: \"138bb189-6182-4210-91a7-140f93f36f81\") " pod="openshift-dns/dns-default-8zmxq" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.625252 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/658a38d7-a172-432e-a612-6e8cf83f17a2-etcd-ca\") pod \"etcd-operator-b45778765-bdcvl\" (UID: \"658a38d7-a172-432e-a612-6e8cf83f17a2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bdcvl" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.625395 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/51ee714f-fb23-4420-9e70-1b3134eea18e-registry-certificates\") pod \"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.625414 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5b3e2f12-fdec-46e9-82b4-6777c07281c6-apiservice-cert\") pod \"packageserver-d55dfcdfc-5m8sg\" (UID: \"5b3e2f12-fdec-46e9-82b4-6777c07281c6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5m8sg" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.625457 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4d54f13d-85d8-4c95-acef-fcf9f197769a-metrics-tls\") pod \"ingress-operator-5b745b69d9-srhvx\" (UID: 
\"4d54f13d-85d8-4c95-acef-fcf9f197769a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-srhvx" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.625501 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4d54f13d-85d8-4c95-acef-fcf9f197769a-bound-sa-token\") pod \"ingress-operator-5b745b69d9-srhvx\" (UID: \"4d54f13d-85d8-4c95-acef-fcf9f197769a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-srhvx" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.625548 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/51ee714f-fb23-4420-9e70-1b3134eea18e-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.625567 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqmqc\" (UniqueName: \"kubernetes.io/projected/138bb189-6182-4210-91a7-140f93f36f81-kube-api-access-bqmqc\") pod \"dns-default-8zmxq\" (UID: \"138bb189-6182-4210-91a7-140f93f36f81\") " pod="openshift-dns/dns-default-8zmxq" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.625595 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/51ee714f-fb23-4420-9e70-1b3134eea18e-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.625613 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfp8b\" (UniqueName: 
\"kubernetes.io/projected/5b3e2f12-fdec-46e9-82b4-6777c07281c6-kube-api-access-qfp8b\") pod \"packageserver-d55dfcdfc-5m8sg\" (UID: \"5b3e2f12-fdec-46e9-82b4-6777c07281c6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5m8sg" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.625638 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5b3e2f12-fdec-46e9-82b4-6777c07281c6-tmpfs\") pod \"packageserver-d55dfcdfc-5m8sg\" (UID: \"5b3e2f12-fdec-46e9-82b4-6777c07281c6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5m8sg" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.625752 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8qmh\" (UniqueName: \"kubernetes.io/projected/d6b107a5-befb-4e43-9aa6-6b66ff686bf0-kube-api-access-h8qmh\") pod \"machine-config-server-qf4nv\" (UID: \"d6b107a5-befb-4e43-9aa6-6b66ff686bf0\") " pod="openshift-machine-config-operator/machine-config-server-qf4nv" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.625796 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/51ee714f-fb23-4420-9e70-1b3134eea18e-registry-tls\") pod \"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.625821 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/30697403-66e5-4f68-8e2f-804017bd9d71-signing-key\") pod \"service-ca-9c57cc56f-vpp8t\" (UID: \"30697403-66e5-4f68-8e2f-804017bd9d71\") " pod="openshift-service-ca/service-ca-9c57cc56f-vpp8t" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.625838 4778 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/658a38d7-a172-432e-a612-6e8cf83f17a2-serving-cert\") pod \"etcd-operator-b45778765-bdcvl\" (UID: \"658a38d7-a172-432e-a612-6e8cf83f17a2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bdcvl" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.625899 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbz7v\" (UniqueName: \"kubernetes.io/projected/658a38d7-a172-432e-a612-6e8cf83f17a2-kube-api-access-lbz7v\") pod \"etcd-operator-b45778765-bdcvl\" (UID: \"658a38d7-a172-432e-a612-6e8cf83f17a2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bdcvl" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.625964 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnjq5\" (UniqueName: \"kubernetes.io/projected/30697403-66e5-4f68-8e2f-804017bd9d71-kube-api-access-qnjq5\") pod \"service-ca-9c57cc56f-vpp8t\" (UID: \"30697403-66e5-4f68-8e2f-804017bd9d71\") " pod="openshift-service-ca/service-ca-9c57cc56f-vpp8t" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.625981 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d6b107a5-befb-4e43-9aa6-6b66ff686bf0-certs\") pod \"machine-config-server-qf4nv\" (UID: \"d6b107a5-befb-4e43-9aa6-6b66ff686bf0\") " pod="openshift-machine-config-operator/machine-config-server-qf4nv" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.626029 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/51ee714f-fb23-4420-9e70-1b3134eea18e-trusted-ca\") pod \"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.626046 4778 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/30697403-66e5-4f68-8e2f-804017bd9d71-signing-cabundle\") pod \"service-ca-9c57cc56f-vpp8t\" (UID: \"30697403-66e5-4f68-8e2f-804017bd9d71\") " pod="openshift-service-ca/service-ca-9c57cc56f-vpp8t" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.626076 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/658a38d7-a172-432e-a612-6e8cf83f17a2-etcd-client\") pod \"etcd-operator-b45778765-bdcvl\" (UID: \"658a38d7-a172-432e-a612-6e8cf83f17a2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bdcvl" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.626236 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d6b107a5-befb-4e43-9aa6-6b66ff686bf0-node-bootstrap-token\") pod \"machine-config-server-qf4nv\" (UID: \"d6b107a5-befb-4e43-9aa6-6b66ff686bf0\") " pod="openshift-machine-config-operator/machine-config-server-qf4nv" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.626304 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.626337 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5b3e2f12-fdec-46e9-82b4-6777c07281c6-webhook-cert\") pod \"packageserver-d55dfcdfc-5m8sg\" (UID: \"5b3e2f12-fdec-46e9-82b4-6777c07281c6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5m8sg" Mar 12 
13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.626373 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/658a38d7-a172-432e-a612-6e8cf83f17a2-etcd-service-ca\") pod \"etcd-operator-b45778765-bdcvl\" (UID: \"658a38d7-a172-432e-a612-6e8cf83f17a2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bdcvl" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.626400 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbvrg\" (UniqueName: \"kubernetes.io/projected/4d54f13d-85d8-4c95-acef-fcf9f197769a-kube-api-access-qbvrg\") pod \"ingress-operator-5b745b69d9-srhvx\" (UID: \"4d54f13d-85d8-4c95-acef-fcf9f197769a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-srhvx" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.626527 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/658a38d7-a172-432e-a612-6e8cf83f17a2-config\") pod \"etcd-operator-b45778765-bdcvl\" (UID: \"658a38d7-a172-432e-a612-6e8cf83f17a2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bdcvl" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.626637 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b7a887dd-1794-4d66-90a6-299512f32bd1-cert\") pod \"ingress-canary-d562t\" (UID: \"b7a887dd-1794-4d66-90a6-299512f32bd1\") " pod="openshift-ingress-canary/ingress-canary-d562t" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.626656 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4d54f13d-85d8-4c95-acef-fcf9f197769a-trusted-ca\") pod \"ingress-operator-5b745b69d9-srhvx\" (UID: \"4d54f13d-85d8-4c95-acef-fcf9f197769a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-srhvx" Mar 12 
13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.626694 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kb4k\" (UniqueName: \"kubernetes.io/projected/b7a887dd-1794-4d66-90a6-299512f32bd1-kube-api-access-4kb4k\") pod \"ingress-canary-d562t\" (UID: \"b7a887dd-1794-4d66-90a6-299512f32bd1\") " pod="openshift-ingress-canary/ingress-canary-d562t" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.626798 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtgbd\" (UniqueName: \"kubernetes.io/projected/51ee714f-fb23-4420-9e70-1b3134eea18e-kube-api-access-mtgbd\") pod \"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" Mar 12 13:12:47 crc kubenswrapper[4778]: E0312 13:12:47.627848 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 13:12:48.127834419 +0000 UTC m=+186.576529815 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxrx4" (UID: "51ee714f-fb23-4420-9e70-1b3134eea18e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.628656 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/658a38d7-a172-432e-a612-6e8cf83f17a2-config\") pod \"etcd-operator-b45778765-bdcvl\" (UID: \"658a38d7-a172-432e-a612-6e8cf83f17a2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bdcvl" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.631507 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/658a38d7-a172-432e-a612-6e8cf83f17a2-etcd-service-ca\") pod \"etcd-operator-b45778765-bdcvl\" (UID: \"658a38d7-a172-432e-a612-6e8cf83f17a2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bdcvl" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.631703 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/658a38d7-a172-432e-a612-6e8cf83f17a2-etcd-ca\") pod \"etcd-operator-b45778765-bdcvl\" (UID: \"658a38d7-a172-432e-a612-6e8cf83f17a2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bdcvl" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.631741 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/51ee714f-fb23-4420-9e70-1b3134eea18e-registry-certificates\") pod \"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.631809 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/138bb189-6182-4210-91a7-140f93f36f81-config-volume\") pod \"dns-default-8zmxq\" (UID: \"138bb189-6182-4210-91a7-140f93f36f81\") " pod="openshift-dns/dns-default-8zmxq" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.633227 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/51ee714f-fb23-4420-9e70-1b3134eea18e-trusted-ca\") pod \"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.633823 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/51ee714f-fb23-4420-9e70-1b3134eea18e-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.634533 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-98lbj" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.647625 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/30697403-66e5-4f68-8e2f-804017bd9d71-signing-cabundle\") pod \"service-ca-9c57cc56f-vpp8t\" (UID: \"30697403-66e5-4f68-8e2f-804017bd9d71\") " pod="openshift-service-ca/service-ca-9c57cc56f-vpp8t" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.648455 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/30697403-66e5-4f68-8e2f-804017bd9d71-signing-key\") pod \"service-ca-9c57cc56f-vpp8t\" (UID: \"30697403-66e5-4f68-8e2f-804017bd9d71\") " pod="openshift-service-ca/service-ca-9c57cc56f-vpp8t" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.656501 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b7a887dd-1794-4d66-90a6-299512f32bd1-cert\") pod \"ingress-canary-d562t\" (UID: \"b7a887dd-1794-4d66-90a6-299512f32bd1\") " pod="openshift-ingress-canary/ingress-canary-d562t" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.658949 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mtlvl"] Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.659884 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5b3e2f12-fdec-46e9-82b4-6777c07281c6-tmpfs\") pod \"packageserver-d55dfcdfc-5m8sg\" (UID: \"5b3e2f12-fdec-46e9-82b4-6777c07281c6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5m8sg" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.660018 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/5b3e2f12-fdec-46e9-82b4-6777c07281c6-apiservice-cert\") pod \"packageserver-d55dfcdfc-5m8sg\" (UID: \"5b3e2f12-fdec-46e9-82b4-6777c07281c6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5m8sg" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.660292 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/658a38d7-a172-432e-a612-6e8cf83f17a2-etcd-client\") pod \"etcd-operator-b45778765-bdcvl\" (UID: \"658a38d7-a172-432e-a612-6e8cf83f17a2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bdcvl" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.660412 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/138bb189-6182-4210-91a7-140f93f36f81-metrics-tls\") pod \"dns-default-8zmxq\" (UID: \"138bb189-6182-4210-91a7-140f93f36f81\") " pod="openshift-dns/dns-default-8zmxq" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.660417 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/658a38d7-a172-432e-a612-6e8cf83f17a2-serving-cert\") pod \"etcd-operator-b45778765-bdcvl\" (UID: \"658a38d7-a172-432e-a612-6e8cf83f17a2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bdcvl" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.660889 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d6b107a5-befb-4e43-9aa6-6b66ff686bf0-certs\") pod \"machine-config-server-qf4nv\" (UID: \"d6b107a5-befb-4e43-9aa6-6b66ff686bf0\") " pod="openshift-machine-config-operator/machine-config-server-qf4nv" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.660990 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/51ee714f-fb23-4420-9e70-1b3134eea18e-registry-tls\") 
pod \"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.661330 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4d54f13d-85d8-4c95-acef-fcf9f197769a-metrics-tls\") pod \"ingress-operator-5b745b69d9-srhvx\" (UID: \"4d54f13d-85d8-4c95-acef-fcf9f197769a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-srhvx" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.662322 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4d54f13d-85d8-4c95-acef-fcf9f197769a-trusted-ca\") pod \"ingress-operator-5b745b69d9-srhvx\" (UID: \"4d54f13d-85d8-4c95-acef-fcf9f197769a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-srhvx" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.663474 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5b3e2f12-fdec-46e9-82b4-6777c07281c6-webhook-cert\") pod \"packageserver-d55dfcdfc-5m8sg\" (UID: \"5b3e2f12-fdec-46e9-82b4-6777c07281c6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5m8sg" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.664414 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/51ee714f-fb23-4420-9e70-1b3134eea18e-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.677448 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtgbd\" (UniqueName: 
\"kubernetes.io/projected/51ee714f-fb23-4420-9e70-1b3134eea18e-kube-api-access-mtgbd\") pod \"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.678286 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d6b107a5-befb-4e43-9aa6-6b66ff686bf0-node-bootstrap-token\") pod \"machine-config-server-qf4nv\" (UID: \"d6b107a5-befb-4e43-9aa6-6b66ff686bf0\") " pod="openshift-machine-config-operator/machine-config-server-qf4nv" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.683473 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zkrqr"] Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.683546 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/51ee714f-fb23-4420-9e70-1b3134eea18e-bound-sa-token\") pod \"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.693319 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wxkb2" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.704942 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r2r62" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.717837 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8qmh\" (UniqueName: \"kubernetes.io/projected/d6b107a5-befb-4e43-9aa6-6b66ff686bf0-kube-api-access-h8qmh\") pod \"machine-config-server-qf4nv\" (UID: \"d6b107a5-befb-4e43-9aa6-6b66ff686bf0\") " pod="openshift-machine-config-operator/machine-config-server-qf4nv" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.727766 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 13:12:47 crc kubenswrapper[4778]: E0312 13:12:47.727865 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 13:12:48.227843345 +0000 UTC m=+186.676538741 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.728011 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" Mar 12 13:12:47 crc kubenswrapper[4778]: E0312 13:12:47.728369 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 13:12:48.228361959 +0000 UTC m=+186.677057355 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxrx4" (UID: "51ee714f-fb23-4420-9e70-1b3134eea18e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.728480 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnjq5\" (UniqueName: \"kubernetes.io/projected/30697403-66e5-4f68-8e2f-804017bd9d71-kube-api-access-qnjq5\") pod \"service-ca-9c57cc56f-vpp8t\" (UID: \"30697403-66e5-4f68-8e2f-804017bd9d71\") " pod="openshift-service-ca/service-ca-9c57cc56f-vpp8t" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.748961 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4d54f13d-85d8-4c95-acef-fcf9f197769a-bound-sa-token\") pod \"ingress-operator-5b745b69d9-srhvx\" (UID: \"4d54f13d-85d8-4c95-acef-fcf9f197769a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-srhvx" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.768161 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbvrg\" (UniqueName: \"kubernetes.io/projected/4d54f13d-85d8-4c95-acef-fcf9f197769a-kube-api-access-qbvrg\") pod \"ingress-operator-5b745b69d9-srhvx\" (UID: \"4d54f13d-85d8-4c95-acef-fcf9f197769a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-srhvx" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.769536 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-qf4nv" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.783461 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-ww8lt"] Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.783505 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-x4bxj"] Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.791164 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbz7v\" (UniqueName: \"kubernetes.io/projected/658a38d7-a172-432e-a612-6e8cf83f17a2-kube-api-access-lbz7v\") pod \"etcd-operator-b45778765-bdcvl\" (UID: \"658a38d7-a172-432e-a612-6e8cf83f17a2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bdcvl" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.813887 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqmqc\" (UniqueName: \"kubernetes.io/projected/138bb189-6182-4210-91a7-140f93f36f81-kube-api-access-bqmqc\") pod \"dns-default-8zmxq\" (UID: \"138bb189-6182-4210-91a7-140f93f36f81\") " pod="openshift-dns/dns-default-8zmxq" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.823682 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-xwwxp"] Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.830179 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 13:12:47 crc kubenswrapper[4778]: E0312 13:12:47.830770 4778 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 13:12:48.3307004 +0000 UTC m=+186.779395796 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.842825 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kb4k\" (UniqueName: \"kubernetes.io/projected/b7a887dd-1794-4d66-90a6-299512f32bd1-kube-api-access-4kb4k\") pod \"ingress-canary-d562t\" (UID: \"b7a887dd-1794-4d66-90a6-299512f32bd1\") " pod="openshift-ingress-canary/ingress-canary-d562t" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.912221 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qtkq6"] Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.923168 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfp8b\" (UniqueName: \"kubernetes.io/projected/5b3e2f12-fdec-46e9-82b4-6777c07281c6-kube-api-access-qfp8b\") pod \"packageserver-d55dfcdfc-5m8sg\" (UID: \"5b3e2f12-fdec-46e9-82b4-6777c07281c6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5m8sg" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.923621 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q677m"] Mar 12 13:12:47 crc kubenswrapper[4778]: 
I0312 13:12:47.931687 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" Mar 12 13:12:47 crc kubenswrapper[4778]: E0312 13:12:47.932079 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 13:12:48.432064312 +0000 UTC m=+186.880759708 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxrx4" (UID: "51ee714f-fb23-4420-9e70-1b3134eea18e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.934222 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-k6dcl"] Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.941974 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5m8sg" Mar 12 13:12:47 crc kubenswrapper[4778]: I0312 13:12:47.950129 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-vpp8t" Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.032781 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 13:12:48 crc kubenswrapper[4778]: E0312 13:12:48.033360 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 13:12:48.533344474 +0000 UTC m=+186.982039870 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.036241 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-bdcvl" Mar 12 13:12:48 crc kubenswrapper[4778]: W0312 13:12:48.044846 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ac3e8bc_e165_45d4_8c32_1ccda9769857.slice/crio-5ec74c62b6d6388b4618514bfbb582c3c32d8d7869b55d6601fae9a433a2c7d7 WatchSource:0}: Error finding container 5ec74c62b6d6388b4618514bfbb582c3c32d8d7869b55d6601fae9a433a2c7d7: Status 404 returned error can't find the container with id 5ec74c62b6d6388b4618514bfbb582c3c32d8d7869b55d6601fae9a433a2c7d7 Mar 12 13:12:48 crc kubenswrapper[4778]: W0312 13:12:48.051200 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78cc82c7_719e_43ad_926f_a387e0845219.slice/crio-3923e5748c928aead4163fbcc025b65be3ed9352674f1cc94d77747ffce996b0 WatchSource:0}: Error finding container 3923e5748c928aead4163fbcc025b65be3ed9352674f1cc94d77747ffce996b0: Status 404 returned error can't find the container with id 3923e5748c928aead4163fbcc025b65be3ed9352674f1cc94d77747ffce996b0 Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.054408 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-srhvx" Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.112886 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-8zmxq" Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.122944 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-d562t" Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.137889 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" Mar 12 13:12:48 crc kubenswrapper[4778]: E0312 13:12:48.138267 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 13:12:48.638255017 +0000 UTC m=+187.086950413 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxrx4" (UID: "51ee714f-fb23-4420-9e70-1b3134eea18e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.210848 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mtlvl" event={"ID":"84bb574a-c91e-4720-83c6-6c47c9344ad2","Type":"ContainerStarted","Data":"ea5df04d080d3ebc3170065546468f5918280472695bcdb160c370eb1e447245"} Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.215910 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dqxml" 
event={"ID":"a0d33ee6-3a31-4464-b401-7469bf04d240","Type":"ContainerStarted","Data":"fb497e12ecda0a19b04837d910aacaba7192beb584f873e810c0be5a336568fe"} Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.221313 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qxqsb" event={"ID":"4c56cc09-5f03-4bcc-a4b1-8fed0dcc49bd","Type":"ContainerStarted","Data":"d666fb76e95e18d9782e85ddfde87b5fa8c5ca2d8db40c92247c06b4ec5f46f1"} Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.224246 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2wqm5" event={"ID":"24f4aaf5-c17b-4cd8-9284-6df37f1c2f2d","Type":"ContainerStarted","Data":"013c13acbd136a9ae3c6c39b9470a59aa4ab705637939d6af761af9e92e81b9c"} Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.224271 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2wqm5" event={"ID":"24f4aaf5-c17b-4cd8-9284-6df37f1c2f2d","Type":"ContainerStarted","Data":"9ea9ce91a5458d09f7e543bf678a01cfeb2e8462d6860a8c5523bea49359f807"} Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.225586 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-2wqm5" Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.228763 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5kw4v" event={"ID":"f36ec67c-df24-46ce-94b9-10619822c15a","Type":"ContainerStarted","Data":"5f7362fc7516f559081256deebf693613a994486c74f126dfda003689ad66bff"} Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.230230 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k6dcl" 
event={"ID":"8f4e3ccc-83e5-40ae-bac2-a5bb1362a531","Type":"ContainerStarted","Data":"b9c017a33a60d1e8d34966ea3aaa51776f0f2afffd8b7a60693e8b6355a0b61d"} Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.238634 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 13:12:48 crc kubenswrapper[4778]: E0312 13:12:48.238915 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 13:12:48.73889994 +0000 UTC m=+187.187595326 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.240974 4778 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-2wqm5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/healthz\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.241029 4778 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-2wqm5" podUID="24f4aaf5-c17b-4cd8-9284-6df37f1c2f2d" 
containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.19:8080/healthz\": dial tcp 10.217.0.19:8080: connect: connection refused" Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.244412 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zkrqr" event={"ID":"f799c7e9-1c31-40bc-9ece-06a086683a98","Type":"ContainerStarted","Data":"8c7ba888781510a0b2db17d5d30a8c1b81e1a4b73eb8d9cf02595412a85f2fa2"} Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.250909 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-xwwxp" event={"ID":"c825022c-79bc-44ae-bc64-ee9614aafe25","Type":"ContainerStarted","Data":"6f20116905733a7dbe8802503613a6b31a51c117f53f02f55e4cace656d26f20"} Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.324104 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vnndl" event={"ID":"5bb00a46-7425-4d14-a10c-779a5036bba6","Type":"ContainerStarted","Data":"49e9008c989ae0099a350bef720c95a9b121ede61cd076c12105761c291b6a47"} Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.324138 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dh8l6" event={"ID":"53a87d9e-095f-4669-b121-0b2c88e5fabb","Type":"ContainerStarted","Data":"2c3668c7619927e62a21a085b12fd8d10c5aec63a5140a4a4e25a28dcd904a3d"} Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.324156 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2z5gg"] Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.324169 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-ww8lt" 
event={"ID":"f57a417a-5175-4210-98a0-69e579c22e14","Type":"ContainerStarted","Data":"6fc8679ba8e28a5908ef6b03b6e49fa8044b6a863eb8cf84241ac40e31c137e9"} Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.324178 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-ms5xq" event={"ID":"5c8d947a-b62b-4eb9-81d7-94530285e8dc","Type":"ContainerStarted","Data":"5dc51733fb3b2d426e2a91f0e490de5f7451da1e08244b0836e86ddceab88244"} Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.329391 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-sskj6"] Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.333767 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-242cb" event={"ID":"e2967620-e2ce-4763-8a6c-e5a37f3a1f98","Type":"ContainerStarted","Data":"54c08081b13c792d0c2d32cd39505869a93ea17f71e1ad3e05371ec38310e46c"} Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.333808 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-242cb" event={"ID":"e2967620-e2ce-4763-8a6c-e5a37f3a1f98","Type":"ContainerStarted","Data":"edbd6f9ccea31acf5504e98ff7cf249d797a97e8e3fea3b85457c0a95e2146b1"} Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.336681 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-8dkpx" event={"ID":"f0a4c9a9-348c-4271-b466-4b94f11b2c7c","Type":"ContainerStarted","Data":"06788d28c2ebf93e6930e30ff45162a3683351862e845c812a0ae45bcf63771c"} Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.336713 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-8dkpx" 
event={"ID":"f0a4c9a9-348c-4271-b466-4b94f11b2c7c","Type":"ContainerStarted","Data":"5fbdeb3453e8c6fd6686a0aefbf03a12b59237ee3890cbc097329b1e74c72168"} Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.339477 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.340591 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qtkq6" event={"ID":"0ac3e8bc-e165-45d4-8c32-1ccda9769857","Type":"ContainerStarted","Data":"5ec74c62b6d6388b4618514bfbb582c3c32d8d7869b55d6601fae9a433a2c7d7"} Mar 12 13:12:48 crc kubenswrapper[4778]: E0312 13:12:48.340761 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 13:12:48.840749457 +0000 UTC m=+187.289444853 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxrx4" (UID: "51ee714f-fb23-4420-9e70-1b3134eea18e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.343017 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tplzm" event={"ID":"e9b292b2-1928-45d2-ad7f-8d510ebaa771","Type":"ContainerStarted","Data":"342fbb0d2f378f9d2820298cbb5d378552da4f5ad43e0589912c2f9f56bf877e"} Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.353164 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpgxh" event={"ID":"06ffdff1-2f10-4f38-b7fd-b98e883bbc63","Type":"ContainerStarted","Data":"58650ee0315d5aac50c162f5420d39a44557cb90a0d565bd9b299a8e4ee0251d"} Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.353242 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpgxh" Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.358515 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q677m" event={"ID":"78cc82c7-719e-43ad-926f-a387e0845219","Type":"ContainerStarted","Data":"3923e5748c928aead4163fbcc025b65be3ed9352674f1cc94d77747ffce996b0"} Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.363468 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x4bxj" 
event={"ID":"f56ab022-7fcd-406c-b308-b8d5f93a8b55","Type":"ContainerStarted","Data":"3262b29f82e6028fc91389bbd3c254283d8f35d9f9b9a6a3e6427d8f37539038"} Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.366287 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xz42x" event={"ID":"de34cf46-4b6a-4f7a-8225-eb77bec57450","Type":"ContainerStarted","Data":"fa6d0f6d39632346cafc98f07344ee0ccaae7084380a4d10d518d946f29de2d8"} Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.371942 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-pgrb5" event={"ID":"06bbf7b7-3e40-4aa0-a3db-a56897f5488c","Type":"ContainerStarted","Data":"baecc290d5904f2078cb76008ee3fad41b6baea1393aa1ce14dba9ed727aca24"} Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.372384 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-pgrb5" Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.401355 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-mx6kn"] Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.404450 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kc7s7"] Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.407515 4778 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-zpgxh container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.407546 4778 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-pgrb5 container/controller-manager namespace/openshift-controller-manager: Readiness probe 
status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.407558 4778 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpgxh" podUID="06ffdff1-2f10-4f38-b7fd-b98e883bbc63" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.407588 4778 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-pgrb5" podUID="06bbf7b7-3e40-4aa0-a3db-a56897f5488c" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.440056 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 13:12:48 crc kubenswrapper[4778]: E0312 13:12:48.442727 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 13:12:48.942707907 +0000 UTC m=+187.391403303 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.496583 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555340-7tvjm"] Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.511875 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mfjpc"] Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.542059 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" Mar 12 13:12:48 crc kubenswrapper[4778]: E0312 13:12:48.542368 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 13:12:49.042357032 +0000 UTC m=+187.491052428 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxrx4" (UID: "51ee714f-fb23-4420-9e70-1b3134eea18e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:48 crc kubenswrapper[4778]: W0312 13:12:48.592958 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3373fbdf_245c_4e98_8bd7_7ad30eb98d76.slice/crio-c7dc6a556a0a2c75eb762cfae0c9f40b708a2376fe099f8c49e0c6dd2eec3cb4 WatchSource:0}: Error finding container c7dc6a556a0a2c75eb762cfae0c9f40b708a2376fe099f8c49e0c6dd2eec3cb4: Status 404 returned error can't find the container with id c7dc6a556a0a2c75eb762cfae0c9f40b708a2376fe099f8c49e0c6dd2eec3cb4 Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.643260 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 13:12:48 crc kubenswrapper[4778]: E0312 13:12:48.643679 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 13:12:49.143664684 +0000 UTC m=+187.592360080 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.665969 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-cp2lw"] Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.747065 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555352-q7fvr"] Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.749014 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" Mar 12 13:12:48 crc kubenswrapper[4778]: E0312 13:12:48.749883 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 13:12:49.249871494 +0000 UTC m=+187.698566890 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxrx4" (UID: "51ee714f-fb23-4420-9e70-1b3134eea18e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.751341 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-x26ck"] Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.755154 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-98lbj"] Mar 12 13:12:48 crc kubenswrapper[4778]: W0312 13:12:48.776591 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2be5b8df_aaff_4a2b_9b54_78a7e58bc420.slice/crio-29eb382f6373a1343bfe9a9453bc1e1e657ed189d7535c7e86f8a9880039a35d WatchSource:0}: Error finding container 29eb382f6373a1343bfe9a9453bc1e1e657ed189d7535c7e86f8a9880039a35d: Status 404 returned error can't find the container with id 29eb382f6373a1343bfe9a9453bc1e1e657ed189d7535c7e86f8a9880039a35d Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.784064 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xcfg6"] Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.801502 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-pg48j"] Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.805506 4778 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 
13:12:48.829531 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-r2r62"] Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.837292 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wxkb2"] Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.853456 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 13:12:48 crc kubenswrapper[4778]: E0312 13:12:48.853780 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 13:12:49.353764479 +0000 UTC m=+187.802459875 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.891130 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-vpp8t"] Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.912335 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5m8sg"] Mar 12 13:12:48 crc kubenswrapper[4778]: W0312 13:12:48.913744 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32bf6158_393f_4423_9255_345581ec5bf1.slice/crio-87a986e13c20bc428422448d907cce18d39f5ca108a854a847f92de1dc3e4f14 WatchSource:0}: Error finding container 87a986e13c20bc428422448d907cce18d39f5ca108a854a847f92de1dc3e4f14: Status 404 returned error can't find the container with id 87a986e13c20bc428422448d907cce18d39f5ca108a854a847f92de1dc3e4f14 Mar 12 13:12:48 crc kubenswrapper[4778]: W0312 13:12:48.924176 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30697403_66e5_4f68_8e2f_804017bd9d71.slice/crio-86c7d034ba551e67a3168224247007b40f929e6bb1ea121361fae6fadf6d5a7d WatchSource:0}: Error finding container 86c7d034ba551e67a3168224247007b40f929e6bb1ea121361fae6fadf6d5a7d: Status 404 returned error can't find the container with id 86c7d034ba551e67a3168224247007b40f929e6bb1ea121361fae6fadf6d5a7d Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.925967 4778 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-bdcvl"] Mar 12 13:12:48 crc kubenswrapper[4778]: I0312 13:12:48.954825 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" Mar 12 13:12:48 crc kubenswrapper[4778]: E0312 13:12:48.955133 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 13:12:49.455115022 +0000 UTC m=+187.903810418 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxrx4" (UID: "51ee714f-fb23-4420-9e70-1b3134eea18e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:49 crc kubenswrapper[4778]: W0312 13:12:49.016503 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf58c501_1c93_4f7a_bdf9_1255879aea5a.slice/crio-2db6fb0d9756a7537c2816ec341008d7b493b3a4c0ad299afe1f4ec5795d8c6c WatchSource:0}: Error finding container 2db6fb0d9756a7537c2816ec341008d7b493b3a4c0ad299afe1f4ec5795d8c6c: Status 404 returned error can't find the container with id 2db6fb0d9756a7537c2816ec341008d7b493b3a4c0ad299afe1f4ec5795d8c6c Mar 12 13:12:49 crc kubenswrapper[4778]: W0312 13:12:49.045076 4778 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod658a38d7_a172_432e_a612_6e8cf83f17a2.slice/crio-b881df9a25cfeae23f99dc528674ad3847aed0b982acf84c8642ced679390821 WatchSource:0}: Error finding container b881df9a25cfeae23f99dc528674ad3847aed0b982acf84c8642ced679390821: Status 404 returned error can't find the container with id b881df9a25cfeae23f99dc528674ad3847aed0b982acf84c8642ced679390821 Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.055838 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 13:12:49 crc kubenswrapper[4778]: E0312 13:12:49.056449 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 13:12:49.556427474 +0000 UTC m=+188.005122870 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.056589 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" Mar 12 13:12:49 crc kubenswrapper[4778]: E0312 13:12:49.056931 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 13:12:49.556917737 +0000 UTC m=+188.005613133 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxrx4" (UID: "51ee714f-fb23-4420-9e70-1b3134eea18e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:49 crc kubenswrapper[4778]: W0312 13:12:49.073571 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b3e2f12_fdec_46e9_82b4_6777c07281c6.slice/crio-2e0ff4520d9ba7a3f7dbf106a17656ddac6fc701b516e47132bdb358130a1ca7 WatchSource:0}: Error finding container 2e0ff4520d9ba7a3f7dbf106a17656ddac6fc701b516e47132bdb358130a1ca7: Status 404 returned error can't find the container with id 2e0ff4520d9ba7a3f7dbf106a17656ddac6fc701b516e47132bdb358130a1ca7 Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.092762 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-8zmxq"] Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.140870 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-d562t"] Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.158205 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 13:12:49 crc kubenswrapper[4778]: E0312 13:12:49.158665 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2026-03-12 13:12:49.658645261 +0000 UTC m=+188.107340657 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.168095 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-srhvx"] Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.261744 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" Mar 12 13:12:49 crc kubenswrapper[4778]: E0312 13:12:49.262617 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 13:12:49.762603648 +0000 UTC m=+188.211299044 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxrx4" (UID: "51ee714f-fb23-4420-9e70-1b3134eea18e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.320093 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpgxh" podStartSLOduration=138.320073125 podStartE2EDuration="2m18.320073125s" podCreationTimestamp="2026-03-12 13:10:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:12:49.294854165 +0000 UTC m=+187.743549551" watchObservedRunningTime="2026-03-12 13:12:49.320073125 +0000 UTC m=+187.768768521" Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.332733 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-242cb" podStartSLOduration=138.332715861 podStartE2EDuration="2m18.332715861s" podCreationTimestamp="2026-03-12 13:10:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:12:49.332073343 +0000 UTC m=+187.780768749" watchObservedRunningTime="2026-03-12 13:12:49.332715861 +0000 UTC m=+187.781411257" Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.371358 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 13:12:49 crc kubenswrapper[4778]: E0312 13:12:49.371693 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 13:12:49.871663508 +0000 UTC m=+188.320358904 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.372314 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" Mar 12 13:12:49 crc kubenswrapper[4778]: E0312 13:12:49.372725 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 13:12:49.872713187 +0000 UTC m=+188.321408583 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxrx4" (UID: "51ee714f-fb23-4420-9e70-1b3134eea18e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.374616 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-8dkpx" podStartSLOduration=138.37459708 podStartE2EDuration="2m18.37459708s" podCreationTimestamp="2026-03-12 13:10:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:12:49.368681334 +0000 UTC m=+187.817376730" watchObservedRunningTime="2026-03-12 13:12:49.37459708 +0000 UTC m=+187.823292466" Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.416859 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-2wqm5" podStartSLOduration=138.41684316 podStartE2EDuration="2m18.41684316s" podCreationTimestamp="2026-03-12 13:10:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:12:49.411621853 +0000 UTC m=+187.860317269" watchObservedRunningTime="2026-03-12 13:12:49.41684316 +0000 UTC m=+187.865538556" Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.425440 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-98lbj" event={"ID":"f1f25dae-f3e4-481d-8451-4851b60b2ec4","Type":"ContainerStarted","Data":"980e19e30d4bff72a9a681bc0d69f3af04c689f5aa783b2316fc9fd546ecb7fc"} Mar 12 
13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.434125 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2z5gg" event={"ID":"486a990d-7a56-4eea-a44d-d05a412718c2","Type":"ContainerStarted","Data":"4eb1b68e02020ce18c1ad52f034f48941036714a902575ef015c2beb14437523"} Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.440303 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dh8l6" podStartSLOduration=138.440286819 podStartE2EDuration="2m18.440286819s" podCreationTimestamp="2026-03-12 13:10:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:12:49.439576469 +0000 UTC m=+187.888271875" watchObservedRunningTime="2026-03-12 13:12:49.440286819 +0000 UTC m=+187.888982215" Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.448943 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpgxh" event={"ID":"06ffdff1-2f10-4f38-b7fd-b98e883bbc63","Type":"ContainerStarted","Data":"c400bf292471252407338cf73137e8439d1cb8b7e278bdf9b5b3d6aae90e459c"} Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.453454 4778 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-zpgxh container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.453536 4778 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpgxh" podUID="06ffdff1-2f10-4f38-b7fd-b98e883bbc63" containerName="route-controller-manager" probeResult="failure" output="Get 
\"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.454618 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tplzm" event={"ID":"e9b292b2-1928-45d2-ad7f-8d510ebaa771","Type":"ContainerStarted","Data":"aa008fadb4fb70e34dc62c4d4edec425d04c61e4ab43c6809858298b30f7e0b2"} Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.454652 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tplzm" event={"ID":"e9b292b2-1928-45d2-ad7f-8d510ebaa771","Type":"ContainerStarted","Data":"41a87eacc6997d4a29bfede342bec53085fb9489c3d7aec894bc78e028df8993"} Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.460907 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kc7s7" event={"ID":"c320d1aa-c376-41f2-ac5a-8432120b68e0","Type":"ContainerStarted","Data":"360be70e7ac69f128077b26106225e7a87b799223ff300c1d31749c0366fb357"} Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.464997 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xcfg6" event={"ID":"12abcb2c-895a-46af-9c26-66e358259ce9","Type":"ContainerStarted","Data":"11e6771f10af52fccfb643aa93b8470ecfa9d97140fcc631bc68263159588700"} Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.469647 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555352-q7fvr" event={"ID":"9f210efd-2ac0-4b67-89c5-fcd9f52f6e01","Type":"ContainerStarted","Data":"0a2c8918cbacef5d63ed30076a63c59219bb878177978f4909e3ed43cb24db19"} Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.473118 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 13:12:49 crc kubenswrapper[4778]: E0312 13:12:49.473554 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 13:12:49.973535355 +0000 UTC m=+188.422230751 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.473714 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-xwwxp" event={"ID":"c825022c-79bc-44ae-bc64-ee9614aafe25","Type":"ContainerStarted","Data":"4f4a64269de7f325ca6cad0c8f6bdffa97bc955d4a92c8f27548dcfdbd421f4c"} Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.482041 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r2r62" event={"ID":"a875bbd5-0126-4d1c-8b7e-97ac32863981","Type":"ContainerStarted","Data":"b713fd37291982fd9495c9de930b3c8ac415358b4d18b32e60fd3184fdff6b4b"} Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.484335 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-pgrb5" podStartSLOduration=138.484320899 
podStartE2EDuration="2m18.484320899s" podCreationTimestamp="2026-03-12 13:10:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:12:49.481814498 +0000 UTC m=+187.930509904" watchObservedRunningTime="2026-03-12 13:12:49.484320899 +0000 UTC m=+187.933016295" Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.500792 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5m8sg" event={"ID":"5b3e2f12-fdec-46e9-82b4-6777c07281c6","Type":"ContainerStarted","Data":"2e0ff4520d9ba7a3f7dbf106a17656ddac6fc701b516e47132bdb358130a1ca7"} Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.509428 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k6dcl" event={"ID":"8f4e3ccc-83e5-40ae-bac2-a5bb1362a531","Type":"ContainerStarted","Data":"ade109f8b19aa64ebf57e12248a2627abe0ef1352cc1d8d41745864eeb9af532"} Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.511558 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-d562t" event={"ID":"b7a887dd-1794-4d66-90a6-299512f32bd1","Type":"ContainerStarted","Data":"0e86a2a440437d426c57ac338db66979c28facc8675da461292e9e33f583870a"} Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.514008 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555340-7tvjm" event={"ID":"a240fd7b-5854-4548-a847-e5590111964b","Type":"ContainerStarted","Data":"aa7b81ba2e81ec0ae29a489fd430d937b62e78b889977bd807a40c2a99fb3190"} Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.516987 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-mx6kn" 
event={"ID":"8af48f77-25f7-49ca-8bcb-2481aa72ee66","Type":"ContainerStarted","Data":"c3914cf8da475f980a0549d830bbdc6cf33bc02b55bdcf28abb76023c126912b"} Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.518100 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x26ck" event={"ID":"30ff941c-3c4b-4229-af5a-78bb244a385b","Type":"ContainerStarted","Data":"f438f9315d0d4fa6bf19d831494414bf8f34e034a962108423452f3b7bd9b01d"} Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.536070 4778 generic.go:334] "Generic (PLEG): container finished" podID="de34cf46-4b6a-4f7a-8225-eb77bec57450" containerID="07d9e33be479df8f83022eb8f6313a4d44e2d9c2c7660f5d3181d691ea1eb84f" exitCode=0 Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.536379 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xz42x" event={"ID":"de34cf46-4b6a-4f7a-8225-eb77bec57450","Type":"ContainerDied","Data":"07d9e33be479df8f83022eb8f6313a4d44e2d9c2c7660f5d3181d691ea1eb84f"} Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.537886 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dh8l6" event={"ID":"53a87d9e-095f-4669-b121-0b2c88e5fabb","Type":"ContainerStarted","Data":"3096dbff8d31b67e1b4031e888fd2bedf9d23f88e4ca5453fd5c49f6afe677f2"} Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.541673 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-ww8lt" event={"ID":"f57a417a-5175-4210-98a0-69e579c22e14","Type":"ContainerStarted","Data":"fe2bd48f042fddd9bee53465bcf19ec61fc9f0fa0828cce30aec1f9731b43463"} Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.542599 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-ww8lt" Mar 12 13:12:49 crc 
kubenswrapper[4778]: I0312 13:12:49.545053 4778 patch_prober.go:28] interesting pod/console-operator-58897d9998-ww8lt container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/readyz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body=
Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.547360 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-8zmxq" event={"ID":"138bb189-6182-4210-91a7-140f93f36f81","Type":"ContainerStarted","Data":"0f2312151b85a47921adb39c7523ee4c5749d688fc423f8acebe62d440387cd3"}
Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.546174 4778 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-ww8lt" podUID="f57a417a-5175-4210-98a0-69e579c22e14" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/readyz\": dial tcp 10.217.0.18:8443: connect: connection refused"
Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.554502 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zkrqr" event={"ID":"f799c7e9-1c31-40bc-9ece-06a086683a98","Type":"ContainerStarted","Data":"922c022f874c83790682af285a756b9800794f02381d448586e308a3a7bcf062"}
Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.564005 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mfjpc" event={"ID":"3373fbdf-245c-4e98-8bd7-7ad30eb98d76","Type":"ContainerStarted","Data":"c7dc6a556a0a2c75eb762cfae0c9f40b708a2376fe099f8c49e0c6dd2eec3cb4"}
Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.565333 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-pg48j" event={"ID":"32bf6158-393f-4423-9255-345581ec5bf1","Type":"ContainerStarted","Data":"87a986e13c20bc428422448d907cce18d39f5ca108a854a847f92de1dc3e4f14"}
Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.572663 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-cp2lw" event={"ID":"2be5b8df-aaff-4a2b-9b54-78a7e58bc420","Type":"ContainerStarted","Data":"29eb382f6373a1343bfe9a9453bc1e1e657ed189d7535c7e86f8a9880039a35d"}
Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.574244 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4"
Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.575233 4778 generic.go:334] "Generic (PLEG): container finished" podID="5bb00a46-7425-4d14-a10c-779a5036bba6" containerID="13cf5abaaeabafcb511ad061e4e3536945084af30fcd8966603d168336a82bfd" exitCode=0
Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.575293 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vnndl" event={"ID":"5bb00a46-7425-4d14-a10c-779a5036bba6","Type":"ContainerDied","Data":"13cf5abaaeabafcb511ad061e4e3536945084af30fcd8966603d168336a82bfd"}
Mar 12 13:12:49 crc kubenswrapper[4778]: E0312 13:12:49.575172 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 13:12:50.075141856 +0000 UTC m=+188.523837252 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxrx4" (UID: "51ee714f-fb23-4420-9e70-1b3134eea18e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.582178 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q677m" event={"ID":"78cc82c7-719e-43ad-926f-a387e0845219","Type":"ContainerStarted","Data":"ecb424cf1c2ce669f160313107801d801e3e3f9d73ad7de65d932577c1112270"}
Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.583366 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-sskj6" event={"ID":"22194d8c-315e-46b9-a23b-daab9d020ce4","Type":"ContainerStarted","Data":"ab13725b474c3403d18d2ba420cbb9670185cc494c4502b035184e8bb97aaebe"}
Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.584119 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-xwwxp" podStartSLOduration=138.584102078 podStartE2EDuration="2m18.584102078s" podCreationTimestamp="2026-03-12 13:10:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:12:49.529587194 +0000 UTC m=+187.978282590" watchObservedRunningTime="2026-03-12 13:12:49.584102078 +0000 UTC m=+188.032797474"
Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.585381 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qxqsb" event={"ID":"4c56cc09-5f03-4bcc-a4b1-8fed0dcc49bd","Type":"ContainerStarted","Data":"53d4fd937d7c3c94eb542b8870b67caf1f3d97f4ec36b5819e7a8c147b4fd947"}
Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.587917 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5kw4v" event={"ID":"f36ec67c-df24-46ce-94b9-10619822c15a","Type":"ContainerStarted","Data":"0fe97ea87ef2b2f3106d61689b8bc6549f4b603dd4e79e424ddbe8637587b2f3"}
Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.588161 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-5kw4v"
Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.600435 4778 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-5kw4v container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.7:6443/healthz\": dial tcp 10.217.0.7:6443: connect: connection refused" start-of-body=
Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.600489 4778 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-5kw4v" podUID="f36ec67c-df24-46ce-94b9-10619822c15a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.7:6443/healthz\": dial tcp 10.217.0.7:6443: connect: connection refused"
Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.621484 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qxqsb" podStartSLOduration=138.62146168 podStartE2EDuration="2m18.62146168s" podCreationTimestamp="2026-03-12 13:10:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:12:49.620435851 +0000 UTC m=+188.069131257" watchObservedRunningTime="2026-03-12 13:12:49.62146168 +0000 UTC m=+188.070157086"
Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.622683 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tplzm" podStartSLOduration=138.622674174 podStartE2EDuration="2m18.622674174s" podCreationTimestamp="2026-03-12 13:10:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:12:49.586510286 +0000 UTC m=+188.035205682" watchObservedRunningTime="2026-03-12 13:12:49.622674174 +0000 UTC m=+188.071369570"
Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.635323 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-ms5xq" event={"ID":"5c8d947a-b62b-4eb9-81d7-94530285e8dc","Type":"ContainerStarted","Data":"873275ea068e9fdf4f4f59d9b6f2f5449326ad5629bee3fa1e6b426dabc39a87"}
Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.655080 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-qf4nv" event={"ID":"d6b107a5-befb-4e43-9aa6-6b66ff686bf0","Type":"ContainerStarted","Data":"93b046c6b3a5704968294077b580ebb35334cb91c5606fe736f79233e433b742"}
Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.662005 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zkrqr" podStartSLOduration=138.66198396 podStartE2EDuration="2m18.66198396s" podCreationTimestamp="2026-03-12 13:10:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:12:49.653102941 +0000 UTC m=+188.101798357" watchObservedRunningTime="2026-03-12 13:12:49.66198396 +0000 UTC m=+188.110679356"
Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.664659 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wxkb2" event={"ID":"af58c501-1c93-4f7a-bdf9-1255879aea5a","Type":"ContainerStarted","Data":"2db6fb0d9756a7537c2816ec341008d7b493b3a4c0ad299afe1f4ec5795d8c6c"}
Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.719502 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-vpp8t" event={"ID":"30697403-66e5-4f68-8e2f-804017bd9d71","Type":"ContainerStarted","Data":"86c7d034ba551e67a3168224247007b40f929e6bb1ea121361fae6fadf6d5a7d"}
Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.720365 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 12 13:12:49 crc kubenswrapper[4778]: E0312 13:12:49.722704 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 13:12:50.222683829 +0000 UTC m=+188.671379225 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.741626 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-ww8lt" podStartSLOduration=138.741611962 podStartE2EDuration="2m18.741611962s" podCreationTimestamp="2026-03-12 13:10:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:12:49.741050366 +0000 UTC m=+188.189745762" watchObservedRunningTime="2026-03-12 13:12:49.741611962 +0000 UTC m=+188.190307358"
Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.759412 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-bdcvl" event={"ID":"658a38d7-a172-432e-a612-6e8cf83f17a2","Type":"ContainerStarted","Data":"b881df9a25cfeae23f99dc528674ad3847aed0b982acf84c8642ced679390821"}
Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.780546 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x4bxj" event={"ID":"f56ab022-7fcd-406c-b308-b8d5f93a8b55","Type":"ContainerStarted","Data":"ef89d12de18273d7eb6b5d650d9d0c0d957c7bf2d5d8bd8e1287fde2496cce7e"}
Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.788461 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-srhvx" event={"ID":"4d54f13d-85d8-4c95-acef-fcf9f197769a","Type":"ContainerStarted","Data":"96c6462d8fa4784945fd07f101d1b63a31611241c1c54492927684396e35d98f"}
Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.796953 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-pgrb5" event={"ID":"06bbf7b7-3e40-4aa0-a3db-a56897f5488c","Type":"ContainerStarted","Data":"1d59a8a49996a8759d79ab136fc846582c037b05e8fb37ea5f281de757e513e2"}
Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.797607 4778 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-pgrb5 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.797663 4778 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-pgrb5" podUID="06bbf7b7-3e40-4aa0-a3db-a56897f5488c" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.809224 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mtlvl" event={"ID":"84bb574a-c91e-4720-83c6-6c47c9344ad2","Type":"ContainerStarted","Data":"dadd3ae69fc3cee25e1016883a7bb995da4c72271918c4c1b226fd8ae2560348"}
Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.818111 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-2wqm5"
Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.818524 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q677m" podStartSLOduration=138.818494026 podStartE2EDuration="2m18.818494026s" podCreationTimestamp="2026-03-12 13:10:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:12:49.81756044 +0000 UTC m=+188.266255846" watchObservedRunningTime="2026-03-12 13:12:49.818494026 +0000 UTC m=+188.267189442"
Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.823112 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4"
Mar 12 13:12:49 crc kubenswrapper[4778]: E0312 13:12:49.823547 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 13:12:50.323534698 +0000 UTC m=+188.772230094 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxrx4" (UID: "51ee714f-fb23-4420-9e70-1b3134eea18e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.897914 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-5kw4v" podStartSLOduration=138.897896581 podStartE2EDuration="2m18.897896581s" podCreationTimestamp="2026-03-12 13:10:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:12:49.895455712 +0000 UTC m=+188.344151108" watchObservedRunningTime="2026-03-12 13:12:49.897896581 +0000 UTC m=+188.346591977"
Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.924136 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 12 13:12:49 crc kubenswrapper[4778]: E0312 13:12:49.926831 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 13:12:50.426805895 +0000 UTC m=+188.875501291 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 12 13:12:49 crc kubenswrapper[4778]: I0312 13:12:49.970932 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-mtlvl" podStartSLOduration=138.970913417 podStartE2EDuration="2m18.970913417s" podCreationTimestamp="2026-03-12 13:10:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:12:49.926116506 +0000 UTC m=+188.374811902" watchObservedRunningTime="2026-03-12 13:12:49.970913417 +0000 UTC m=+188.419608803"
Mar 12 13:12:50 crc kubenswrapper[4778]: I0312 13:12:50.014976 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-ms5xq" podStartSLOduration=139.014948446 podStartE2EDuration="2m19.014948446s" podCreationTimestamp="2026-03-12 13:10:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:12:49.972542793 +0000 UTC m=+188.421238189" watchObservedRunningTime="2026-03-12 13:12:50.014948446 +0000 UTC m=+188.463643842"
Mar 12 13:12:50 crc kubenswrapper[4778]: I0312 13:12:50.027418 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4"
Mar 12 13:12:50 crc kubenswrapper[4778]: E0312 13:12:50.027696 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 13:12:50.527685595 +0000 UTC m=+188.976380991 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxrx4" (UID: "51ee714f-fb23-4420-9e70-1b3134eea18e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 12 13:12:50 crc kubenswrapper[4778]: I0312 13:12:50.128258 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 12 13:12:50 crc kubenswrapper[4778]: E0312 13:12:50.128464 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 13:12:50.628440771 +0000 UTC m=+189.077136167 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 12 13:12:50 crc kubenswrapper[4778]: I0312 13:12:50.128911 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4"
Mar 12 13:12:50 crc kubenswrapper[4778]: E0312 13:12:50.129179 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 13:12:50.629167962 +0000 UTC m=+189.077863348 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxrx4" (UID: "51ee714f-fb23-4420-9e70-1b3134eea18e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 12 13:12:50 crc kubenswrapper[4778]: I0312 13:12:50.231237 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 12 13:12:50 crc kubenswrapper[4778]: E0312 13:12:50.232584 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 13:12:50.732558582 +0000 UTC m=+189.181253978 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 12 13:12:50 crc kubenswrapper[4778]: I0312 13:12:50.349558 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4"
Mar 12 13:12:50 crc kubenswrapper[4778]: E0312 13:12:50.350062 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 13:12:50.850046369 +0000 UTC m=+189.298741765 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxrx4" (UID: "51ee714f-fb23-4420-9e70-1b3134eea18e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 12 13:12:50 crc kubenswrapper[4778]: I0312 13:12:50.451466 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 12 13:12:50 crc kubenswrapper[4778]: E0312 13:12:50.452079 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 13:12:50.952058851 +0000 UTC m=+189.400754247 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 12 13:12:50 crc kubenswrapper[4778]: I0312 13:12:50.508345 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-ms5xq"
Mar 12 13:12:50 crc kubenswrapper[4778]: I0312 13:12:50.513904 4778 patch_prober.go:28] interesting pod/router-default-5444994796-ms5xq container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Mar 12 13:12:50 crc kubenswrapper[4778]: I0312 13:12:50.513965 4778 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ms5xq" podUID="5c8d947a-b62b-4eb9-81d7-94530285e8dc" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Mar 12 13:12:50 crc kubenswrapper[4778]: I0312 13:12:50.555400 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4"
Mar 12 13:12:50 crc kubenswrapper[4778]: E0312 13:12:50.555766 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 13:12:51.055711649 +0000 UTC m=+189.504407035 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxrx4" (UID: "51ee714f-fb23-4420-9e70-1b3134eea18e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 12 13:12:50 crc kubenswrapper[4778]: I0312 13:12:50.656949 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 12 13:12:50 crc kubenswrapper[4778]: E0312 13:12:50.657630 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 13:12:51.157613747 +0000 UTC m=+189.606309143 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 12 13:12:50 crc kubenswrapper[4778]: I0312 13:12:50.758453 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4"
Mar 12 13:12:50 crc kubenswrapper[4778]: E0312 13:12:50.758746 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 13:12:51.258736014 +0000 UTC m=+189.707431410 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxrx4" (UID: "51ee714f-fb23-4420-9e70-1b3134eea18e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 12 13:12:50 crc kubenswrapper[4778]: I0312 13:12:50.821085 4778 ???:1] "http: TLS handshake error from 192.168.126.11:37500: no serving certificate available for the kubelet"
Mar 12 13:12:50 crc kubenswrapper[4778]: I0312 13:12:50.839267 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-bdcvl" event={"ID":"658a38d7-a172-432e-a612-6e8cf83f17a2","Type":"ContainerStarted","Data":"1a94fe45f752a429199972e344bfdc49cc77e2e8aa05a1a5947d14c54d992c55"}
Mar 12 13:12:50 crc kubenswrapper[4778]: I0312 13:12:50.841747 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-qf4nv" event={"ID":"d6b107a5-befb-4e43-9aa6-6b66ff686bf0","Type":"ContainerStarted","Data":"62e5d919f802105ea1b3a2039aca2a2150509639e4f2fca11dbcc93d6c21bee0"}
Mar 12 13:12:50 crc kubenswrapper[4778]: I0312 13:12:50.855934 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xz42x" event={"ID":"de34cf46-4b6a-4f7a-8225-eb77bec57450","Type":"ContainerStarted","Data":"977bd127f0730bf16814424d6cb4d6c6fccd07e2e253f06e4bf87e29251d2276"}
Mar 12 13:12:50 crc kubenswrapper[4778]: I0312 13:12:50.859770 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-sskj6" event={"ID":"22194d8c-315e-46b9-a23b-daab9d020ce4","Type":"ContainerStarted","Data":"bc366c6bd3c76384acb924b41a3ec766ea09a641f3808e42f42e2bfc77dd7ebb"}
Mar 12 13:12:50 crc kubenswrapper[4778]: I0312 13:12:50.860657 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 12 13:12:50 crc kubenswrapper[4778]: E0312 13:12:50.861058 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 13:12:51.361029844 +0000 UTC m=+189.809725320 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 12 13:12:50 crc kubenswrapper[4778]: I0312 13:12:50.863667 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-8zmxq" event={"ID":"138bb189-6182-4210-91a7-140f93f36f81","Type":"ContainerStarted","Data":"d44db0e757aeb75338ebe9a677f885286a6821eb8a7260b30e8ada0524b14a29"}
Mar 12 13:12:50 crc kubenswrapper[4778]: I0312 13:12:50.873560 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wxkb2" event={"ID":"af58c501-1c93-4f7a-bdf9-1255879aea5a","Type":"ContainerStarted","Data":"f2055f809e508fa781aa94aa5bc3f6dc6240d0cc87c21dca861f82b62915db00"}
Mar 12 13:12:50 crc kubenswrapper[4778]: I0312 13:12:50.888654 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555340-7tvjm" event={"ID":"a240fd7b-5854-4548-a847-e5590111964b","Type":"ContainerStarted","Data":"6e9a4135f2199a3918c9a565e1055b2ed771be6904f7c3aed074108524b55a58"}
Mar 12 13:12:50 crc kubenswrapper[4778]: I0312 13:12:50.894979 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-mx6kn" event={"ID":"8af48f77-25f7-49ca-8bcb-2481aa72ee66","Type":"ContainerStarted","Data":"3f895f754eef0bb8583d87125c78a2d88dff6c4008a74e09e3b011b8af8c89ff"}
Mar 12 13:12:50 crc kubenswrapper[4778]: I0312 13:12:50.895290 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-mx6kn"
Mar 12 13:12:50 crc kubenswrapper[4778]: I0312 13:12:50.897000 4778 patch_prober.go:28] interesting pod/downloads-7954f5f757-mx6kn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body=
Mar 12 13:12:50 crc kubenswrapper[4778]: I0312 13:12:50.897268 4778 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mx6kn" podUID="8af48f77-25f7-49ca-8bcb-2481aa72ee66" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused"
Mar 12 13:12:50 crc kubenswrapper[4778]: I0312 13:12:50.954706 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-bdcvl" podStartSLOduration=139.95468603 podStartE2EDuration="2m19.95468603s" podCreationTimestamp="2026-03-12 13:10:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:12:50.882865908 +0000 UTC m=+189.331561304" watchObservedRunningTime="2026-03-12 13:12:50.95468603 +0000 UTC m=+189.403381426"
Mar 12 13:12:50 crc kubenswrapper[4778]: I0312 13:12:50.955451 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-sskj6" podStartSLOduration=139.955443941 podStartE2EDuration="2m19.955443941s" podCreationTimestamp="2026-03-12 13:10:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:12:50.945942404 +0000 UTC m=+189.394637800" watchObservedRunningTime="2026-03-12 13:12:50.955443941 +0000 UTC m=+189.404139337"
Mar 12 13:12:50 crc kubenswrapper[4778]: I0312 13:12:50.962420 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4"
Mar 12 13:12:50 crc kubenswrapper[4778]: E0312 13:12:50.964202 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 13:12:51.464173137 +0000 UTC m=+189.912868533 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxrx4" (UID: "51ee714f-fb23-4420-9e70-1b3134eea18e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 12 13:12:50 crc kubenswrapper[4778]: I0312 13:12:50.982321 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r2r62" event={"ID":"a875bbd5-0126-4d1c-8b7e-97ac32863981","Type":"ContainerStarted","Data":"2a6f6743e07a93e70f829bc549f2d33e714e88c1eb288f701ddc0ccd95329306"}
Mar 12 13:12:50 crc kubenswrapper[4778]: I0312 13:12:50.982370 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r2r62" event={"ID":"a875bbd5-0126-4d1c-8b7e-97ac32863981","Type":"ContainerStarted","Data":"f54de55cfebc9535d07929914957e08a5c6384e4086bd842202052c04c36e88d"}
Mar 12 13:12:50 crc kubenswrapper[4778]: I0312 13:12:50.982769 4778 ???:1] "http: TLS handshake error from 192.168.126.11:37502: no serving certificate available for the kubelet"
Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.023916 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wxkb2" podStartSLOduration=140.023898109 podStartE2EDuration="2m20.023898109s" podCreationTimestamp="2026-03-12 13:10:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:12:51.022463768 +0000 UTC m=+189.471159164" watchObservedRunningTime="2026-03-12 13:12:51.023898109 +0000 UTC m=+189.472593505"
Mar 12 13:12:51 crc kubenswrapper[4778]:
I0312 13:12:51.043795 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-qf4nv" podStartSLOduration=7.043776368 podStartE2EDuration="7.043776368s" podCreationTimestamp="2026-03-12 13:12:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:12:50.981963648 +0000 UTC m=+189.430659044" watchObservedRunningTime="2026-03-12 13:12:51.043776368 +0000 UTC m=+189.492471764" Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.045837 4778 generic.go:334] "Generic (PLEG): container finished" podID="f56ab022-7fcd-406c-b308-b8d5f93a8b55" containerID="ef89d12de18273d7eb6b5d650d9d0c0d957c7bf2d5d8bd8e1287fde2496cce7e" exitCode=0 Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.046112 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x4bxj" event={"ID":"f56ab022-7fcd-406c-b308-b8d5f93a8b55","Type":"ContainerDied","Data":"ef89d12de18273d7eb6b5d650d9d0c0d957c7bf2d5d8bd8e1287fde2496cce7e"} Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.063244 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 13:12:51 crc kubenswrapper[4778]: E0312 13:12:51.075215 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 13:12:51.575179842 +0000 UTC m=+190.023875228 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.078606 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x4bxj" Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.090568 4778 ???:1] "http: TLS handshake error from 192.168.126.11:37510: no serving certificate available for the kubelet" Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.090883 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x26ck" event={"ID":"30ff941c-3c4b-4229-af5a-78bb244a385b","Type":"ContainerStarted","Data":"736a8018c349cf2341e6f4eb94afbf0f3171b64da1d22d244119a823279f1073"} Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.090929 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x26ck" event={"ID":"30ff941c-3c4b-4229-af5a-78bb244a385b","Type":"ContainerStarted","Data":"51a7a169aacb978114e1d382311c332111172d449603d79426b4fbb9d6a05461"} Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.106834 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kc7s7" event={"ID":"c320d1aa-c376-41f2-ac5a-8432120b68e0","Type":"ContainerStarted","Data":"6ac8ee3b3ac245da5d15d00bd6d7d2189b809610a626bbb976c1ed31a09619c0"} Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.106889 4778 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kc7s7" event={"ID":"c320d1aa-c376-41f2-ac5a-8432120b68e0","Type":"ContainerStarted","Data":"94fc500339d319a3749865db6827f6e083f34ce60814afcc4595ae3cf7706b0f"} Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.107282 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kc7s7" Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.114853 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29555340-7tvjm" podStartSLOduration=140.114825718 podStartE2EDuration="2m20.114825718s" podCreationTimestamp="2026-03-12 13:10:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:12:51.062615208 +0000 UTC m=+189.511310614" watchObservedRunningTime="2026-03-12 13:12:51.114825718 +0000 UTC m=+189.563521114" Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.120827 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-r2r62" podStartSLOduration=140.120798706 podStartE2EDuration="2m20.120798706s" podCreationTimestamp="2026-03-12 13:10:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:12:51.102780229 +0000 UTC m=+189.551475625" watchObservedRunningTime="2026-03-12 13:12:51.120798706 +0000 UTC m=+189.569494102" Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.128998 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-mx6kn" podStartSLOduration=140.128981707 podStartE2EDuration="2m20.128981707s" podCreationTimestamp="2026-03-12 13:10:31 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:12:51.127469214 +0000 UTC m=+189.576164610" watchObservedRunningTime="2026-03-12 13:12:51.128981707 +0000 UTC m=+189.577677103" Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.141533 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-98lbj" event={"ID":"f1f25dae-f3e4-481d-8451-4851b60b2ec4","Type":"ContainerStarted","Data":"871c9b8ae46dc5f00029ac0f63b85dcdfbb540de745b55b8fad5197a886b2a5c"} Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.158244 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x4bxj" podStartSLOduration=140.158169378 podStartE2EDuration="2m20.158169378s" podCreationTimestamp="2026-03-12 13:10:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:12:51.15751436 +0000 UTC m=+189.606209756" watchObservedRunningTime="2026-03-12 13:12:51.158169378 +0000 UTC m=+189.606864784" Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.169410 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" Mar 12 13:12:51 crc kubenswrapper[4778]: E0312 13:12:51.169976 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-03-12 13:12:51.66996306 +0000 UTC m=+190.118658456 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxrx4" (UID: "51ee714f-fb23-4420-9e70-1b3134eea18e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.187542 4778 ???:1] "http: TLS handshake error from 192.168.126.11:37516: no serving certificate available for the kubelet" Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.211806 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kc7s7" podStartSLOduration=140.211789548 podStartE2EDuration="2m20.211789548s" podCreationTimestamp="2026-03-12 13:10:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:12:51.211227552 +0000 UTC m=+189.659922938" watchObservedRunningTime="2026-03-12 13:12:51.211789548 +0000 UTC m=+189.660484944" Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.212359 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x26ck" podStartSLOduration=140.212352264 podStartE2EDuration="2m20.212352264s" podCreationTimestamp="2026-03-12 13:10:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:12:51.188036339 +0000 UTC m=+189.636731735" watchObservedRunningTime="2026-03-12 13:12:51.212352264 +0000 UTC m=+189.661047670" Mar 12 13:12:51 crc 
kubenswrapper[4778]: I0312 13:12:51.230690 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vnndl" event={"ID":"5bb00a46-7425-4d14-a10c-779a5036bba6","Type":"ContainerStarted","Data":"f7f91d4ef783aecf99698d30f7a9029765a79e8641baa462e35a9190c654959c"} Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.231572 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-98lbj" podStartSLOduration=140.231548194 podStartE2EDuration="2m20.231548194s" podCreationTimestamp="2026-03-12 13:10:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:12:51.231346528 +0000 UTC m=+189.680041924" watchObservedRunningTime="2026-03-12 13:12:51.231548194 +0000 UTC m=+189.680243590" Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.239435 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k6dcl" event={"ID":"8f4e3ccc-83e5-40ae-bac2-a5bb1362a531","Type":"ContainerStarted","Data":"9abf18e22758bd2509c1b7021d9a9592da359893bd8606322d57431f2b9769fe"} Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.251976 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mfjpc" event={"ID":"3373fbdf-245c-4e98-8bd7-7ad30eb98d76","Type":"ContainerStarted","Data":"3463087063cc705e03cb9086a460b8a9bd82be3b339b0c6111d89b673100c8da"} Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.252591 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mfjpc" Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.253366 4778 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-mfjpc container/olm-operator 
namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.253403 4778 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mfjpc" podUID="3373fbdf-245c-4e98-8bd7-7ad30eb98d76" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.269642 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-d562t" event={"ID":"b7a887dd-1794-4d66-90a6-299512f32bd1","Type":"ContainerStarted","Data":"c6418e7bd558b405c93379d97cd1366e81f10d02abe7418779ab78424b4846cd"} Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.275071 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 13:12:51 crc kubenswrapper[4778]: E0312 13:12:51.276391 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 13:12:51.776373116 +0000 UTC m=+190.225068522 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.280287 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vnndl" podStartSLOduration=140.280259335 podStartE2EDuration="2m20.280259335s" podCreationTimestamp="2026-03-12 13:10:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:12:51.277214219 +0000 UTC m=+189.725909615" watchObservedRunningTime="2026-03-12 13:12:51.280259335 +0000 UTC m=+189.728954731" Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.299095 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dqxml" event={"ID":"a0d33ee6-3a31-4464-b401-7469bf04d240","Type":"ContainerStarted","Data":"bc86fb5453be0badafca787854e5384c3f461fa225c7b32e2c3dcc0425f53429"} Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.305645 4778 ???:1] "http: TLS handshake error from 192.168.126.11:37528: no serving certificate available for the kubelet" Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.308026 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-d562t" podStartSLOduration=7.308008446 podStartE2EDuration="7.308008446s" podCreationTimestamp="2026-03-12 13:12:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-12 13:12:51.30601232 +0000 UTC m=+189.754707716" watchObservedRunningTime="2026-03-12 13:12:51.308008446 +0000 UTC m=+189.756703842" Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.336497 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2z5gg" event={"ID":"486a990d-7a56-4eea-a44d-d05a412718c2","Type":"ContainerStarted","Data":"14d7b91c05684bc0c7f86d13f791bcbc70c688ba76edaf092234eac29a1a59d0"} Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.353886 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k6dcl" podStartSLOduration=140.353867527 podStartE2EDuration="2m20.353867527s" podCreationTimestamp="2026-03-12 13:10:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:12:51.35360511 +0000 UTC m=+189.802300506" watchObservedRunningTime="2026-03-12 13:12:51.353867527 +0000 UTC m=+189.802562923" Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.354259 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mfjpc" podStartSLOduration=140.354253678 podStartE2EDuration="2m20.354253678s" podCreationTimestamp="2026-03-12 13:10:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:12:51.331267781 +0000 UTC m=+189.779963177" watchObservedRunningTime="2026-03-12 13:12:51.354253678 +0000 UTC m=+189.802949074" Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.366231 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5m8sg" 
event={"ID":"5b3e2f12-fdec-46e9-82b4-6777c07281c6","Type":"ContainerStarted","Data":"4727c38ef7af2e309805688aef4d4b418cfbd9b0c2cc9738353eb5971d381c94"} Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.367277 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5m8sg" Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.370479 4778 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5m8sg container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:5443/healthz\": dial tcp 10.217.0.24:5443: connect: connection refused" start-of-body= Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.370525 4778 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5m8sg" podUID="5b3e2f12-fdec-46e9-82b4-6777c07281c6" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.24:5443/healthz\": dial tcp 10.217.0.24:5443: connect: connection refused" Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.381297 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.381409 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dqxml" podStartSLOduration=140.381389162 podStartE2EDuration="2m20.381389162s" podCreationTimestamp="2026-03-12 13:10:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:12:51.381081763 +0000 UTC m=+189.829777159" watchObservedRunningTime="2026-03-12 13:12:51.381389162 +0000 UTC m=+189.830084548" Mar 12 13:12:51 crc kubenswrapper[4778]: E0312 13:12:51.386982 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 13:12:51.886953779 +0000 UTC m=+190.335649255 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxrx4" (UID: "51ee714f-fb23-4420-9e70-1b3134eea18e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.396231 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-pg48j" event={"ID":"32bf6158-393f-4423-9255-345581ec5bf1","Type":"ContainerStarted","Data":"0157c9072ec393da7ee9f835843adb2c73eae8a6c1edd543879099d94372fcea"} Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.404094 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-2z5gg" podStartSLOduration=140.404077701 podStartE2EDuration="2m20.404077701s" podCreationTimestamp="2026-03-12 13:10:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:12:51.402233389 +0000 UTC m=+189.850928805" watchObservedRunningTime="2026-03-12 13:12:51.404077701 +0000 UTC m=+189.852773097" Mar 12 13:12:51 crc 
kubenswrapper[4778]: I0312 13:12:51.404147 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xcfg6" event={"ID":"12abcb2c-895a-46af-9c26-66e358259ce9","Type":"ContainerStarted","Data":"e3aab9f9d0a4b329b434850484ecbf71bf33b51d3d3868abd9c9a08033d2cf6c"} Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.405494 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xcfg6" Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.413145 4778 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-xcfg6 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.413210 4778 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xcfg6" podUID="12abcb2c-895a-46af-9c26-66e358259ce9" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.429288 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-vpp8t" event={"ID":"30697403-66e5-4f68-8e2f-804017bd9d71","Type":"ContainerStarted","Data":"d134eddf0f2d2f7ea60b92781b8038b8007282fa3e0456bea6bd85255a9eb032"} Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.450728 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qtkq6" event={"ID":"0ac3e8bc-e165-45d4-8c32-1ccda9769857","Type":"ContainerStarted","Data":"f70ecf9dd646270d97ee1c3ccacc37cbf0166dc7139bb6bbe80179622d36c12e"} Mar 12 
13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.455597 4778 ???:1] "http: TLS handshake error from 192.168.126.11:37536: no serving certificate available for the kubelet" Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.470166 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5m8sg" podStartSLOduration=140.470144351 podStartE2EDuration="2m20.470144351s" podCreationTimestamp="2026-03-12 13:10:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:12:51.467586288 +0000 UTC m=+189.916281684" watchObservedRunningTime="2026-03-12 13:12:51.470144351 +0000 UTC m=+189.918839757" Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.470619 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-pg48j" podStartSLOduration=140.470610593 podStartE2EDuration="2m20.470610593s" podCreationTimestamp="2026-03-12 13:10:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:12:51.430071062 +0000 UTC m=+189.878766458" watchObservedRunningTime="2026-03-12 13:12:51.470610593 +0000 UTC m=+189.919305989" Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.481945 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 13:12:51 crc kubenswrapper[4778]: E0312 13:12:51.482725 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 13:12:51.982710823 +0000 UTC m=+190.431406219 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.493934 4778 patch_prober.go:28] interesting pod/console-operator-58897d9998-ww8lt container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/readyz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.493986 4778 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-ww8lt" podUID="f57a417a-5175-4210-98a0-69e579c22e14" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/readyz\": dial tcp 10.217.0.18:8443: connect: connection refused" Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.505027 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpgxh" Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.505137 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-pgrb5" Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.506548 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-authentication/oauth-openshift-558db77b4-5kw4v" Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.511816 4778 patch_prober.go:28] interesting pod/router-default-5444994796-ms5xq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 13:12:51 crc kubenswrapper[4778]: [-]has-synced failed: reason withheld Mar 12 13:12:51 crc kubenswrapper[4778]: [+]process-running ok Mar 12 13:12:51 crc kubenswrapper[4778]: healthz check failed Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.511862 4778 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ms5xq" podUID="5c8d947a-b62b-4eb9-81d7-94530285e8dc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.575996 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qtkq6" podStartSLOduration=140.575977729 podStartE2EDuration="2m20.575977729s" podCreationTimestamp="2026-03-12 13:10:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:12:51.522797972 +0000 UTC m=+189.971493378" watchObservedRunningTime="2026-03-12 13:12:51.575977729 +0000 UTC m=+190.024673115" Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.577447 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-vpp8t" podStartSLOduration=140.5774407 podStartE2EDuration="2m20.5774407s" podCreationTimestamp="2026-03-12 13:10:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:12:51.574712833 +0000 UTC 
m=+190.023408239" watchObservedRunningTime="2026-03-12 13:12:51.5774407 +0000 UTC m=+190.026136086" Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.583721 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" Mar 12 13:12:51 crc kubenswrapper[4778]: E0312 13:12:51.586114 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 13:12:52.086098254 +0000 UTC m=+190.534793750 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxrx4" (UID: "51ee714f-fb23-4420-9e70-1b3134eea18e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.615264 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xcfg6" podStartSLOduration=140.615242964 podStartE2EDuration="2m20.615242964s" podCreationTimestamp="2026-03-12 13:10:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:12:51.614788201 +0000 UTC m=+190.063483597" watchObservedRunningTime="2026-03-12 13:12:51.615242964 +0000 UTC m=+190.063938370" Mar 12 13:12:51 crc 
kubenswrapper[4778]: I0312 13:12:51.691455 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 13:12:51 crc kubenswrapper[4778]: E0312 13:12:51.691661 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 13:12:52.191640225 +0000 UTC m=+190.640335621 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.692306 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" Mar 12 13:12:51 crc kubenswrapper[4778]: E0312 13:12:51.692838 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-03-12 13:12:52.192820458 +0000 UTC m=+190.641515854 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxrx4" (UID: "51ee714f-fb23-4420-9e70-1b3134eea18e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.701693 4778 ???:1] "http: TLS handshake error from 192.168.126.11:37550: no serving certificate available for the kubelet" Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.731380 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-srhvx" podStartSLOduration=140.731365113 podStartE2EDuration="2m20.731365113s" podCreationTimestamp="2026-03-12 13:10:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:12:51.73055702 +0000 UTC m=+190.179252426" watchObservedRunningTime="2026-03-12 13:12:51.731365113 +0000 UTC m=+190.180060509" Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.794803 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 13:12:51 crc kubenswrapper[4778]: E0312 13:12:51.795298 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 13:12:52.295277772 +0000 UTC m=+190.743973168 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.896078 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" Mar 12 13:12:51 crc kubenswrapper[4778]: E0312 13:12:51.896447 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 13:12:52.39643445 +0000 UTC m=+190.845129846 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxrx4" (UID: "51ee714f-fb23-4420-9e70-1b3134eea18e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:51 crc kubenswrapper[4778]: I0312 13:12:51.996648 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 13:12:51 crc kubenswrapper[4778]: E0312 13:12:51.996975 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 13:12:52.49696092 +0000 UTC m=+190.945656316 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:52 crc kubenswrapper[4778]: I0312 13:12:52.062395 4778 ???:1] "http: TLS handshake error from 192.168.126.11:37566: no serving certificate available for the kubelet" Mar 12 13:12:52 crc kubenswrapper[4778]: I0312 13:12:52.098367 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" Mar 12 13:12:52 crc kubenswrapper[4778]: E0312 13:12:52.098681 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 13:12:52.598670163 +0000 UTC m=+191.047365559 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxrx4" (UID: "51ee714f-fb23-4420-9e70-1b3134eea18e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:52 crc kubenswrapper[4778]: I0312 13:12:52.188109 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vnndl" Mar 12 13:12:52 crc kubenswrapper[4778]: I0312 13:12:52.188160 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vnndl" Mar 12 13:12:52 crc kubenswrapper[4778]: I0312 13:12:52.189119 4778 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-vnndl container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.11:8443/livez\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Mar 12 13:12:52 crc kubenswrapper[4778]: I0312 13:12:52.189156 4778 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vnndl" podUID="5bb00a46-7425-4d14-a10c-779a5036bba6" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.11:8443/livez\": dial tcp 10.217.0.11:8443: connect: connection refused" Mar 12 13:12:52 crc kubenswrapper[4778]: I0312 13:12:52.199133 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 13:12:52 crc 
kubenswrapper[4778]: E0312 13:12:52.199490 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 13:12:52.69946677 +0000 UTC m=+191.148162166 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:52 crc kubenswrapper[4778]: I0312 13:12:52.301268 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" Mar 12 13:12:52 crc kubenswrapper[4778]: E0312 13:12:52.301602 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 13:12:52.801587485 +0000 UTC m=+191.250282881 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxrx4" (UID: "51ee714f-fb23-4420-9e70-1b3134eea18e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:52 crc kubenswrapper[4778]: I0312 13:12:52.401961 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 13:12:52 crc kubenswrapper[4778]: E0312 13:12:52.402118 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 13:12:52.902094744 +0000 UTC m=+191.350790140 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:52 crc kubenswrapper[4778]: I0312 13:12:52.402335 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" Mar 12 13:12:52 crc kubenswrapper[4778]: E0312 13:12:52.402632 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 13:12:52.902619539 +0000 UTC m=+191.351314925 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxrx4" (UID: "51ee714f-fb23-4420-9e70-1b3134eea18e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:52 crc kubenswrapper[4778]: I0312 13:12:52.502989 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 13:12:52 crc kubenswrapper[4778]: E0312 13:12:52.503372 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 13:12:53.003356865 +0000 UTC m=+191.452052261 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:52 crc kubenswrapper[4778]: I0312 13:12:52.519710 4778 patch_prober.go:28] interesting pod/router-default-5444994796-ms5xq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 13:12:52 crc kubenswrapper[4778]: [-]has-synced failed: reason withheld Mar 12 13:12:52 crc kubenswrapper[4778]: [+]process-running ok Mar 12 13:12:52 crc kubenswrapper[4778]: healthz check failed Mar 12 13:12:52 crc kubenswrapper[4778]: I0312 13:12:52.519770 4778 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ms5xq" podUID="5c8d947a-b62b-4eb9-81d7-94530285e8dc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 13:12:52 crc kubenswrapper[4778]: I0312 13:12:52.525437 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x4bxj" event={"ID":"f56ab022-7fcd-406c-b308-b8d5f93a8b55","Type":"ContainerStarted","Data":"2a2fbf21962ed8b3c22585330718ea835e92053ee374a04976de65692706dcd7"} Mar 12 13:12:52 crc kubenswrapper[4778]: I0312 13:12:52.554654 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xz42x" event={"ID":"de34cf46-4b6a-4f7a-8225-eb77bec57450","Type":"ContainerStarted","Data":"401b5703c41acfddd23d014f272ac8f6b1388f62ecea38307a5473a536d9b0d6"} Mar 12 13:12:52 crc kubenswrapper[4778]: I0312 
13:12:52.574834 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-srhvx" event={"ID":"4d54f13d-85d8-4c95-acef-fcf9f197769a","Type":"ContainerStarted","Data":"80f2cfadac8d6b03ff7892bf24e4883f713e9acbbedfe471377dd22adfc37de7"} Mar 12 13:12:52 crc kubenswrapper[4778]: I0312 13:12:52.574887 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-srhvx" event={"ID":"4d54f13d-85d8-4c95-acef-fcf9f197769a","Type":"ContainerStarted","Data":"d466ad97054be4ed37f9fb8b3ec96f64dde9ee13ebda4c3eabdc8fba39283309"} Mar 12 13:12:52 crc kubenswrapper[4778]: I0312 13:12:52.586171 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-8zmxq" event={"ID":"138bb189-6182-4210-91a7-140f93f36f81","Type":"ContainerStarted","Data":"77c33b0ef2ccc7fb4f73af4b872371447a693d9b8d223a80bae6ca75eb6b33b9"} Mar 12 13:12:52 crc kubenswrapper[4778]: I0312 13:12:52.586446 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-8zmxq" Mar 12 13:12:52 crc kubenswrapper[4778]: I0312 13:12:52.590465 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-cp2lw" event={"ID":"2be5b8df-aaff-4a2b-9b54-78a7e58bc420","Type":"ContainerStarted","Data":"f95ce1da1eda4c3816899c45ffc12f805515fb791f219e8cfde75c837042966f"} Mar 12 13:12:52 crc kubenswrapper[4778]: I0312 13:12:52.596199 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-xz42x" podStartSLOduration=141.596171628 podStartE2EDuration="2m21.596171628s" podCreationTimestamp="2026-03-12 13:10:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:12:52.595126988 +0000 UTC m=+191.043822394" watchObservedRunningTime="2026-03-12 13:12:52.596171628 +0000 UTC 
m=+191.044867024" Mar 12 13:12:52 crc kubenswrapper[4778]: I0312 13:12:52.604830 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" Mar 12 13:12:52 crc kubenswrapper[4778]: E0312 13:12:52.605309 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 13:12:53.105292134 +0000 UTC m=+191.553987520 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxrx4" (UID: "51ee714f-fb23-4420-9e70-1b3134eea18e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:52 crc kubenswrapper[4778]: I0312 13:12:52.617321 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-98lbj" event={"ID":"f1f25dae-f3e4-481d-8451-4851b60b2ec4","Type":"ContainerStarted","Data":"51cb6200b8285b0484a8c1780537acf8b2d6e7ed966382ba2545a9eb663773f3"} Mar 12 13:12:52 crc kubenswrapper[4778]: I0312 13:12:52.630066 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-8zmxq" podStartSLOduration=8.630051731 podStartE2EDuration="8.630051731s" podCreationTimestamp="2026-03-12 13:12:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:12:52.628610021 +0000 UTC m=+191.077305437" watchObservedRunningTime="2026-03-12 13:12:52.630051731 +0000 UTC m=+191.078747127" Mar 12 13:12:52 crc kubenswrapper[4778]: I0312 13:12:52.643150 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2z5gg" event={"ID":"486a990d-7a56-4eea-a44d-d05a412718c2","Type":"ContainerStarted","Data":"4b2262b04363b849751ba177591dd70f0df5ed809dd0d98e5777ad3e10a0be6c"} Mar 12 13:12:52 crc kubenswrapper[4778]: I0312 13:12:52.647179 4778 patch_prober.go:28] interesting pod/downloads-7954f5f757-mx6kn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Mar 12 13:12:52 crc kubenswrapper[4778]: I0312 13:12:52.647279 4778 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mx6kn" podUID="8af48f77-25f7-49ca-8bcb-2481aa72ee66" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Mar 12 13:12:52 crc kubenswrapper[4778]: I0312 13:12:52.657780 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mfjpc" Mar 12 13:12:52 crc kubenswrapper[4778]: I0312 13:12:52.662121 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-ww8lt" Mar 12 13:12:52 crc kubenswrapper[4778]: I0312 13:12:52.668995 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xcfg6" Mar 12 13:12:52 crc kubenswrapper[4778]: I0312 13:12:52.705548 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 13:12:52 crc kubenswrapper[4778]: E0312 13:12:52.708655 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 13:12:53.208636333 +0000 UTC m=+191.657331729 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:52 crc kubenswrapper[4778]: I0312 13:12:52.808009 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" Mar 12 13:12:52 crc kubenswrapper[4778]: E0312 13:12:52.811051 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 13:12:53.311036406 +0000 UTC m=+191.759731802 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxrx4" (UID: "51ee714f-fb23-4420-9e70-1b3134eea18e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:52 crc kubenswrapper[4778]: I0312 13:12:52.815645 4778 ???:1] "http: TLS handshake error from 192.168.126.11:37574: no serving certificate available for the kubelet" Mar 12 13:12:52 crc kubenswrapper[4778]: I0312 13:12:52.909192 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 13:12:52 crc kubenswrapper[4778]: E0312 13:12:52.909712 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 13:12:53.409691863 +0000 UTC m=+191.858387269 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:53 crc kubenswrapper[4778]: I0312 13:12:53.010413 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" Mar 12 13:12:53 crc kubenswrapper[4778]: E0312 13:12:53.010751 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 13:12:53.510734308 +0000 UTC m=+191.959429704 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxrx4" (UID: "51ee714f-fb23-4420-9e70-1b3134eea18e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:53 crc kubenswrapper[4778]: I0312 13:12:53.110916 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 13:12:53 crc kubenswrapper[4778]: E0312 13:12:53.111458 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 13:12:53.611443763 +0000 UTC m=+192.060139159 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:53 crc kubenswrapper[4778]: I0312 13:12:53.136611 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5m8sg" Mar 12 13:12:53 crc kubenswrapper[4778]: I0312 13:12:53.212128 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" Mar 12 13:12:53 crc kubenswrapper[4778]: E0312 13:12:53.212443 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 13:12:53.712430195 +0000 UTC m=+192.161125591 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxrx4" (UID: "51ee714f-fb23-4420-9e70-1b3134eea18e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:53 crc kubenswrapper[4778]: I0312 13:12:53.313675 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 13:12:53 crc kubenswrapper[4778]: E0312 13:12:53.314053 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 13:12:53.814038306 +0000 UTC m=+192.262733702 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:53 crc kubenswrapper[4778]: I0312 13:12:53.374026 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-khr6h"] Mar 12 13:12:53 crc kubenswrapper[4778]: I0312 13:12:53.374926 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-khr6h" Mar 12 13:12:53 crc kubenswrapper[4778]: I0312 13:12:53.380076 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 12 13:12:53 crc kubenswrapper[4778]: I0312 13:12:53.413425 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-khr6h"] Mar 12 13:12:53 crc kubenswrapper[4778]: I0312 13:12:53.418314 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d185732-cd6b-44c6-b4db-ee9ade00c683-catalog-content\") pod \"community-operators-khr6h\" (UID: \"1d185732-cd6b-44c6-b4db-ee9ade00c683\") " pod="openshift-marketplace/community-operators-khr6h" Mar 12 13:12:53 crc kubenswrapper[4778]: I0312 13:12:53.418350 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d185732-cd6b-44c6-b4db-ee9ade00c683-utilities\") pod \"community-operators-khr6h\" (UID: \"1d185732-cd6b-44c6-b4db-ee9ade00c683\") " pod="openshift-marketplace/community-operators-khr6h" Mar 12 13:12:53 crc kubenswrapper[4778]: I0312 13:12:53.418397 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzdp9\" (UniqueName: \"kubernetes.io/projected/1d185732-cd6b-44c6-b4db-ee9ade00c683-kube-api-access-zzdp9\") pod \"community-operators-khr6h\" (UID: \"1d185732-cd6b-44c6-b4db-ee9ade00c683\") " pod="openshift-marketplace/community-operators-khr6h" Mar 12 13:12:53 crc kubenswrapper[4778]: I0312 13:12:53.418462 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" Mar 12 13:12:53 crc kubenswrapper[4778]: E0312 13:12:53.418756 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 13:12:53.918743653 +0000 UTC m=+192.367439039 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxrx4" (UID: "51ee714f-fb23-4420-9e70-1b3134eea18e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:53 crc kubenswrapper[4778]: I0312 13:12:53.509915 4778 patch_prober.go:28] interesting pod/router-default-5444994796-ms5xq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 13:12:53 crc kubenswrapper[4778]: [-]has-synced failed: reason withheld Mar 12 13:12:53 crc kubenswrapper[4778]: [+]process-running ok Mar 12 13:12:53 crc kubenswrapper[4778]: healthz check failed Mar 12 13:12:53 crc kubenswrapper[4778]: I0312 13:12:53.509973 4778 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ms5xq" podUID="5c8d947a-b62b-4eb9-81d7-94530285e8dc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 13:12:53 crc kubenswrapper[4778]: I0312 13:12:53.519530 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 13:12:53 crc kubenswrapper[4778]: E0312 13:12:53.519734 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 13:12:54.019702475 +0000 UTC m=+192.468397871 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:53 crc kubenswrapper[4778]: I0312 13:12:53.519780 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" Mar 12 13:12:53 crc kubenswrapper[4778]: I0312 13:12:53.519831 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d185732-cd6b-44c6-b4db-ee9ade00c683-catalog-content\") pod \"community-operators-khr6h\" (UID: \"1d185732-cd6b-44c6-b4db-ee9ade00c683\") " pod="openshift-marketplace/community-operators-khr6h" Mar 12 13:12:53 crc kubenswrapper[4778]: I0312 13:12:53.519850 4778 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d185732-cd6b-44c6-b4db-ee9ade00c683-utilities\") pod \"community-operators-khr6h\" (UID: \"1d185732-cd6b-44c6-b4db-ee9ade00c683\") " pod="openshift-marketplace/community-operators-khr6h" Mar 12 13:12:53 crc kubenswrapper[4778]: I0312 13:12:53.519889 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzdp9\" (UniqueName: \"kubernetes.io/projected/1d185732-cd6b-44c6-b4db-ee9ade00c683-kube-api-access-zzdp9\") pod \"community-operators-khr6h\" (UID: \"1d185732-cd6b-44c6-b4db-ee9ade00c683\") " pod="openshift-marketplace/community-operators-khr6h" Mar 12 13:12:53 crc kubenswrapper[4778]: E0312 13:12:53.520212 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 13:12:54.020203559 +0000 UTC m=+192.468898955 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxrx4" (UID: "51ee714f-fb23-4420-9e70-1b3134eea18e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:53 crc kubenswrapper[4778]: I0312 13:12:53.520355 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d185732-cd6b-44c6-b4db-ee9ade00c683-catalog-content\") pod \"community-operators-khr6h\" (UID: \"1d185732-cd6b-44c6-b4db-ee9ade00c683\") " pod="openshift-marketplace/community-operators-khr6h" Mar 12 13:12:53 crc kubenswrapper[4778]: I0312 13:12:53.520407 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d185732-cd6b-44c6-b4db-ee9ade00c683-utilities\") pod \"community-operators-khr6h\" (UID: \"1d185732-cd6b-44c6-b4db-ee9ade00c683\") " pod="openshift-marketplace/community-operators-khr6h" Mar 12 13:12:53 crc kubenswrapper[4778]: I0312 13:12:53.531766 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qx9d8"] Mar 12 13:12:53 crc kubenswrapper[4778]: I0312 13:12:53.532751 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qx9d8" Mar 12 13:12:53 crc kubenswrapper[4778]: I0312 13:12:53.542408 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Mar 12 13:12:53 crc kubenswrapper[4778]: I0312 13:12:53.574031 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzdp9\" (UniqueName: \"kubernetes.io/projected/1d185732-cd6b-44c6-b4db-ee9ade00c683-kube-api-access-zzdp9\") pod \"community-operators-khr6h\" (UID: \"1d185732-cd6b-44c6-b4db-ee9ade00c683\") " pod="openshift-marketplace/community-operators-khr6h" Mar 12 13:12:53 crc kubenswrapper[4778]: I0312 13:12:53.578489 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qx9d8"] Mar 12 13:12:53 crc kubenswrapper[4778]: I0312 13:12:53.620896 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 13:12:53 crc kubenswrapper[4778]: I0312 13:12:53.621074 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6mx5\" (UniqueName: \"kubernetes.io/projected/651601bd-18fe-4ca1-9c61-481ca568d022-kube-api-access-n6mx5\") pod \"certified-operators-qx9d8\" (UID: \"651601bd-18fe-4ca1-9c61-481ca568d022\") " pod="openshift-marketplace/certified-operators-qx9d8" Mar 12 13:12:53 crc kubenswrapper[4778]: I0312 13:12:53.621124 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/651601bd-18fe-4ca1-9c61-481ca568d022-catalog-content\") pod \"certified-operators-qx9d8\" (UID: 
\"651601bd-18fe-4ca1-9c61-481ca568d022\") " pod="openshift-marketplace/certified-operators-qx9d8" Mar 12 13:12:53 crc kubenswrapper[4778]: I0312 13:12:53.621156 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/651601bd-18fe-4ca1-9c61-481ca568d022-utilities\") pod \"certified-operators-qx9d8\" (UID: \"651601bd-18fe-4ca1-9c61-481ca568d022\") " pod="openshift-marketplace/certified-operators-qx9d8" Mar 12 13:12:53 crc kubenswrapper[4778]: E0312 13:12:53.621324 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 13:12:54.121307765 +0000 UTC m=+192.570003161 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:53 crc kubenswrapper[4778]: I0312 13:12:53.692394 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-khr6h" Mar 12 13:12:53 crc kubenswrapper[4778]: I0312 13:12:53.701716 4778 generic.go:334] "Generic (PLEG): container finished" podID="a240fd7b-5854-4548-a847-e5590111964b" containerID="6e9a4135f2199a3918c9a565e1055b2ed771be6904f7c3aed074108524b55a58" exitCode=0 Mar 12 13:12:53 crc kubenswrapper[4778]: I0312 13:12:53.701808 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555340-7tvjm" event={"ID":"a240fd7b-5854-4548-a847-e5590111964b","Type":"ContainerDied","Data":"6e9a4135f2199a3918c9a565e1055b2ed771be6904f7c3aed074108524b55a58"} Mar 12 13:12:53 crc kubenswrapper[4778]: I0312 13:12:53.704696 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sjk9p"] Mar 12 13:12:53 crc kubenswrapper[4778]: I0312 13:12:53.705531 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sjk9p" Mar 12 13:12:53 crc kubenswrapper[4778]: I0312 13:12:53.721237 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sjk9p"] Mar 12 13:12:53 crc kubenswrapper[4778]: I0312 13:12:53.721945 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6mx5\" (UniqueName: \"kubernetes.io/projected/651601bd-18fe-4ca1-9c61-481ca568d022-kube-api-access-n6mx5\") pod \"certified-operators-qx9d8\" (UID: \"651601bd-18fe-4ca1-9c61-481ca568d022\") " pod="openshift-marketplace/certified-operators-qx9d8" Mar 12 13:12:53 crc kubenswrapper[4778]: I0312 13:12:53.722012 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/651601bd-18fe-4ca1-9c61-481ca568d022-catalog-content\") pod \"certified-operators-qx9d8\" (UID: \"651601bd-18fe-4ca1-9c61-481ca568d022\") " 
pod="openshift-marketplace/certified-operators-qx9d8" Mar 12 13:12:53 crc kubenswrapper[4778]: I0312 13:12:53.722045 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/651601bd-18fe-4ca1-9c61-481ca568d022-utilities\") pod \"certified-operators-qx9d8\" (UID: \"651601bd-18fe-4ca1-9c61-481ca568d022\") " pod="openshift-marketplace/certified-operators-qx9d8" Mar 12 13:12:53 crc kubenswrapper[4778]: I0312 13:12:53.722071 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" Mar 12 13:12:53 crc kubenswrapper[4778]: E0312 13:12:53.722378 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 13:12:54.22236379 +0000 UTC m=+192.671059186 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxrx4" (UID: "51ee714f-fb23-4420-9e70-1b3134eea18e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:53 crc kubenswrapper[4778]: I0312 13:12:53.722673 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/651601bd-18fe-4ca1-9c61-481ca568d022-catalog-content\") pod \"certified-operators-qx9d8\" (UID: \"651601bd-18fe-4ca1-9c61-481ca568d022\") " pod="openshift-marketplace/certified-operators-qx9d8" Mar 12 13:12:53 crc kubenswrapper[4778]: I0312 13:12:53.722747 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/651601bd-18fe-4ca1-9c61-481ca568d022-utilities\") pod \"certified-operators-qx9d8\" (UID: \"651601bd-18fe-4ca1-9c61-481ca568d022\") " pod="openshift-marketplace/certified-operators-qx9d8" Mar 12 13:12:53 crc kubenswrapper[4778]: I0312 13:12:53.739714 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-cp2lw" event={"ID":"2be5b8df-aaff-4a2b-9b54-78a7e58bc420","Type":"ContainerStarted","Data":"c6196405a97553844994349bb1122565b124ed98a010d4bd3c296e91e0e396bc"} Mar 12 13:12:53 crc kubenswrapper[4778]: I0312 13:12:53.766341 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6mx5\" (UniqueName: \"kubernetes.io/projected/651601bd-18fe-4ca1-9c61-481ca568d022-kube-api-access-n6mx5\") pod \"certified-operators-qx9d8\" (UID: \"651601bd-18fe-4ca1-9c61-481ca568d022\") " pod="openshift-marketplace/certified-operators-qx9d8" Mar 12 13:12:53 crc kubenswrapper[4778]: I0312 
13:12:53.824714 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 13:12:53 crc kubenswrapper[4778]: I0312 13:12:53.825093 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq7jg\" (UniqueName: \"kubernetes.io/projected/3b3fb69e-dd4f-4787-a207-4fe25106f9e7-kube-api-access-dq7jg\") pod \"community-operators-sjk9p\" (UID: \"3b3fb69e-dd4f-4787-a207-4fe25106f9e7\") " pod="openshift-marketplace/community-operators-sjk9p" Mar 12 13:12:53 crc kubenswrapper[4778]: I0312 13:12:53.825153 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b3fb69e-dd4f-4787-a207-4fe25106f9e7-catalog-content\") pod \"community-operators-sjk9p\" (UID: \"3b3fb69e-dd4f-4787-a207-4fe25106f9e7\") " pod="openshift-marketplace/community-operators-sjk9p" Mar 12 13:12:53 crc kubenswrapper[4778]: I0312 13:12:53.825282 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b3fb69e-dd4f-4787-a207-4fe25106f9e7-utilities\") pod \"community-operators-sjk9p\" (UID: \"3b3fb69e-dd4f-4787-a207-4fe25106f9e7\") " pod="openshift-marketplace/community-operators-sjk9p" Mar 12 13:12:53 crc kubenswrapper[4778]: E0312 13:12:53.825876 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 13:12:54.325861544 +0000 UTC m=+192.774556940 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:53 crc kubenswrapper[4778]: I0312 13:12:53.862587 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qx9d8" Mar 12 13:12:53 crc kubenswrapper[4778]: I0312 13:12:53.910303 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-l8n9b"] Mar 12 13:12:53 crc kubenswrapper[4778]: I0312 13:12:53.922309 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l8n9b" Mar 12 13:12:53 crc kubenswrapper[4778]: I0312 13:12:53.928877 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dq7jg\" (UniqueName: \"kubernetes.io/projected/3b3fb69e-dd4f-4787-a207-4fe25106f9e7-kube-api-access-dq7jg\") pod \"community-operators-sjk9p\" (UID: \"3b3fb69e-dd4f-4787-a207-4fe25106f9e7\") " pod="openshift-marketplace/community-operators-sjk9p" Mar 12 13:12:53 crc kubenswrapper[4778]: I0312 13:12:53.928922 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b3fb69e-dd4f-4787-a207-4fe25106f9e7-catalog-content\") pod \"community-operators-sjk9p\" (UID: \"3b3fb69e-dd4f-4787-a207-4fe25106f9e7\") " pod="openshift-marketplace/community-operators-sjk9p" Mar 12 13:12:53 crc kubenswrapper[4778]: I0312 13:12:53.928965 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/3b3fb69e-dd4f-4787-a207-4fe25106f9e7-utilities\") pod \"community-operators-sjk9p\" (UID: \"3b3fb69e-dd4f-4787-a207-4fe25106f9e7\") " pod="openshift-marketplace/community-operators-sjk9p" Mar 12 13:12:53 crc kubenswrapper[4778]: I0312 13:12:53.929021 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" Mar 12 13:12:53 crc kubenswrapper[4778]: E0312 13:12:53.929309 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 13:12:54.429295185 +0000 UTC m=+192.877990581 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxrx4" (UID: "51ee714f-fb23-4420-9e70-1b3134eea18e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:53 crc kubenswrapper[4778]: I0312 13:12:53.929374 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l8n9b"] Mar 12 13:12:53 crc kubenswrapper[4778]: I0312 13:12:53.929954 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b3fb69e-dd4f-4787-a207-4fe25106f9e7-catalog-content\") pod \"community-operators-sjk9p\" (UID: \"3b3fb69e-dd4f-4787-a207-4fe25106f9e7\") " pod="openshift-marketplace/community-operators-sjk9p" Mar 12 13:12:53 crc kubenswrapper[4778]: I0312 13:12:53.930170 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b3fb69e-dd4f-4787-a207-4fe25106f9e7-utilities\") pod \"community-operators-sjk9p\" (UID: \"3b3fb69e-dd4f-4787-a207-4fe25106f9e7\") " pod="openshift-marketplace/community-operators-sjk9p" Mar 12 13:12:53 crc kubenswrapper[4778]: I0312 13:12:53.996791 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dq7jg\" (UniqueName: \"kubernetes.io/projected/3b3fb69e-dd4f-4787-a207-4fe25106f9e7-kube-api-access-dq7jg\") pod \"community-operators-sjk9p\" (UID: \"3b3fb69e-dd4f-4787-a207-4fe25106f9e7\") " pod="openshift-marketplace/community-operators-sjk9p" Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.019026 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sjk9p" Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.034784 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.034983 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c27afe2a-3402-49f9-b985-45fe67e40d22-utilities\") pod \"certified-operators-l8n9b\" (UID: \"c27afe2a-3402-49f9-b985-45fe67e40d22\") " pod="openshift-marketplace/certified-operators-l8n9b" Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.035036 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c27afe2a-3402-49f9-b985-45fe67e40d22-catalog-content\") pod \"certified-operators-l8n9b\" (UID: \"c27afe2a-3402-49f9-b985-45fe67e40d22\") " pod="openshift-marketplace/certified-operators-l8n9b" Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.035083 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxz76\" (UniqueName: \"kubernetes.io/projected/c27afe2a-3402-49f9-b985-45fe67e40d22-kube-api-access-dxz76\") pod \"certified-operators-l8n9b\" (UID: \"c27afe2a-3402-49f9-b985-45fe67e40d22\") " pod="openshift-marketplace/certified-operators-l8n9b" Mar 12 13:12:54 crc kubenswrapper[4778]: E0312 13:12:54.035202 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2026-03-12 13:12:54.535171246 +0000 UTC m=+192.983866642 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.063910 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x4bxj" Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.141079 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.141415 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxz76\" (UniqueName: \"kubernetes.io/projected/c27afe2a-3402-49f9-b985-45fe67e40d22-kube-api-access-dxz76\") pod \"certified-operators-l8n9b\" (UID: \"c27afe2a-3402-49f9-b985-45fe67e40d22\") " pod="openshift-marketplace/certified-operators-l8n9b" Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.141535 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c27afe2a-3402-49f9-b985-45fe67e40d22-utilities\") pod \"certified-operators-l8n9b\" (UID: \"c27afe2a-3402-49f9-b985-45fe67e40d22\") " 
pod="openshift-marketplace/certified-operators-l8n9b" Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.141574 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c27afe2a-3402-49f9-b985-45fe67e40d22-catalog-content\") pod \"certified-operators-l8n9b\" (UID: \"c27afe2a-3402-49f9-b985-45fe67e40d22\") " pod="openshift-marketplace/certified-operators-l8n9b" Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.142032 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c27afe2a-3402-49f9-b985-45fe67e40d22-catalog-content\") pod \"certified-operators-l8n9b\" (UID: \"c27afe2a-3402-49f9-b985-45fe67e40d22\") " pod="openshift-marketplace/certified-operators-l8n9b" Mar 12 13:12:54 crc kubenswrapper[4778]: E0312 13:12:54.142328 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 13:12:54.642313632 +0000 UTC m=+193.091009028 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxrx4" (UID: "51ee714f-fb23-4420-9e70-1b3134eea18e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.142978 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c27afe2a-3402-49f9-b985-45fe67e40d22-utilities\") pod \"certified-operators-l8n9b\" (UID: \"c27afe2a-3402-49f9-b985-45fe67e40d22\") " pod="openshift-marketplace/certified-operators-l8n9b" Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.177405 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-khr6h"] Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.178579 4778 ???:1] "http: TLS handshake error from 192.168.126.11:50664: no serving certificate available for the kubelet" Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.180811 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxz76\" (UniqueName: \"kubernetes.io/projected/c27afe2a-3402-49f9-b985-45fe67e40d22-kube-api-access-dxz76\") pod \"certified-operators-l8n9b\" (UID: \"c27afe2a-3402-49f9-b985-45fe67e40d22\") " pod="openshift-marketplace/certified-operators-l8n9b" Mar 12 13:12:54 crc kubenswrapper[4778]: W0312 13:12:54.191323 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d185732_cd6b_44c6_b4db_ee9ade00c683.slice/crio-f4257f2b5ae0b8d1695cb20eed3d7af4ca3c14b5f906e52fd4e46f8237158ff5 WatchSource:0}: Error finding container 
f4257f2b5ae0b8d1695cb20eed3d7af4ca3c14b5f906e52fd4e46f8237158ff5: Status 404 returned error can't find the container with id f4257f2b5ae0b8d1695cb20eed3d7af4ca3c14b5f906e52fd4e46f8237158ff5 Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.245009 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 13:12:54 crc kubenswrapper[4778]: E0312 13:12:54.245456 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 13:12:54.745440935 +0000 UTC m=+193.194136331 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.284693 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-pgrb5"] Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.284940 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-pgrb5" podUID="06bbf7b7-3e40-4aa0-a3db-a56897f5488c" containerName="controller-manager" containerID="cri-o://1d59a8a49996a8759d79ab136fc846582c037b05e8fb37ea5f281de757e513e2" 
gracePeriod=30 Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.308553 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l8n9b" Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.326109 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpgxh"] Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.327653 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpgxh" podUID="06ffdff1-2f10-4f38-b7fd-b98e883bbc63" containerName="route-controller-manager" containerID="cri-o://c400bf292471252407338cf73137e8439d1cb8b7e278bdf9b5b3d6aae90e459c" gracePeriod=30 Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.347703 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" Mar 12 13:12:54 crc kubenswrapper[4778]: E0312 13:12:54.348068 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 13:12:54.848056764 +0000 UTC m=+193.296752160 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxrx4" (UID: "51ee714f-fb23-4420-9e70-1b3134eea18e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.447581 4778 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.454775 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 13:12:54 crc kubenswrapper[4778]: E0312 13:12:54.454919 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-12 13:12:54.954898511 +0000 UTC m=+193.403593907 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:54 crc kubenswrapper[4778]: E0312 13:12:54.455382 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-12 13:12:54.955374285 +0000 UTC m=+193.404069671 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fxrx4" (UID: "51ee714f-fb23-4420-9e70-1b3134eea18e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.456267 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.461411 4778 reconciler.go:161] "OperationExecutor.RegisterPlugin started" 
plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-03-12T13:12:54.447835992Z","Handler":null,"Name":""} Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.465419 4778 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.465452 4778 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.502049 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qx9d8"] Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.509857 4778 patch_prober.go:28] interesting pod/router-default-5444994796-ms5xq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 13:12:54 crc kubenswrapper[4778]: [-]has-synced failed: reason withheld Mar 12 13:12:54 crc kubenswrapper[4778]: [+]process-running ok Mar 12 13:12:54 crc kubenswrapper[4778]: healthz check failed Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.509897 4778 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ms5xq" podUID="5c8d947a-b62b-4eb9-81d7-94530285e8dc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.558080 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.570841 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.586732 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sjk9p"] Mar 12 13:12:54 crc kubenswrapper[4778]: W0312 13:12:54.605315 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b3fb69e_dd4f_4787_a207_4fe25106f9e7.slice/crio-54d14e24e2014de0b1846a5aa684b84b3bf2783c8e0d47fb26e64cb9f10b0a8d WatchSource:0}: Error finding container 54d14e24e2014de0b1846a5aa684b84b3bf2783c8e0d47fb26e64cb9f10b0a8d: Status 404 returned error can't find the container with id 54d14e24e2014de0b1846a5aa684b84b3bf2783c8e0d47fb26e64cb9f10b0a8d Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.659931 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.665596 4778 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.665626 4778 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.763108 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sjk9p" event={"ID":"3b3fb69e-dd4f-4787-a207-4fe25106f9e7","Type":"ContainerStarted","Data":"54d14e24e2014de0b1846a5aa684b84b3bf2783c8e0d47fb26e64cb9f10b0a8d"} Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.770633 4778 generic.go:334] "Generic (PLEG): container finished" podID="1d185732-cd6b-44c6-b4db-ee9ade00c683" containerID="05d961ad3b7bd74a33e24a693f2775dd8f5c4483b25df2fe323f0e88cb5ff934" exitCode=0 Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.770851 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-khr6h" event={"ID":"1d185732-cd6b-44c6-b4db-ee9ade00c683","Type":"ContainerDied","Data":"05d961ad3b7bd74a33e24a693f2775dd8f5c4483b25df2fe323f0e88cb5ff934"} Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.770916 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-khr6h" event={"ID":"1d185732-cd6b-44c6-b4db-ee9ade00c683","Type":"ContainerStarted","Data":"f4257f2b5ae0b8d1695cb20eed3d7af4ca3c14b5f906e52fd4e46f8237158ff5"} Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.775870 4778 generic.go:334] "Generic (PLEG): container finished" 
podID="06ffdff1-2f10-4f38-b7fd-b98e883bbc63" containerID="c400bf292471252407338cf73137e8439d1cb8b7e278bdf9b5b3d6aae90e459c" exitCode=0 Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.775929 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpgxh" event={"ID":"06ffdff1-2f10-4f38-b7fd-b98e883bbc63","Type":"ContainerDied","Data":"c400bf292471252407338cf73137e8439d1cb8b7e278bdf9b5b3d6aae90e459c"} Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.779877 4778 generic.go:334] "Generic (PLEG): container finished" podID="06bbf7b7-3e40-4aa0-a3db-a56897f5488c" containerID="1d59a8a49996a8759d79ab136fc846582c037b05e8fb37ea5f281de757e513e2" exitCode=0 Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.779935 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-pgrb5" event={"ID":"06bbf7b7-3e40-4aa0-a3db-a56897f5488c","Type":"ContainerDied","Data":"1d59a8a49996a8759d79ab136fc846582c037b05e8fb37ea5f281de757e513e2"} Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.784665 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-cp2lw" event={"ID":"2be5b8df-aaff-4a2b-9b54-78a7e58bc420","Type":"ContainerStarted","Data":"7bf4ff3391edb99ce741e5397b58436a4780a99f8d39dcd8b173aff2b9bac9fa"} Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.786329 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qx9d8" event={"ID":"651601bd-18fe-4ca1-9c61-481ca568d022","Type":"ContainerStarted","Data":"592ed663fa0a363547ba9675a7740b1982ac31820675fa1bc6b541164ee13dff"} Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.790804 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fxrx4\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.833171 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.833781 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.843724 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.844018 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.873583 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.910079 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpgxh" Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.913848 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.957868 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l8n9b"] Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.962263 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-pgrb5" Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.965392 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06ffdff1-2f10-4f38-b7fd-b98e883bbc63-config\") pod \"06ffdff1-2f10-4f38-b7fd-b98e883bbc63\" (UID: \"06ffdff1-2f10-4f38-b7fd-b98e883bbc63\") " Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.965447 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k56zt\" (UniqueName: \"kubernetes.io/projected/06ffdff1-2f10-4f38-b7fd-b98e883bbc63-kube-api-access-k56zt\") pod \"06ffdff1-2f10-4f38-b7fd-b98e883bbc63\" (UID: \"06ffdff1-2f10-4f38-b7fd-b98e883bbc63\") " Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.965469 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06ffdff1-2f10-4f38-b7fd-b98e883bbc63-serving-cert\") pod \"06ffdff1-2f10-4f38-b7fd-b98e883bbc63\" (UID: \"06ffdff1-2f10-4f38-b7fd-b98e883bbc63\") " Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.965520 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/06ffdff1-2f10-4f38-b7fd-b98e883bbc63-client-ca\") pod \"06ffdff1-2f10-4f38-b7fd-b98e883bbc63\" (UID: \"06ffdff1-2f10-4f38-b7fd-b98e883bbc63\") " Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.965649 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/593bf507-3097-460c-aa84-c680a76f3ffe-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"593bf507-3097-460c-aa84-c680a76f3ffe\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.965666 4778 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/593bf507-3097-460c-aa84-c680a76f3ffe-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"593bf507-3097-460c-aa84-c680a76f3ffe\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.966472 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06ffdff1-2f10-4f38-b7fd-b98e883bbc63-config" (OuterVolumeSpecName: "config") pod "06ffdff1-2f10-4f38-b7fd-b98e883bbc63" (UID: "06ffdff1-2f10-4f38-b7fd-b98e883bbc63"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.966999 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06ffdff1-2f10-4f38-b7fd-b98e883bbc63-client-ca" (OuterVolumeSpecName: "client-ca") pod "06ffdff1-2f10-4f38-b7fd-b98e883bbc63" (UID: "06ffdff1-2f10-4f38-b7fd-b98e883bbc63"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.972524 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06ffdff1-2f10-4f38-b7fd-b98e883bbc63-kube-api-access-k56zt" (OuterVolumeSpecName: "kube-api-access-k56zt") pod "06ffdff1-2f10-4f38-b7fd-b98e883bbc63" (UID: "06ffdff1-2f10-4f38-b7fd-b98e883bbc63"). InnerVolumeSpecName "kube-api-access-k56zt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:12:54 crc kubenswrapper[4778]: I0312 13:12:54.979400 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06ffdff1-2f10-4f38-b7fd-b98e883bbc63-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "06ffdff1-2f10-4f38-b7fd-b98e883bbc63" (UID: "06ffdff1-2f10-4f38-b7fd-b98e883bbc63"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.066344 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06bbf7b7-3e40-4aa0-a3db-a56897f5488c-serving-cert\") pod \"06bbf7b7-3e40-4aa0-a3db-a56897f5488c\" (UID: \"06bbf7b7-3e40-4aa0-a3db-a56897f5488c\") " Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.066643 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/06bbf7b7-3e40-4aa0-a3db-a56897f5488c-client-ca\") pod \"06bbf7b7-3e40-4aa0-a3db-a56897f5488c\" (UID: \"06bbf7b7-3e40-4aa0-a3db-a56897f5488c\") " Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.066715 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/06bbf7b7-3e40-4aa0-a3db-a56897f5488c-proxy-ca-bundles\") pod \"06bbf7b7-3e40-4aa0-a3db-a56897f5488c\" (UID: \"06bbf7b7-3e40-4aa0-a3db-a56897f5488c\") " Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.066737 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9bl9\" (UniqueName: \"kubernetes.io/projected/06bbf7b7-3e40-4aa0-a3db-a56897f5488c-kube-api-access-c9bl9\") pod \"06bbf7b7-3e40-4aa0-a3db-a56897f5488c\" (UID: \"06bbf7b7-3e40-4aa0-a3db-a56897f5488c\") " Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.066761 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06bbf7b7-3e40-4aa0-a3db-a56897f5488c-config\") pod \"06bbf7b7-3e40-4aa0-a3db-a56897f5488c\" (UID: \"06bbf7b7-3e40-4aa0-a3db-a56897f5488c\") " Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.067434 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/06bbf7b7-3e40-4aa0-a3db-a56897f5488c-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "06bbf7b7-3e40-4aa0-a3db-a56897f5488c" (UID: "06bbf7b7-3e40-4aa0-a3db-a56897f5488c"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.067631 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/593bf507-3097-460c-aa84-c680a76f3ffe-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"593bf507-3097-460c-aa84-c680a76f3ffe\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.067655 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/593bf507-3097-460c-aa84-c680a76f3ffe-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"593bf507-3097-460c-aa84-c680a76f3ffe\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.067745 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06bbf7b7-3e40-4aa0-a3db-a56897f5488c-client-ca" (OuterVolumeSpecName: "client-ca") pod "06bbf7b7-3e40-4aa0-a3db-a56897f5488c" (UID: "06bbf7b7-3e40-4aa0-a3db-a56897f5488c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.069909 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06bbf7b7-3e40-4aa0-a3db-a56897f5488c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "06bbf7b7-3e40-4aa0-a3db-a56897f5488c" (UID: "06bbf7b7-3e40-4aa0-a3db-a56897f5488c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.069980 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06bbf7b7-3e40-4aa0-a3db-a56897f5488c-config" (OuterVolumeSpecName: "config") pod "06bbf7b7-3e40-4aa0-a3db-a56897f5488c" (UID: "06bbf7b7-3e40-4aa0-a3db-a56897f5488c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.070364 4778 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/06ffdff1-2f10-4f38-b7fd-b98e883bbc63-client-ca\") on node \"crc\" DevicePath \"\"" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.070525 4778 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06ffdff1-2f10-4f38-b7fd-b98e883bbc63-config\") on node \"crc\" DevicePath \"\"" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.070533 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06bbf7b7-3e40-4aa0-a3db-a56897f5488c-kube-api-access-c9bl9" (OuterVolumeSpecName: "kube-api-access-c9bl9") pod "06bbf7b7-3e40-4aa0-a3db-a56897f5488c" (UID: "06bbf7b7-3e40-4aa0-a3db-a56897f5488c"). InnerVolumeSpecName "kube-api-access-c9bl9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.070549 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k56zt\" (UniqueName: \"kubernetes.io/projected/06ffdff1-2f10-4f38-b7fd-b98e883bbc63-kube-api-access-k56zt\") on node \"crc\" DevicePath \"\"" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.070568 4778 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06ffdff1-2f10-4f38-b7fd-b98e883bbc63-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.070580 4778 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/06bbf7b7-3e40-4aa0-a3db-a56897f5488c-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.070469 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/593bf507-3097-460c-aa84-c680a76f3ffe-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"593bf507-3097-460c-aa84-c680a76f3ffe\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.073072 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555340-7tvjm" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.087210 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/593bf507-3097-460c-aa84-c680a76f3ffe-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"593bf507-3097-460c-aa84-c680a76f3ffe\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.171601 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a240fd7b-5854-4548-a847-e5590111964b-secret-volume\") pod \"a240fd7b-5854-4548-a847-e5590111964b\" (UID: \"a240fd7b-5854-4548-a847-e5590111964b\") " Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.171656 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88x7r\" (UniqueName: \"kubernetes.io/projected/a240fd7b-5854-4548-a847-e5590111964b-kube-api-access-88x7r\") pod \"a240fd7b-5854-4548-a847-e5590111964b\" (UID: \"a240fd7b-5854-4548-a847-e5590111964b\") " Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.171691 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a240fd7b-5854-4548-a847-e5590111964b-config-volume\") pod \"a240fd7b-5854-4548-a847-e5590111964b\" (UID: \"a240fd7b-5854-4548-a847-e5590111964b\") " Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.171993 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9bl9\" (UniqueName: \"kubernetes.io/projected/06bbf7b7-3e40-4aa0-a3db-a56897f5488c-kube-api-access-c9bl9\") on node \"crc\" DevicePath \"\"" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.172010 4778 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/06bbf7b7-3e40-4aa0-a3db-a56897f5488c-config\") on node \"crc\" DevicePath \"\"" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.172018 4778 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06bbf7b7-3e40-4aa0-a3db-a56897f5488c-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.172026 4778 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/06bbf7b7-3e40-4aa0-a3db-a56897f5488c-client-ca\") on node \"crc\" DevicePath \"\"" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.172425 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a240fd7b-5854-4548-a847-e5590111964b-config-volume" (OuterVolumeSpecName: "config-volume") pod "a240fd7b-5854-4548-a847-e5590111964b" (UID: "a240fd7b-5854-4548-a847-e5590111964b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.176539 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a240fd7b-5854-4548-a847-e5590111964b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a240fd7b-5854-4548-a847-e5590111964b" (UID: "a240fd7b-5854-4548-a847-e5590111964b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.182311 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a240fd7b-5854-4548-a847-e5590111964b-kube-api-access-88x7r" (OuterVolumeSpecName: "kube-api-access-88x7r") pod "a240fd7b-5854-4548-a847-e5590111964b" (UID: "a240fd7b-5854-4548-a847-e5590111964b"). InnerVolumeSpecName "kube-api-access-88x7r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.192975 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.206995 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7b56f5b6c6-7q5xm"] Mar 12 13:12:55 crc kubenswrapper[4778]: E0312 13:12:55.207236 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06bbf7b7-3e40-4aa0-a3db-a56897f5488c" containerName="controller-manager" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.207250 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="06bbf7b7-3e40-4aa0-a3db-a56897f5488c" containerName="controller-manager" Mar 12 13:12:55 crc kubenswrapper[4778]: E0312 13:12:55.207267 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06ffdff1-2f10-4f38-b7fd-b98e883bbc63" containerName="route-controller-manager" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.207277 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="06ffdff1-2f10-4f38-b7fd-b98e883bbc63" containerName="route-controller-manager" Mar 12 13:12:55 crc kubenswrapper[4778]: E0312 13:12:55.207288 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a240fd7b-5854-4548-a847-e5590111964b" containerName="collect-profiles" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.207296 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="a240fd7b-5854-4548-a847-e5590111964b" containerName="collect-profiles" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.207414 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="06bbf7b7-3e40-4aa0-a3db-a56897f5488c" containerName="controller-manager" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.207433 4778 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="a240fd7b-5854-4548-a847-e5590111964b" containerName="collect-profiles" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.207450 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="06ffdff1-2f10-4f38-b7fd-b98e883bbc63" containerName="route-controller-manager" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.208218 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7b56f5b6c6-7q5xm" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.220471 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7b56f5b6c6-7q5xm"] Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.226980 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f6cfcbfb9-jcsqb"] Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.227720 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f6cfcbfb9-jcsqb" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.232057 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f6cfcbfb9-jcsqb"] Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.273630 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lwnl\" (UniqueName: \"kubernetes.io/projected/b330900c-c52a-4e88-a2d2-38e34f837004-kube-api-access-4lwnl\") pod \"controller-manager-7b56f5b6c6-7q5xm\" (UID: \"b330900c-c52a-4e88-a2d2-38e34f837004\") " pod="openshift-controller-manager/controller-manager-7b56f5b6c6-7q5xm" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.273675 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1fa0e405-9e9d-49fc-b2aa-17ca5c529a74-client-ca\") pod \"route-controller-manager-5f6cfcbfb9-jcsqb\" (UID: \"1fa0e405-9e9d-49fc-b2aa-17ca5c529a74\") " pod="openshift-route-controller-manager/route-controller-manager-5f6cfcbfb9-jcsqb" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.273708 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b330900c-c52a-4e88-a2d2-38e34f837004-client-ca\") pod \"controller-manager-7b56f5b6c6-7q5xm\" (UID: \"b330900c-c52a-4e88-a2d2-38e34f837004\") " pod="openshift-controller-manager/controller-manager-7b56f5b6c6-7q5xm" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.273724 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b330900c-c52a-4e88-a2d2-38e34f837004-proxy-ca-bundles\") pod \"controller-manager-7b56f5b6c6-7q5xm\" (UID: 
\"b330900c-c52a-4e88-a2d2-38e34f837004\") " pod="openshift-controller-manager/controller-manager-7b56f5b6c6-7q5xm" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.273776 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qnmc\" (UniqueName: \"kubernetes.io/projected/1fa0e405-9e9d-49fc-b2aa-17ca5c529a74-kube-api-access-2qnmc\") pod \"route-controller-manager-5f6cfcbfb9-jcsqb\" (UID: \"1fa0e405-9e9d-49fc-b2aa-17ca5c529a74\") " pod="openshift-route-controller-manager/route-controller-manager-5f6cfcbfb9-jcsqb" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.273794 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fa0e405-9e9d-49fc-b2aa-17ca5c529a74-config\") pod \"route-controller-manager-5f6cfcbfb9-jcsqb\" (UID: \"1fa0e405-9e9d-49fc-b2aa-17ca5c529a74\") " pod="openshift-route-controller-manager/route-controller-manager-5f6cfcbfb9-jcsqb" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.273814 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b330900c-c52a-4e88-a2d2-38e34f837004-config\") pod \"controller-manager-7b56f5b6c6-7q5xm\" (UID: \"b330900c-c52a-4e88-a2d2-38e34f837004\") " pod="openshift-controller-manager/controller-manager-7b56f5b6c6-7q5xm" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.273843 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b330900c-c52a-4e88-a2d2-38e34f837004-serving-cert\") pod \"controller-manager-7b56f5b6c6-7q5xm\" (UID: \"b330900c-c52a-4e88-a2d2-38e34f837004\") " pod="openshift-controller-manager/controller-manager-7b56f5b6c6-7q5xm" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.273862 4778 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fa0e405-9e9d-49fc-b2aa-17ca5c529a74-serving-cert\") pod \"route-controller-manager-5f6cfcbfb9-jcsqb\" (UID: \"1fa0e405-9e9d-49fc-b2aa-17ca5c529a74\") " pod="openshift-route-controller-manager/route-controller-manager-5f6cfcbfb9-jcsqb" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.273903 4778 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a240fd7b-5854-4548-a847-e5590111964b-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.273913 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88x7r\" (UniqueName: \"kubernetes.io/projected/a240fd7b-5854-4548-a847-e5590111964b-kube-api-access-88x7r\") on node \"crc\" DevicePath \"\"" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.273924 4778 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a240fd7b-5854-4548-a847-e5590111964b-config-volume\") on node \"crc\" DevicePath \"\"" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.304781 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8xksl"] Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.305743 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8xksl" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.311624 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.322363 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8xksl"] Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.376085 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b330900c-c52a-4e88-a2d2-38e34f837004-config\") pod \"controller-manager-7b56f5b6c6-7q5xm\" (UID: \"b330900c-c52a-4e88-a2d2-38e34f837004\") " pod="openshift-controller-manager/controller-manager-7b56f5b6c6-7q5xm" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.376163 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b330900c-c52a-4e88-a2d2-38e34f837004-serving-cert\") pod \"controller-manager-7b56f5b6c6-7q5xm\" (UID: \"b330900c-c52a-4e88-a2d2-38e34f837004\") " pod="openshift-controller-manager/controller-manager-7b56f5b6c6-7q5xm" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.376212 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de4557b4-7957-47a0-8c42-845be1fa0f32-utilities\") pod \"redhat-marketplace-8xksl\" (UID: \"de4557b4-7957-47a0-8c42-845be1fa0f32\") " pod="openshift-marketplace/redhat-marketplace-8xksl" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.376237 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fa0e405-9e9d-49fc-b2aa-17ca5c529a74-serving-cert\") pod \"route-controller-manager-5f6cfcbfb9-jcsqb\" (UID: 
\"1fa0e405-9e9d-49fc-b2aa-17ca5c529a74\") " pod="openshift-route-controller-manager/route-controller-manager-5f6cfcbfb9-jcsqb" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.376281 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lwnl\" (UniqueName: \"kubernetes.io/projected/b330900c-c52a-4e88-a2d2-38e34f837004-kube-api-access-4lwnl\") pod \"controller-manager-7b56f5b6c6-7q5xm\" (UID: \"b330900c-c52a-4e88-a2d2-38e34f837004\") " pod="openshift-controller-manager/controller-manager-7b56f5b6c6-7q5xm" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.376313 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1fa0e405-9e9d-49fc-b2aa-17ca5c529a74-client-ca\") pod \"route-controller-manager-5f6cfcbfb9-jcsqb\" (UID: \"1fa0e405-9e9d-49fc-b2aa-17ca5c529a74\") " pod="openshift-route-controller-manager/route-controller-manager-5f6cfcbfb9-jcsqb" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.376336 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kch8z\" (UniqueName: \"kubernetes.io/projected/de4557b4-7957-47a0-8c42-845be1fa0f32-kube-api-access-kch8z\") pod \"redhat-marketplace-8xksl\" (UID: \"de4557b4-7957-47a0-8c42-845be1fa0f32\") " pod="openshift-marketplace/redhat-marketplace-8xksl" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.376373 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b330900c-c52a-4e88-a2d2-38e34f837004-client-ca\") pod \"controller-manager-7b56f5b6c6-7q5xm\" (UID: \"b330900c-c52a-4e88-a2d2-38e34f837004\") " pod="openshift-controller-manager/controller-manager-7b56f5b6c6-7q5xm" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.376403 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/b330900c-c52a-4e88-a2d2-38e34f837004-proxy-ca-bundles\") pod \"controller-manager-7b56f5b6c6-7q5xm\" (UID: \"b330900c-c52a-4e88-a2d2-38e34f837004\") " pod="openshift-controller-manager/controller-manager-7b56f5b6c6-7q5xm" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.376443 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de4557b4-7957-47a0-8c42-845be1fa0f32-catalog-content\") pod \"redhat-marketplace-8xksl\" (UID: \"de4557b4-7957-47a0-8c42-845be1fa0f32\") " pod="openshift-marketplace/redhat-marketplace-8xksl" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.376480 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qnmc\" (UniqueName: \"kubernetes.io/projected/1fa0e405-9e9d-49fc-b2aa-17ca5c529a74-kube-api-access-2qnmc\") pod \"route-controller-manager-5f6cfcbfb9-jcsqb\" (UID: \"1fa0e405-9e9d-49fc-b2aa-17ca5c529a74\") " pod="openshift-route-controller-manager/route-controller-manager-5f6cfcbfb9-jcsqb" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.376505 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fa0e405-9e9d-49fc-b2aa-17ca5c529a74-config\") pod \"route-controller-manager-5f6cfcbfb9-jcsqb\" (UID: \"1fa0e405-9e9d-49fc-b2aa-17ca5c529a74\") " pod="openshift-route-controller-manager/route-controller-manager-5f6cfcbfb9-jcsqb" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.378507 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b330900c-c52a-4e88-a2d2-38e34f837004-config\") pod \"controller-manager-7b56f5b6c6-7q5xm\" (UID: \"b330900c-c52a-4e88-a2d2-38e34f837004\") " pod="openshift-controller-manager/controller-manager-7b56f5b6c6-7q5xm" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 
13:12:55.379090 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fa0e405-9e9d-49fc-b2aa-17ca5c529a74-config\") pod \"route-controller-manager-5f6cfcbfb9-jcsqb\" (UID: \"1fa0e405-9e9d-49fc-b2aa-17ca5c529a74\") " pod="openshift-route-controller-manager/route-controller-manager-5f6cfcbfb9-jcsqb" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.382751 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b330900c-c52a-4e88-a2d2-38e34f837004-serving-cert\") pod \"controller-manager-7b56f5b6c6-7q5xm\" (UID: \"b330900c-c52a-4e88-a2d2-38e34f837004\") " pod="openshift-controller-manager/controller-manager-7b56f5b6c6-7q5xm" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.383438 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b330900c-c52a-4e88-a2d2-38e34f837004-client-ca\") pod \"controller-manager-7b56f5b6c6-7q5xm\" (UID: \"b330900c-c52a-4e88-a2d2-38e34f837004\") " pod="openshift-controller-manager/controller-manager-7b56f5b6c6-7q5xm" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.384233 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fa0e405-9e9d-49fc-b2aa-17ca5c529a74-serving-cert\") pod \"route-controller-manager-5f6cfcbfb9-jcsqb\" (UID: \"1fa0e405-9e9d-49fc-b2aa-17ca5c529a74\") " pod="openshift-route-controller-manager/route-controller-manager-5f6cfcbfb9-jcsqb" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.384336 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b330900c-c52a-4e88-a2d2-38e34f837004-proxy-ca-bundles\") pod \"controller-manager-7b56f5b6c6-7q5xm\" (UID: \"b330900c-c52a-4e88-a2d2-38e34f837004\") " 
pod="openshift-controller-manager/controller-manager-7b56f5b6c6-7q5xm" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.385073 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1fa0e405-9e9d-49fc-b2aa-17ca5c529a74-client-ca\") pod \"route-controller-manager-5f6cfcbfb9-jcsqb\" (UID: \"1fa0e405-9e9d-49fc-b2aa-17ca5c529a74\") " pod="openshift-route-controller-manager/route-controller-manager-5f6cfcbfb9-jcsqb" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.401741 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fxrx4"] Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.407947 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lwnl\" (UniqueName: \"kubernetes.io/projected/b330900c-c52a-4e88-a2d2-38e34f837004-kube-api-access-4lwnl\") pod \"controller-manager-7b56f5b6c6-7q5xm\" (UID: \"b330900c-c52a-4e88-a2d2-38e34f837004\") " pod="openshift-controller-manager/controller-manager-7b56f5b6c6-7q5xm" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.408854 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qnmc\" (UniqueName: \"kubernetes.io/projected/1fa0e405-9e9d-49fc-b2aa-17ca5c529a74-kube-api-access-2qnmc\") pod \"route-controller-manager-5f6cfcbfb9-jcsqb\" (UID: \"1fa0e405-9e9d-49fc-b2aa-17ca5c529a74\") " pod="openshift-route-controller-manager/route-controller-manager-5f6cfcbfb9-jcsqb" Mar 12 13:12:55 crc kubenswrapper[4778]: W0312 13:12:55.436466 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51ee714f_fb23_4420_9e70_1b3134eea18e.slice/crio-9fdf9bc3368d582f75afb64ef1bd7b59c9e3cd5fe63a9b2265425474dba3a3b4 WatchSource:0}: Error finding container 9fdf9bc3368d582f75afb64ef1bd7b59c9e3cd5fe63a9b2265425474dba3a3b4: Status 404 returned error can't 
find the container with id 9fdf9bc3368d582f75afb64ef1bd7b59c9e3cd5fe63a9b2265425474dba3a3b4 Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.477883 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de4557b4-7957-47a0-8c42-845be1fa0f32-catalog-content\") pod \"redhat-marketplace-8xksl\" (UID: \"de4557b4-7957-47a0-8c42-845be1fa0f32\") " pod="openshift-marketplace/redhat-marketplace-8xksl" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.477954 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de4557b4-7957-47a0-8c42-845be1fa0f32-utilities\") pod \"redhat-marketplace-8xksl\" (UID: \"de4557b4-7957-47a0-8c42-845be1fa0f32\") " pod="openshift-marketplace/redhat-marketplace-8xksl" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.477994 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kch8z\" (UniqueName: \"kubernetes.io/projected/de4557b4-7957-47a0-8c42-845be1fa0f32-kube-api-access-kch8z\") pod \"redhat-marketplace-8xksl\" (UID: \"de4557b4-7957-47a0-8c42-845be1fa0f32\") " pod="openshift-marketplace/redhat-marketplace-8xksl" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.479059 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de4557b4-7957-47a0-8c42-845be1fa0f32-utilities\") pod \"redhat-marketplace-8xksl\" (UID: \"de4557b4-7957-47a0-8c42-845be1fa0f32\") " pod="openshift-marketplace/redhat-marketplace-8xksl" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.479084 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de4557b4-7957-47a0-8c42-845be1fa0f32-catalog-content\") pod \"redhat-marketplace-8xksl\" (UID: \"de4557b4-7957-47a0-8c42-845be1fa0f32\") " 
pod="openshift-marketplace/redhat-marketplace-8xksl" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.493026 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kch8z\" (UniqueName: \"kubernetes.io/projected/de4557b4-7957-47a0-8c42-845be1fa0f32-kube-api-access-kch8z\") pod \"redhat-marketplace-8xksl\" (UID: \"de4557b4-7957-47a0-8c42-845be1fa0f32\") " pod="openshift-marketplace/redhat-marketplace-8xksl" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.509048 4778 patch_prober.go:28] interesting pod/router-default-5444994796-ms5xq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 13:12:55 crc kubenswrapper[4778]: [-]has-synced failed: reason withheld Mar 12 13:12:55 crc kubenswrapper[4778]: [+]process-running ok Mar 12 13:12:55 crc kubenswrapper[4778]: healthz check failed Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.509114 4778 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ms5xq" podUID="5c8d947a-b62b-4eb9-81d7-94530285e8dc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.563606 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7b56f5b6c6-7q5xm" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.577137 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f6cfcbfb9-jcsqb" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.633534 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8xksl" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.697915 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rtjz5"] Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.698832 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rtjz5" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.709410 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rtjz5"] Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.719617 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.784263 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9bef112-9bef-4ce2-abd8-054b4d671658-catalog-content\") pod \"redhat-marketplace-rtjz5\" (UID: \"b9bef112-9bef-4ce2-abd8-054b4d671658\") " pod="openshift-marketplace/redhat-marketplace-rtjz5" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.784372 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjm7s\" (UniqueName: \"kubernetes.io/projected/b9bef112-9bef-4ce2-abd8-054b4d671658-kube-api-access-gjm7s\") pod \"redhat-marketplace-rtjz5\" (UID: \"b9bef112-9bef-4ce2-abd8-054b4d671658\") " pod="openshift-marketplace/redhat-marketplace-rtjz5" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.784439 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9bef112-9bef-4ce2-abd8-054b4d671658-utilities\") pod \"redhat-marketplace-rtjz5\" (UID: \"b9bef112-9bef-4ce2-abd8-054b4d671658\") " 
pod="openshift-marketplace/redhat-marketplace-rtjz5" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.815274 4778 generic.go:334] "Generic (PLEG): container finished" podID="c27afe2a-3402-49f9-b985-45fe67e40d22" containerID="beac9341cf9caf9b2899c0d3555998167e4413386821c255145cfe1b113c1402" exitCode=0 Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.815412 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l8n9b" event={"ID":"c27afe2a-3402-49f9-b985-45fe67e40d22","Type":"ContainerDied","Data":"beac9341cf9caf9b2899c0d3555998167e4413386821c255145cfe1b113c1402"} Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.815549 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l8n9b" event={"ID":"c27afe2a-3402-49f9-b985-45fe67e40d22","Type":"ContainerStarted","Data":"f3e464dc52992fdb0f0b53c632c09c98afcb767da1a2f76ffc34b25c53dcb6a3"} Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.817379 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-pgrb5" event={"ID":"06bbf7b7-3e40-4aa0-a3db-a56897f5488c","Type":"ContainerDied","Data":"baecc290d5904f2078cb76008ee3fad41b6baea1393aa1ce14dba9ed727aca24"} Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.817405 4778 scope.go:117] "RemoveContainer" containerID="1d59a8a49996a8759d79ab136fc846582c037b05e8fb37ea5f281de757e513e2" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.817486 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-pgrb5" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.838998 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7b56f5b6c6-7q5xm"] Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.857766 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-pgrb5"] Mar 12 13:12:55 crc kubenswrapper[4778]: W0312 13:12:55.862211 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb330900c_c52a_4e88_a2d2_38e34f837004.slice/crio-17004ca7182405d5ca539723da4c058b3a73eeda64b060c4ba9deebe55691d77 WatchSource:0}: Error finding container 17004ca7182405d5ca539723da4c058b3a73eeda64b060c4ba9deebe55691d77: Status 404 returned error can't find the container with id 17004ca7182405d5ca539723da4c058b3a73eeda64b060c4ba9deebe55691d77 Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.874136 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-pgrb5"] Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.875106 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-cp2lw" event={"ID":"2be5b8df-aaff-4a2b-9b54-78a7e58bc420","Type":"ContainerStarted","Data":"4810a5582bbaa1f78751f17e1a82841738eba7c02be4b8158216a27251deac87"} Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.886522 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9bef112-9bef-4ce2-abd8-054b4d671658-catalog-content\") pod \"redhat-marketplace-rtjz5\" (UID: \"b9bef112-9bef-4ce2-abd8-054b4d671658\") " pod="openshift-marketplace/redhat-marketplace-rtjz5" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.886611 4778 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjm7s\" (UniqueName: \"kubernetes.io/projected/b9bef112-9bef-4ce2-abd8-054b4d671658-kube-api-access-gjm7s\") pod \"redhat-marketplace-rtjz5\" (UID: \"b9bef112-9bef-4ce2-abd8-054b4d671658\") " pod="openshift-marketplace/redhat-marketplace-rtjz5" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.886692 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9bef112-9bef-4ce2-abd8-054b4d671658-utilities\") pod \"redhat-marketplace-rtjz5\" (UID: \"b9bef112-9bef-4ce2-abd8-054b4d671658\") " pod="openshift-marketplace/redhat-marketplace-rtjz5" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.887113 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9bef112-9bef-4ce2-abd8-054b4d671658-utilities\") pod \"redhat-marketplace-rtjz5\" (UID: \"b9bef112-9bef-4ce2-abd8-054b4d671658\") " pod="openshift-marketplace/redhat-marketplace-rtjz5" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.887413 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9bef112-9bef-4ce2-abd8-054b4d671658-catalog-content\") pod \"redhat-marketplace-rtjz5\" (UID: \"b9bef112-9bef-4ce2-abd8-054b4d671658\") " pod="openshift-marketplace/redhat-marketplace-rtjz5" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.892498 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sjk9p" event={"ID":"3b3fb69e-dd4f-4787-a207-4fe25106f9e7","Type":"ContainerDied","Data":"abeebebb9ab695d88020f3373974a8763b6d3a7633ca84c98e6d48516351c961"} Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.892036 4778 generic.go:334] "Generic (PLEG): container finished" podID="3b3fb69e-dd4f-4787-a207-4fe25106f9e7" 
containerID="abeebebb9ab695d88020f3373974a8763b6d3a7633ca84c98e6d48516351c961" exitCode=0 Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.898559 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" event={"ID":"51ee714f-fb23-4420-9e70-1b3134eea18e","Type":"ContainerStarted","Data":"29df7c95c025412716ae854e04324e43fa3cc12e2e3e9061ce1a3a4518451111"} Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.898600 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" event={"ID":"51ee714f-fb23-4420-9e70-1b3134eea18e","Type":"ContainerStarted","Data":"9fdf9bc3368d582f75afb64ef1bd7b59c9e3cd5fe63a9b2265425474dba3a3b4"} Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.898791 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.919001 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f6cfcbfb9-jcsqb"] Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.919035 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"593bf507-3097-460c-aa84-c680a76f3ffe","Type":"ContainerStarted","Data":"b74dd280126159ecf287aae9295c5a458084d9b654a5ee2ddd9fbd12aaafb12b"} Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.926081 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjm7s\" (UniqueName: \"kubernetes.io/projected/b9bef112-9bef-4ce2-abd8-054b4d671658-kube-api-access-gjm7s\") pod \"redhat-marketplace-rtjz5\" (UID: \"b9bef112-9bef-4ce2-abd8-054b4d671658\") " pod="openshift-marketplace/redhat-marketplace-rtjz5" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.949997 4778 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-cp2lw" podStartSLOduration=11.949842583 podStartE2EDuration="11.949842583s" podCreationTimestamp="2026-03-12 13:12:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:12:55.918848181 +0000 UTC m=+194.367543577" watchObservedRunningTime="2026-03-12 13:12:55.949842583 +0000 UTC m=+194.398537979" Mar 12 13:12:55 crc kubenswrapper[4778]: W0312 13:12:55.960409 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1fa0e405_9e9d_49fc_b2aa_17ca5c529a74.slice/crio-d26df510f14aa09fdd3bb4f8a1ddd8e15d52996d36167c7af13ebde7af2c80d1 WatchSource:0}: Error finding container d26df510f14aa09fdd3bb4f8a1ddd8e15d52996d36167c7af13ebde7af2c80d1: Status 404 returned error can't find the container with id d26df510f14aa09fdd3bb4f8a1ddd8e15d52996d36167c7af13ebde7af2c80d1 Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.960536 4778 generic.go:334] "Generic (PLEG): container finished" podID="651601bd-18fe-4ca1-9c61-481ca568d022" containerID="768c08538cc35f7dca92094b0ee56f8d00acc523e23bc32165393cb6d17f7cd2" exitCode=0 Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.960708 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qx9d8" event={"ID":"651601bd-18fe-4ca1-9c61-481ca568d022","Type":"ContainerDied","Data":"768c08538cc35f7dca92094b0ee56f8d00acc523e23bc32165393cb6d17f7cd2"} Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.965376 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8xksl"] Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.970751 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" podStartSLOduration=144.970733311 
podStartE2EDuration="2m24.970733311s" podCreationTimestamp="2026-03-12 13:10:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:12:55.970510945 +0000 UTC m=+194.419206351" watchObservedRunningTime="2026-03-12 13:12:55.970733311 +0000 UTC m=+194.419428707" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.972662 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpgxh" event={"ID":"06ffdff1-2f10-4f38-b7fd-b98e883bbc63","Type":"ContainerDied","Data":"58650ee0315d5aac50c162f5420d39a44557cb90a0d565bd9b299a8e4ee0251d"} Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.972722 4778 scope.go:117] "RemoveContainer" containerID="c400bf292471252407338cf73137e8439d1cb8b7e278bdf9b5b3d6aae90e459c" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.972823 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpgxh" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.979927 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555340-7tvjm" event={"ID":"a240fd7b-5854-4548-a847-e5590111964b","Type":"ContainerDied","Data":"aa7b81ba2e81ec0ae29a489fd430d937b62e78b889977bd807a40c2a99fb3190"} Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.979964 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa7b81ba2e81ec0ae29a489fd430d937b62e78b889977bd807a40c2a99fb3190" Mar 12 13:12:55 crc kubenswrapper[4778]: I0312 13:12:55.979972 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555340-7tvjm" Mar 12 13:12:56 crc kubenswrapper[4778]: I0312 13:12:56.023428 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rtjz5" Mar 12 13:12:56 crc kubenswrapper[4778]: I0312 13:12:56.051222 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpgxh"] Mar 12 13:12:56 crc kubenswrapper[4778]: I0312 13:12:56.069986 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zpgxh"] Mar 12 13:12:56 crc kubenswrapper[4778]: I0312 13:12:56.268742 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06bbf7b7-3e40-4aa0-a3db-a56897f5488c" path="/var/lib/kubelet/pods/06bbf7b7-3e40-4aa0-a3db-a56897f5488c/volumes" Mar 12 13:12:56 crc kubenswrapper[4778]: I0312 13:12:56.271105 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06ffdff1-2f10-4f38-b7fd-b98e883bbc63" path="/var/lib/kubelet/pods/06ffdff1-2f10-4f38-b7fd-b98e883bbc63/volumes" Mar 12 13:12:56 crc kubenswrapper[4778]: I0312 13:12:56.273999 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Mar 12 13:12:56 crc kubenswrapper[4778]: I0312 13:12:56.278325 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rtjz5"] Mar 12 13:12:56 crc kubenswrapper[4778]: W0312 13:12:56.286155 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9bef112_9bef_4ce2_abd8_054b4d671658.slice/crio-823af2a7e3b6063a4f30d49b66161c625efcb36bf067f9d539324e41889ea011 WatchSource:0}: Error finding container 
823af2a7e3b6063a4f30d49b66161c625efcb36bf067f9d539324e41889ea011: Status 404 returned error can't find the container with id 823af2a7e3b6063a4f30d49b66161c625efcb36bf067f9d539324e41889ea011 Mar 12 13:12:56 crc kubenswrapper[4778]: I0312 13:12:56.509177 4778 patch_prober.go:28] interesting pod/router-default-5444994796-ms5xq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 13:12:56 crc kubenswrapper[4778]: [-]has-synced failed: reason withheld Mar 12 13:12:56 crc kubenswrapper[4778]: [+]process-running ok Mar 12 13:12:56 crc kubenswrapper[4778]: healthz check failed Mar 12 13:12:56 crc kubenswrapper[4778]: I0312 13:12:56.509705 4778 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ms5xq" podUID="5c8d947a-b62b-4eb9-81d7-94530285e8dc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 13:12:56 crc kubenswrapper[4778]: I0312 13:12:56.690420 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5s5vs"] Mar 12 13:12:56 crc kubenswrapper[4778]: I0312 13:12:56.691513 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5s5vs" Mar 12 13:12:56 crc kubenswrapper[4778]: I0312 13:12:56.694587 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 12 13:12:56 crc kubenswrapper[4778]: I0312 13:12:56.738852 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5s5vs"] Mar 12 13:12:56 crc kubenswrapper[4778]: I0312 13:12:56.781604 4778 ???:1] "http: TLS handshake error from 192.168.126.11:50680: no serving certificate available for the kubelet" Mar 12 13:12:56 crc kubenswrapper[4778]: I0312 13:12:56.799530 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f438f2a3-60c0-4554-a49b-030545f8139c-utilities\") pod \"redhat-operators-5s5vs\" (UID: \"f438f2a3-60c0-4554-a49b-030545f8139c\") " pod="openshift-marketplace/redhat-operators-5s5vs" Mar 12 13:12:56 crc kubenswrapper[4778]: I0312 13:12:56.799573 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f438f2a3-60c0-4554-a49b-030545f8139c-catalog-content\") pod \"redhat-operators-5s5vs\" (UID: \"f438f2a3-60c0-4554-a49b-030545f8139c\") " pod="openshift-marketplace/redhat-operators-5s5vs" Mar 12 13:12:56 crc kubenswrapper[4778]: I0312 13:12:56.799869 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpfz2\" (UniqueName: \"kubernetes.io/projected/f438f2a3-60c0-4554-a49b-030545f8139c-kube-api-access-mpfz2\") pod \"redhat-operators-5s5vs\" (UID: \"f438f2a3-60c0-4554-a49b-030545f8139c\") " pod="openshift-marketplace/redhat-operators-5s5vs" Mar 12 13:12:56 crc kubenswrapper[4778]: I0312 13:12:56.901364 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f438f2a3-60c0-4554-a49b-030545f8139c-catalog-content\") pod \"redhat-operators-5s5vs\" (UID: \"f438f2a3-60c0-4554-a49b-030545f8139c\") " pod="openshift-marketplace/redhat-operators-5s5vs" Mar 12 13:12:56 crc kubenswrapper[4778]: I0312 13:12:56.901494 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpfz2\" (UniqueName: \"kubernetes.io/projected/f438f2a3-60c0-4554-a49b-030545f8139c-kube-api-access-mpfz2\") pod \"redhat-operators-5s5vs\" (UID: \"f438f2a3-60c0-4554-a49b-030545f8139c\") " pod="openshift-marketplace/redhat-operators-5s5vs" Mar 12 13:12:56 crc kubenswrapper[4778]: I0312 13:12:56.901537 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f438f2a3-60c0-4554-a49b-030545f8139c-utilities\") pod \"redhat-operators-5s5vs\" (UID: \"f438f2a3-60c0-4554-a49b-030545f8139c\") " pod="openshift-marketplace/redhat-operators-5s5vs" Mar 12 13:12:56 crc kubenswrapper[4778]: I0312 13:12:56.902384 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f438f2a3-60c0-4554-a49b-030545f8139c-utilities\") pod \"redhat-operators-5s5vs\" (UID: \"f438f2a3-60c0-4554-a49b-030545f8139c\") " pod="openshift-marketplace/redhat-operators-5s5vs" Mar 12 13:12:56 crc kubenswrapper[4778]: I0312 13:12:56.902473 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f438f2a3-60c0-4554-a49b-030545f8139c-catalog-content\") pod \"redhat-operators-5s5vs\" (UID: \"f438f2a3-60c0-4554-a49b-030545f8139c\") " pod="openshift-marketplace/redhat-operators-5s5vs" Mar 12 13:12:56 crc kubenswrapper[4778]: I0312 13:12:56.921386 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpfz2\" (UniqueName: 
\"kubernetes.io/projected/f438f2a3-60c0-4554-a49b-030545f8139c-kube-api-access-mpfz2\") pod \"redhat-operators-5s5vs\" (UID: \"f438f2a3-60c0-4554-a49b-030545f8139c\") " pod="openshift-marketplace/redhat-operators-5s5vs" Mar 12 13:12:56 crc kubenswrapper[4778]: I0312 13:12:56.992428 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b56f5b6c6-7q5xm" event={"ID":"b330900c-c52a-4e88-a2d2-38e34f837004","Type":"ContainerStarted","Data":"af0097b4c8ffcf21c4d0f3d542c30c13b992c9bb5a36537354858b4fa3539991"} Mar 12 13:12:56 crc kubenswrapper[4778]: I0312 13:12:56.992483 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b56f5b6c6-7q5xm" event={"ID":"b330900c-c52a-4e88-a2d2-38e34f837004","Type":"ContainerStarted","Data":"17004ca7182405d5ca539723da4c058b3a73eeda64b060c4ba9deebe55691d77"} Mar 12 13:12:56 crc kubenswrapper[4778]: I0312 13:12:56.992895 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7b56f5b6c6-7q5xm" Mar 12 13:12:56 crc kubenswrapper[4778]: I0312 13:12:56.998002 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7b56f5b6c6-7q5xm" Mar 12 13:12:56 crc kubenswrapper[4778]: I0312 13:12:56.998267 4778 generic.go:334] "Generic (PLEG): container finished" podID="de4557b4-7957-47a0-8c42-845be1fa0f32" containerID="167b98bcb75be92dcb64515712bdd5c31feb59c13d9a61d37d29e56c03f4a252" exitCode=0 Mar 12 13:12:56 crc kubenswrapper[4778]: I0312 13:12:56.998315 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8xksl" event={"ID":"de4557b4-7957-47a0-8c42-845be1fa0f32","Type":"ContainerDied","Data":"167b98bcb75be92dcb64515712bdd5c31feb59c13d9a61d37d29e56c03f4a252"} Mar 12 13:12:56 crc kubenswrapper[4778]: I0312 13:12:56.998344 4778 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-marketplace-8xksl" event={"ID":"de4557b4-7957-47a0-8c42-845be1fa0f32","Type":"ContainerStarted","Data":"775a67dbf14a4aa00ee320f14ee688f2689c34e66ee23b796f0166af1618f55f"} Mar 12 13:12:57 crc kubenswrapper[4778]: I0312 13:12:57.001806 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f6cfcbfb9-jcsqb" event={"ID":"1fa0e405-9e9d-49fc-b2aa-17ca5c529a74","Type":"ContainerStarted","Data":"e1481cb1e8a9421818bd64f72eed1fe038b53c1df39f76f7438a261e48a535ff"} Mar 12 13:12:57 crc kubenswrapper[4778]: I0312 13:12:57.001849 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f6cfcbfb9-jcsqb" event={"ID":"1fa0e405-9e9d-49fc-b2aa-17ca5c529a74","Type":"ContainerStarted","Data":"d26df510f14aa09fdd3bb4f8a1ddd8e15d52996d36167c7af13ebde7af2c80d1"} Mar 12 13:12:57 crc kubenswrapper[4778]: I0312 13:12:57.002384 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5f6cfcbfb9-jcsqb" Mar 12 13:12:57 crc kubenswrapper[4778]: I0312 13:12:57.007138 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5f6cfcbfb9-jcsqb" Mar 12 13:12:57 crc kubenswrapper[4778]: I0312 13:12:57.007472 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5s5vs" Mar 12 13:12:57 crc kubenswrapper[4778]: I0312 13:12:57.013223 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7b56f5b6c6-7q5xm" podStartSLOduration=2.013194977 podStartE2EDuration="2.013194977s" podCreationTimestamp="2026-03-12 13:12:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:12:57.011311934 +0000 UTC m=+195.460007350" watchObservedRunningTime="2026-03-12 13:12:57.013194977 +0000 UTC m=+195.461890373" Mar 12 13:12:57 crc kubenswrapper[4778]: I0312 13:12:57.016707 4778 generic.go:334] "Generic (PLEG): container finished" podID="593bf507-3097-460c-aa84-c680a76f3ffe" containerID="4b23771e05a4ff737085d226f6bad113bb6e4e0f0e89dfb0d1486ecc9eab431e" exitCode=0 Mar 12 13:12:57 crc kubenswrapper[4778]: I0312 13:12:57.016760 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"593bf507-3097-460c-aa84-c680a76f3ffe","Type":"ContainerDied","Data":"4b23771e05a4ff737085d226f6bad113bb6e4e0f0e89dfb0d1486ecc9eab431e"} Mar 12 13:12:57 crc kubenswrapper[4778]: I0312 13:12:57.019752 4778 generic.go:334] "Generic (PLEG): container finished" podID="b9bef112-9bef-4ce2-abd8-054b4d671658" containerID="3ee91beb1526d7d2135a66716b66577b22ca3756c6f18236717330ab9060a779" exitCode=0 Mar 12 13:12:57 crc kubenswrapper[4778]: I0312 13:12:57.020047 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rtjz5" event={"ID":"b9bef112-9bef-4ce2-abd8-054b4d671658","Type":"ContainerDied","Data":"3ee91beb1526d7d2135a66716b66577b22ca3756c6f18236717330ab9060a779"} Mar 12 13:12:57 crc kubenswrapper[4778]: I0312 13:12:57.020078 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-rtjz5" event={"ID":"b9bef112-9bef-4ce2-abd8-054b4d671658","Type":"ContainerStarted","Data":"823af2a7e3b6063a4f30d49b66161c625efcb36bf067f9d539324e41889ea011"} Mar 12 13:12:57 crc kubenswrapper[4778]: I0312 13:12:57.066997 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-xz42x" Mar 12 13:12:57 crc kubenswrapper[4778]: I0312 13:12:57.067046 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-xz42x" Mar 12 13:12:57 crc kubenswrapper[4778]: I0312 13:12:57.087272 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-xz42x" Mar 12 13:12:57 crc kubenswrapper[4778]: I0312 13:12:57.104618 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5f6cfcbfb9-jcsqb" podStartSLOduration=2.1045992 podStartE2EDuration="2.1045992s" podCreationTimestamp="2026-03-12 13:12:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:12:57.101426051 +0000 UTC m=+195.550121447" watchObservedRunningTime="2026-03-12 13:12:57.1045992 +0000 UTC m=+195.553294596" Mar 12 13:12:57 crc kubenswrapper[4778]: I0312 13:12:57.138726 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-76s88"] Mar 12 13:12:57 crc kubenswrapper[4778]: I0312 13:12:57.141125 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-76s88" Mar 12 13:12:57 crc kubenswrapper[4778]: I0312 13:12:57.146479 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-76s88"] Mar 12 13:12:57 crc kubenswrapper[4778]: I0312 13:12:57.196110 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vnndl" Mar 12 13:12:57 crc kubenswrapper[4778]: I0312 13:12:57.209591 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vnndl" Mar 12 13:12:57 crc kubenswrapper[4778]: I0312 13:12:57.214402 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 12 13:12:57 crc kubenswrapper[4778]: I0312 13:12:57.215862 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 12 13:12:57 crc kubenswrapper[4778]: I0312 13:12:57.222701 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Mar 12 13:12:57 crc kubenswrapper[4778]: I0312 13:12:57.223171 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 12 13:12:57 crc kubenswrapper[4778]: I0312 13:12:57.241065 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-xwwxp" Mar 12 13:12:57 crc kubenswrapper[4778]: I0312 13:12:57.241102 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-xwwxp" Mar 12 13:12:57 crc kubenswrapper[4778]: I0312 13:12:57.242540 4778 patch_prober.go:28] interesting pod/console-f9d7485db-xwwxp container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.16:8443/health\": dial tcp 
10.217.0.16:8443: connect: connection refused" start-of-body= Mar 12 13:12:57 crc kubenswrapper[4778]: I0312 13:12:57.242572 4778 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-xwwxp" podUID="c825022c-79bc-44ae-bc64-ee9614aafe25" containerName="console" probeResult="failure" output="Get \"https://10.217.0.16:8443/health\": dial tcp 10.217.0.16:8443: connect: connection refused" Mar 12 13:12:57 crc kubenswrapper[4778]: I0312 13:12:57.250755 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 12 13:12:57 crc kubenswrapper[4778]: I0312 13:12:57.307831 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5dc25eb7-f12c-4445-bd35-107ac0c35429-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"5dc25eb7-f12c-4445-bd35-107ac0c35429\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 12 13:12:57 crc kubenswrapper[4778]: I0312 13:12:57.307951 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg6lb\" (UniqueName: \"kubernetes.io/projected/34ecd758-517c-455a-939a-7eb6d3546854-kube-api-access-cg6lb\") pod \"redhat-operators-76s88\" (UID: \"34ecd758-517c-455a-939a-7eb6d3546854\") " pod="openshift-marketplace/redhat-operators-76s88" Mar 12 13:12:57 crc kubenswrapper[4778]: I0312 13:12:57.308001 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34ecd758-517c-455a-939a-7eb6d3546854-catalog-content\") pod \"redhat-operators-76s88\" (UID: \"34ecd758-517c-455a-939a-7eb6d3546854\") " pod="openshift-marketplace/redhat-operators-76s88" Mar 12 13:12:57 crc kubenswrapper[4778]: I0312 13:12:57.308071 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34ecd758-517c-455a-939a-7eb6d3546854-utilities\") pod \"redhat-operators-76s88\" (UID: \"34ecd758-517c-455a-939a-7eb6d3546854\") " pod="openshift-marketplace/redhat-operators-76s88" Mar 12 13:12:57 crc kubenswrapper[4778]: I0312 13:12:57.308092 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5dc25eb7-f12c-4445-bd35-107ac0c35429-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"5dc25eb7-f12c-4445-bd35-107ac0c35429\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 12 13:12:57 crc kubenswrapper[4778]: I0312 13:12:57.400438 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5s5vs"] Mar 12 13:12:57 crc kubenswrapper[4778]: I0312 13:12:57.412084 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cg6lb\" (UniqueName: \"kubernetes.io/projected/34ecd758-517c-455a-939a-7eb6d3546854-kube-api-access-cg6lb\") pod \"redhat-operators-76s88\" (UID: \"34ecd758-517c-455a-939a-7eb6d3546854\") " pod="openshift-marketplace/redhat-operators-76s88" Mar 12 13:12:57 crc kubenswrapper[4778]: I0312 13:12:57.412155 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34ecd758-517c-455a-939a-7eb6d3546854-catalog-content\") pod \"redhat-operators-76s88\" (UID: \"34ecd758-517c-455a-939a-7eb6d3546854\") " pod="openshift-marketplace/redhat-operators-76s88" Mar 12 13:12:57 crc kubenswrapper[4778]: I0312 13:12:57.412224 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34ecd758-517c-455a-939a-7eb6d3546854-utilities\") pod \"redhat-operators-76s88\" (UID: \"34ecd758-517c-455a-939a-7eb6d3546854\") " pod="openshift-marketplace/redhat-operators-76s88" Mar 12 
13:12:57 crc kubenswrapper[4778]: I0312 13:12:57.412251 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5dc25eb7-f12c-4445-bd35-107ac0c35429-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"5dc25eb7-f12c-4445-bd35-107ac0c35429\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 12 13:12:57 crc kubenswrapper[4778]: I0312 13:12:57.412296 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5dc25eb7-f12c-4445-bd35-107ac0c35429-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"5dc25eb7-f12c-4445-bd35-107ac0c35429\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 12 13:12:57 crc kubenswrapper[4778]: I0312 13:12:57.413458 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34ecd758-517c-455a-939a-7eb6d3546854-utilities\") pod \"redhat-operators-76s88\" (UID: \"34ecd758-517c-455a-939a-7eb6d3546854\") " pod="openshift-marketplace/redhat-operators-76s88" Mar 12 13:12:57 crc kubenswrapper[4778]: I0312 13:12:57.413487 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34ecd758-517c-455a-939a-7eb6d3546854-catalog-content\") pod \"redhat-operators-76s88\" (UID: \"34ecd758-517c-455a-939a-7eb6d3546854\") " pod="openshift-marketplace/redhat-operators-76s88" Mar 12 13:12:57 crc kubenswrapper[4778]: I0312 13:12:57.413523 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5dc25eb7-f12c-4445-bd35-107ac0c35429-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"5dc25eb7-f12c-4445-bd35-107ac0c35429\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 12 13:12:57 crc kubenswrapper[4778]: I0312 13:12:57.436320 4778 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5dc25eb7-f12c-4445-bd35-107ac0c35429-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"5dc25eb7-f12c-4445-bd35-107ac0c35429\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 12 13:12:57 crc kubenswrapper[4778]: I0312 13:12:57.438328 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cg6lb\" (UniqueName: \"kubernetes.io/projected/34ecd758-517c-455a-939a-7eb6d3546854-kube-api-access-cg6lb\") pod \"redhat-operators-76s88\" (UID: \"34ecd758-517c-455a-939a-7eb6d3546854\") " pod="openshift-marketplace/redhat-operators-76s88" Mar 12 13:12:57 crc kubenswrapper[4778]: I0312 13:12:57.479797 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-76s88" Mar 12 13:12:57 crc kubenswrapper[4778]: I0312 13:12:57.479864 4778 patch_prober.go:28] interesting pod/downloads-7954f5f757-mx6kn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Mar 12 13:12:57 crc kubenswrapper[4778]: I0312 13:12:57.479876 4778 patch_prober.go:28] interesting pod/downloads-7954f5f757-mx6kn container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Mar 12 13:12:57 crc kubenswrapper[4778]: I0312 13:12:57.479892 4778 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mx6kn" podUID="8af48f77-25f7-49ca-8bcb-2481aa72ee66" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Mar 12 13:12:57 crc kubenswrapper[4778]: I0312 13:12:57.479923 4778 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-console/downloads-7954f5f757-mx6kn" podUID="8af48f77-25f7-49ca-8bcb-2481aa72ee66" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Mar 12 13:12:57 crc kubenswrapper[4778]: I0312 13:12:57.504418 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-ms5xq" Mar 12 13:12:57 crc kubenswrapper[4778]: I0312 13:12:57.508361 4778 patch_prober.go:28] interesting pod/router-default-5444994796-ms5xq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 13:12:57 crc kubenswrapper[4778]: [-]has-synced failed: reason withheld Mar 12 13:12:57 crc kubenswrapper[4778]: [+]process-running ok Mar 12 13:12:57 crc kubenswrapper[4778]: healthz check failed Mar 12 13:12:57 crc kubenswrapper[4778]: I0312 13:12:57.508395 4778 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ms5xq" podUID="5c8d947a-b62b-4eb9-81d7-94530285e8dc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 13:12:57 crc kubenswrapper[4778]: I0312 13:12:57.575423 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 12 13:12:57 crc kubenswrapper[4778]: I0312 13:12:57.843378 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-76s88"] Mar 12 13:12:57 crc kubenswrapper[4778]: W0312 13:12:57.850983 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34ecd758_517c_455a_939a_7eb6d3546854.slice/crio-cf68cb478854e264cd59c9ad8e9f3e763498e2e2706254a3b88fc3dd9f22fe4f WatchSource:0}: Error finding container cf68cb478854e264cd59c9ad8e9f3e763498e2e2706254a3b88fc3dd9f22fe4f: Status 404 returned error can't find the container with id cf68cb478854e264cd59c9ad8e9f3e763498e2e2706254a3b88fc3dd9f22fe4f Mar 12 13:12:57 crc kubenswrapper[4778]: I0312 13:12:57.948439 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 12 13:12:57 crc kubenswrapper[4778]: W0312 13:12:57.956836 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod5dc25eb7_f12c_4445_bd35_107ac0c35429.slice/crio-877b25ba26cb3a576a6780cb185f284328b74ec31202ca990242a090aee0ba1f WatchSource:0}: Error finding container 877b25ba26cb3a576a6780cb185f284328b74ec31202ca990242a090aee0ba1f: Status 404 returned error can't find the container with id 877b25ba26cb3a576a6780cb185f284328b74ec31202ca990242a090aee0ba1f Mar 12 13:12:58 crc kubenswrapper[4778]: I0312 13:12:58.032923 4778 generic.go:334] "Generic (PLEG): container finished" podID="f438f2a3-60c0-4554-a49b-030545f8139c" containerID="9727ee6f8e8c78a7a69962ee912839b2519b88f461321e8f43bb35e450713d1e" exitCode=0 Mar 12 13:12:58 crc kubenswrapper[4778]: I0312 13:12:58.033088 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5s5vs" 
event={"ID":"f438f2a3-60c0-4554-a49b-030545f8139c","Type":"ContainerDied","Data":"9727ee6f8e8c78a7a69962ee912839b2519b88f461321e8f43bb35e450713d1e"} Mar 12 13:12:58 crc kubenswrapper[4778]: I0312 13:12:58.033329 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5s5vs" event={"ID":"f438f2a3-60c0-4554-a49b-030545f8139c","Type":"ContainerStarted","Data":"c5e7e785f566d6c012fb07b0778c4b6c15691ef04836f8607417e605e9c6feb5"} Mar 12 13:12:58 crc kubenswrapper[4778]: I0312 13:12:58.039230 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-76s88" event={"ID":"34ecd758-517c-455a-939a-7eb6d3546854","Type":"ContainerStarted","Data":"5ab6ab1e87e3d9a4f7941a7ab56868950f541c7821fdd08fb7b7e95206f0cb25"} Mar 12 13:12:58 crc kubenswrapper[4778]: I0312 13:12:58.039267 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-76s88" event={"ID":"34ecd758-517c-455a-939a-7eb6d3546854","Type":"ContainerStarted","Data":"cf68cb478854e264cd59c9ad8e9f3e763498e2e2706254a3b88fc3dd9f22fe4f"} Mar 12 13:12:58 crc kubenswrapper[4778]: I0312 13:12:58.042741 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"5dc25eb7-f12c-4445-bd35-107ac0c35429","Type":"ContainerStarted","Data":"877b25ba26cb3a576a6780cb185f284328b74ec31202ca990242a090aee0ba1f"} Mar 12 13:12:58 crc kubenswrapper[4778]: I0312 13:12:58.049891 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-xz42x" Mar 12 13:12:58 crc kubenswrapper[4778]: I0312 13:12:58.353965 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 12 13:12:58 crc kubenswrapper[4778]: I0312 13:12:58.507891 4778 patch_prober.go:28] interesting pod/router-default-5444994796-ms5xq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 12 13:12:58 crc kubenswrapper[4778]: [-]has-synced failed: reason withheld Mar 12 13:12:58 crc kubenswrapper[4778]: [+]process-running ok Mar 12 13:12:58 crc kubenswrapper[4778]: healthz check failed Mar 12 13:12:58 crc kubenswrapper[4778]: I0312 13:12:58.507951 4778 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ms5xq" podUID="5c8d947a-b62b-4eb9-81d7-94530285e8dc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 12 13:12:58 crc kubenswrapper[4778]: I0312 13:12:58.546717 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/593bf507-3097-460c-aa84-c680a76f3ffe-kube-api-access\") pod \"593bf507-3097-460c-aa84-c680a76f3ffe\" (UID: \"593bf507-3097-460c-aa84-c680a76f3ffe\") " Mar 12 13:12:58 crc kubenswrapper[4778]: I0312 13:12:58.546813 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/593bf507-3097-460c-aa84-c680a76f3ffe-kubelet-dir\") pod \"593bf507-3097-460c-aa84-c680a76f3ffe\" (UID: \"593bf507-3097-460c-aa84-c680a76f3ffe\") " Mar 12 13:12:58 crc kubenswrapper[4778]: I0312 13:12:58.547243 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/593bf507-3097-460c-aa84-c680a76f3ffe-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "593bf507-3097-460c-aa84-c680a76f3ffe" (UID: "593bf507-3097-460c-aa84-c680a76f3ffe"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 13:12:58 crc kubenswrapper[4778]: I0312 13:12:58.557402 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 13:12:58 crc kubenswrapper[4778]: I0312 13:12:58.557471 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 13:12:58 crc kubenswrapper[4778]: I0312 13:12:58.562430 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593bf507-3097-460c-aa84-c680a76f3ffe-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "593bf507-3097-460c-aa84-c680a76f3ffe" (UID: "593bf507-3097-460c-aa84-c680a76f3ffe"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:12:58 crc kubenswrapper[4778]: I0312 13:12:58.648951 4778 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/593bf507-3097-460c-aa84-c680a76f3ffe-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 12 13:12:58 crc kubenswrapper[4778]: I0312 13:12:58.648980 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/593bf507-3097-460c-aa84-c680a76f3ffe-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 12 13:12:58 crc kubenswrapper[4778]: I0312 13:12:58.661958 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" Mar 12 13:12:59 crc kubenswrapper[4778]: I0312 13:12:59.058911 4778 generic.go:334] "Generic (PLEG): container finished" podID="34ecd758-517c-455a-939a-7eb6d3546854" containerID="5ab6ab1e87e3d9a4f7941a7ab56868950f541c7821fdd08fb7b7e95206f0cb25" exitCode=0 Mar 12 13:12:59 crc kubenswrapper[4778]: I0312 13:12:59.059200 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-76s88" event={"ID":"34ecd758-517c-455a-939a-7eb6d3546854","Type":"ContainerDied","Data":"5ab6ab1e87e3d9a4f7941a7ab56868950f541c7821fdd08fb7b7e95206f0cb25"} Mar 12 13:12:59 crc kubenswrapper[4778]: I0312 13:12:59.062017 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"5dc25eb7-f12c-4445-bd35-107ac0c35429","Type":"ContainerStarted","Data":"6fd9810922229212229102ca032fc9237ac433c61fc8c7136009fc1f0bbe286e"} Mar 12 13:12:59 crc kubenswrapper[4778]: I0312 13:12:59.063908 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"593bf507-3097-460c-aa84-c680a76f3ffe","Type":"ContainerDied","Data":"b74dd280126159ecf287aae9295c5a458084d9b654a5ee2ddd9fbd12aaafb12b"} Mar 12 13:12:59 crc kubenswrapper[4778]: I0312 13:12:59.063953 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b74dd280126159ecf287aae9295c5a458084d9b654a5ee2ddd9fbd12aaafb12b" Mar 12 13:12:59 crc kubenswrapper[4778]: I0312 13:12:59.064080 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 12 13:12:59 crc kubenswrapper[4778]: I0312 13:12:59.509682 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-ms5xq" Mar 12 13:12:59 crc kubenswrapper[4778]: I0312 13:12:59.514784 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-ms5xq" Mar 12 13:12:59 crc kubenswrapper[4778]: I0312 13:12:59.547347 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.547321391 podStartE2EDuration="2.547321391s" podCreationTimestamp="2026-03-12 13:12:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:12:59.087080389 +0000 UTC m=+197.535775785" watchObservedRunningTime="2026-03-12 13:12:59.547321391 +0000 UTC m=+197.996016787" Mar 12 13:13:00 crc kubenswrapper[4778]: I0312 13:13:00.088333 4778 generic.go:334] "Generic (PLEG): container finished" podID="5dc25eb7-f12c-4445-bd35-107ac0c35429" containerID="6fd9810922229212229102ca032fc9237ac433c61fc8c7136009fc1f0bbe286e" exitCode=0 Mar 12 13:13:00 crc kubenswrapper[4778]: I0312 13:13:00.088386 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" 
event={"ID":"5dc25eb7-f12c-4445-bd35-107ac0c35429","Type":"ContainerDied","Data":"6fd9810922229212229102ca032fc9237ac433c61fc8c7136009fc1f0bbe286e"} Mar 12 13:13:00 crc kubenswrapper[4778]: I0312 13:13:00.143972 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-8zmxq" Mar 12 13:13:01 crc kubenswrapper[4778]: I0312 13:13:01.506915 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 12 13:13:01 crc kubenswrapper[4778]: I0312 13:13:01.607057 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5dc25eb7-f12c-4445-bd35-107ac0c35429-kubelet-dir\") pod \"5dc25eb7-f12c-4445-bd35-107ac0c35429\" (UID: \"5dc25eb7-f12c-4445-bd35-107ac0c35429\") " Mar 12 13:13:01 crc kubenswrapper[4778]: I0312 13:13:01.607176 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5dc25eb7-f12c-4445-bd35-107ac0c35429-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5dc25eb7-f12c-4445-bd35-107ac0c35429" (UID: "5dc25eb7-f12c-4445-bd35-107ac0c35429"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 13:13:01 crc kubenswrapper[4778]: I0312 13:13:01.607289 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5dc25eb7-f12c-4445-bd35-107ac0c35429-kube-api-access\") pod \"5dc25eb7-f12c-4445-bd35-107ac0c35429\" (UID: \"5dc25eb7-f12c-4445-bd35-107ac0c35429\") " Mar 12 13:13:01 crc kubenswrapper[4778]: I0312 13:13:01.607641 4778 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5dc25eb7-f12c-4445-bd35-107ac0c35429-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 12 13:13:01 crc kubenswrapper[4778]: I0312 13:13:01.624562 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dc25eb7-f12c-4445-bd35-107ac0c35429-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5dc25eb7-f12c-4445-bd35-107ac0c35429" (UID: "5dc25eb7-f12c-4445-bd35-107ac0c35429"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:13:01 crc kubenswrapper[4778]: I0312 13:13:01.709039 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5dc25eb7-f12c-4445-bd35-107ac0c35429-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 12 13:13:01 crc kubenswrapper[4778]: I0312 13:13:01.920038 4778 ???:1] "http: TLS handshake error from 192.168.126.11:50684: no serving certificate available for the kubelet" Mar 12 13:13:02 crc kubenswrapper[4778]: I0312 13:13:02.054119 4778 ???:1] "http: TLS handshake error from 192.168.126.11:50700: no serving certificate available for the kubelet" Mar 12 13:13:02 crc kubenswrapper[4778]: I0312 13:13:02.122973 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"5dc25eb7-f12c-4445-bd35-107ac0c35429","Type":"ContainerDied","Data":"877b25ba26cb3a576a6780cb185f284328b74ec31202ca990242a090aee0ba1f"} Mar 12 13:13:02 crc kubenswrapper[4778]: I0312 13:13:02.123006 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="877b25ba26cb3a576a6780cb185f284328b74ec31202ca990242a090aee0ba1f" Mar 12 13:13:02 crc kubenswrapper[4778]: I0312 13:13:02.123043 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 12 13:13:06 crc kubenswrapper[4778]: I0312 13:13:06.166994 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:13:06 crc kubenswrapper[4778]: I0312 13:13:06.167359 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 13:13:06 crc kubenswrapper[4778]: I0312 13:13:06.167423 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 13:13:06 crc kubenswrapper[4778]: I0312 13:13:06.167455 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:13:06 crc kubenswrapper[4778]: I0312 13:13:06.168577 4778 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-network-console"/"networking-console-plugin-cert" Mar 12 13:13:06 crc kubenswrapper[4778]: I0312 13:13:06.169274 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 12 13:13:06 crc kubenswrapper[4778]: I0312 13:13:06.170378 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 12 13:13:06 crc kubenswrapper[4778]: I0312 13:13:06.180168 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 12 13:13:06 crc kubenswrapper[4778]: I0312 13:13:06.183742 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:13:06 crc kubenswrapper[4778]: I0312 13:13:06.191234 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:13:06 crc kubenswrapper[4778]: I0312 13:13:06.191512 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 13:13:06 crc kubenswrapper[4778]: I0312 13:13:06.192524 
4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 13:13:06 crc kubenswrapper[4778]: I0312 13:13:06.268347 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b59b25a-3acc-4d06-b91d-575f45463520-metrics-certs\") pod \"network-metrics-daemon-rz9vw\" (UID: \"0b59b25a-3acc-4d06-b91d-575f45463520\") " pod="openshift-multus/network-metrics-daemon-rz9vw" Mar 12 13:13:06 crc kubenswrapper[4778]: I0312 13:13:06.270566 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 12 13:13:06 crc kubenswrapper[4778]: I0312 13:13:06.275703 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 13:13:06 crc kubenswrapper[4778]: I0312 13:13:06.284764 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b59b25a-3acc-4d06-b91d-575f45463520-metrics-certs\") pod \"network-metrics-daemon-rz9vw\" (UID: \"0b59b25a-3acc-4d06-b91d-575f45463520\") " pod="openshift-multus/network-metrics-daemon-rz9vw" Mar 12 13:13:06 crc kubenswrapper[4778]: I0312 13:13:06.288492 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 12 13:13:06 crc kubenswrapper[4778]: I0312 13:13:06.305548 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Mar 12 13:13:06 crc kubenswrapper[4778]: I0312 13:13:06.312884 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rz9vw" Mar 12 13:13:06 crc kubenswrapper[4778]: I0312 13:13:06.467090 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 12 13:13:07 crc kubenswrapper[4778]: I0312 13:13:07.360476 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-xwwxp" Mar 12 13:13:07 crc kubenswrapper[4778]: I0312 13:13:07.371560 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-xwwxp" Mar 12 13:13:07 crc kubenswrapper[4778]: I0312 13:13:07.481801 4778 patch_prober.go:28] interesting pod/downloads-7954f5f757-mx6kn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Mar 12 13:13:07 crc kubenswrapper[4778]: I0312 13:13:07.481871 4778 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mx6kn" podUID="8af48f77-25f7-49ca-8bcb-2481aa72ee66" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Mar 12 13:13:07 crc kubenswrapper[4778]: I0312 13:13:07.489423 4778 patch_prober.go:28] interesting pod/downloads-7954f5f757-mx6kn container/download-server namespace/openshift-console: Liveness probe status=failure output="Get 
\"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Mar 12 13:13:07 crc kubenswrapper[4778]: I0312 13:13:07.489485 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-mx6kn" podUID="8af48f77-25f7-49ca-8bcb-2481aa72ee66" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Mar 12 13:13:12 crc kubenswrapper[4778]: I0312 13:13:12.184830 4778 ???:1] "http: TLS handshake error from 192.168.126.11:56534: no serving certificate available for the kubelet" Mar 12 13:13:13 crc kubenswrapper[4778]: I0312 13:13:13.831421 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7b56f5b6c6-7q5xm"] Mar 12 13:13:13 crc kubenswrapper[4778]: I0312 13:13:13.831690 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7b56f5b6c6-7q5xm" podUID="b330900c-c52a-4e88-a2d2-38e34f837004" containerName="controller-manager" containerID="cri-o://af0097b4c8ffcf21c4d0f3d542c30c13b992c9bb5a36537354858b4fa3539991" gracePeriod=30 Mar 12 13:13:13 crc kubenswrapper[4778]: I0312 13:13:13.841103 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f6cfcbfb9-jcsqb"] Mar 12 13:13:13 crc kubenswrapper[4778]: I0312 13:13:13.841340 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5f6cfcbfb9-jcsqb" podUID="1fa0e405-9e9d-49fc-b2aa-17ca5c529a74" containerName="route-controller-manager" containerID="cri-o://e1481cb1e8a9421818bd64f72eed1fe038b53c1df39f76f7438a261e48a535ff" gracePeriod=30 Mar 12 13:13:14 crc kubenswrapper[4778]: I0312 13:13:14.920385 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" Mar 12 13:13:15 crc kubenswrapper[4778]: I0312 13:13:15.208938 4778 generic.go:334] "Generic (PLEG): container finished" podID="b330900c-c52a-4e88-a2d2-38e34f837004" containerID="af0097b4c8ffcf21c4d0f3d542c30c13b992c9bb5a36537354858b4fa3539991" exitCode=0 Mar 12 13:13:15 crc kubenswrapper[4778]: I0312 13:13:15.209047 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b56f5b6c6-7q5xm" event={"ID":"b330900c-c52a-4e88-a2d2-38e34f837004","Type":"ContainerDied","Data":"af0097b4c8ffcf21c4d0f3d542c30c13b992c9bb5a36537354858b4fa3539991"} Mar 12 13:13:15 crc kubenswrapper[4778]: I0312 13:13:15.210993 4778 generic.go:334] "Generic (PLEG): container finished" podID="1fa0e405-9e9d-49fc-b2aa-17ca5c529a74" containerID="e1481cb1e8a9421818bd64f72eed1fe038b53c1df39f76f7438a261e48a535ff" exitCode=0 Mar 12 13:13:15 crc kubenswrapper[4778]: I0312 13:13:15.211103 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f6cfcbfb9-jcsqb" event={"ID":"1fa0e405-9e9d-49fc-b2aa-17ca5c529a74","Type":"ContainerDied","Data":"e1481cb1e8a9421818bd64f72eed1fe038b53c1df39f76f7438a261e48a535ff"} Mar 12 13:13:15 crc kubenswrapper[4778]: I0312 13:13:15.565509 4778 patch_prober.go:28] interesting pod/controller-manager-7b56f5b6c6-7q5xm container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.50:8443/healthz\": dial tcp 10.217.0.50:8443: connect: connection refused" start-of-body= Mar 12 13:13:15 crc kubenswrapper[4778]: I0312 13:13:15.565577 4778 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7b56f5b6c6-7q5xm" podUID="b330900c-c52a-4e88-a2d2-38e34f837004" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.50:8443/healthz\": dial tcp 10.217.0.50:8443: 
connect: connection refused" Mar 12 13:13:15 crc kubenswrapper[4778]: I0312 13:13:15.581273 4778 patch_prober.go:28] interesting pod/route-controller-manager-5f6cfcbfb9-jcsqb container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.51:8443/healthz\": dial tcp 10.217.0.51:8443: connect: connection refused" start-of-body= Mar 12 13:13:15 crc kubenswrapper[4778]: I0312 13:13:15.581409 4778 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5f6cfcbfb9-jcsqb" podUID="1fa0e405-9e9d-49fc-b2aa-17ca5c529a74" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.51:8443/healthz\": dial tcp 10.217.0.51:8443: connect: connection refused" Mar 12 13:13:17 crc kubenswrapper[4778]: I0312 13:13:17.485641 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-mx6kn" Mar 12 13:13:26 crc kubenswrapper[4778]: I0312 13:13:26.172839 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f6cfcbfb9-jcsqb" Mar 12 13:13:26 crc kubenswrapper[4778]: I0312 13:13:26.201903 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-887dfdc8b-vh62w"] Mar 12 13:13:26 crc kubenswrapper[4778]: E0312 13:13:26.202089 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fa0e405-9e9d-49fc-b2aa-17ca5c529a74" containerName="route-controller-manager" Mar 12 13:13:26 crc kubenswrapper[4778]: I0312 13:13:26.202100 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fa0e405-9e9d-49fc-b2aa-17ca5c529a74" containerName="route-controller-manager" Mar 12 13:13:26 crc kubenswrapper[4778]: E0312 13:13:26.202114 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dc25eb7-f12c-4445-bd35-107ac0c35429" containerName="pruner" Mar 12 13:13:26 crc kubenswrapper[4778]: I0312 13:13:26.202119 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dc25eb7-f12c-4445-bd35-107ac0c35429" containerName="pruner" Mar 12 13:13:26 crc kubenswrapper[4778]: E0312 13:13:26.202127 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="593bf507-3097-460c-aa84-c680a76f3ffe" containerName="pruner" Mar 12 13:13:26 crc kubenswrapper[4778]: I0312 13:13:26.202133 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="593bf507-3097-460c-aa84-c680a76f3ffe" containerName="pruner" Mar 12 13:13:26 crc kubenswrapper[4778]: I0312 13:13:26.202240 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fa0e405-9e9d-49fc-b2aa-17ca5c529a74" containerName="route-controller-manager" Mar 12 13:13:26 crc kubenswrapper[4778]: I0312 13:13:26.202253 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="593bf507-3097-460c-aa84-c680a76f3ffe" containerName="pruner" Mar 12 13:13:26 crc kubenswrapper[4778]: I0312 13:13:26.202263 4778 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="5dc25eb7-f12c-4445-bd35-107ac0c35429" containerName="pruner" Mar 12 13:13:26 crc kubenswrapper[4778]: I0312 13:13:26.202566 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-887dfdc8b-vh62w" Mar 12 13:13:26 crc kubenswrapper[4778]: I0312 13:13:26.207809 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-887dfdc8b-vh62w"] Mar 12 13:13:26 crc kubenswrapper[4778]: I0312 13:13:26.226735 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qnmc\" (UniqueName: \"kubernetes.io/projected/1fa0e405-9e9d-49fc-b2aa-17ca5c529a74-kube-api-access-2qnmc\") pod \"1fa0e405-9e9d-49fc-b2aa-17ca5c529a74\" (UID: \"1fa0e405-9e9d-49fc-b2aa-17ca5c529a74\") " Mar 12 13:13:26 crc kubenswrapper[4778]: I0312 13:13:26.227074 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fa0e405-9e9d-49fc-b2aa-17ca5c529a74-serving-cert\") pod \"1fa0e405-9e9d-49fc-b2aa-17ca5c529a74\" (UID: \"1fa0e405-9e9d-49fc-b2aa-17ca5c529a74\") " Mar 12 13:13:26 crc kubenswrapper[4778]: I0312 13:13:26.227142 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1fa0e405-9e9d-49fc-b2aa-17ca5c529a74-client-ca\") pod \"1fa0e405-9e9d-49fc-b2aa-17ca5c529a74\" (UID: \"1fa0e405-9e9d-49fc-b2aa-17ca5c529a74\") " Mar 12 13:13:26 crc kubenswrapper[4778]: I0312 13:13:26.227203 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fa0e405-9e9d-49fc-b2aa-17ca5c529a74-config\") pod \"1fa0e405-9e9d-49fc-b2aa-17ca5c529a74\" (UID: \"1fa0e405-9e9d-49fc-b2aa-17ca5c529a74\") " Mar 12 13:13:26 crc kubenswrapper[4778]: I0312 13:13:26.227364 4778 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85126e1b-2b92-4e55-b847-d55f8b1b387e-serving-cert\") pod \"route-controller-manager-887dfdc8b-vh62w\" (UID: \"85126e1b-2b92-4e55-b847-d55f8b1b387e\") " pod="openshift-route-controller-manager/route-controller-manager-887dfdc8b-vh62w" Mar 12 13:13:26 crc kubenswrapper[4778]: I0312 13:13:26.227415 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6452d\" (UniqueName: \"kubernetes.io/projected/85126e1b-2b92-4e55-b847-d55f8b1b387e-kube-api-access-6452d\") pod \"route-controller-manager-887dfdc8b-vh62w\" (UID: \"85126e1b-2b92-4e55-b847-d55f8b1b387e\") " pod="openshift-route-controller-manager/route-controller-manager-887dfdc8b-vh62w" Mar 12 13:13:26 crc kubenswrapper[4778]: I0312 13:13:26.227472 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85126e1b-2b92-4e55-b847-d55f8b1b387e-config\") pod \"route-controller-manager-887dfdc8b-vh62w\" (UID: \"85126e1b-2b92-4e55-b847-d55f8b1b387e\") " pod="openshift-route-controller-manager/route-controller-manager-887dfdc8b-vh62w" Mar 12 13:13:26 crc kubenswrapper[4778]: I0312 13:13:26.227498 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/85126e1b-2b92-4e55-b847-d55f8b1b387e-client-ca\") pod \"route-controller-manager-887dfdc8b-vh62w\" (UID: \"85126e1b-2b92-4e55-b847-d55f8b1b387e\") " pod="openshift-route-controller-manager/route-controller-manager-887dfdc8b-vh62w" Mar 12 13:13:26 crc kubenswrapper[4778]: I0312 13:13:26.228149 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fa0e405-9e9d-49fc-b2aa-17ca5c529a74-client-ca" (OuterVolumeSpecName: "client-ca") pod 
"1fa0e405-9e9d-49fc-b2aa-17ca5c529a74" (UID: "1fa0e405-9e9d-49fc-b2aa-17ca5c529a74"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:13:26 crc kubenswrapper[4778]: I0312 13:13:26.228286 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fa0e405-9e9d-49fc-b2aa-17ca5c529a74-config" (OuterVolumeSpecName: "config") pod "1fa0e405-9e9d-49fc-b2aa-17ca5c529a74" (UID: "1fa0e405-9e9d-49fc-b2aa-17ca5c529a74"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:13:26 crc kubenswrapper[4778]: I0312 13:13:26.233667 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fa0e405-9e9d-49fc-b2aa-17ca5c529a74-kube-api-access-2qnmc" (OuterVolumeSpecName: "kube-api-access-2qnmc") pod "1fa0e405-9e9d-49fc-b2aa-17ca5c529a74" (UID: "1fa0e405-9e9d-49fc-b2aa-17ca5c529a74"). InnerVolumeSpecName "kube-api-access-2qnmc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:13:26 crc kubenswrapper[4778]: I0312 13:13:26.235540 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fa0e405-9e9d-49fc-b2aa-17ca5c529a74-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1fa0e405-9e9d-49fc-b2aa-17ca5c529a74" (UID: "1fa0e405-9e9d-49fc-b2aa-17ca5c529a74"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:13:26 crc kubenswrapper[4778]: I0312 13:13:26.275537 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f6cfcbfb9-jcsqb" event={"ID":"1fa0e405-9e9d-49fc-b2aa-17ca5c529a74","Type":"ContainerDied","Data":"d26df510f14aa09fdd3bb4f8a1ddd8e15d52996d36167c7af13ebde7af2c80d1"} Mar 12 13:13:26 crc kubenswrapper[4778]: I0312 13:13:26.275586 4778 scope.go:117] "RemoveContainer" containerID="e1481cb1e8a9421818bd64f72eed1fe038b53c1df39f76f7438a261e48a535ff" Mar 12 13:13:26 crc kubenswrapper[4778]: I0312 13:13:26.275633 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f6cfcbfb9-jcsqb" Mar 12 13:13:26 crc kubenswrapper[4778]: I0312 13:13:26.298804 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f6cfcbfb9-jcsqb"] Mar 12 13:13:26 crc kubenswrapper[4778]: I0312 13:13:26.298872 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f6cfcbfb9-jcsqb"] Mar 12 13:13:26 crc kubenswrapper[4778]: I0312 13:13:26.328572 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85126e1b-2b92-4e55-b847-d55f8b1b387e-serving-cert\") pod \"route-controller-manager-887dfdc8b-vh62w\" (UID: \"85126e1b-2b92-4e55-b847-d55f8b1b387e\") " pod="openshift-route-controller-manager/route-controller-manager-887dfdc8b-vh62w" Mar 12 13:13:26 crc kubenswrapper[4778]: I0312 13:13:26.328668 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6452d\" (UniqueName: \"kubernetes.io/projected/85126e1b-2b92-4e55-b847-d55f8b1b387e-kube-api-access-6452d\") pod \"route-controller-manager-887dfdc8b-vh62w\" (UID: 
\"85126e1b-2b92-4e55-b847-d55f8b1b387e\") " pod="openshift-route-controller-manager/route-controller-manager-887dfdc8b-vh62w" Mar 12 13:13:26 crc kubenswrapper[4778]: I0312 13:13:26.328724 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85126e1b-2b92-4e55-b847-d55f8b1b387e-config\") pod \"route-controller-manager-887dfdc8b-vh62w\" (UID: \"85126e1b-2b92-4e55-b847-d55f8b1b387e\") " pod="openshift-route-controller-manager/route-controller-manager-887dfdc8b-vh62w" Mar 12 13:13:26 crc kubenswrapper[4778]: I0312 13:13:26.328743 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/85126e1b-2b92-4e55-b847-d55f8b1b387e-client-ca\") pod \"route-controller-manager-887dfdc8b-vh62w\" (UID: \"85126e1b-2b92-4e55-b847-d55f8b1b387e\") " pod="openshift-route-controller-manager/route-controller-manager-887dfdc8b-vh62w" Mar 12 13:13:26 crc kubenswrapper[4778]: I0312 13:13:26.328776 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qnmc\" (UniqueName: \"kubernetes.io/projected/1fa0e405-9e9d-49fc-b2aa-17ca5c529a74-kube-api-access-2qnmc\") on node \"crc\" DevicePath \"\"" Mar 12 13:13:26 crc kubenswrapper[4778]: I0312 13:13:26.328787 4778 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fa0e405-9e9d-49fc-b2aa-17ca5c529a74-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 13:13:26 crc kubenswrapper[4778]: I0312 13:13:26.328796 4778 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1fa0e405-9e9d-49fc-b2aa-17ca5c529a74-client-ca\") on node \"crc\" DevicePath \"\"" Mar 12 13:13:26 crc kubenswrapper[4778]: I0312 13:13:26.328805 4778 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fa0e405-9e9d-49fc-b2aa-17ca5c529a74-config\") on 
node \"crc\" DevicePath \"\"" Mar 12 13:13:26 crc kubenswrapper[4778]: I0312 13:13:26.330439 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/85126e1b-2b92-4e55-b847-d55f8b1b387e-client-ca\") pod \"route-controller-manager-887dfdc8b-vh62w\" (UID: \"85126e1b-2b92-4e55-b847-d55f8b1b387e\") " pod="openshift-route-controller-manager/route-controller-manager-887dfdc8b-vh62w" Mar 12 13:13:26 crc kubenswrapper[4778]: I0312 13:13:26.330543 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85126e1b-2b92-4e55-b847-d55f8b1b387e-config\") pod \"route-controller-manager-887dfdc8b-vh62w\" (UID: \"85126e1b-2b92-4e55-b847-d55f8b1b387e\") " pod="openshift-route-controller-manager/route-controller-manager-887dfdc8b-vh62w" Mar 12 13:13:26 crc kubenswrapper[4778]: I0312 13:13:26.333309 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85126e1b-2b92-4e55-b847-d55f8b1b387e-serving-cert\") pod \"route-controller-manager-887dfdc8b-vh62w\" (UID: \"85126e1b-2b92-4e55-b847-d55f8b1b387e\") " pod="openshift-route-controller-manager/route-controller-manager-887dfdc8b-vh62w" Mar 12 13:13:26 crc kubenswrapper[4778]: I0312 13:13:26.345408 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6452d\" (UniqueName: \"kubernetes.io/projected/85126e1b-2b92-4e55-b847-d55f8b1b387e-kube-api-access-6452d\") pod \"route-controller-manager-887dfdc8b-vh62w\" (UID: \"85126e1b-2b92-4e55-b847-d55f8b1b387e\") " pod="openshift-route-controller-manager/route-controller-manager-887dfdc8b-vh62w" Mar 12 13:13:26 crc kubenswrapper[4778]: I0312 13:13:26.561623 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-887dfdc8b-vh62w" Mar 12 13:13:26 crc kubenswrapper[4778]: I0312 13:13:26.564496 4778 patch_prober.go:28] interesting pod/controller-manager-7b56f5b6c6-7q5xm container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.50:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 13:13:26 crc kubenswrapper[4778]: I0312 13:13:26.564545 4778 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7b56f5b6c6-7q5xm" podUID="b330900c-c52a-4e88-a2d2-38e34f837004" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.50:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 13:13:26 crc kubenswrapper[4778]: I0312 13:13:26.579243 4778 patch_prober.go:28] interesting pod/route-controller-manager-5f6cfcbfb9-jcsqb container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.51:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 12 13:13:26 crc kubenswrapper[4778]: I0312 13:13:26.579294 4778 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5f6cfcbfb9-jcsqb" podUID="1fa0e405-9e9d-49fc-b2aa-17ca5c529a74" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.51:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 12 13:13:27 crc kubenswrapper[4778]: I0312 13:13:27.332902 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kc7s7" Mar 12 13:13:28 crc kubenswrapper[4778]: I0312 13:13:28.261851 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fa0e405-9e9d-49fc-b2aa-17ca5c529a74" path="/var/lib/kubelet/pods/1fa0e405-9e9d-49fc-b2aa-17ca5c529a74/volumes" Mar 12 13:13:28 crc kubenswrapper[4778]: I0312 13:13:28.558236 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 13:13:28 crc kubenswrapper[4778]: I0312 13:13:28.558365 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 13:13:30 crc kubenswrapper[4778]: I0312 13:13:30.212805 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 12 13:13:30 crc kubenswrapper[4778]: I0312 13:13:30.213936 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 12 13:13:30 crc kubenswrapper[4778]: I0312 13:13:30.215992 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Mar 12 13:13:30 crc kubenswrapper[4778]: I0312 13:13:30.216805 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 12 13:13:30 crc kubenswrapper[4778]: I0312 13:13:30.232270 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 12 13:13:30 crc kubenswrapper[4778]: I0312 13:13:30.290887 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3834d547-946f-4567-b68b-5305589c5573-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3834d547-946f-4567-b68b-5305589c5573\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 12 13:13:30 crc kubenswrapper[4778]: I0312 13:13:30.290971 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3834d547-946f-4567-b68b-5305589c5573-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3834d547-946f-4567-b68b-5305589c5573\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 12 13:13:30 crc kubenswrapper[4778]: I0312 13:13:30.392321 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3834d547-946f-4567-b68b-5305589c5573-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3834d547-946f-4567-b68b-5305589c5573\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 12 13:13:30 crc kubenswrapper[4778]: I0312 13:13:30.392401 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/3834d547-946f-4567-b68b-5305589c5573-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3834d547-946f-4567-b68b-5305589c5573\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 12 13:13:30 crc kubenswrapper[4778]: I0312 13:13:30.392581 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3834d547-946f-4567-b68b-5305589c5573-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3834d547-946f-4567-b68b-5305589c5573\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 12 13:13:30 crc kubenswrapper[4778]: I0312 13:13:30.428278 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3834d547-946f-4567-b68b-5305589c5573-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3834d547-946f-4567-b68b-5305589c5573\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 12 13:13:30 crc kubenswrapper[4778]: I0312 13:13:30.538750 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 12 13:13:33 crc kubenswrapper[4778]: I0312 13:13:33.853414 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7b56f5b6c6-7q5xm" Mar 12 13:13:33 crc kubenswrapper[4778]: I0312 13:13:33.898595 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-d44c8b88d-jx574"] Mar 12 13:13:33 crc kubenswrapper[4778]: E0312 13:13:33.899151 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b330900c-c52a-4e88-a2d2-38e34f837004" containerName="controller-manager" Mar 12 13:13:33 crc kubenswrapper[4778]: I0312 13:13:33.899270 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="b330900c-c52a-4e88-a2d2-38e34f837004" containerName="controller-manager" Mar 12 13:13:33 crc kubenswrapper[4778]: I0312 13:13:33.899474 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="b330900c-c52a-4e88-a2d2-38e34f837004" containerName="controller-manager" Mar 12 13:13:33 crc kubenswrapper[4778]: I0312 13:13:33.899954 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d44c8b88d-jx574" Mar 12 13:13:33 crc kubenswrapper[4778]: I0312 13:13:33.912345 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d44c8b88d-jx574"] Mar 12 13:13:33 crc kubenswrapper[4778]: E0312 13:13:33.928170 4778 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift4/ose-cli:latest" Mar 12 13:13:33 crc kubenswrapper[4778]: E0312 13:13:33.928540 4778 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 12 13:13:33 crc kubenswrapper[4778]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Mar 12 13:13:33 crc kubenswrapper[4778]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b6h7h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29555352-q7fvr_openshift-infra(9f210efd-2ac0-4b67-89c5-fcd9f52f6e01): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled Mar 12 13:13:33 crc 
kubenswrapper[4778]: > logger="UnhandledError" Mar 12 13:13:33 crc kubenswrapper[4778]: E0312 13:13:33.930003 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-infra/auto-csr-approver-29555352-q7fvr" podUID="9f210efd-2ac0-4b67-89c5-fcd9f52f6e01" Mar 12 13:13:33 crc kubenswrapper[4778]: I0312 13:13:33.943765 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b330900c-c52a-4e88-a2d2-38e34f837004-config\") pod \"b330900c-c52a-4e88-a2d2-38e34f837004\" (UID: \"b330900c-c52a-4e88-a2d2-38e34f837004\") " Mar 12 13:13:33 crc kubenswrapper[4778]: I0312 13:13:33.943870 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b330900c-c52a-4e88-a2d2-38e34f837004-serving-cert\") pod \"b330900c-c52a-4e88-a2d2-38e34f837004\" (UID: \"b330900c-c52a-4e88-a2d2-38e34f837004\") " Mar 12 13:13:33 crc kubenswrapper[4778]: I0312 13:13:33.943949 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b330900c-c52a-4e88-a2d2-38e34f837004-proxy-ca-bundles\") pod \"b330900c-c52a-4e88-a2d2-38e34f837004\" (UID: \"b330900c-c52a-4e88-a2d2-38e34f837004\") " Mar 12 13:13:33 crc kubenswrapper[4778]: I0312 13:13:33.943967 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lwnl\" (UniqueName: \"kubernetes.io/projected/b330900c-c52a-4e88-a2d2-38e34f837004-kube-api-access-4lwnl\") pod \"b330900c-c52a-4e88-a2d2-38e34f837004\" (UID: \"b330900c-c52a-4e88-a2d2-38e34f837004\") " Mar 12 13:13:33 crc kubenswrapper[4778]: I0312 13:13:33.943984 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b330900c-c52a-4e88-a2d2-38e34f837004-client-ca\") pod \"b330900c-c52a-4e88-a2d2-38e34f837004\" (UID: \"b330900c-c52a-4e88-a2d2-38e34f837004\") " Mar 12 13:13:33 crc kubenswrapper[4778]: I0312 13:13:33.944134 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwdxl\" (UniqueName: \"kubernetes.io/projected/0fdd5690-0e80-4317-9e3a-8478f09ea1a8-kube-api-access-zwdxl\") pod \"controller-manager-d44c8b88d-jx574\" (UID: \"0fdd5690-0e80-4317-9e3a-8478f09ea1a8\") " pod="openshift-controller-manager/controller-manager-d44c8b88d-jx574" Mar 12 13:13:33 crc kubenswrapper[4778]: I0312 13:13:33.944211 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0fdd5690-0e80-4317-9e3a-8478f09ea1a8-serving-cert\") pod \"controller-manager-d44c8b88d-jx574\" (UID: \"0fdd5690-0e80-4317-9e3a-8478f09ea1a8\") " pod="openshift-controller-manager/controller-manager-d44c8b88d-jx574" Mar 12 13:13:33 crc kubenswrapper[4778]: I0312 13:13:33.944233 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fdd5690-0e80-4317-9e3a-8478f09ea1a8-config\") pod \"controller-manager-d44c8b88d-jx574\" (UID: \"0fdd5690-0e80-4317-9e3a-8478f09ea1a8\") " pod="openshift-controller-manager/controller-manager-d44c8b88d-jx574" Mar 12 13:13:33 crc kubenswrapper[4778]: I0312 13:13:33.944261 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0fdd5690-0e80-4317-9e3a-8478f09ea1a8-client-ca\") pod \"controller-manager-d44c8b88d-jx574\" (UID: \"0fdd5690-0e80-4317-9e3a-8478f09ea1a8\") " pod="openshift-controller-manager/controller-manager-d44c8b88d-jx574" Mar 12 13:13:33 crc kubenswrapper[4778]: I0312 
13:13:33.944303 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0fdd5690-0e80-4317-9e3a-8478f09ea1a8-proxy-ca-bundles\") pod \"controller-manager-d44c8b88d-jx574\" (UID: \"0fdd5690-0e80-4317-9e3a-8478f09ea1a8\") " pod="openshift-controller-manager/controller-manager-d44c8b88d-jx574" Mar 12 13:13:33 crc kubenswrapper[4778]: I0312 13:13:33.946142 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b330900c-c52a-4e88-a2d2-38e34f837004-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b330900c-c52a-4e88-a2d2-38e34f837004" (UID: "b330900c-c52a-4e88-a2d2-38e34f837004"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:13:33 crc kubenswrapper[4778]: I0312 13:13:33.948768 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b330900c-c52a-4e88-a2d2-38e34f837004-client-ca" (OuterVolumeSpecName: "client-ca") pod "b330900c-c52a-4e88-a2d2-38e34f837004" (UID: "b330900c-c52a-4e88-a2d2-38e34f837004"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:13:33 crc kubenswrapper[4778]: I0312 13:13:33.951505 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b330900c-c52a-4e88-a2d2-38e34f837004-config" (OuterVolumeSpecName: "config") pod "b330900c-c52a-4e88-a2d2-38e34f837004" (UID: "b330900c-c52a-4e88-a2d2-38e34f837004"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:13:33 crc kubenswrapper[4778]: I0312 13:13:33.977311 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-887dfdc8b-vh62w"] Mar 12 13:13:33 crc kubenswrapper[4778]: I0312 13:13:33.978899 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b330900c-c52a-4e88-a2d2-38e34f837004-kube-api-access-4lwnl" (OuterVolumeSpecName: "kube-api-access-4lwnl") pod "b330900c-c52a-4e88-a2d2-38e34f837004" (UID: "b330900c-c52a-4e88-a2d2-38e34f837004"). InnerVolumeSpecName "kube-api-access-4lwnl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:13:33 crc kubenswrapper[4778]: I0312 13:13:33.980457 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b330900c-c52a-4e88-a2d2-38e34f837004-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b330900c-c52a-4e88-a2d2-38e34f837004" (UID: "b330900c-c52a-4e88-a2d2-38e34f837004"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:13:34 crc kubenswrapper[4778]: I0312 13:13:34.045712 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0fdd5690-0e80-4317-9e3a-8478f09ea1a8-serving-cert\") pod \"controller-manager-d44c8b88d-jx574\" (UID: \"0fdd5690-0e80-4317-9e3a-8478f09ea1a8\") " pod="openshift-controller-manager/controller-manager-d44c8b88d-jx574" Mar 12 13:13:34 crc kubenswrapper[4778]: I0312 13:13:34.046186 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fdd5690-0e80-4317-9e3a-8478f09ea1a8-config\") pod \"controller-manager-d44c8b88d-jx574\" (UID: \"0fdd5690-0e80-4317-9e3a-8478f09ea1a8\") " pod="openshift-controller-manager/controller-manager-d44c8b88d-jx574" Mar 12 13:13:34 crc kubenswrapper[4778]: I0312 13:13:34.046249 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0fdd5690-0e80-4317-9e3a-8478f09ea1a8-client-ca\") pod \"controller-manager-d44c8b88d-jx574\" (UID: \"0fdd5690-0e80-4317-9e3a-8478f09ea1a8\") " pod="openshift-controller-manager/controller-manager-d44c8b88d-jx574" Mar 12 13:13:34 crc kubenswrapper[4778]: I0312 13:13:34.046281 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0fdd5690-0e80-4317-9e3a-8478f09ea1a8-proxy-ca-bundles\") pod \"controller-manager-d44c8b88d-jx574\" (UID: \"0fdd5690-0e80-4317-9e3a-8478f09ea1a8\") " pod="openshift-controller-manager/controller-manager-d44c8b88d-jx574" Mar 12 13:13:34 crc kubenswrapper[4778]: I0312 13:13:34.046314 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwdxl\" (UniqueName: \"kubernetes.io/projected/0fdd5690-0e80-4317-9e3a-8478f09ea1a8-kube-api-access-zwdxl\") pod 
\"controller-manager-d44c8b88d-jx574\" (UID: \"0fdd5690-0e80-4317-9e3a-8478f09ea1a8\") " pod="openshift-controller-manager/controller-manager-d44c8b88d-jx574" Mar 12 13:13:34 crc kubenswrapper[4778]: I0312 13:13:34.046357 4778 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b330900c-c52a-4e88-a2d2-38e34f837004-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 12 13:13:34 crc kubenswrapper[4778]: I0312 13:13:34.046367 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lwnl\" (UniqueName: \"kubernetes.io/projected/b330900c-c52a-4e88-a2d2-38e34f837004-kube-api-access-4lwnl\") on node \"crc\" DevicePath \"\"" Mar 12 13:13:34 crc kubenswrapper[4778]: I0312 13:13:34.046379 4778 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b330900c-c52a-4e88-a2d2-38e34f837004-client-ca\") on node \"crc\" DevicePath \"\"" Mar 12 13:13:34 crc kubenswrapper[4778]: I0312 13:13:34.046387 4778 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b330900c-c52a-4e88-a2d2-38e34f837004-config\") on node \"crc\" DevicePath \"\"" Mar 12 13:13:34 crc kubenswrapper[4778]: I0312 13:13:34.046397 4778 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b330900c-c52a-4e88-a2d2-38e34f837004-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 13:13:34 crc kubenswrapper[4778]: I0312 13:13:34.048075 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fdd5690-0e80-4317-9e3a-8478f09ea1a8-config\") pod \"controller-manager-d44c8b88d-jx574\" (UID: \"0fdd5690-0e80-4317-9e3a-8478f09ea1a8\") " pod="openshift-controller-manager/controller-manager-d44c8b88d-jx574" Mar 12 13:13:34 crc kubenswrapper[4778]: I0312 13:13:34.048258 4778 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0fdd5690-0e80-4317-9e3a-8478f09ea1a8-client-ca\") pod \"controller-manager-d44c8b88d-jx574\" (UID: \"0fdd5690-0e80-4317-9e3a-8478f09ea1a8\") " pod="openshift-controller-manager/controller-manager-d44c8b88d-jx574" Mar 12 13:13:34 crc kubenswrapper[4778]: I0312 13:13:34.049117 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0fdd5690-0e80-4317-9e3a-8478f09ea1a8-proxy-ca-bundles\") pod \"controller-manager-d44c8b88d-jx574\" (UID: \"0fdd5690-0e80-4317-9e3a-8478f09ea1a8\") " pod="openshift-controller-manager/controller-manager-d44c8b88d-jx574" Mar 12 13:13:34 crc kubenswrapper[4778]: I0312 13:13:34.049644 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0fdd5690-0e80-4317-9e3a-8478f09ea1a8-serving-cert\") pod \"controller-manager-d44c8b88d-jx574\" (UID: \"0fdd5690-0e80-4317-9e3a-8478f09ea1a8\") " pod="openshift-controller-manager/controller-manager-d44c8b88d-jx574" Mar 12 13:13:34 crc kubenswrapper[4778]: I0312 13:13:34.076462 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwdxl\" (UniqueName: \"kubernetes.io/projected/0fdd5690-0e80-4317-9e3a-8478f09ea1a8-kube-api-access-zwdxl\") pod \"controller-manager-d44c8b88d-jx574\" (UID: \"0fdd5690-0e80-4317-9e3a-8478f09ea1a8\") " pod="openshift-controller-manager/controller-manager-d44c8b88d-jx574" Mar 12 13:13:34 crc kubenswrapper[4778]: I0312 13:13:34.216169 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d44c8b88d-jx574" Mar 12 13:13:34 crc kubenswrapper[4778]: I0312 13:13:34.315786 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b56f5b6c6-7q5xm" event={"ID":"b330900c-c52a-4e88-a2d2-38e34f837004","Type":"ContainerDied","Data":"17004ca7182405d5ca539723da4c058b3a73eeda64b060c4ba9deebe55691d77"} Mar 12 13:13:34 crc kubenswrapper[4778]: I0312 13:13:34.315862 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7b56f5b6c6-7q5xm" Mar 12 13:13:34 crc kubenswrapper[4778]: E0312 13:13:34.317663 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29555352-q7fvr" podUID="9f210efd-2ac0-4b67-89c5-fcd9f52f6e01" Mar 12 13:13:34 crc kubenswrapper[4778]: I0312 13:13:34.351618 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7b56f5b6c6-7q5xm"] Mar 12 13:13:34 crc kubenswrapper[4778]: I0312 13:13:34.354915 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7b56f5b6c6-7q5xm"] Mar 12 13:13:35 crc kubenswrapper[4778]: I0312 13:13:35.408247 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 12 13:13:35 crc kubenswrapper[4778]: I0312 13:13:35.408941 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 12 13:13:35 crc kubenswrapper[4778]: I0312 13:13:35.422411 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 12 13:13:35 crc kubenswrapper[4778]: I0312 13:13:35.464909 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a868c6a4-19ec-46be-a0af-be25b1049ff3-kube-api-access\") pod \"installer-9-crc\" (UID: \"a868c6a4-19ec-46be-a0af-be25b1049ff3\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 12 13:13:35 crc kubenswrapper[4778]: I0312 13:13:35.464967 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a868c6a4-19ec-46be-a0af-be25b1049ff3-kubelet-dir\") pod \"installer-9-crc\" (UID: \"a868c6a4-19ec-46be-a0af-be25b1049ff3\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 12 13:13:35 crc kubenswrapper[4778]: I0312 13:13:35.465036 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a868c6a4-19ec-46be-a0af-be25b1049ff3-var-lock\") pod \"installer-9-crc\" (UID: \"a868c6a4-19ec-46be-a0af-be25b1049ff3\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 12 13:13:35 crc kubenswrapper[4778]: I0312 13:13:35.565994 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a868c6a4-19ec-46be-a0af-be25b1049ff3-var-lock\") pod \"installer-9-crc\" (UID: \"a868c6a4-19ec-46be-a0af-be25b1049ff3\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 12 13:13:35 crc kubenswrapper[4778]: I0312 13:13:35.566275 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/a868c6a4-19ec-46be-a0af-be25b1049ff3-kube-api-access\") pod \"installer-9-crc\" (UID: \"a868c6a4-19ec-46be-a0af-be25b1049ff3\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 12 13:13:35 crc kubenswrapper[4778]: I0312 13:13:35.566116 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a868c6a4-19ec-46be-a0af-be25b1049ff3-var-lock\") pod \"installer-9-crc\" (UID: \"a868c6a4-19ec-46be-a0af-be25b1049ff3\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 12 13:13:35 crc kubenswrapper[4778]: I0312 13:13:35.566315 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a868c6a4-19ec-46be-a0af-be25b1049ff3-kubelet-dir\") pod \"installer-9-crc\" (UID: \"a868c6a4-19ec-46be-a0af-be25b1049ff3\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 12 13:13:35 crc kubenswrapper[4778]: I0312 13:13:35.566375 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a868c6a4-19ec-46be-a0af-be25b1049ff3-kubelet-dir\") pod \"installer-9-crc\" (UID: \"a868c6a4-19ec-46be-a0af-be25b1049ff3\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 12 13:13:35 crc kubenswrapper[4778]: I0312 13:13:35.600039 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a868c6a4-19ec-46be-a0af-be25b1049ff3-kube-api-access\") pod \"installer-9-crc\" (UID: \"a868c6a4-19ec-46be-a0af-be25b1049ff3\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 12 13:13:35 crc kubenswrapper[4778]: I0312 13:13:35.741699 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 12 13:13:36 crc kubenswrapper[4778]: I0312 13:13:36.262148 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b330900c-c52a-4e88-a2d2-38e34f837004" path="/var/lib/kubelet/pods/b330900c-c52a-4e88-a2d2-38e34f837004/volumes" Mar 12 13:13:41 crc kubenswrapper[4778]: I0312 13:13:41.815330 4778 scope.go:117] "RemoveContainer" containerID="af0097b4c8ffcf21c4d0f3d542c30c13b992c9bb5a36537354858b4fa3539991" Mar 12 13:13:41 crc kubenswrapper[4778]: E0312 13:13:41.937010 4778 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Mar 12 13:13:41 crc kubenswrapper[4778]: E0312 13:13:41.937471 4778 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cg6lb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-76s88_openshift-marketplace(34ecd758-517c-455a-939a-7eb6d3546854): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 12 13:13:41 crc kubenswrapper[4778]: E0312 13:13:41.938666 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-76s88" podUID="34ecd758-517c-455a-939a-7eb6d3546854" Mar 12 13:13:42 crc 
kubenswrapper[4778]: I0312 13:13:42.254703 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-rz9vw"] Mar 12 13:13:43 crc kubenswrapper[4778]: E0312 13:13:43.306006 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-76s88" podUID="34ecd758-517c-455a-939a-7eb6d3546854" Mar 12 13:13:43 crc kubenswrapper[4778]: E0312 13:13:43.398776 4778 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Mar 12 13:13:43 crc kubenswrapper[4778]: E0312 13:13:43.398950 4778 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kch8z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-8xksl_openshift-marketplace(de4557b4-7957-47a0-8c42-845be1fa0f32): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 12 13:13:43 crc kubenswrapper[4778]: E0312 13:13:43.400381 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-8xksl" podUID="de4557b4-7957-47a0-8c42-845be1fa0f32" Mar 12 13:13:45 crc 
kubenswrapper[4778]: E0312 13:13:45.064447 4778 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Mar 12 13:13:45 crc kubenswrapper[4778]: E0312 13:13:45.064665 4778 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dxz76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-l8n9b_openshift-marketplace(c27afe2a-3402-49f9-b985-45fe67e40d22): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 12 13:13:45 crc kubenswrapper[4778]: E0312 13:13:45.065856 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-l8n9b" podUID="c27afe2a-3402-49f9-b985-45fe67e40d22" Mar 12 13:13:45 crc kubenswrapper[4778]: E0312 13:13:45.273955 4778 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Mar 12 13:13:45 crc kubenswrapper[4778]: E0312 13:13:45.274204 4778 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mpfz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-5s5vs_openshift-marketplace(f438f2a3-60c0-4554-a49b-030545f8139c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 12 13:13:45 crc kubenswrapper[4778]: E0312 13:13:45.275848 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-5s5vs" podUID="f438f2a3-60c0-4554-a49b-030545f8139c" Mar 12 13:13:46 crc 
kubenswrapper[4778]: E0312 13:13:46.166654 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-8xksl" podUID="de4557b4-7957-47a0-8c42-845be1fa0f32" Mar 12 13:13:46 crc kubenswrapper[4778]: E0312 13:13:46.172643 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-5s5vs" podUID="f438f2a3-60c0-4554-a49b-030545f8139c" Mar 12 13:13:46 crc kubenswrapper[4778]: E0312 13:13:46.173128 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-l8n9b" podUID="c27afe2a-3402-49f9-b985-45fe67e40d22" Mar 12 13:13:46 crc kubenswrapper[4778]: W0312 13:13:46.176325 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-d04218a9594ee2e8723cf07ab5927db677dc447d864eaf55e7287a809c797b89 WatchSource:0}: Error finding container d04218a9594ee2e8723cf07ab5927db677dc447d864eaf55e7287a809c797b89: Status 404 returned error can't find the container with id d04218a9594ee2e8723cf07ab5927db677dc447d864eaf55e7287a809c797b89 Mar 12 13:13:46 crc kubenswrapper[4778]: E0312 13:13:46.282708 4778 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Mar 12 
13:13:46 crc kubenswrapper[4778]: E0312 13:13:46.282916 4778 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dq7jg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-sjk9p_openshift-marketplace(3b3fb69e-dd4f-4787-a207-4fe25106f9e7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 12 13:13:46 crc kubenswrapper[4778]: E0312 13:13:46.284677 4778 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-sjk9p" podUID="3b3fb69e-dd4f-4787-a207-4fe25106f9e7" Mar 12 13:13:46 crc kubenswrapper[4778]: I0312 13:13:46.389602 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"b57645a462ca0d3c7886fbff7aa76102f8895a353f74fc78b7de9ff180fc4427"} Mar 12 13:13:46 crc kubenswrapper[4778]: I0312 13:13:46.397904 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-rz9vw" event={"ID":"0b59b25a-3acc-4d06-b91d-575f45463520","Type":"ContainerStarted","Data":"36557cc8ad045b218ec496388cee2d6541a247487ec1e3da23520d22518f4d28"} Mar 12 13:13:46 crc kubenswrapper[4778]: I0312 13:13:46.400642 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"e7b95dbba219e1b176f09463e4a0afd5bb966d4ae2a38107f3e099cd2cd3b809"} Mar 12 13:13:46 crc kubenswrapper[4778]: I0312 13:13:46.403577 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"d04218a9594ee2e8723cf07ab5927db677dc447d864eaf55e7287a809c797b89"} Mar 12 13:13:46 crc kubenswrapper[4778]: E0312 13:13:46.407026 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/community-operators-sjk9p" podUID="3b3fb69e-dd4f-4787-a207-4fe25106f9e7" Mar 12 13:13:46 crc kubenswrapper[4778]: I0312 13:13:46.483667 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 12 13:13:46 crc kubenswrapper[4778]: W0312 13:13:46.500394 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poda868c6a4_19ec_46be_a0af_be25b1049ff3.slice/crio-67b06efe996403c5470e41a5f9a62a78fe522b551d7ec62d8302163676162a07 WatchSource:0}: Error finding container 67b06efe996403c5470e41a5f9a62a78fe522b551d7ec62d8302163676162a07: Status 404 returned error can't find the container with id 67b06efe996403c5470e41a5f9a62a78fe522b551d7ec62d8302163676162a07 Mar 12 13:13:46 crc kubenswrapper[4778]: E0312 13:13:46.511434 4778 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Mar 12 13:13:46 crc kubenswrapper[4778]: E0312 13:13:46.511631 4778 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n6mx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-qx9d8_openshift-marketplace(651601bd-18fe-4ca1-9c61-481ca568d022): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 12 13:13:46 crc kubenswrapper[4778]: E0312 13:13:46.514002 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-qx9d8" podUID="651601bd-18fe-4ca1-9c61-481ca568d022" Mar 12 13:13:46 crc 
kubenswrapper[4778]: I0312 13:13:46.544743 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d44c8b88d-jx574"] Mar 12 13:13:46 crc kubenswrapper[4778]: W0312 13:13:46.561487 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0fdd5690_0e80_4317_9e3a_8478f09ea1a8.slice/crio-dd0b16a4e92ddcc0e1151ae83fdba0245e8931b7997e602261eaa93e0a982440 WatchSource:0}: Error finding container dd0b16a4e92ddcc0e1151ae83fdba0245e8931b7997e602261eaa93e0a982440: Status 404 returned error can't find the container with id dd0b16a4e92ddcc0e1151ae83fdba0245e8931b7997e602261eaa93e0a982440 Mar 12 13:13:46 crc kubenswrapper[4778]: I0312 13:13:46.634531 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 12 13:13:46 crc kubenswrapper[4778]: I0312 13:13:46.643112 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-887dfdc8b-vh62w"] Mar 12 13:13:46 crc kubenswrapper[4778]: W0312 13:13:46.651970 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod3834d547_946f_4567_b68b_5305589c5573.slice/crio-a78d297f108842c664691a098601755963d8c21b84c611f8a3c408f8ad8e233d WatchSource:0}: Error finding container a78d297f108842c664691a098601755963d8c21b84c611f8a3c408f8ad8e233d: Status 404 returned error can't find the container with id a78d297f108842c664691a098601755963d8c21b84c611f8a3c408f8ad8e233d Mar 12 13:13:46 crc kubenswrapper[4778]: W0312 13:13:46.655683 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85126e1b_2b92_4e55_b847_d55f8b1b387e.slice/crio-d09bd42fb521e8e0212e9f4502cde76e14304082c90f8d8f8186fe008442beee WatchSource:0}: Error finding container d09bd42fb521e8e0212e9f4502cde76e14304082c90f8d8f8186fe008442beee: 
Status 404 returned error can't find the container with id d09bd42fb521e8e0212e9f4502cde76e14304082c90f8d8f8186fe008442beee Mar 12 13:13:46 crc kubenswrapper[4778]: E0312 13:13:46.763983 4778 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Mar 12 13:13:46 crc kubenswrapper[4778]: E0312 13:13:46.764150 4778 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zzdp9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]Vol
umeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-khr6h_openshift-marketplace(1d185732-cd6b-44c6-b4db-ee9ade00c683): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 12 13:13:46 crc kubenswrapper[4778]: E0312 13:13:46.765558 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-khr6h" podUID="1d185732-cd6b-44c6-b4db-ee9ade00c683" Mar 12 13:13:46 crc kubenswrapper[4778]: E0312 13:13:46.874231 4778 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Mar 12 13:13:46 crc kubenswrapper[4778]: E0312 13:13:46.874389 4778 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gjm7s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-rtjz5_openshift-marketplace(b9bef112-9bef-4ce2-abd8-054b4d671658): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 12 13:13:46 crc kubenswrapper[4778]: E0312 13:13:46.875772 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-rtjz5" podUID="b9bef112-9bef-4ce2-abd8-054b4d671658" Mar 12 13:13:47 crc 
kubenswrapper[4778]: I0312 13:13:47.410866 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"3834d547-946f-4567-b68b-5305589c5573","Type":"ContainerStarted","Data":"85369356d1ab8bd11065ca773167d0e1195b4fa7747a6ade645e0d61aacbd264"} Mar 12 13:13:47 crc kubenswrapper[4778]: I0312 13:13:47.411224 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"3834d547-946f-4567-b68b-5305589c5573","Type":"ContainerStarted","Data":"a78d297f108842c664691a098601755963d8c21b84c611f8a3c408f8ad8e233d"} Mar 12 13:13:47 crc kubenswrapper[4778]: I0312 13:13:47.412117 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"c86f994ca77821cf425217729aab1099a7f4898a6c96f51e520a438bc332ecdd"} Mar 12 13:13:47 crc kubenswrapper[4778]: I0312 13:13:47.412151 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 13:13:47 crc kubenswrapper[4778]: I0312 13:13:47.413753 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-rz9vw" event={"ID":"0b59b25a-3acc-4d06-b91d-575f45463520","Type":"ContainerStarted","Data":"ed9b865a7a64df4df0449be2c8d2e0dba2b2e0d5df9dac000ddb65d397b37f8b"} Mar 12 13:13:47 crc kubenswrapper[4778]: I0312 13:13:47.413782 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-rz9vw" event={"ID":"0b59b25a-3acc-4d06-b91d-575f45463520","Type":"ContainerStarted","Data":"c9663cee45995acefceb1213bb8b5f6c578e1c342fb22d2b298df0e1a4b6f7ea"} Mar 12 13:13:47 crc kubenswrapper[4778]: I0312 13:13:47.414721 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" 
event={"ID":"a868c6a4-19ec-46be-a0af-be25b1049ff3","Type":"ContainerStarted","Data":"35dc89f42df73eafd54f7518d380b5b4f6934732de9c6dd0209b64b9345aa66c"} Mar 12 13:13:47 crc kubenswrapper[4778]: I0312 13:13:47.414751 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"a868c6a4-19ec-46be-a0af-be25b1049ff3","Type":"ContainerStarted","Data":"67b06efe996403c5470e41a5f9a62a78fe522b551d7ec62d8302163676162a07"} Mar 12 13:13:47 crc kubenswrapper[4778]: I0312 13:13:47.415849 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-887dfdc8b-vh62w" event={"ID":"85126e1b-2b92-4e55-b847-d55f8b1b387e","Type":"ContainerStarted","Data":"787bc707ec53d701471888ee4a1be3c5289a94da6f7f14cfd6a0ac8a2a7cbd1b"} Mar 12 13:13:47 crc kubenswrapper[4778]: I0312 13:13:47.415879 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-887dfdc8b-vh62w" event={"ID":"85126e1b-2b92-4e55-b847-d55f8b1b387e","Type":"ContainerStarted","Data":"d09bd42fb521e8e0212e9f4502cde76e14304082c90f8d8f8186fe008442beee"} Mar 12 13:13:47 crc kubenswrapper[4778]: I0312 13:13:47.415881 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-887dfdc8b-vh62w" podUID="85126e1b-2b92-4e55-b847-d55f8b1b387e" containerName="route-controller-manager" containerID="cri-o://787bc707ec53d701471888ee4a1be3c5289a94da6f7f14cfd6a0ac8a2a7cbd1b" gracePeriod=30 Mar 12 13:13:47 crc kubenswrapper[4778]: I0312 13:13:47.415994 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-887dfdc8b-vh62w" Mar 12 13:13:47 crc kubenswrapper[4778]: I0312 13:13:47.418063 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"c2182121d64bdd93ec58bbd5463eeb9f34a0319fc75e80286d85a3e7087bbd62"} Mar 12 13:13:47 crc kubenswrapper[4778]: I0312 13:13:47.420415 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-887dfdc8b-vh62w" Mar 12 13:13:47 crc kubenswrapper[4778]: I0312 13:13:47.425890 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"9b1b3fd04d6e452de07ce84a8d13b430669a8c1b705e59aa538d8f1a53cc86b9"} Mar 12 13:13:47 crc kubenswrapper[4778]: I0312 13:13:47.427466 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d44c8b88d-jx574" event={"ID":"0fdd5690-0e80-4317-9e3a-8478f09ea1a8","Type":"ContainerStarted","Data":"de9da8336c3c506af0ce9ebe2cdd9483aff7c9248c270e308d85726473f6d398"} Mar 12 13:13:47 crc kubenswrapper[4778]: I0312 13:13:47.427493 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d44c8b88d-jx574" event={"ID":"0fdd5690-0e80-4317-9e3a-8478f09ea1a8","Type":"ContainerStarted","Data":"dd0b16a4e92ddcc0e1151ae83fdba0245e8931b7997e602261eaa93e0a982440"} Mar 12 13:13:47 crc kubenswrapper[4778]: I0312 13:13:47.427711 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-d44c8b88d-jx574" Mar 12 13:13:47 crc kubenswrapper[4778]: E0312 13:13:47.428842 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-khr6h" podUID="1d185732-cd6b-44c6-b4db-ee9ade00c683" Mar 12 
13:13:47 crc kubenswrapper[4778]: E0312 13:13:47.428955 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-rtjz5" podUID="b9bef112-9bef-4ce2-abd8-054b4d671658" Mar 12 13:13:47 crc kubenswrapper[4778]: E0312 13:13:47.431838 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qx9d8" podUID="651601bd-18fe-4ca1-9c61-481ca568d022" Mar 12 13:13:47 crc kubenswrapper[4778]: I0312 13:13:47.432844 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-d44c8b88d-jx574" Mar 12 13:13:47 crc kubenswrapper[4778]: I0312 13:13:47.438856 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=17.438835269 podStartE2EDuration="17.438835269s" podCreationTimestamp="2026-03-12 13:13:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:13:47.433692824 +0000 UTC m=+245.882388220" watchObservedRunningTime="2026-03-12 13:13:47.438835269 +0000 UTC m=+245.887530665" Mar 12 13:13:47 crc kubenswrapper[4778]: I0312 13:13:47.468268 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=12.468249518 podStartE2EDuration="12.468249518s" podCreationTimestamp="2026-03-12 13:13:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-12 13:13:47.466735215 +0000 UTC m=+245.915430621" watchObservedRunningTime="2026-03-12 13:13:47.468249518 +0000 UTC m=+245.916944924" Mar 12 13:13:47 crc kubenswrapper[4778]: I0312 13:13:47.470028 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-rz9vw" podStartSLOduration=196.470017948 podStartE2EDuration="3m16.470017948s" podCreationTimestamp="2026-03-12 13:10:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:13:47.45306149 +0000 UTC m=+245.901756886" watchObservedRunningTime="2026-03-12 13:13:47.470017948 +0000 UTC m=+245.918713344" Mar 12 13:13:47 crc kubenswrapper[4778]: I0312 13:13:47.569787 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-887dfdc8b-vh62w" podStartSLOduration=34.569762129 podStartE2EDuration="34.569762129s" podCreationTimestamp="2026-03-12 13:13:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:13:47.56943292 +0000 UTC m=+246.018128316" watchObservedRunningTime="2026-03-12 13:13:47.569762129 +0000 UTC m=+246.018457525" Mar 12 13:13:47 crc kubenswrapper[4778]: I0312 13:13:47.614374 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-d44c8b88d-jx574" podStartSLOduration=14.614335035 podStartE2EDuration="14.614335035s" podCreationTimestamp="2026-03-12 13:13:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:13:47.592892001 +0000 UTC m=+246.041587397" watchObservedRunningTime="2026-03-12 13:13:47.614335035 +0000 UTC m=+246.063030431" Mar 12 13:13:47 crc kubenswrapper[4778]: I0312 
13:13:47.853328 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-887dfdc8b-vh62w" Mar 12 13:13:47 crc kubenswrapper[4778]: I0312 13:13:47.883830 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f4dd5cc6-ppsx5"] Mar 12 13:13:47 crc kubenswrapper[4778]: E0312 13:13:47.884075 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85126e1b-2b92-4e55-b847-d55f8b1b387e" containerName="route-controller-manager" Mar 12 13:13:47 crc kubenswrapper[4778]: I0312 13:13:47.884087 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="85126e1b-2b92-4e55-b847-d55f8b1b387e" containerName="route-controller-manager" Mar 12 13:13:47 crc kubenswrapper[4778]: I0312 13:13:47.884183 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="85126e1b-2b92-4e55-b847-d55f8b1b387e" containerName="route-controller-manager" Mar 12 13:13:47 crc kubenswrapper[4778]: I0312 13:13:47.884793 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f4dd5cc6-ppsx5" Mar 12 13:13:47 crc kubenswrapper[4778]: I0312 13:13:47.895733 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f4dd5cc6-ppsx5"] Mar 12 13:13:47 crc kubenswrapper[4778]: I0312 13:13:47.950297 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/85126e1b-2b92-4e55-b847-d55f8b1b387e-client-ca\") pod \"85126e1b-2b92-4e55-b847-d55f8b1b387e\" (UID: \"85126e1b-2b92-4e55-b847-d55f8b1b387e\") " Mar 12 13:13:47 crc kubenswrapper[4778]: I0312 13:13:47.950346 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6452d\" (UniqueName: \"kubernetes.io/projected/85126e1b-2b92-4e55-b847-d55f8b1b387e-kube-api-access-6452d\") pod \"85126e1b-2b92-4e55-b847-d55f8b1b387e\" (UID: \"85126e1b-2b92-4e55-b847-d55f8b1b387e\") " Mar 12 13:13:47 crc kubenswrapper[4778]: I0312 13:13:47.950416 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85126e1b-2b92-4e55-b847-d55f8b1b387e-config\") pod \"85126e1b-2b92-4e55-b847-d55f8b1b387e\" (UID: \"85126e1b-2b92-4e55-b847-d55f8b1b387e\") " Mar 12 13:13:47 crc kubenswrapper[4778]: I0312 13:13:47.950433 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85126e1b-2b92-4e55-b847-d55f8b1b387e-serving-cert\") pod \"85126e1b-2b92-4e55-b847-d55f8b1b387e\" (UID: \"85126e1b-2b92-4e55-b847-d55f8b1b387e\") " Mar 12 13:13:47 crc kubenswrapper[4778]: I0312 13:13:47.950596 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/badeb3df-9c56-4aa2-af6f-aba14c213fcc-serving-cert\") pod 
\"route-controller-manager-6f4dd5cc6-ppsx5\" (UID: \"badeb3df-9c56-4aa2-af6f-aba14c213fcc\") " pod="openshift-route-controller-manager/route-controller-manager-6f4dd5cc6-ppsx5" Mar 12 13:13:47 crc kubenswrapper[4778]: I0312 13:13:47.950723 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/badeb3df-9c56-4aa2-af6f-aba14c213fcc-config\") pod \"route-controller-manager-6f4dd5cc6-ppsx5\" (UID: \"badeb3df-9c56-4aa2-af6f-aba14c213fcc\") " pod="openshift-route-controller-manager/route-controller-manager-6f4dd5cc6-ppsx5" Mar 12 13:13:47 crc kubenswrapper[4778]: I0312 13:13:47.950752 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcsqr\" (UniqueName: \"kubernetes.io/projected/badeb3df-9c56-4aa2-af6f-aba14c213fcc-kube-api-access-tcsqr\") pod \"route-controller-manager-6f4dd5cc6-ppsx5\" (UID: \"badeb3df-9c56-4aa2-af6f-aba14c213fcc\") " pod="openshift-route-controller-manager/route-controller-manager-6f4dd5cc6-ppsx5" Mar 12 13:13:47 crc kubenswrapper[4778]: I0312 13:13:47.950772 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/badeb3df-9c56-4aa2-af6f-aba14c213fcc-client-ca\") pod \"route-controller-manager-6f4dd5cc6-ppsx5\" (UID: \"badeb3df-9c56-4aa2-af6f-aba14c213fcc\") " pod="openshift-route-controller-manager/route-controller-manager-6f4dd5cc6-ppsx5" Mar 12 13:13:47 crc kubenswrapper[4778]: I0312 13:13:47.950990 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85126e1b-2b92-4e55-b847-d55f8b1b387e-client-ca" (OuterVolumeSpecName: "client-ca") pod "85126e1b-2b92-4e55-b847-d55f8b1b387e" (UID: "85126e1b-2b92-4e55-b847-d55f8b1b387e"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:13:47 crc kubenswrapper[4778]: I0312 13:13:47.951341 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85126e1b-2b92-4e55-b847-d55f8b1b387e-config" (OuterVolumeSpecName: "config") pod "85126e1b-2b92-4e55-b847-d55f8b1b387e" (UID: "85126e1b-2b92-4e55-b847-d55f8b1b387e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:13:47 crc kubenswrapper[4778]: I0312 13:13:47.955534 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85126e1b-2b92-4e55-b847-d55f8b1b387e-kube-api-access-6452d" (OuterVolumeSpecName: "kube-api-access-6452d") pod "85126e1b-2b92-4e55-b847-d55f8b1b387e" (UID: "85126e1b-2b92-4e55-b847-d55f8b1b387e"). InnerVolumeSpecName "kube-api-access-6452d". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:13:47 crc kubenswrapper[4778]: I0312 13:13:47.957011 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85126e1b-2b92-4e55-b847-d55f8b1b387e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "85126e1b-2b92-4e55-b847-d55f8b1b387e" (UID: "85126e1b-2b92-4e55-b847-d55f8b1b387e"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:13:48 crc kubenswrapper[4778]: I0312 13:13:48.051791 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcsqr\" (UniqueName: \"kubernetes.io/projected/badeb3df-9c56-4aa2-af6f-aba14c213fcc-kube-api-access-tcsqr\") pod \"route-controller-manager-6f4dd5cc6-ppsx5\" (UID: \"badeb3df-9c56-4aa2-af6f-aba14c213fcc\") " pod="openshift-route-controller-manager/route-controller-manager-6f4dd5cc6-ppsx5" Mar 12 13:13:48 crc kubenswrapper[4778]: I0312 13:13:48.051860 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/badeb3df-9c56-4aa2-af6f-aba14c213fcc-client-ca\") pod \"route-controller-manager-6f4dd5cc6-ppsx5\" (UID: \"badeb3df-9c56-4aa2-af6f-aba14c213fcc\") " pod="openshift-route-controller-manager/route-controller-manager-6f4dd5cc6-ppsx5" Mar 12 13:13:48 crc kubenswrapper[4778]: I0312 13:13:48.051918 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/badeb3df-9c56-4aa2-af6f-aba14c213fcc-serving-cert\") pod \"route-controller-manager-6f4dd5cc6-ppsx5\" (UID: \"badeb3df-9c56-4aa2-af6f-aba14c213fcc\") " pod="openshift-route-controller-manager/route-controller-manager-6f4dd5cc6-ppsx5" Mar 12 13:13:48 crc kubenswrapper[4778]: I0312 13:13:48.051970 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/badeb3df-9c56-4aa2-af6f-aba14c213fcc-config\") pod \"route-controller-manager-6f4dd5cc6-ppsx5\" (UID: \"badeb3df-9c56-4aa2-af6f-aba14c213fcc\") " pod="openshift-route-controller-manager/route-controller-manager-6f4dd5cc6-ppsx5" Mar 12 13:13:48 crc kubenswrapper[4778]: I0312 13:13:48.052013 4778 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/85126e1b-2b92-4e55-b847-d55f8b1b387e-client-ca\") on node \"crc\" DevicePath \"\"" Mar 12 13:13:48 crc kubenswrapper[4778]: I0312 13:13:48.052025 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6452d\" (UniqueName: \"kubernetes.io/projected/85126e1b-2b92-4e55-b847-d55f8b1b387e-kube-api-access-6452d\") on node \"crc\" DevicePath \"\"" Mar 12 13:13:48 crc kubenswrapper[4778]: I0312 13:13:48.052039 4778 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85126e1b-2b92-4e55-b847-d55f8b1b387e-config\") on node \"crc\" DevicePath \"\"" Mar 12 13:13:48 crc kubenswrapper[4778]: I0312 13:13:48.052050 4778 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85126e1b-2b92-4e55-b847-d55f8b1b387e-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 13:13:48 crc kubenswrapper[4778]: I0312 13:13:48.053373 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/badeb3df-9c56-4aa2-af6f-aba14c213fcc-config\") pod \"route-controller-manager-6f4dd5cc6-ppsx5\" (UID: \"badeb3df-9c56-4aa2-af6f-aba14c213fcc\") " pod="openshift-route-controller-manager/route-controller-manager-6f4dd5cc6-ppsx5" Mar 12 13:13:48 crc kubenswrapper[4778]: I0312 13:13:48.054678 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/badeb3df-9c56-4aa2-af6f-aba14c213fcc-client-ca\") pod \"route-controller-manager-6f4dd5cc6-ppsx5\" (UID: \"badeb3df-9c56-4aa2-af6f-aba14c213fcc\") " pod="openshift-route-controller-manager/route-controller-manager-6f4dd5cc6-ppsx5" Mar 12 13:13:48 crc kubenswrapper[4778]: I0312 13:13:48.057823 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/badeb3df-9c56-4aa2-af6f-aba14c213fcc-serving-cert\") pod 
\"route-controller-manager-6f4dd5cc6-ppsx5\" (UID: \"badeb3df-9c56-4aa2-af6f-aba14c213fcc\") " pod="openshift-route-controller-manager/route-controller-manager-6f4dd5cc6-ppsx5" Mar 12 13:13:48 crc kubenswrapper[4778]: I0312 13:13:48.075108 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcsqr\" (UniqueName: \"kubernetes.io/projected/badeb3df-9c56-4aa2-af6f-aba14c213fcc-kube-api-access-tcsqr\") pod \"route-controller-manager-6f4dd5cc6-ppsx5\" (UID: \"badeb3df-9c56-4aa2-af6f-aba14c213fcc\") " pod="openshift-route-controller-manager/route-controller-manager-6f4dd5cc6-ppsx5" Mar 12 13:13:48 crc kubenswrapper[4778]: I0312 13:13:48.207232 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f4dd5cc6-ppsx5" Mar 12 13:13:48 crc kubenswrapper[4778]: I0312 13:13:48.435075 4778 generic.go:334] "Generic (PLEG): container finished" podID="3834d547-946f-4567-b68b-5305589c5573" containerID="85369356d1ab8bd11065ca773167d0e1195b4fa7747a6ade645e0d61aacbd264" exitCode=0 Mar 12 13:13:48 crc kubenswrapper[4778]: I0312 13:13:48.435148 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"3834d547-946f-4567-b68b-5305589c5573","Type":"ContainerDied","Data":"85369356d1ab8bd11065ca773167d0e1195b4fa7747a6ade645e0d61aacbd264"} Mar 12 13:13:48 crc kubenswrapper[4778]: I0312 13:13:48.437481 4778 generic.go:334] "Generic (PLEG): container finished" podID="85126e1b-2b92-4e55-b847-d55f8b1b387e" containerID="787bc707ec53d701471888ee4a1be3c5289a94da6f7f14cfd6a0ac8a2a7cbd1b" exitCode=0 Mar 12 13:13:48 crc kubenswrapper[4778]: I0312 13:13:48.437572 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-887dfdc8b-vh62w" Mar 12 13:13:48 crc kubenswrapper[4778]: I0312 13:13:48.437632 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-887dfdc8b-vh62w" event={"ID":"85126e1b-2b92-4e55-b847-d55f8b1b387e","Type":"ContainerDied","Data":"787bc707ec53d701471888ee4a1be3c5289a94da6f7f14cfd6a0ac8a2a7cbd1b"} Mar 12 13:13:48 crc kubenswrapper[4778]: I0312 13:13:48.437669 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-887dfdc8b-vh62w" event={"ID":"85126e1b-2b92-4e55-b847-d55f8b1b387e","Type":"ContainerDied","Data":"d09bd42fb521e8e0212e9f4502cde76e14304082c90f8d8f8186fe008442beee"} Mar 12 13:13:48 crc kubenswrapper[4778]: I0312 13:13:48.437690 4778 scope.go:117] "RemoveContainer" containerID="787bc707ec53d701471888ee4a1be3c5289a94da6f7f14cfd6a0ac8a2a7cbd1b" Mar 12 13:13:48 crc kubenswrapper[4778]: I0312 13:13:48.465719 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-887dfdc8b-vh62w"] Mar 12 13:13:48 crc kubenswrapper[4778]: I0312 13:13:48.469711 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-887dfdc8b-vh62w"] Mar 12 13:13:48 crc kubenswrapper[4778]: I0312 13:13:48.469945 4778 scope.go:117] "RemoveContainer" containerID="787bc707ec53d701471888ee4a1be3c5289a94da6f7f14cfd6a0ac8a2a7cbd1b" Mar 12 13:13:48 crc kubenswrapper[4778]: E0312 13:13:48.470449 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"787bc707ec53d701471888ee4a1be3c5289a94da6f7f14cfd6a0ac8a2a7cbd1b\": container with ID starting with 787bc707ec53d701471888ee4a1be3c5289a94da6f7f14cfd6a0ac8a2a7cbd1b not found: ID does not exist" 
containerID="787bc707ec53d701471888ee4a1be3c5289a94da6f7f14cfd6a0ac8a2a7cbd1b" Mar 12 13:13:48 crc kubenswrapper[4778]: I0312 13:13:48.470475 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"787bc707ec53d701471888ee4a1be3c5289a94da6f7f14cfd6a0ac8a2a7cbd1b"} err="failed to get container status \"787bc707ec53d701471888ee4a1be3c5289a94da6f7f14cfd6a0ac8a2a7cbd1b\": rpc error: code = NotFound desc = could not find container \"787bc707ec53d701471888ee4a1be3c5289a94da6f7f14cfd6a0ac8a2a7cbd1b\": container with ID starting with 787bc707ec53d701471888ee4a1be3c5289a94da6f7f14cfd6a0ac8a2a7cbd1b not found: ID does not exist" Mar 12 13:13:48 crc kubenswrapper[4778]: I0312 13:13:48.666504 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f4dd5cc6-ppsx5"] Mar 12 13:13:49 crc kubenswrapper[4778]: I0312 13:13:49.180614 4778 csr.go:261] certificate signing request csr-twzvk is approved, waiting to be issued Mar 12 13:13:49 crc kubenswrapper[4778]: I0312 13:13:49.189952 4778 csr.go:257] certificate signing request csr-twzvk is issued Mar 12 13:13:49 crc kubenswrapper[4778]: I0312 13:13:49.447556 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f4dd5cc6-ppsx5" event={"ID":"badeb3df-9c56-4aa2-af6f-aba14c213fcc","Type":"ContainerStarted","Data":"6edfc1174eae36c4699c23b09d94a6801a70e404d52b1e50d4350988d1f6d371"} Mar 12 13:13:49 crc kubenswrapper[4778]: I0312 13:13:49.447610 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f4dd5cc6-ppsx5" event={"ID":"badeb3df-9c56-4aa2-af6f-aba14c213fcc","Type":"ContainerStarted","Data":"ad682c14c40bcfdf45bab0f4aae014cbaeacf6b49ffe857ff368861fb7bbc412"} Mar 12 13:13:49 crc kubenswrapper[4778]: I0312 13:13:49.447829 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-route-controller-manager/route-controller-manager-6f4dd5cc6-ppsx5" Mar 12 13:13:49 crc kubenswrapper[4778]: I0312 13:13:49.450292 4778 generic.go:334] "Generic (PLEG): container finished" podID="9f210efd-2ac0-4b67-89c5-fcd9f52f6e01" containerID="d6a4e00222817c0335bb85eb95073d869a129a695fed4bc12743392acf13e251" exitCode=0 Mar 12 13:13:49 crc kubenswrapper[4778]: I0312 13:13:49.450377 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555352-q7fvr" event={"ID":"9f210efd-2ac0-4b67-89c5-fcd9f52f6e01","Type":"ContainerDied","Data":"d6a4e00222817c0335bb85eb95073d869a129a695fed4bc12743392acf13e251"} Mar 12 13:13:49 crc kubenswrapper[4778]: I0312 13:13:49.455972 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6f4dd5cc6-ppsx5" Mar 12 13:13:49 crc kubenswrapper[4778]: I0312 13:13:49.487815 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6f4dd5cc6-ppsx5" podStartSLOduration=16.487793128 podStartE2EDuration="16.487793128s" podCreationTimestamp="2026-03-12 13:13:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:13:49.468025051 +0000 UTC m=+247.916720447" watchObservedRunningTime="2026-03-12 13:13:49.487793128 +0000 UTC m=+247.936488524" Mar 12 13:13:49 crc kubenswrapper[4778]: I0312 13:13:49.765983 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 12 13:13:49 crc kubenswrapper[4778]: I0312 13:13:49.905602 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3834d547-946f-4567-b68b-5305589c5573-kube-api-access\") pod \"3834d547-946f-4567-b68b-5305589c5573\" (UID: \"3834d547-946f-4567-b68b-5305589c5573\") " Mar 12 13:13:49 crc kubenswrapper[4778]: I0312 13:13:49.905705 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3834d547-946f-4567-b68b-5305589c5573-kubelet-dir\") pod \"3834d547-946f-4567-b68b-5305589c5573\" (UID: \"3834d547-946f-4567-b68b-5305589c5573\") " Mar 12 13:13:49 crc kubenswrapper[4778]: I0312 13:13:49.905806 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3834d547-946f-4567-b68b-5305589c5573-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3834d547-946f-4567-b68b-5305589c5573" (UID: "3834d547-946f-4567-b68b-5305589c5573"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 13:13:49 crc kubenswrapper[4778]: I0312 13:13:49.906153 4778 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3834d547-946f-4567-b68b-5305589c5573-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 12 13:13:49 crc kubenswrapper[4778]: I0312 13:13:49.912489 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3834d547-946f-4567-b68b-5305589c5573-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3834d547-946f-4567-b68b-5305589c5573" (UID: "3834d547-946f-4567-b68b-5305589c5573"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:13:50 crc kubenswrapper[4778]: I0312 13:13:50.007842 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3834d547-946f-4567-b68b-5305589c5573-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 12 13:13:50 crc kubenswrapper[4778]: I0312 13:13:50.191146 4778 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-11-20 13:35:55.351213448 +0000 UTC Mar 12 13:13:50 crc kubenswrapper[4778]: I0312 13:13:50.191617 4778 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6072h22m5.159692805s for next certificate rotation Mar 12 13:13:50 crc kubenswrapper[4778]: I0312 13:13:50.265313 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85126e1b-2b92-4e55-b847-d55f8b1b387e" path="/var/lib/kubelet/pods/85126e1b-2b92-4e55-b847-d55f8b1b387e/volumes" Mar 12 13:13:50 crc kubenswrapper[4778]: I0312 13:13:50.462868 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 12 13:13:50 crc kubenswrapper[4778]: I0312 13:13:50.463371 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"3834d547-946f-4567-b68b-5305589c5573","Type":"ContainerDied","Data":"a78d297f108842c664691a098601755963d8c21b84c611f8a3c408f8ad8e233d"} Mar 12 13:13:50 crc kubenswrapper[4778]: I0312 13:13:50.463431 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a78d297f108842c664691a098601755963d8c21b84c611f8a3c408f8ad8e233d" Mar 12 13:13:50 crc kubenswrapper[4778]: I0312 13:13:50.717822 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555352-q7fvr" Mar 12 13:13:50 crc kubenswrapper[4778]: I0312 13:13:50.820481 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6h7h\" (UniqueName: \"kubernetes.io/projected/9f210efd-2ac0-4b67-89c5-fcd9f52f6e01-kube-api-access-b6h7h\") pod \"9f210efd-2ac0-4b67-89c5-fcd9f52f6e01\" (UID: \"9f210efd-2ac0-4b67-89c5-fcd9f52f6e01\") " Mar 12 13:13:50 crc kubenswrapper[4778]: I0312 13:13:50.826292 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f210efd-2ac0-4b67-89c5-fcd9f52f6e01-kube-api-access-b6h7h" (OuterVolumeSpecName: "kube-api-access-b6h7h") pod "9f210efd-2ac0-4b67-89c5-fcd9f52f6e01" (UID: "9f210efd-2ac0-4b67-89c5-fcd9f52f6e01"). InnerVolumeSpecName "kube-api-access-b6h7h". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:13:50 crc kubenswrapper[4778]: I0312 13:13:50.921884 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6h7h\" (UniqueName: \"kubernetes.io/projected/9f210efd-2ac0-4b67-89c5-fcd9f52f6e01-kube-api-access-b6h7h\") on node \"crc\" DevicePath \"\"" Mar 12 13:13:51 crc kubenswrapper[4778]: I0312 13:13:51.192440 4778 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-12-13 10:03:21.382508358 +0000 UTC Mar 12 13:13:51 crc kubenswrapper[4778]: I0312 13:13:51.192480 4778 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6620h49m30.190030163s for next certificate rotation Mar 12 13:13:51 crc kubenswrapper[4778]: I0312 13:13:51.469873 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555352-q7fvr" event={"ID":"9f210efd-2ac0-4b67-89c5-fcd9f52f6e01","Type":"ContainerDied","Data":"0a2c8918cbacef5d63ed30076a63c59219bb878177978f4909e3ed43cb24db19"} Mar 12 13:13:51 crc kubenswrapper[4778]: I0312 
13:13:51.469945 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a2c8918cbacef5d63ed30076a63c59219bb878177978f4909e3ed43cb24db19" Mar 12 13:13:51 crc kubenswrapper[4778]: I0312 13:13:51.469899 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555352-q7fvr" Mar 12 13:13:58 crc kubenswrapper[4778]: I0312 13:13:58.508385 4778 generic.go:334] "Generic (PLEG): container finished" podID="34ecd758-517c-455a-939a-7eb6d3546854" containerID="e6857324d1a49d08837ab795e083cf8ed33ad61f45f62f385bd7494ef38b2514" exitCode=0 Mar 12 13:13:58 crc kubenswrapper[4778]: I0312 13:13:58.508428 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-76s88" event={"ID":"34ecd758-517c-455a-939a-7eb6d3546854","Type":"ContainerDied","Data":"e6857324d1a49d08837ab795e083cf8ed33ad61f45f62f385bd7494ef38b2514"} Mar 12 13:13:58 crc kubenswrapper[4778]: I0312 13:13:58.558226 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 13:13:58 crc kubenswrapper[4778]: I0312 13:13:58.558284 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 13:13:58 crc kubenswrapper[4778]: I0312 13:13:58.558329 4778 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" Mar 12 13:13:58 crc kubenswrapper[4778]: I0312 13:13:58.558965 4778 kuberuntime_manager.go:1027] 
"Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"14daba92184fca91c6930d5b3e821f88408e0fd40a7793f2d70f82df7c9444ce"} pod="openshift-machine-config-operator/machine-config-daemon-2qx88" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 13:13:58 crc kubenswrapper[4778]: I0312 13:13:58.559024 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" containerID="cri-o://14daba92184fca91c6930d5b3e821f88408e0fd40a7793f2d70f82df7c9444ce" gracePeriod=600 Mar 12 13:13:59 crc kubenswrapper[4778]: I0312 13:13:59.515322 4778 generic.go:334] "Generic (PLEG): container finished" podID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerID="14daba92184fca91c6930d5b3e821f88408e0fd40a7793f2d70f82df7c9444ce" exitCode=0 Mar 12 13:13:59 crc kubenswrapper[4778]: I0312 13:13:59.515469 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" event={"ID":"24438fc6-dab0-4a9e-8b97-2532da76d9cd","Type":"ContainerDied","Data":"14daba92184fca91c6930d5b3e821f88408e0fd40a7793f2d70f82df7c9444ce"} Mar 12 13:13:59 crc kubenswrapper[4778]: I0312 13:13:59.515995 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" event={"ID":"24438fc6-dab0-4a9e-8b97-2532da76d9cd","Type":"ContainerStarted","Data":"dcabd48eda797c052967d086d455193bf30a1f05151385a52352d733c58148f7"} Mar 12 13:13:59 crc kubenswrapper[4778]: I0312 13:13:59.526584 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5s5vs" event={"ID":"f438f2a3-60c0-4554-a49b-030545f8139c","Type":"ContainerStarted","Data":"ac4be4ba9c0f65056e92751bc6e83a1871b4710d28a0b4f32b544fe6c70e1354"} Mar 12 13:14:00 crc 
kubenswrapper[4778]: I0312 13:14:00.139629 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555354-n6zvc"] Mar 12 13:14:00 crc kubenswrapper[4778]: E0312 13:14:00.141419 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f210efd-2ac0-4b67-89c5-fcd9f52f6e01" containerName="oc" Mar 12 13:14:00 crc kubenswrapper[4778]: I0312 13:14:00.141455 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f210efd-2ac0-4b67-89c5-fcd9f52f6e01" containerName="oc" Mar 12 13:14:00 crc kubenswrapper[4778]: E0312 13:14:00.141478 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3834d547-946f-4567-b68b-5305589c5573" containerName="pruner" Mar 12 13:14:00 crc kubenswrapper[4778]: I0312 13:14:00.141491 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="3834d547-946f-4567-b68b-5305589c5573" containerName="pruner" Mar 12 13:14:00 crc kubenswrapper[4778]: I0312 13:14:00.141696 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f210efd-2ac0-4b67-89c5-fcd9f52f6e01" containerName="oc" Mar 12 13:14:00 crc kubenswrapper[4778]: I0312 13:14:00.141716 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="3834d547-946f-4567-b68b-5305589c5573" containerName="pruner" Mar 12 13:14:00 crc kubenswrapper[4778]: I0312 13:14:00.142284 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555354-n6zvc" Mar 12 13:14:00 crc kubenswrapper[4778]: I0312 13:14:00.145456 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555354-n6zvc"] Mar 12 13:14:00 crc kubenswrapper[4778]: I0312 13:14:00.180917 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 13:14:00 crc kubenswrapper[4778]: I0312 13:14:00.181370 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 13:14:00 crc kubenswrapper[4778]: I0312 13:14:00.181542 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 13:14:00 crc kubenswrapper[4778]: I0312 13:14:00.181984 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxbzs\" (UniqueName: \"kubernetes.io/projected/f91620d9-a95e-4e74-ab13-531d5e040b50-kube-api-access-vxbzs\") pod \"auto-csr-approver-29555354-n6zvc\" (UID: \"f91620d9-a95e-4e74-ab13-531d5e040b50\") " pod="openshift-infra/auto-csr-approver-29555354-n6zvc" Mar 12 13:14:00 crc kubenswrapper[4778]: I0312 13:14:00.283151 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxbzs\" (UniqueName: \"kubernetes.io/projected/f91620d9-a95e-4e74-ab13-531d5e040b50-kube-api-access-vxbzs\") pod \"auto-csr-approver-29555354-n6zvc\" (UID: \"f91620d9-a95e-4e74-ab13-531d5e040b50\") " pod="openshift-infra/auto-csr-approver-29555354-n6zvc" Mar 12 13:14:00 crc kubenswrapper[4778]: I0312 13:14:00.302803 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxbzs\" (UniqueName: \"kubernetes.io/projected/f91620d9-a95e-4e74-ab13-531d5e040b50-kube-api-access-vxbzs\") pod \"auto-csr-approver-29555354-n6zvc\" (UID: \"f91620d9-a95e-4e74-ab13-531d5e040b50\") " 
pod="openshift-infra/auto-csr-approver-29555354-n6zvc" Mar 12 13:14:00 crc kubenswrapper[4778]: I0312 13:14:00.502091 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555354-n6zvc" Mar 12 13:14:01 crc kubenswrapper[4778]: I0312 13:14:01.291521 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555354-n6zvc"] Mar 12 13:14:01 crc kubenswrapper[4778]: W0312 13:14:01.294094 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf91620d9_a95e_4e74_ab13_531d5e040b50.slice/crio-48edef7d58f43f0f3d3ff5833c8716761e6649acb2b5f5bb2b27f21563634198 WatchSource:0}: Error finding container 48edef7d58f43f0f3d3ff5833c8716761e6649acb2b5f5bb2b27f21563634198: Status 404 returned error can't find the container with id 48edef7d58f43f0f3d3ff5833c8716761e6649acb2b5f5bb2b27f21563634198 Mar 12 13:14:01 crc kubenswrapper[4778]: I0312 13:14:01.546006 4778 generic.go:334] "Generic (PLEG): container finished" podID="f438f2a3-60c0-4554-a49b-030545f8139c" containerID="ac4be4ba9c0f65056e92751bc6e83a1871b4710d28a0b4f32b544fe6c70e1354" exitCode=0 Mar 12 13:14:01 crc kubenswrapper[4778]: I0312 13:14:01.546082 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5s5vs" event={"ID":"f438f2a3-60c0-4554-a49b-030545f8139c","Type":"ContainerDied","Data":"ac4be4ba9c0f65056e92751bc6e83a1871b4710d28a0b4f32b544fe6c70e1354"} Mar 12 13:14:01 crc kubenswrapper[4778]: I0312 13:14:01.548382 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555354-n6zvc" event={"ID":"f91620d9-a95e-4e74-ab13-531d5e040b50","Type":"ContainerStarted","Data":"48edef7d58f43f0f3d3ff5833c8716761e6649acb2b5f5bb2b27f21563634198"} Mar 12 13:14:08 crc kubenswrapper[4778]: I0312 13:14:08.593996 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-76s88" event={"ID":"34ecd758-517c-455a-939a-7eb6d3546854","Type":"ContainerStarted","Data":"1e77f31cb8ac97bbace99ce9835f811074e891b28dabf061e7039bfab7607d57"} Mar 12 13:14:08 crc kubenswrapper[4778]: I0312 13:14:08.616817 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-76s88" podStartSLOduration=2.043036075 podStartE2EDuration="1m11.616781986s" podCreationTimestamp="2026-03-12 13:12:57 +0000 UTC" firstStartedPulling="2026-03-12 13:12:58.043338486 +0000 UTC m=+196.492033882" lastFinishedPulling="2026-03-12 13:14:07.617084387 +0000 UTC m=+266.065779793" observedRunningTime="2026-03-12 13:14:08.612690754 +0000 UTC m=+267.061386150" watchObservedRunningTime="2026-03-12 13:14:08.616781986 +0000 UTC m=+267.065477382" Mar 12 13:14:09 crc kubenswrapper[4778]: I0312 13:14:09.611735 4778 generic.go:334] "Generic (PLEG): container finished" podID="de4557b4-7957-47a0-8c42-845be1fa0f32" containerID="718ec5d1f6755df76f8300b916ef0eb0663019d9610ddce44e4b950ef7dec3ba" exitCode=0 Mar 12 13:14:09 crc kubenswrapper[4778]: I0312 13:14:09.611861 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8xksl" event={"ID":"de4557b4-7957-47a0-8c42-845be1fa0f32","Type":"ContainerDied","Data":"718ec5d1f6755df76f8300b916ef0eb0663019d9610ddce44e4b950ef7dec3ba"} Mar 12 13:14:09 crc kubenswrapper[4778]: I0312 13:14:09.620810 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5s5vs" event={"ID":"f438f2a3-60c0-4554-a49b-030545f8139c","Type":"ContainerStarted","Data":"06e3e529ea6d479f93a4c0f8dc62611d7db0ca000158fd5d08aba4b4784ec2bd"} Mar 12 13:14:09 crc kubenswrapper[4778]: I0312 13:14:09.625282 4778 generic.go:334] "Generic (PLEG): container finished" podID="3b3fb69e-dd4f-4787-a207-4fe25106f9e7" containerID="fa00faf2580a0c0e9d72ea15f4cf1840ea1708c190198951e6018c60afdde268" exitCode=0 Mar 12 
13:14:09 crc kubenswrapper[4778]: I0312 13:14:09.625279 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sjk9p" event={"ID":"3b3fb69e-dd4f-4787-a207-4fe25106f9e7","Type":"ContainerDied","Data":"fa00faf2580a0c0e9d72ea15f4cf1840ea1708c190198951e6018c60afdde268"} Mar 12 13:14:09 crc kubenswrapper[4778]: I0312 13:14:09.627728 4778 generic.go:334] "Generic (PLEG): container finished" podID="1d185732-cd6b-44c6-b4db-ee9ade00c683" containerID="84fe3c954d7e0d1d6303467d2621bf3b31d896882603252deb19491a2fa354ed" exitCode=0 Mar 12 13:14:09 crc kubenswrapper[4778]: I0312 13:14:09.627785 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-khr6h" event={"ID":"1d185732-cd6b-44c6-b4db-ee9ade00c683","Type":"ContainerDied","Data":"84fe3c954d7e0d1d6303467d2621bf3b31d896882603252deb19491a2fa354ed"} Mar 12 13:14:09 crc kubenswrapper[4778]: I0312 13:14:09.636152 4778 generic.go:334] "Generic (PLEG): container finished" podID="b9bef112-9bef-4ce2-abd8-054b4d671658" containerID="44212f253b9d8de159bf039fe64dd134b5f7beb71943da6aab7d4efc080466b3" exitCode=0 Mar 12 13:14:09 crc kubenswrapper[4778]: I0312 13:14:09.636582 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rtjz5" event={"ID":"b9bef112-9bef-4ce2-abd8-054b4d671658","Type":"ContainerDied","Data":"44212f253b9d8de159bf039fe64dd134b5f7beb71943da6aab7d4efc080466b3"} Mar 12 13:14:09 crc kubenswrapper[4778]: I0312 13:14:09.641725 4778 generic.go:334] "Generic (PLEG): container finished" podID="c27afe2a-3402-49f9-b985-45fe67e40d22" containerID="517c2af638efb950196e9ef53f4578b28c6c02cc9d241b33a72ede0303af599d" exitCode=0 Mar 12 13:14:09 crc kubenswrapper[4778]: I0312 13:14:09.641800 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l8n9b" 
event={"ID":"c27afe2a-3402-49f9-b985-45fe67e40d22","Type":"ContainerDied","Data":"517c2af638efb950196e9ef53f4578b28c6c02cc9d241b33a72ede0303af599d"} Mar 12 13:14:09 crc kubenswrapper[4778]: I0312 13:14:09.650686 4778 generic.go:334] "Generic (PLEG): container finished" podID="f91620d9-a95e-4e74-ab13-531d5e040b50" containerID="6a586e8ffe815ea410f687edd18208ce93300b26a8a15a7f7b6bd8396c76c788" exitCode=0 Mar 12 13:14:09 crc kubenswrapper[4778]: I0312 13:14:09.650872 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555354-n6zvc" event={"ID":"f91620d9-a95e-4e74-ab13-531d5e040b50","Type":"ContainerDied","Data":"6a586e8ffe815ea410f687edd18208ce93300b26a8a15a7f7b6bd8396c76c788"} Mar 12 13:14:09 crc kubenswrapper[4778]: I0312 13:14:09.655240 4778 generic.go:334] "Generic (PLEG): container finished" podID="651601bd-18fe-4ca1-9c61-481ca568d022" containerID="777dcb7d13b3c9f17ff760e883a8a2c8d277b3c6622f9924b38301e80f9b85e9" exitCode=0 Mar 12 13:14:09 crc kubenswrapper[4778]: I0312 13:14:09.655287 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qx9d8" event={"ID":"651601bd-18fe-4ca1-9c61-481ca568d022","Type":"ContainerDied","Data":"777dcb7d13b3c9f17ff760e883a8a2c8d277b3c6622f9924b38301e80f9b85e9"} Mar 12 13:14:09 crc kubenswrapper[4778]: I0312 13:14:09.697293 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5s5vs" podStartSLOduration=3.443434404 podStartE2EDuration="1m13.697275483s" podCreationTimestamp="2026-03-12 13:12:56 +0000 UTC" firstStartedPulling="2026-03-12 13:12:58.037477361 +0000 UTC m=+196.486172757" lastFinishedPulling="2026-03-12 13:14:08.29131844 +0000 UTC m=+266.740013836" observedRunningTime="2026-03-12 13:14:09.689799453 +0000 UTC m=+268.138494849" watchObservedRunningTime="2026-03-12 13:14:09.697275483 +0000 UTC m=+268.145970879" Mar 12 13:14:11 crc kubenswrapper[4778]: I0312 13:14:11.032957 
4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555354-n6zvc" Mar 12 13:14:11 crc kubenswrapper[4778]: I0312 13:14:11.139035 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxbzs\" (UniqueName: \"kubernetes.io/projected/f91620d9-a95e-4e74-ab13-531d5e040b50-kube-api-access-vxbzs\") pod \"f91620d9-a95e-4e74-ab13-531d5e040b50\" (UID: \"f91620d9-a95e-4e74-ab13-531d5e040b50\") " Mar 12 13:14:11 crc kubenswrapper[4778]: I0312 13:14:11.144259 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f91620d9-a95e-4e74-ab13-531d5e040b50-kube-api-access-vxbzs" (OuterVolumeSpecName: "kube-api-access-vxbzs") pod "f91620d9-a95e-4e74-ab13-531d5e040b50" (UID: "f91620d9-a95e-4e74-ab13-531d5e040b50"). InnerVolumeSpecName "kube-api-access-vxbzs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:14:11 crc kubenswrapper[4778]: I0312 13:14:11.240763 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxbzs\" (UniqueName: \"kubernetes.io/projected/f91620d9-a95e-4e74-ab13-531d5e040b50-kube-api-access-vxbzs\") on node \"crc\" DevicePath \"\"" Mar 12 13:14:11 crc kubenswrapper[4778]: I0312 13:14:11.668143 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555354-n6zvc" event={"ID":"f91620d9-a95e-4e74-ab13-531d5e040b50","Type":"ContainerDied","Data":"48edef7d58f43f0f3d3ff5833c8716761e6649acb2b5f5bb2b27f21563634198"} Mar 12 13:14:11 crc kubenswrapper[4778]: I0312 13:14:11.668200 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555354-n6zvc" Mar 12 13:14:11 crc kubenswrapper[4778]: I0312 13:14:11.668211 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48edef7d58f43f0f3d3ff5833c8716761e6649acb2b5f5bb2b27f21563634198" Mar 12 13:14:14 crc kubenswrapper[4778]: I0312 13:14:14.684429 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8xksl" event={"ID":"de4557b4-7957-47a0-8c42-845be1fa0f32","Type":"ContainerStarted","Data":"1dc3137ddc227e6024fccc0afbe6f1d93623b9e53c63a937c6719203e66ee592"} Mar 12 13:14:15 crc kubenswrapper[4778]: I0312 13:14:15.757365 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8xksl" podStartSLOduration=3.613980098 podStartE2EDuration="1m20.757346364s" podCreationTimestamp="2026-03-12 13:12:55 +0000 UTC" firstStartedPulling="2026-03-12 13:12:56.999981705 +0000 UTC m=+195.448677101" lastFinishedPulling="2026-03-12 13:14:14.143347981 +0000 UTC m=+272.592043367" observedRunningTime="2026-03-12 13:14:15.75252867 +0000 UTC m=+274.201224066" watchObservedRunningTime="2026-03-12 13:14:15.757346364 +0000 UTC m=+274.206041760" Mar 12 13:14:16 crc kubenswrapper[4778]: I0312 13:14:16.290718 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 12 13:14:16 crc kubenswrapper[4778]: I0312 13:14:16.811321 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5kw4v"] Mar 12 13:14:17 crc kubenswrapper[4778]: I0312 13:14:17.008409 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5s5vs" Mar 12 13:14:17 crc kubenswrapper[4778]: I0312 13:14:17.008459 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-5s5vs" Mar 12 13:14:17 crc kubenswrapper[4778]: I0312 13:14:17.481048 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-76s88" Mar 12 13:14:17 crc kubenswrapper[4778]: I0312 13:14:17.481086 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-76s88" Mar 12 13:14:17 crc kubenswrapper[4778]: I0312 13:14:17.892943 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-76s88" Mar 12 13:14:17 crc kubenswrapper[4778]: I0312 13:14:17.893021 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5s5vs" Mar 12 13:14:17 crc kubenswrapper[4778]: I0312 13:14:17.937911 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5s5vs" Mar 12 13:14:17 crc kubenswrapper[4778]: I0312 13:14:17.943305 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-76s88" Mar 12 13:14:19 crc kubenswrapper[4778]: I0312 13:14:19.119830 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-76s88"] Mar 12 13:14:19 crc kubenswrapper[4778]: I0312 13:14:19.755325 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-76s88" podUID="34ecd758-517c-455a-939a-7eb6d3546854" containerName="registry-server" containerID="cri-o://1e77f31cb8ac97bbace99ce9835f811074e891b28dabf061e7039bfab7607d57" gracePeriod=2 Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:22.788593 4778 generic.go:334] "Generic (PLEG): container finished" podID="34ecd758-517c-455a-939a-7eb6d3546854" containerID="1e77f31cb8ac97bbace99ce9835f811074e891b28dabf061e7039bfab7607d57" exitCode=0 Mar 12 13:14:24 crc 
kubenswrapper[4778]: I0312 13:14:22.788659 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-76s88" event={"ID":"34ecd758-517c-455a-939a-7eb6d3546854","Type":"ContainerDied","Data":"1e77f31cb8ac97bbace99ce9835f811074e891b28dabf061e7039bfab7607d57"} Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.440928 4778 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 12 13:14:24 crc kubenswrapper[4778]: E0312 13:14:24.441526 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f91620d9-a95e-4e74-ab13-531d5e040b50" containerName="oc" Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.441539 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="f91620d9-a95e-4e74-ab13-531d5e040b50" containerName="oc" Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.441645 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="f91620d9-a95e-4e74-ab13-531d5e040b50" containerName="oc" Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.441959 4778 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.442052 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.442303 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://7f640289dea724d5668fc009d628345ea104b2bbc9bc3471e42c3ec5f9acada1" gracePeriod=15 Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.442374 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://bc7259359df220c534d265305ee3ca44e7bcdce8da0d8b164132e02f7ed72e51" gracePeriod=15 Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.442383 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://bdfb81ab3f0178dc8064bd278e9e5cc42b3b2fda7282bb869d2f385b423e57d0" gracePeriod=15 Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.442383 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://5019c5de667abecf425384b69c58060050b28003230e410f44934c9a7ad5484c" gracePeriod=15 Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.442463 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://2d60adb329e51ce7d877de68c1386f904ef0f717c82a5bfb69ab18438a4e536a" gracePeriod=15 Mar 12 13:14:24 crc 
kubenswrapper[4778]: I0312 13:14:24.442776 4778 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 12 13:14:24 crc kubenswrapper[4778]: E0312 13:14:24.442925 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.442938 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 12 13:14:24 crc kubenswrapper[4778]: E0312 13:14:24.442947 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.442954 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Mar 12 13:14:24 crc kubenswrapper[4778]: E0312 13:14:24.442964 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.442971 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 12 13:14:24 crc kubenswrapper[4778]: E0312 13:14:24.442979 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.442986 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 12 13:14:24 crc kubenswrapper[4778]: E0312 13:14:24.442997 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" 
Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.443003 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 12 13:14:24 crc kubenswrapper[4778]: E0312 13:14:24.443016 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.443024 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 12 13:14:24 crc kubenswrapper[4778]: E0312 13:14:24.443033 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.443040 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 12 13:14:24 crc kubenswrapper[4778]: E0312 13:14:24.443047 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.443053 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 12 13:14:24 crc kubenswrapper[4778]: E0312 13:14:24.443064 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.443069 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.443178 4778 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.443216 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.443225 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.443236 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.443248 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.443256 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.443265 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.443273 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 12 13:14:24 crc kubenswrapper[4778]: E0312 13:14:24.443420 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.443432 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" 
Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.443548 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.468023 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.550139 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.550260 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.550456 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.550684 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.550868 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.551108 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.551174 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.551318 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.653052 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.653109 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.653130 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.653167 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.653222 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.653243 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.653241 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.653280 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.653159 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.653319 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.653365 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 13:14:24 crc 
kubenswrapper[4778]: I0312 13:14:24.653403 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.653427 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.653446 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.653482 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.653542 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.765522 4778 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.803960 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.805056 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.805673 4778 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2d60adb329e51ce7d877de68c1386f904ef0f717c82a5bfb69ab18438a4e536a" exitCode=2 Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.815728 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-76s88" event={"ID":"34ecd758-517c-455a-939a-7eb6d3546854","Type":"ContainerDied","Data":"cf68cb478854e264cd59c9ad8e9f3e763498e2e2706254a3b88fc3dd9f22fe4f"} Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.815756 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf68cb478854e264cd59c9ad8e9f3e763498e2e2706254a3b88fc3dd9f22fe4f" Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.836931 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-76s88" Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.837692 4778 status_manager.go:851] "Failed to get status for pod" podUID="34ecd758-517c-455a-939a-7eb6d3546854" pod="openshift-marketplace/redhat-operators-76s88" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-76s88\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.837980 4778 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.838489 4778 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.957403 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cg6lb\" (UniqueName: \"kubernetes.io/projected/34ecd758-517c-455a-939a-7eb6d3546854-kube-api-access-cg6lb\") pod \"34ecd758-517c-455a-939a-7eb6d3546854\" (UID: \"34ecd758-517c-455a-939a-7eb6d3546854\") " Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.957687 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34ecd758-517c-455a-939a-7eb6d3546854-utilities\") pod \"34ecd758-517c-455a-939a-7eb6d3546854\" (UID: \"34ecd758-517c-455a-939a-7eb6d3546854\") " Mar 12 13:14:24 
crc kubenswrapper[4778]: I0312 13:14:24.957730 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34ecd758-517c-455a-939a-7eb6d3546854-catalog-content\") pod \"34ecd758-517c-455a-939a-7eb6d3546854\" (UID: \"34ecd758-517c-455a-939a-7eb6d3546854\") " Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.958431 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34ecd758-517c-455a-939a-7eb6d3546854-utilities" (OuterVolumeSpecName: "utilities") pod "34ecd758-517c-455a-939a-7eb6d3546854" (UID: "34ecd758-517c-455a-939a-7eb6d3546854"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:14:24 crc kubenswrapper[4778]: I0312 13:14:24.962528 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34ecd758-517c-455a-939a-7eb6d3546854-kube-api-access-cg6lb" (OuterVolumeSpecName: "kube-api-access-cg6lb") pod "34ecd758-517c-455a-939a-7eb6d3546854" (UID: "34ecd758-517c-455a-939a-7eb6d3546854"). InnerVolumeSpecName "kube-api-access-cg6lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:14:25 crc kubenswrapper[4778]: I0312 13:14:25.059310 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cg6lb\" (UniqueName: \"kubernetes.io/projected/34ecd758-517c-455a-939a-7eb6d3546854-kube-api-access-cg6lb\") on node \"crc\" DevicePath \"\"" Mar 12 13:14:25 crc kubenswrapper[4778]: I0312 13:14:25.059341 4778 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34ecd758-517c-455a-939a-7eb6d3546854-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 13:14:25 crc kubenswrapper[4778]: I0312 13:14:25.214757 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34ecd758-517c-455a-939a-7eb6d3546854-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "34ecd758-517c-455a-939a-7eb6d3546854" (UID: "34ecd758-517c-455a-939a-7eb6d3546854"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:14:25 crc kubenswrapper[4778]: I0312 13:14:25.262489 4778 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34ecd758-517c-455a-939a-7eb6d3546854-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 13:14:25 crc kubenswrapper[4778]: I0312 13:14:25.634692 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8xksl" Mar 12 13:14:25 crc kubenswrapper[4778]: I0312 13:14:25.636267 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8xksl" Mar 12 13:14:25 crc kubenswrapper[4778]: I0312 13:14:25.696338 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8xksl" Mar 12 13:14:25 crc kubenswrapper[4778]: I0312 13:14:25.697228 4778 status_manager.go:851] "Failed to get status for pod" 
podUID="de4557b4-7957-47a0-8c42-845be1fa0f32" pod="openshift-marketplace/redhat-marketplace-8xksl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8xksl\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:25 crc kubenswrapper[4778]: I0312 13:14:25.697611 4778 status_manager.go:851] "Failed to get status for pod" podUID="34ecd758-517c-455a-939a-7eb6d3546854" pod="openshift-marketplace/redhat-operators-76s88" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-76s88\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:25 crc kubenswrapper[4778]: I0312 13:14:25.698009 4778 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:25 crc kubenswrapper[4778]: I0312 13:14:25.698455 4778 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:25 crc kubenswrapper[4778]: I0312 13:14:25.821973 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 12 13:14:25 crc kubenswrapper[4778]: I0312 13:14:25.823241 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 12 13:14:25 crc kubenswrapper[4778]: I0312 
13:14:25.823904 4778 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5019c5de667abecf425384b69c58060050b28003230e410f44934c9a7ad5484c" exitCode=0 Mar 12 13:14:25 crc kubenswrapper[4778]: I0312 13:14:25.823930 4778 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="bdfb81ab3f0178dc8064bd278e9e5cc42b3b2fda7282bb869d2f385b423e57d0" exitCode=0 Mar 12 13:14:25 crc kubenswrapper[4778]: I0312 13:14:25.823938 4778 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="bc7259359df220c534d265305ee3ca44e7bcdce8da0d8b164132e02f7ed72e51" exitCode=0 Mar 12 13:14:25 crc kubenswrapper[4778]: I0312 13:14:25.823999 4778 scope.go:117] "RemoveContainer" containerID="14c7f2ade3aac502f0534414554216096b45d4f78e81f8ec213064a6205efdbd" Mar 12 13:14:25 crc kubenswrapper[4778]: I0312 13:14:25.824002 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-76s88" Mar 12 13:14:25 crc kubenswrapper[4778]: I0312 13:14:25.824915 4778 status_manager.go:851] "Failed to get status for pod" podUID="de4557b4-7957-47a0-8c42-845be1fa0f32" pod="openshift-marketplace/redhat-marketplace-8xksl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8xksl\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:25 crc kubenswrapper[4778]: I0312 13:14:25.825816 4778 status_manager.go:851] "Failed to get status for pod" podUID="34ecd758-517c-455a-939a-7eb6d3546854" pod="openshift-marketplace/redhat-operators-76s88" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-76s88\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:25 crc kubenswrapper[4778]: I0312 13:14:25.826328 4778 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:25 crc kubenswrapper[4778]: I0312 13:14:25.826674 4778 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:25 crc kubenswrapper[4778]: I0312 13:14:25.837862 4778 status_manager.go:851] "Failed to get status for pod" podUID="de4557b4-7957-47a0-8c42-845be1fa0f32" pod="openshift-marketplace/redhat-marketplace-8xksl" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8xksl\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:25 crc kubenswrapper[4778]: I0312 13:14:25.838261 4778 status_manager.go:851] "Failed to get status for pod" podUID="34ecd758-517c-455a-939a-7eb6d3546854" pod="openshift-marketplace/redhat-operators-76s88" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-76s88\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:25 crc kubenswrapper[4778]: I0312 13:14:25.838558 4778 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:25 crc kubenswrapper[4778]: I0312 13:14:25.838799 4778 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:25 crc kubenswrapper[4778]: I0312 13:14:25.858696 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8xksl" Mar 12 13:14:25 crc kubenswrapper[4778]: I0312 13:14:25.859516 4778 status_manager.go:851] "Failed to get status for pod" podUID="de4557b4-7957-47a0-8c42-845be1fa0f32" pod="openshift-marketplace/redhat-marketplace-8xksl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8xksl\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:25 crc kubenswrapper[4778]: I0312 
13:14:25.859982 4778 status_manager.go:851] "Failed to get status for pod" podUID="34ecd758-517c-455a-939a-7eb6d3546854" pod="openshift-marketplace/redhat-operators-76s88" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-76s88\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:25 crc kubenswrapper[4778]: I0312 13:14:25.860456 4778 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:25 crc kubenswrapper[4778]: I0312 13:14:25.860834 4778 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:25 crc kubenswrapper[4778]: E0312 13:14:25.884626 4778 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.129.56.32:6443: connect: connection refused" event="&Event{ObjectMeta:{community-operators-sjk9p.189c1a4bbda7962d openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:community-operators-sjk9p,UID:3b3fb69e-dd4f-4787-a207-4fe25106f9e7,APIVersion:v1,ResourceVersion:28313,FieldPath:spec.containers{registry-server},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\" in 16.256s (16.256s 
including waiting). Image size: 907837715 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:14:25.883878957 +0000 UTC m=+284.332574353,LastTimestamp:2026-03-12 13:14:25.883878957 +0000 UTC m=+284.332574353,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:14:26 crc kubenswrapper[4778]: W0312 13:14:26.193814 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-dd57d9c1e52f398a835a3a8d9c99f0c1ae6b500c97490f43d9008f6901a63f4c WatchSource:0}: Error finding container dd57d9c1e52f398a835a3a8d9c99f0c1ae6b500c97490f43d9008f6901a63f4c: Status 404 returned error can't find the container with id dd57d9c1e52f398a835a3a8d9c99f0c1ae6b500c97490f43d9008f6901a63f4c Mar 12 13:14:26 crc kubenswrapper[4778]: E0312 13:14:26.493140 4778 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:26 crc kubenswrapper[4778]: E0312 13:14:26.493600 4778 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:26 crc kubenswrapper[4778]: E0312 13:14:26.493827 4778 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:26 crc kubenswrapper[4778]: E0312 13:14:26.494011 4778 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:26 crc kubenswrapper[4778]: E0312 13:14:26.494374 4778 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:26 crc kubenswrapper[4778]: I0312 13:14:26.494428 4778 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 12 13:14:26 crc kubenswrapper[4778]: E0312 13:14:26.494768 4778 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.32:6443: connect: connection refused" interval="200ms" Mar 12 13:14:26 crc kubenswrapper[4778]: E0312 13:14:26.709852 4778 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.32:6443: connect: connection refused" interval="400ms" Mar 12 13:14:26 crc kubenswrapper[4778]: I0312 13:14:26.829560 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"dd57d9c1e52f398a835a3a8d9c99f0c1ae6b500c97490f43d9008f6901a63f4c"} Mar 12 13:14:26 crc kubenswrapper[4778]: I0312 13:14:26.832765 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 12 13:14:26 crc kubenswrapper[4778]: E0312 13:14:26.854281 4778 kubelet_node_status.go:585] "Error 
updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:14:26Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:14:26Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:14:26Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:14:26Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:0d4c830b2653f2eeffebd09537afb06afb5ae827adbc03f224ab7269f399c05c\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:d6065909bc521a3f9a85174276fdbceafad02a276449a7dd1952a1f689b0d362\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1735807445},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:185237e125a9d710a58d4b588ea6b75eb361e4e99d979c1acd193de3b5d787f1\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:746054bb64fa0b27b1a696cd5db508bb9ee883a94969e4c1c4b5d35a93da8ef5\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1281521882},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f280
8b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:898c67bf7fc973e99114f3148976a6c21ae0dbe413051415588fa9b995f5b331\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:a641939d2096609a4cf6eec872a1476b7c671bfd81cffc2edeb6e9f13c9deeba\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1231028434},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0e6908b5c2800b56584a3fdf3bc164b76cb945966a49103123dabb61f8e367f2\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:ad31505e97766fe3b9d49abfe33098361de32a828c13e290be714f02a7ee76e0\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1221788890},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9
368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-cli@sha256:69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9\\\",\\\"registry.redhat.io/openshift4/ose-cli@sha256:ef83967297f619f45075e7fd1428a1eb981622a6c174c46fb53b158ed24bed85\\\",\\\"registry.redhat.io/openshift4/ose-cli:latest\\\"],\\\"sizeBytes\\\":584351326},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec320
7eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:26 crc kubenswrapper[4778]: E0312 13:14:26.855009 4778 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:26 crc kubenswrapper[4778]: E0312 13:14:26.855431 4778 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:26 crc kubenswrapper[4778]: E0312 13:14:26.855760 4778 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:26 crc kubenswrapper[4778]: E0312 13:14:26.856152 
4778 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:26 crc kubenswrapper[4778]: E0312 13:14:26.856252 4778 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 12 13:14:27 crc kubenswrapper[4778]: E0312 13:14:27.111269 4778 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.32:6443: connect: connection refused" interval="800ms" Mar 12 13:14:27 crc kubenswrapper[4778]: I0312 13:14:27.840343 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l8n9b" event={"ID":"c27afe2a-3402-49f9-b985-45fe67e40d22","Type":"ContainerStarted","Data":"3686a4e289950327029466c928723a8314f5dcaa797637ff0db63d9aa4aeb5db"} Mar 12 13:14:27 crc kubenswrapper[4778]: I0312 13:14:27.841457 4778 status_manager.go:851] "Failed to get status for pod" podUID="c27afe2a-3402-49f9-b985-45fe67e40d22" pod="openshift-marketplace/certified-operators-l8n9b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l8n9b\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:27 crc kubenswrapper[4778]: I0312 13:14:27.841889 4778 status_manager.go:851] "Failed to get status for pod" podUID="de4557b4-7957-47a0-8c42-845be1fa0f32" pod="openshift-marketplace/redhat-marketplace-8xksl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8xksl\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:27 crc kubenswrapper[4778]: I0312 13:14:27.843160 4778 status_manager.go:851] "Failed to get status for pod" 
podUID="34ecd758-517c-455a-939a-7eb6d3546854" pod="openshift-marketplace/redhat-operators-76s88" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-76s88\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:27 crc kubenswrapper[4778]: I0312 13:14:27.843199 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qx9d8" event={"ID":"651601bd-18fe-4ca1-9c61-481ca568d022","Type":"ContainerStarted","Data":"13189da41e0fb30fa7cca9718222038a2b578d40c4f21c5e350b74e753b85587"} Mar 12 13:14:27 crc kubenswrapper[4778]: I0312 13:14:27.845132 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sjk9p" event={"ID":"3b3fb69e-dd4f-4787-a207-4fe25106f9e7","Type":"ContainerStarted","Data":"7a538b433370f97911f22dbc738a9c42cbd5e516b7acdb71010394ade11cee06"} Mar 12 13:14:27 crc kubenswrapper[4778]: I0312 13:14:27.845520 4778 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:27 crc kubenswrapper[4778]: I0312 13:14:27.847571 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-khr6h" event={"ID":"1d185732-cd6b-44c6-b4db-ee9ade00c683","Type":"ContainerStarted","Data":"b352e6584b478e7228a408cc5d6c8b18473e75a0de7be819c32ae9b98a707a4e"} Mar 12 13:14:27 crc kubenswrapper[4778]: I0312 13:14:27.850042 4778 status_manager.go:851] "Failed to get status for pod" podUID="de4557b4-7957-47a0-8c42-845be1fa0f32" pod="openshift-marketplace/redhat-marketplace-8xksl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8xksl\": 
dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:27 crc kubenswrapper[4778]: I0312 13:14:27.852077 4778 status_manager.go:851] "Failed to get status for pod" podUID="34ecd758-517c-455a-939a-7eb6d3546854" pod="openshift-marketplace/redhat-operators-76s88" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-76s88\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:27 crc kubenswrapper[4778]: I0312 13:14:27.852570 4778 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:27 crc kubenswrapper[4778]: I0312 13:14:27.852843 4778 status_manager.go:851] "Failed to get status for pod" podUID="3b3fb69e-dd4f-4787-a207-4fe25106f9e7" pod="openshift-marketplace/community-operators-sjk9p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-sjk9p\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:27 crc kubenswrapper[4778]: I0312 13:14:27.852946 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 12 13:14:27 crc kubenswrapper[4778]: I0312 13:14:27.853091 4778 status_manager.go:851] "Failed to get status for pod" podUID="651601bd-18fe-4ca1-9c61-481ca568d022" pod="openshift-marketplace/certified-operators-qx9d8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qx9d8\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:27 crc kubenswrapper[4778]: I0312 13:14:27.853423 4778 status_manager.go:851] "Failed to 
get status for pod" podUID="c27afe2a-3402-49f9-b985-45fe67e40d22" pod="openshift-marketplace/certified-operators-l8n9b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l8n9b\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:27 crc kubenswrapper[4778]: I0312 13:14:27.853771 4778 status_manager.go:851] "Failed to get status for pod" podUID="c27afe2a-3402-49f9-b985-45fe67e40d22" pod="openshift-marketplace/certified-operators-l8n9b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l8n9b\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:27 crc kubenswrapper[4778]: I0312 13:14:27.855075 4778 status_manager.go:851] "Failed to get status for pod" podUID="de4557b4-7957-47a0-8c42-845be1fa0f32" pod="openshift-marketplace/redhat-marketplace-8xksl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8xksl\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:27 crc kubenswrapper[4778]: I0312 13:14:27.855124 4778 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7f640289dea724d5668fc009d628345ea104b2bbc9bc3471e42c3ec5f9acada1" exitCode=0 Mar 12 13:14:27 crc kubenswrapper[4778]: I0312 13:14:27.855390 4778 status_manager.go:851] "Failed to get status for pod" podUID="34ecd758-517c-455a-939a-7eb6d3546854" pod="openshift-marketplace/redhat-operators-76s88" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-76s88\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:27 crc kubenswrapper[4778]: I0312 13:14:27.855604 4778 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:27 crc kubenswrapper[4778]: I0312 13:14:27.855780 4778 status_manager.go:851] "Failed to get status for pod" podUID="3b3fb69e-dd4f-4787-a207-4fe25106f9e7" pod="openshift-marketplace/community-operators-sjk9p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-sjk9p\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:27 crc kubenswrapper[4778]: I0312 13:14:27.855925 4778 status_manager.go:851] "Failed to get status for pod" podUID="651601bd-18fe-4ca1-9c61-481ca568d022" pod="openshift-marketplace/certified-operators-qx9d8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qx9d8\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:27 crc kubenswrapper[4778]: I0312 13:14:27.856135 4778 status_manager.go:851] "Failed to get status for pod" podUID="1d185732-cd6b-44c6-b4db-ee9ade00c683" pod="openshift-marketplace/community-operators-khr6h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-khr6h\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:27 crc kubenswrapper[4778]: I0312 13:14:27.856938 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"3d740724bfd8227fe2e07ff4fe5fbe18790f3387faf339232729dc31b3dd39ae"} Mar 12 13:14:27 crc kubenswrapper[4778]: I0312 13:14:27.857781 4778 status_manager.go:851] "Failed to get status for pod" podUID="de4557b4-7957-47a0-8c42-845be1fa0f32" pod="openshift-marketplace/redhat-marketplace-8xksl" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8xksl\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:27 crc kubenswrapper[4778]: I0312 13:14:27.857927 4778 status_manager.go:851] "Failed to get status for pod" podUID="34ecd758-517c-455a-939a-7eb6d3546854" pod="openshift-marketplace/redhat-operators-76s88" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-76s88\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:27 crc kubenswrapper[4778]: I0312 13:14:27.858146 4778 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:27 crc kubenswrapper[4778]: I0312 13:14:27.858718 4778 status_manager.go:851] "Failed to get status for pod" podUID="3b3fb69e-dd4f-4787-a207-4fe25106f9e7" pod="openshift-marketplace/community-operators-sjk9p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-sjk9p\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:27 crc kubenswrapper[4778]: I0312 13:14:27.859279 4778 status_manager.go:851] "Failed to get status for pod" podUID="651601bd-18fe-4ca1-9c61-481ca568d022" pod="openshift-marketplace/certified-operators-qx9d8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qx9d8\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:27 crc kubenswrapper[4778]: I0312 13:14:27.859795 4778 status_manager.go:851] "Failed to get status for pod" podUID="1d185732-cd6b-44c6-b4db-ee9ade00c683" pod="openshift-marketplace/community-operators-khr6h" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-khr6h\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:27 crc kubenswrapper[4778]: I0312 13:14:27.860086 4778 status_manager.go:851] "Failed to get status for pod" podUID="c27afe2a-3402-49f9-b985-45fe67e40d22" pod="openshift-marketplace/certified-operators-l8n9b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l8n9b\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:27 crc kubenswrapper[4778]: I0312 13:14:27.860603 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rtjz5" event={"ID":"b9bef112-9bef-4ce2-abd8-054b4d671658","Type":"ContainerStarted","Data":"3151ddc8cb64182fd7ccd241e4580f2e0243328e43f1e59366f60b980b160490"} Mar 12 13:14:27 crc kubenswrapper[4778]: I0312 13:14:27.861585 4778 status_manager.go:851] "Failed to get status for pod" podUID="651601bd-18fe-4ca1-9c61-481ca568d022" pod="openshift-marketplace/certified-operators-qx9d8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qx9d8\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:27 crc kubenswrapper[4778]: I0312 13:14:27.861764 4778 status_manager.go:851] "Failed to get status for pod" podUID="1d185732-cd6b-44c6-b4db-ee9ade00c683" pod="openshift-marketplace/community-operators-khr6h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-khr6h\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:27 crc kubenswrapper[4778]: I0312 13:14:27.861915 4778 status_manager.go:851] "Failed to get status for pod" podUID="c27afe2a-3402-49f9-b985-45fe67e40d22" pod="openshift-marketplace/certified-operators-l8n9b" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l8n9b\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:27 crc kubenswrapper[4778]: I0312 13:14:27.862067 4778 status_manager.go:851] "Failed to get status for pod" podUID="de4557b4-7957-47a0-8c42-845be1fa0f32" pod="openshift-marketplace/redhat-marketplace-8xksl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8xksl\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:27 crc kubenswrapper[4778]: I0312 13:14:27.862230 4778 status_manager.go:851] "Failed to get status for pod" podUID="b9bef112-9bef-4ce2-abd8-054b4d671658" pod="openshift-marketplace/redhat-marketplace-rtjz5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-rtjz5\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:27 crc kubenswrapper[4778]: I0312 13:14:27.862372 4778 status_manager.go:851] "Failed to get status for pod" podUID="34ecd758-517c-455a-939a-7eb6d3546854" pod="openshift-marketplace/redhat-operators-76s88" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-76s88\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:27 crc kubenswrapper[4778]: I0312 13:14:27.862514 4778 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:27 crc kubenswrapper[4778]: I0312 13:14:27.862654 4778 status_manager.go:851] "Failed to get status for pod" podUID="3b3fb69e-dd4f-4787-a207-4fe25106f9e7" pod="openshift-marketplace/community-operators-sjk9p" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-sjk9p\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:27 crc kubenswrapper[4778]: E0312 13:14:27.912611 4778 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.32:6443: connect: connection refused" interval="1.6s" Mar 12 13:14:28 crc kubenswrapper[4778]: I0312 13:14:28.918045 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 12 13:14:28 crc kubenswrapper[4778]: I0312 13:14:28.918830 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 13:14:28 crc kubenswrapper[4778]: I0312 13:14:28.919455 4778 status_manager.go:851] "Failed to get status for pod" podUID="c27afe2a-3402-49f9-b985-45fe67e40d22" pod="openshift-marketplace/certified-operators-l8n9b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l8n9b\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:28 crc kubenswrapper[4778]: I0312 13:14:28.919738 4778 status_manager.go:851] "Failed to get status for pod" podUID="de4557b4-7957-47a0-8c42-845be1fa0f32" pod="openshift-marketplace/redhat-marketplace-8xksl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8xksl\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:28 crc kubenswrapper[4778]: I0312 13:14:28.920290 4778 status_manager.go:851] "Failed to get status for pod" podUID="b9bef112-9bef-4ce2-abd8-054b4d671658" pod="openshift-marketplace/redhat-marketplace-rtjz5" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-rtjz5\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:28 crc kubenswrapper[4778]: I0312 13:14:28.920982 4778 status_manager.go:851] "Failed to get status for pod" podUID="34ecd758-517c-455a-939a-7eb6d3546854" pod="openshift-marketplace/redhat-operators-76s88" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-76s88\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:28 crc kubenswrapper[4778]: I0312 13:14:28.921266 4778 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:28 crc kubenswrapper[4778]: I0312 13:14:28.921517 4778 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:28 crc kubenswrapper[4778]: I0312 13:14:28.921819 4778 status_manager.go:851] "Failed to get status for pod" podUID="3b3fb69e-dd4f-4787-a207-4fe25106f9e7" pod="openshift-marketplace/community-operators-sjk9p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-sjk9p\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:28 crc kubenswrapper[4778]: I0312 13:14:28.922209 4778 status_manager.go:851] "Failed to get status for pod" podUID="651601bd-18fe-4ca1-9c61-481ca568d022" pod="openshift-marketplace/certified-operators-qx9d8" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qx9d8\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:28 crc kubenswrapper[4778]: I0312 13:14:28.922496 4778 status_manager.go:851] "Failed to get status for pod" podUID="1d185732-cd6b-44c6-b4db-ee9ade00c683" pod="openshift-marketplace/community-operators-khr6h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-khr6h\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:29 crc kubenswrapper[4778]: I0312 13:14:29.004659 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 12 13:14:29 crc kubenswrapper[4778]: I0312 13:14:29.004783 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 12 13:14:29 crc kubenswrapper[4778]: I0312 13:14:29.004811 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 12 13:14:29 crc kubenswrapper[4778]: I0312 13:14:29.004807 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 13:14:29 crc kubenswrapper[4778]: I0312 13:14:29.004902 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 13:14:29 crc kubenswrapper[4778]: I0312 13:14:29.004991 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 13:14:29 crc kubenswrapper[4778]: I0312 13:14:29.005351 4778 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Mar 12 13:14:29 crc kubenswrapper[4778]: I0312 13:14:29.005371 4778 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Mar 12 13:14:29 crc kubenswrapper[4778]: I0312 13:14:29.005381 4778 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 12 13:14:29 crc kubenswrapper[4778]: E0312 13:14:29.352195 4778 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.129.56.32:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" volumeName="registry-storage" Mar 12 13:14:29 crc kubenswrapper[4778]: E0312 13:14:29.513482 4778 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.32:6443: connect: connection refused" interval="3.2s" Mar 12 13:14:29 crc kubenswrapper[4778]: I0312 13:14:29.884147 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 12 13:14:29 crc kubenswrapper[4778]: I0312 13:14:29.886104 4778 scope.go:117] "RemoveContainer" containerID="5019c5de667abecf425384b69c58060050b28003230e410f44934c9a7ad5484c" Mar 12 13:14:29 crc kubenswrapper[4778]: I0312 13:14:29.886223 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 13:14:29 crc kubenswrapper[4778]: I0312 13:14:29.901482 4778 status_manager.go:851] "Failed to get status for pod" podUID="34ecd758-517c-455a-939a-7eb6d3546854" pod="openshift-marketplace/redhat-operators-76s88" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-76s88\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:29 crc kubenswrapper[4778]: I0312 13:14:29.902112 4778 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:29 crc kubenswrapper[4778]: I0312 13:14:29.902451 4778 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:29 crc kubenswrapper[4778]: I0312 13:14:29.903936 4778 status_manager.go:851] "Failed to get status for pod" podUID="3b3fb69e-dd4f-4787-a207-4fe25106f9e7" pod="openshift-marketplace/community-operators-sjk9p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-sjk9p\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:29 crc kubenswrapper[4778]: I0312 13:14:29.904488 4778 status_manager.go:851] "Failed to get status for pod" podUID="651601bd-18fe-4ca1-9c61-481ca568d022" pod="openshift-marketplace/certified-operators-qx9d8" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qx9d8\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:29 crc kubenswrapper[4778]: I0312 13:14:29.904780 4778 status_manager.go:851] "Failed to get status for pod" podUID="1d185732-cd6b-44c6-b4db-ee9ade00c683" pod="openshift-marketplace/community-operators-khr6h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-khr6h\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:29 crc kubenswrapper[4778]: I0312 13:14:29.905024 4778 status_manager.go:851] "Failed to get status for pod" podUID="c27afe2a-3402-49f9-b985-45fe67e40d22" pod="openshift-marketplace/certified-operators-l8n9b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l8n9b\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:29 crc kubenswrapper[4778]: I0312 13:14:29.905575 4778 status_manager.go:851] "Failed to get status for pod" podUID="de4557b4-7957-47a0-8c42-845be1fa0f32" pod="openshift-marketplace/redhat-marketplace-8xksl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8xksl\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:29 crc kubenswrapper[4778]: I0312 13:14:29.906040 4778 status_manager.go:851] "Failed to get status for pod" podUID="b9bef112-9bef-4ce2-abd8-054b4d671658" pod="openshift-marketplace/redhat-marketplace-rtjz5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-rtjz5\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:29 crc kubenswrapper[4778]: I0312 13:14:29.908911 4778 scope.go:117] "RemoveContainer" containerID="bdfb81ab3f0178dc8064bd278e9e5cc42b3b2fda7282bb869d2f385b423e57d0" Mar 12 13:14:29 crc kubenswrapper[4778]: I0312 13:14:29.931551 4778 
scope.go:117] "RemoveContainer" containerID="bc7259359df220c534d265305ee3ca44e7bcdce8da0d8b164132e02f7ed72e51" Mar 12 13:14:29 crc kubenswrapper[4778]: I0312 13:14:29.949347 4778 scope.go:117] "RemoveContainer" containerID="2d60adb329e51ce7d877de68c1386f904ef0f717c82a5bfb69ab18438a4e536a" Mar 12 13:14:29 crc kubenswrapper[4778]: I0312 13:14:29.965275 4778 scope.go:117] "RemoveContainer" containerID="7f640289dea724d5668fc009d628345ea104b2bbc9bc3471e42c3ec5f9acada1" Mar 12 13:14:29 crc kubenswrapper[4778]: I0312 13:14:29.979777 4778 scope.go:117] "RemoveContainer" containerID="e64aa9b1a15198d88b5f38b8ad0abdeef89430869b6f25c73e2f45806c539964" Mar 12 13:14:30 crc kubenswrapper[4778]: I0312 13:14:30.263961 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Mar 12 13:14:30 crc kubenswrapper[4778]: I0312 13:14:30.892827 4778 generic.go:334] "Generic (PLEG): container finished" podID="a868c6a4-19ec-46be-a0af-be25b1049ff3" containerID="35dc89f42df73eafd54f7518d380b5b4f6934732de9c6dd0209b64b9345aa66c" exitCode=0 Mar 12 13:14:30 crc kubenswrapper[4778]: I0312 13:14:30.892906 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"a868c6a4-19ec-46be-a0af-be25b1049ff3","Type":"ContainerDied","Data":"35dc89f42df73eafd54f7518d380b5b4f6934732de9c6dd0209b64b9345aa66c"} Mar 12 13:14:30 crc kubenswrapper[4778]: I0312 13:14:30.893450 4778 status_manager.go:851] "Failed to get status for pod" podUID="c27afe2a-3402-49f9-b985-45fe67e40d22" pod="openshift-marketplace/certified-operators-l8n9b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l8n9b\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:30 crc kubenswrapper[4778]: I0312 13:14:30.893719 4778 status_manager.go:851] "Failed to get status for pod" 
podUID="de4557b4-7957-47a0-8c42-845be1fa0f32" pod="openshift-marketplace/redhat-marketplace-8xksl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8xksl\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:30 crc kubenswrapper[4778]: I0312 13:14:30.894049 4778 status_manager.go:851] "Failed to get status for pod" podUID="b9bef112-9bef-4ce2-abd8-054b4d671658" pod="openshift-marketplace/redhat-marketplace-rtjz5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-rtjz5\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:30 crc kubenswrapper[4778]: I0312 13:14:30.894337 4778 status_manager.go:851] "Failed to get status for pod" podUID="34ecd758-517c-455a-939a-7eb6d3546854" pod="openshift-marketplace/redhat-operators-76s88" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-76s88\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:30 crc kubenswrapper[4778]: I0312 13:14:30.894587 4778 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:30 crc kubenswrapper[4778]: I0312 13:14:30.894822 4778 status_manager.go:851] "Failed to get status for pod" podUID="3b3fb69e-dd4f-4787-a207-4fe25106f9e7" pod="openshift-marketplace/community-operators-sjk9p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-sjk9p\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:30 crc kubenswrapper[4778]: I0312 13:14:30.895029 4778 status_manager.go:851] "Failed to get status for pod" 
podUID="a868c6a4-19ec-46be-a0af-be25b1049ff3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:30 crc kubenswrapper[4778]: I0312 13:14:30.895241 4778 status_manager.go:851] "Failed to get status for pod" podUID="651601bd-18fe-4ca1-9c61-481ca568d022" pod="openshift-marketplace/certified-operators-qx9d8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qx9d8\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:30 crc kubenswrapper[4778]: I0312 13:14:30.895457 4778 status_manager.go:851] "Failed to get status for pod" podUID="1d185732-cd6b-44c6-b4db-ee9ade00c683" pod="openshift-marketplace/community-operators-khr6h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-khr6h\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:31 crc kubenswrapper[4778]: E0312 13:14:31.611503 4778 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.129.56.32:6443: connect: connection refused" event="&Event{ObjectMeta:{community-operators-sjk9p.189c1a4bbda7962d openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:community-operators-sjk9p,UID:3b3fb69e-dd4f-4787-a207-4fe25106f9e7,APIVersion:v1,ResourceVersion:28313,FieldPath:spec.containers{registry-server},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\" in 16.256s (16.256s including waiting). 
Image size: 907837715 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-12 13:14:25.883878957 +0000 UTC m=+284.332574353,LastTimestamp:2026-03-12 13:14:25.883878957 +0000 UTC m=+284.332574353,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 12 13:14:32 crc kubenswrapper[4778]: I0312 13:14:32.188275 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 12 13:14:32 crc kubenswrapper[4778]: I0312 13:14:32.188755 4778 status_manager.go:851] "Failed to get status for pod" podUID="651601bd-18fe-4ca1-9c61-481ca568d022" pod="openshift-marketplace/certified-operators-qx9d8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qx9d8\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:32 crc kubenswrapper[4778]: I0312 13:14:32.189068 4778 status_manager.go:851] "Failed to get status for pod" podUID="1d185732-cd6b-44c6-b4db-ee9ade00c683" pod="openshift-marketplace/community-operators-khr6h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-khr6h\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:32 crc kubenswrapper[4778]: I0312 13:14:32.189244 4778 status_manager.go:851] "Failed to get status for pod" podUID="c27afe2a-3402-49f9-b985-45fe67e40d22" pod="openshift-marketplace/certified-operators-l8n9b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l8n9b\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:32 crc kubenswrapper[4778]: I0312 13:14:32.189393 4778 status_manager.go:851] "Failed to get status for pod" podUID="de4557b4-7957-47a0-8c42-845be1fa0f32" pod="openshift-marketplace/redhat-marketplace-8xksl" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8xksl\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:32 crc kubenswrapper[4778]: I0312 13:14:32.189529 4778 status_manager.go:851] "Failed to get status for pod" podUID="b9bef112-9bef-4ce2-abd8-054b4d671658" pod="openshift-marketplace/redhat-marketplace-rtjz5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-rtjz5\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:32 crc kubenswrapper[4778]: I0312 13:14:32.189671 4778 status_manager.go:851] "Failed to get status for pod" podUID="34ecd758-517c-455a-939a-7eb6d3546854" pod="openshift-marketplace/redhat-operators-76s88" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-76s88\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:32 crc kubenswrapper[4778]: I0312 13:14:32.189809 4778 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:32 crc kubenswrapper[4778]: I0312 13:14:32.190006 4778 status_manager.go:851] "Failed to get status for pod" podUID="3b3fb69e-dd4f-4787-a207-4fe25106f9e7" pod="openshift-marketplace/community-operators-sjk9p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-sjk9p\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:32 crc kubenswrapper[4778]: I0312 13:14:32.190212 4778 status_manager.go:851] "Failed to get status for pod" podUID="a868c6a4-19ec-46be-a0af-be25b1049ff3" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:32 crc kubenswrapper[4778]: I0312 13:14:32.250342 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a868c6a4-19ec-46be-a0af-be25b1049ff3-var-lock\") pod \"a868c6a4-19ec-46be-a0af-be25b1049ff3\" (UID: \"a868c6a4-19ec-46be-a0af-be25b1049ff3\") " Mar 12 13:14:32 crc kubenswrapper[4778]: I0312 13:14:32.250444 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a868c6a4-19ec-46be-a0af-be25b1049ff3-var-lock" (OuterVolumeSpecName: "var-lock") pod "a868c6a4-19ec-46be-a0af-be25b1049ff3" (UID: "a868c6a4-19ec-46be-a0af-be25b1049ff3"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 13:14:32 crc kubenswrapper[4778]: I0312 13:14:32.250803 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a868c6a4-19ec-46be-a0af-be25b1049ff3-kube-api-access\") pod \"a868c6a4-19ec-46be-a0af-be25b1049ff3\" (UID: \"a868c6a4-19ec-46be-a0af-be25b1049ff3\") " Mar 12 13:14:32 crc kubenswrapper[4778]: I0312 13:14:32.250906 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a868c6a4-19ec-46be-a0af-be25b1049ff3-kubelet-dir\") pod \"a868c6a4-19ec-46be-a0af-be25b1049ff3\" (UID: \"a868c6a4-19ec-46be-a0af-be25b1049ff3\") " Mar 12 13:14:32 crc kubenswrapper[4778]: I0312 13:14:32.250962 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a868c6a4-19ec-46be-a0af-be25b1049ff3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a868c6a4-19ec-46be-a0af-be25b1049ff3" (UID: "a868c6a4-19ec-46be-a0af-be25b1049ff3"). 
InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 13:14:32 crc kubenswrapper[4778]: I0312 13:14:32.251594 4778 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a868c6a4-19ec-46be-a0af-be25b1049ff3-var-lock\") on node \"crc\" DevicePath \"\"" Mar 12 13:14:32 crc kubenswrapper[4778]: I0312 13:14:32.251655 4778 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a868c6a4-19ec-46be-a0af-be25b1049ff3-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 12 13:14:32 crc kubenswrapper[4778]: I0312 13:14:32.255849 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a868c6a4-19ec-46be-a0af-be25b1049ff3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a868c6a4-19ec-46be-a0af-be25b1049ff3" (UID: "a868c6a4-19ec-46be-a0af-be25b1049ff3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:14:32 crc kubenswrapper[4778]: I0312 13:14:32.256959 4778 status_manager.go:851] "Failed to get status for pod" podUID="3b3fb69e-dd4f-4787-a207-4fe25106f9e7" pod="openshift-marketplace/community-operators-sjk9p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-sjk9p\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:32 crc kubenswrapper[4778]: I0312 13:14:32.257359 4778 status_manager.go:851] "Failed to get status for pod" podUID="a868c6a4-19ec-46be-a0af-be25b1049ff3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:32 crc kubenswrapper[4778]: I0312 13:14:32.257578 4778 status_manager.go:851] "Failed to get status for pod" 
podUID="651601bd-18fe-4ca1-9c61-481ca568d022" pod="openshift-marketplace/certified-operators-qx9d8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qx9d8\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:32 crc kubenswrapper[4778]: I0312 13:14:32.257822 4778 status_manager.go:851] "Failed to get status for pod" podUID="1d185732-cd6b-44c6-b4db-ee9ade00c683" pod="openshift-marketplace/community-operators-khr6h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-khr6h\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:32 crc kubenswrapper[4778]: I0312 13:14:32.258223 4778 status_manager.go:851] "Failed to get status for pod" podUID="c27afe2a-3402-49f9-b985-45fe67e40d22" pod="openshift-marketplace/certified-operators-l8n9b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l8n9b\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:32 crc kubenswrapper[4778]: I0312 13:14:32.258689 4778 status_manager.go:851] "Failed to get status for pod" podUID="de4557b4-7957-47a0-8c42-845be1fa0f32" pod="openshift-marketplace/redhat-marketplace-8xksl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8xksl\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:32 crc kubenswrapper[4778]: I0312 13:14:32.259009 4778 status_manager.go:851] "Failed to get status for pod" podUID="b9bef112-9bef-4ce2-abd8-054b4d671658" pod="openshift-marketplace/redhat-marketplace-rtjz5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-rtjz5\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:32 crc kubenswrapper[4778]: I0312 13:14:32.259549 4778 status_manager.go:851] "Failed to get status for pod" 
podUID="34ecd758-517c-455a-939a-7eb6d3546854" pod="openshift-marketplace/redhat-operators-76s88" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-76s88\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:32 crc kubenswrapper[4778]: I0312 13:14:32.259754 4778 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:32 crc kubenswrapper[4778]: I0312 13:14:32.352897 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a868c6a4-19ec-46be-a0af-be25b1049ff3-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 12 13:14:32 crc kubenswrapper[4778]: E0312 13:14:32.714568 4778 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.32:6443: connect: connection refused" interval="6.4s" Mar 12 13:14:32 crc kubenswrapper[4778]: I0312 13:14:32.907796 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"a868c6a4-19ec-46be-a0af-be25b1049ff3","Type":"ContainerDied","Data":"67b06efe996403c5470e41a5f9a62a78fe522b551d7ec62d8302163676162a07"} Mar 12 13:14:32 crc kubenswrapper[4778]: I0312 13:14:32.908134 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67b06efe996403c5470e41a5f9a62a78fe522b551d7ec62d8302163676162a07" Mar 12 13:14:32 crc kubenswrapper[4778]: I0312 13:14:32.907844 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 12 13:14:32 crc kubenswrapper[4778]: I0312 13:14:32.913419 4778 status_manager.go:851] "Failed to get status for pod" podUID="b9bef112-9bef-4ce2-abd8-054b4d671658" pod="openshift-marketplace/redhat-marketplace-rtjz5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-rtjz5\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:32 crc kubenswrapper[4778]: I0312 13:14:32.913965 4778 status_manager.go:851] "Failed to get status for pod" podUID="de4557b4-7957-47a0-8c42-845be1fa0f32" pod="openshift-marketplace/redhat-marketplace-8xksl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8xksl\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:32 crc kubenswrapper[4778]: I0312 13:14:32.914412 4778 status_manager.go:851] "Failed to get status for pod" podUID="34ecd758-517c-455a-939a-7eb6d3546854" pod="openshift-marketplace/redhat-operators-76s88" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-76s88\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:32 crc kubenswrapper[4778]: I0312 13:14:32.914763 4778 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:32 crc kubenswrapper[4778]: I0312 13:14:32.915234 4778 status_manager.go:851] "Failed to get status for pod" podUID="3b3fb69e-dd4f-4787-a207-4fe25106f9e7" pod="openshift-marketplace/community-operators-sjk9p" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-sjk9p\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:32 crc kubenswrapper[4778]: I0312 13:14:32.915619 4778 status_manager.go:851] "Failed to get status for pod" podUID="a868c6a4-19ec-46be-a0af-be25b1049ff3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:32 crc kubenswrapper[4778]: I0312 13:14:32.915954 4778 status_manager.go:851] "Failed to get status for pod" podUID="651601bd-18fe-4ca1-9c61-481ca568d022" pod="openshift-marketplace/certified-operators-qx9d8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qx9d8\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:32 crc kubenswrapper[4778]: I0312 13:14:32.916312 4778 status_manager.go:851] "Failed to get status for pod" podUID="1d185732-cd6b-44c6-b4db-ee9ade00c683" pod="openshift-marketplace/community-operators-khr6h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-khr6h\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:32 crc kubenswrapper[4778]: I0312 13:14:32.916774 4778 status_manager.go:851] "Failed to get status for pod" podUID="c27afe2a-3402-49f9-b985-45fe67e40d22" pod="openshift-marketplace/certified-operators-l8n9b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l8n9b\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:33 crc kubenswrapper[4778]: I0312 13:14:33.693412 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-khr6h" Mar 12 13:14:33 crc kubenswrapper[4778]: I0312 13:14:33.693497 
4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-khr6h" Mar 12 13:14:33 crc kubenswrapper[4778]: I0312 13:14:33.754352 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-khr6h" Mar 12 13:14:33 crc kubenswrapper[4778]: I0312 13:14:33.755133 4778 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:33 crc kubenswrapper[4778]: I0312 13:14:33.755642 4778 status_manager.go:851] "Failed to get status for pod" podUID="a868c6a4-19ec-46be-a0af-be25b1049ff3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:33 crc kubenswrapper[4778]: I0312 13:14:33.756045 4778 status_manager.go:851] "Failed to get status for pod" podUID="3b3fb69e-dd4f-4787-a207-4fe25106f9e7" pod="openshift-marketplace/community-operators-sjk9p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-sjk9p\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:33 crc kubenswrapper[4778]: I0312 13:14:33.756292 4778 status_manager.go:851] "Failed to get status for pod" podUID="651601bd-18fe-4ca1-9c61-481ca568d022" pod="openshift-marketplace/certified-operators-qx9d8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qx9d8\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:33 crc kubenswrapper[4778]: I0312 13:14:33.756568 4778 status_manager.go:851] 
"Failed to get status for pod" podUID="1d185732-cd6b-44c6-b4db-ee9ade00c683" pod="openshift-marketplace/community-operators-khr6h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-khr6h\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:33 crc kubenswrapper[4778]: I0312 13:14:33.756920 4778 status_manager.go:851] "Failed to get status for pod" podUID="c27afe2a-3402-49f9-b985-45fe67e40d22" pod="openshift-marketplace/certified-operators-l8n9b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l8n9b\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:33 crc kubenswrapper[4778]: I0312 13:14:33.757204 4778 status_manager.go:851] "Failed to get status for pod" podUID="de4557b4-7957-47a0-8c42-845be1fa0f32" pod="openshift-marketplace/redhat-marketplace-8xksl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8xksl\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:33 crc kubenswrapper[4778]: I0312 13:14:33.757501 4778 status_manager.go:851] "Failed to get status for pod" podUID="b9bef112-9bef-4ce2-abd8-054b4d671658" pod="openshift-marketplace/redhat-marketplace-rtjz5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-rtjz5\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:33 crc kubenswrapper[4778]: I0312 13:14:33.757830 4778 status_manager.go:851] "Failed to get status for pod" podUID="34ecd758-517c-455a-939a-7eb6d3546854" pod="openshift-marketplace/redhat-operators-76s88" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-76s88\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:33 crc kubenswrapper[4778]: I0312 13:14:33.863099 4778 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qx9d8" Mar 12 13:14:33 crc kubenswrapper[4778]: I0312 13:14:33.863164 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qx9d8" Mar 12 13:14:33 crc kubenswrapper[4778]: I0312 13:14:33.913410 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qx9d8" Mar 12 13:14:33 crc kubenswrapper[4778]: I0312 13:14:33.914068 4778 status_manager.go:851] "Failed to get status for pod" podUID="de4557b4-7957-47a0-8c42-845be1fa0f32" pod="openshift-marketplace/redhat-marketplace-8xksl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8xksl\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:33 crc kubenswrapper[4778]: I0312 13:14:33.914515 4778 status_manager.go:851] "Failed to get status for pod" podUID="b9bef112-9bef-4ce2-abd8-054b4d671658" pod="openshift-marketplace/redhat-marketplace-rtjz5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-rtjz5\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:33 crc kubenswrapper[4778]: I0312 13:14:33.915121 4778 status_manager.go:851] "Failed to get status for pod" podUID="34ecd758-517c-455a-939a-7eb6d3546854" pod="openshift-marketplace/redhat-operators-76s88" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-76s88\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:33 crc kubenswrapper[4778]: I0312 13:14:33.915399 4778 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:33 crc kubenswrapper[4778]: I0312 13:14:33.915749 4778 status_manager.go:851] "Failed to get status for pod" podUID="3b3fb69e-dd4f-4787-a207-4fe25106f9e7" pod="openshift-marketplace/community-operators-sjk9p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-sjk9p\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:33 crc kubenswrapper[4778]: I0312 13:14:33.916332 4778 status_manager.go:851] "Failed to get status for pod" podUID="a868c6a4-19ec-46be-a0af-be25b1049ff3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:33 crc kubenswrapper[4778]: I0312 13:14:33.917043 4778 status_manager.go:851] "Failed to get status for pod" podUID="651601bd-18fe-4ca1-9c61-481ca568d022" pod="openshift-marketplace/certified-operators-qx9d8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qx9d8\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:33 crc kubenswrapper[4778]: I0312 13:14:33.917514 4778 status_manager.go:851] "Failed to get status for pod" podUID="1d185732-cd6b-44c6-b4db-ee9ade00c683" pod="openshift-marketplace/community-operators-khr6h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-khr6h\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:33 crc kubenswrapper[4778]: I0312 13:14:33.917875 4778 status_manager.go:851] "Failed to get status for pod" podUID="c27afe2a-3402-49f9-b985-45fe67e40d22" pod="openshift-marketplace/certified-operators-l8n9b" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l8n9b\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:33 crc kubenswrapper[4778]: I0312 13:14:33.959363 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-khr6h" Mar 12 13:14:33 crc kubenswrapper[4778]: I0312 13:14:33.959841 4778 status_manager.go:851] "Failed to get status for pod" podUID="c27afe2a-3402-49f9-b985-45fe67e40d22" pod="openshift-marketplace/certified-operators-l8n9b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l8n9b\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:33 crc kubenswrapper[4778]: I0312 13:14:33.960144 4778 status_manager.go:851] "Failed to get status for pod" podUID="de4557b4-7957-47a0-8c42-845be1fa0f32" pod="openshift-marketplace/redhat-marketplace-8xksl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8xksl\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:33 crc kubenswrapper[4778]: I0312 13:14:33.960539 4778 status_manager.go:851] "Failed to get status for pod" podUID="b9bef112-9bef-4ce2-abd8-054b4d671658" pod="openshift-marketplace/redhat-marketplace-rtjz5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-rtjz5\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:33 crc kubenswrapper[4778]: I0312 13:14:33.960780 4778 status_manager.go:851] "Failed to get status for pod" podUID="34ecd758-517c-455a-939a-7eb6d3546854" pod="openshift-marketplace/redhat-operators-76s88" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-76s88\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:33 crc kubenswrapper[4778]: I0312 13:14:33.961015 
4778 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:33 crc kubenswrapper[4778]: I0312 13:14:33.961234 4778 status_manager.go:851] "Failed to get status for pod" podUID="a868c6a4-19ec-46be-a0af-be25b1049ff3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:33 crc kubenswrapper[4778]: I0312 13:14:33.961297 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qx9d8" Mar 12 13:14:33 crc kubenswrapper[4778]: I0312 13:14:33.961451 4778 status_manager.go:851] "Failed to get status for pod" podUID="3b3fb69e-dd4f-4787-a207-4fe25106f9e7" pod="openshift-marketplace/community-operators-sjk9p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-sjk9p\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:33 crc kubenswrapper[4778]: I0312 13:14:33.961694 4778 status_manager.go:851] "Failed to get status for pod" podUID="651601bd-18fe-4ca1-9c61-481ca568d022" pod="openshift-marketplace/certified-operators-qx9d8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qx9d8\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:33 crc kubenswrapper[4778]: I0312 13:14:33.961955 4778 status_manager.go:851] "Failed to get status for pod" podUID="1d185732-cd6b-44c6-b4db-ee9ade00c683" pod="openshift-marketplace/community-operators-khr6h" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-khr6h\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:33 crc kubenswrapper[4778]: I0312 13:14:33.962316 4778 status_manager.go:851] "Failed to get status for pod" podUID="c27afe2a-3402-49f9-b985-45fe67e40d22" pod="openshift-marketplace/certified-operators-l8n9b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l8n9b\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:33 crc kubenswrapper[4778]: I0312 13:14:33.962558 4778 status_manager.go:851] "Failed to get status for pod" podUID="de4557b4-7957-47a0-8c42-845be1fa0f32" pod="openshift-marketplace/redhat-marketplace-8xksl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8xksl\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:33 crc kubenswrapper[4778]: I0312 13:14:33.962808 4778 status_manager.go:851] "Failed to get status for pod" podUID="b9bef112-9bef-4ce2-abd8-054b4d671658" pod="openshift-marketplace/redhat-marketplace-rtjz5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-rtjz5\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:33 crc kubenswrapper[4778]: I0312 13:14:33.963074 4778 status_manager.go:851] "Failed to get status for pod" podUID="34ecd758-517c-455a-939a-7eb6d3546854" pod="openshift-marketplace/redhat-operators-76s88" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-76s88\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:33 crc kubenswrapper[4778]: I0312 13:14:33.963322 4778 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:33 crc kubenswrapper[4778]: I0312 13:14:33.963559 4778 status_manager.go:851] "Failed to get status for pod" podUID="3b3fb69e-dd4f-4787-a207-4fe25106f9e7" pod="openshift-marketplace/community-operators-sjk9p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-sjk9p\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:33 crc kubenswrapper[4778]: I0312 13:14:33.963777 4778 status_manager.go:851] "Failed to get status for pod" podUID="a868c6a4-19ec-46be-a0af-be25b1049ff3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:33 crc kubenswrapper[4778]: I0312 13:14:33.964023 4778 status_manager.go:851] "Failed to get status for pod" podUID="651601bd-18fe-4ca1-9c61-481ca568d022" pod="openshift-marketplace/certified-operators-qx9d8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qx9d8\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:33 crc kubenswrapper[4778]: I0312 13:14:33.964335 4778 status_manager.go:851] "Failed to get status for pod" podUID="1d185732-cd6b-44c6-b4db-ee9ade00c683" pod="openshift-marketplace/community-operators-khr6h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-khr6h\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:34 crc kubenswrapper[4778]: I0312 13:14:34.020155 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sjk9p" Mar 12 13:14:34 crc kubenswrapper[4778]: I0312 
13:14:34.020239 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sjk9p" Mar 12 13:14:34 crc kubenswrapper[4778]: I0312 13:14:34.056320 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sjk9p" Mar 12 13:14:34 crc kubenswrapper[4778]: I0312 13:14:34.056952 4778 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:34 crc kubenswrapper[4778]: I0312 13:14:34.057713 4778 status_manager.go:851] "Failed to get status for pod" podUID="3b3fb69e-dd4f-4787-a207-4fe25106f9e7" pod="openshift-marketplace/community-operators-sjk9p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-sjk9p\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:34 crc kubenswrapper[4778]: I0312 13:14:34.058404 4778 status_manager.go:851] "Failed to get status for pod" podUID="a868c6a4-19ec-46be-a0af-be25b1049ff3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:34 crc kubenswrapper[4778]: I0312 13:14:34.058809 4778 status_manager.go:851] "Failed to get status for pod" podUID="651601bd-18fe-4ca1-9c61-481ca568d022" pod="openshift-marketplace/certified-operators-qx9d8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qx9d8\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:34 crc kubenswrapper[4778]: I0312 13:14:34.059260 4778 
status_manager.go:851] "Failed to get status for pod" podUID="1d185732-cd6b-44c6-b4db-ee9ade00c683" pod="openshift-marketplace/community-operators-khr6h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-khr6h\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:34 crc kubenswrapper[4778]: I0312 13:14:34.059672 4778 status_manager.go:851] "Failed to get status for pod" podUID="c27afe2a-3402-49f9-b985-45fe67e40d22" pod="openshift-marketplace/certified-operators-l8n9b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l8n9b\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:34 crc kubenswrapper[4778]: I0312 13:14:34.059996 4778 status_manager.go:851] "Failed to get status for pod" podUID="de4557b4-7957-47a0-8c42-845be1fa0f32" pod="openshift-marketplace/redhat-marketplace-8xksl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8xksl\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:34 crc kubenswrapper[4778]: I0312 13:14:34.060354 4778 status_manager.go:851] "Failed to get status for pod" podUID="b9bef112-9bef-4ce2-abd8-054b4d671658" pod="openshift-marketplace/redhat-marketplace-rtjz5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-rtjz5\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:34 crc kubenswrapper[4778]: I0312 13:14:34.060791 4778 status_manager.go:851] "Failed to get status for pod" podUID="34ecd758-517c-455a-939a-7eb6d3546854" pod="openshift-marketplace/redhat-operators-76s88" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-76s88\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:34 crc kubenswrapper[4778]: I0312 13:14:34.309136 4778 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-l8n9b" Mar 12 13:14:34 crc kubenswrapper[4778]: I0312 13:14:34.309230 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-l8n9b" Mar 12 13:14:34 crc kubenswrapper[4778]: I0312 13:14:34.355537 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-l8n9b" Mar 12 13:14:34 crc kubenswrapper[4778]: I0312 13:14:34.356073 4778 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:34 crc kubenswrapper[4778]: I0312 13:14:34.356385 4778 status_manager.go:851] "Failed to get status for pod" podUID="3b3fb69e-dd4f-4787-a207-4fe25106f9e7" pod="openshift-marketplace/community-operators-sjk9p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-sjk9p\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:34 crc kubenswrapper[4778]: I0312 13:14:34.356737 4778 status_manager.go:851] "Failed to get status for pod" podUID="a868c6a4-19ec-46be-a0af-be25b1049ff3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:34 crc kubenswrapper[4778]: I0312 13:14:34.356964 4778 status_manager.go:851] "Failed to get status for pod" podUID="651601bd-18fe-4ca1-9c61-481ca568d022" pod="openshift-marketplace/certified-operators-qx9d8" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qx9d8\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:34 crc kubenswrapper[4778]: I0312 13:14:34.357216 4778 status_manager.go:851] "Failed to get status for pod" podUID="1d185732-cd6b-44c6-b4db-ee9ade00c683" pod="openshift-marketplace/community-operators-khr6h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-khr6h\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:34 crc kubenswrapper[4778]: I0312 13:14:34.357432 4778 status_manager.go:851] "Failed to get status for pod" podUID="c27afe2a-3402-49f9-b985-45fe67e40d22" pod="openshift-marketplace/certified-operators-l8n9b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l8n9b\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:34 crc kubenswrapper[4778]: I0312 13:14:34.357642 4778 status_manager.go:851] "Failed to get status for pod" podUID="de4557b4-7957-47a0-8c42-845be1fa0f32" pod="openshift-marketplace/redhat-marketplace-8xksl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8xksl\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:34 crc kubenswrapper[4778]: I0312 13:14:34.357854 4778 status_manager.go:851] "Failed to get status for pod" podUID="b9bef112-9bef-4ce2-abd8-054b4d671658" pod="openshift-marketplace/redhat-marketplace-rtjz5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-rtjz5\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:34 crc kubenswrapper[4778]: I0312 13:14:34.358075 4778 status_manager.go:851] "Failed to get status for pod" podUID="34ecd758-517c-455a-939a-7eb6d3546854" pod="openshift-marketplace/redhat-operators-76s88" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-76s88\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:34 crc kubenswrapper[4778]: I0312 13:14:34.972345 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sjk9p" Mar 12 13:14:34 crc kubenswrapper[4778]: I0312 13:14:34.973104 4778 status_manager.go:851] "Failed to get status for pod" podUID="34ecd758-517c-455a-939a-7eb6d3546854" pod="openshift-marketplace/redhat-operators-76s88" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-76s88\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:34 crc kubenswrapper[4778]: I0312 13:14:34.973901 4778 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:34 crc kubenswrapper[4778]: I0312 13:14:34.974464 4778 status_manager.go:851] "Failed to get status for pod" podUID="3b3fb69e-dd4f-4787-a207-4fe25106f9e7" pod="openshift-marketplace/community-operators-sjk9p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-sjk9p\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:34 crc kubenswrapper[4778]: I0312 13:14:34.974892 4778 status_manager.go:851] "Failed to get status for pod" podUID="a868c6a4-19ec-46be-a0af-be25b1049ff3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:34 crc kubenswrapper[4778]: I0312 
13:14:34.975406 4778 status_manager.go:851] "Failed to get status for pod" podUID="651601bd-18fe-4ca1-9c61-481ca568d022" pod="openshift-marketplace/certified-operators-qx9d8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qx9d8\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:34 crc kubenswrapper[4778]: I0312 13:14:34.975898 4778 status_manager.go:851] "Failed to get status for pod" podUID="1d185732-cd6b-44c6-b4db-ee9ade00c683" pod="openshift-marketplace/community-operators-khr6h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-khr6h\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:34 crc kubenswrapper[4778]: I0312 13:14:34.976283 4778 status_manager.go:851] "Failed to get status for pod" podUID="c27afe2a-3402-49f9-b985-45fe67e40d22" pod="openshift-marketplace/certified-operators-l8n9b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l8n9b\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:34 crc kubenswrapper[4778]: I0312 13:14:34.976573 4778 status_manager.go:851] "Failed to get status for pod" podUID="de4557b4-7957-47a0-8c42-845be1fa0f32" pod="openshift-marketplace/redhat-marketplace-8xksl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8xksl\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:34 crc kubenswrapper[4778]: I0312 13:14:34.976995 4778 status_manager.go:851] "Failed to get status for pod" podUID="b9bef112-9bef-4ce2-abd8-054b4d671658" pod="openshift-marketplace/redhat-marketplace-rtjz5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-rtjz5\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:34 crc kubenswrapper[4778]: I0312 13:14:34.989017 
4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-l8n9b" Mar 12 13:14:34 crc kubenswrapper[4778]: I0312 13:14:34.989748 4778 status_manager.go:851] "Failed to get status for pod" podUID="651601bd-18fe-4ca1-9c61-481ca568d022" pod="openshift-marketplace/certified-operators-qx9d8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qx9d8\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:34 crc kubenswrapper[4778]: I0312 13:14:34.990069 4778 status_manager.go:851] "Failed to get status for pod" podUID="1d185732-cd6b-44c6-b4db-ee9ade00c683" pod="openshift-marketplace/community-operators-khr6h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-khr6h\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:34 crc kubenswrapper[4778]: I0312 13:14:34.990705 4778 status_manager.go:851] "Failed to get status for pod" podUID="c27afe2a-3402-49f9-b985-45fe67e40d22" pod="openshift-marketplace/certified-operators-l8n9b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l8n9b\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:34 crc kubenswrapper[4778]: I0312 13:14:34.991225 4778 status_manager.go:851] "Failed to get status for pod" podUID="de4557b4-7957-47a0-8c42-845be1fa0f32" pod="openshift-marketplace/redhat-marketplace-8xksl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8xksl\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:34 crc kubenswrapper[4778]: I0312 13:14:34.991760 4778 status_manager.go:851] "Failed to get status for pod" podUID="b9bef112-9bef-4ce2-abd8-054b4d671658" pod="openshift-marketplace/redhat-marketplace-rtjz5" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-rtjz5\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:34 crc kubenswrapper[4778]: I0312 13:14:34.992353 4778 status_manager.go:851] "Failed to get status for pod" podUID="34ecd758-517c-455a-939a-7eb6d3546854" pod="openshift-marketplace/redhat-operators-76s88" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-76s88\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:34 crc kubenswrapper[4778]: I0312 13:14:34.993095 4778 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:34 crc kubenswrapper[4778]: I0312 13:14:34.993608 4778 status_manager.go:851] "Failed to get status for pod" podUID="3b3fb69e-dd4f-4787-a207-4fe25106f9e7" pod="openshift-marketplace/community-operators-sjk9p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-sjk9p\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:34 crc kubenswrapper[4778]: I0312 13:14:34.994258 4778 status_manager.go:851] "Failed to get status for pod" podUID="a868c6a4-19ec-46be-a0af-be25b1049ff3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:36 crc kubenswrapper[4778]: I0312 13:14:36.024221 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rtjz5" Mar 12 13:14:36 crc kubenswrapper[4778]: I0312 
13:14:36.024305 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rtjz5" Mar 12 13:14:36 crc kubenswrapper[4778]: I0312 13:14:36.088922 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rtjz5" Mar 12 13:14:36 crc kubenswrapper[4778]: I0312 13:14:36.089889 4778 status_manager.go:851] "Failed to get status for pod" podUID="34ecd758-517c-455a-939a-7eb6d3546854" pod="openshift-marketplace/redhat-operators-76s88" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-76s88\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:36 crc kubenswrapper[4778]: I0312 13:14:36.090432 4778 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:36 crc kubenswrapper[4778]: I0312 13:14:36.090995 4778 status_manager.go:851] "Failed to get status for pod" podUID="a868c6a4-19ec-46be-a0af-be25b1049ff3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:36 crc kubenswrapper[4778]: I0312 13:14:36.091417 4778 status_manager.go:851] "Failed to get status for pod" podUID="3b3fb69e-dd4f-4787-a207-4fe25106f9e7" pod="openshift-marketplace/community-operators-sjk9p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-sjk9p\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:36 crc kubenswrapper[4778]: I0312 13:14:36.091746 4778 
status_manager.go:851] "Failed to get status for pod" podUID="651601bd-18fe-4ca1-9c61-481ca568d022" pod="openshift-marketplace/certified-operators-qx9d8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qx9d8\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:36 crc kubenswrapper[4778]: I0312 13:14:36.092166 4778 status_manager.go:851] "Failed to get status for pod" podUID="1d185732-cd6b-44c6-b4db-ee9ade00c683" pod="openshift-marketplace/community-operators-khr6h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-khr6h\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:36 crc kubenswrapper[4778]: I0312 13:14:36.092646 4778 status_manager.go:851] "Failed to get status for pod" podUID="c27afe2a-3402-49f9-b985-45fe67e40d22" pod="openshift-marketplace/certified-operators-l8n9b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l8n9b\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:36 crc kubenswrapper[4778]: I0312 13:14:36.092954 4778 status_manager.go:851] "Failed to get status for pod" podUID="de4557b4-7957-47a0-8c42-845be1fa0f32" pod="openshift-marketplace/redhat-marketplace-8xksl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8xksl\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:36 crc kubenswrapper[4778]: I0312 13:14:36.093472 4778 status_manager.go:851] "Failed to get status for pod" podUID="b9bef112-9bef-4ce2-abd8-054b4d671658" pod="openshift-marketplace/redhat-marketplace-rtjz5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-rtjz5\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:36 crc kubenswrapper[4778]: I0312 13:14:36.980078 4778 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rtjz5" Mar 12 13:14:36 crc kubenswrapper[4778]: I0312 13:14:36.981015 4778 status_manager.go:851] "Failed to get status for pod" podUID="c27afe2a-3402-49f9-b985-45fe67e40d22" pod="openshift-marketplace/certified-operators-l8n9b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l8n9b\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:36 crc kubenswrapper[4778]: I0312 13:14:36.981513 4778 status_manager.go:851] "Failed to get status for pod" podUID="de4557b4-7957-47a0-8c42-845be1fa0f32" pod="openshift-marketplace/redhat-marketplace-8xksl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8xksl\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:36 crc kubenswrapper[4778]: I0312 13:14:36.982119 4778 status_manager.go:851] "Failed to get status for pod" podUID="b9bef112-9bef-4ce2-abd8-054b4d671658" pod="openshift-marketplace/redhat-marketplace-rtjz5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-rtjz5\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:36 crc kubenswrapper[4778]: I0312 13:14:36.982802 4778 status_manager.go:851] "Failed to get status for pod" podUID="34ecd758-517c-455a-939a-7eb6d3546854" pod="openshift-marketplace/redhat-operators-76s88" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-76s88\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:36 crc kubenswrapper[4778]: I0312 13:14:36.983148 4778 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:36 crc kubenswrapper[4778]: I0312 13:14:36.983727 4778 status_manager.go:851] "Failed to get status for pod" podUID="3b3fb69e-dd4f-4787-a207-4fe25106f9e7" pod="openshift-marketplace/community-operators-sjk9p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-sjk9p\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:36 crc kubenswrapper[4778]: I0312 13:14:36.983979 4778 status_manager.go:851] "Failed to get status for pod" podUID="a868c6a4-19ec-46be-a0af-be25b1049ff3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:36 crc kubenswrapper[4778]: I0312 13:14:36.984295 4778 status_manager.go:851] "Failed to get status for pod" podUID="651601bd-18fe-4ca1-9c61-481ca568d022" pod="openshift-marketplace/certified-operators-qx9d8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qx9d8\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:36 crc kubenswrapper[4778]: I0312 13:14:36.984858 4778 status_manager.go:851] "Failed to get status for pod" podUID="1d185732-cd6b-44c6-b4db-ee9ade00c683" pod="openshift-marketplace/community-operators-khr6h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-khr6h\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:37 crc kubenswrapper[4778]: E0312 13:14:37.243894 4778 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:14:37Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:14:37Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:14:37Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-12T13:14:37Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:0d4c830b2653f2eeffebd09537afb06afb5ae827adbc03f224ab7269f399c05c\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:d6065909bc521a3f9a85174276fdbceafad02a276449a7dd1952a1f689b0d362\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1735807445},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:185237e125a9d710a58d4b588ea6b75eb361e4e99d979c1acd193de3b5d787f1\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:746054bb64fa0b27b1a696cd5db508bb9ee883a94969e4c1c4b5d35a93da8ef5\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1281521882},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"
sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:898c67bf7fc973e99114f3148976a6c21ae0dbe413051415588fa9b995f5b331\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:a641939d2096609a4cf6eec872a1476b7c671bfd81cffc2edeb6e9f13c9deeba\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1231028434},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0e6908b5c2800b56584a3fdf3bc164b76cb945966a49103123dabb61f8e367f2\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:ad31505e97766fe3b9d49abfe33098361de32a828c13e290be714f02a7ee76e0\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1221788890},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},
{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-cli@sha256:69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9\\\",\\\"registry.redhat.io/openshift4/ose-cli@sha256:ef83967297f619f45075e7fd1428a1eb981622a6c174c46fb53b158ed24bed85\\\",\\\"registry.redhat.io/openshift4/ose-cli:latest\\\"],\\\"sizeBytes\\\":584351326},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d
8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":
[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:37 crc kubenswrapper[4778]: E0312 13:14:37.244409 4778 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:37 crc kubenswrapper[4778]: E0312 13:14:37.244632 4778 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:37 crc kubenswrapper[4778]: E0312 13:14:37.244854 4778 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:37 crc kubenswrapper[4778]: E0312 13:14:37.244999 4778 kubelet_node_status.go:585] "Error updating node status, 
will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:37 crc kubenswrapper[4778]: E0312 13:14:37.245012 4778 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 12 13:14:37 crc kubenswrapper[4778]: I0312 13:14:37.253362 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 13:14:37 crc kubenswrapper[4778]: I0312 13:14:37.254000 4778 status_manager.go:851] "Failed to get status for pod" podUID="651601bd-18fe-4ca1-9c61-481ca568d022" pod="openshift-marketplace/certified-operators-qx9d8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qx9d8\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:37 crc kubenswrapper[4778]: I0312 13:14:37.254705 4778 status_manager.go:851] "Failed to get status for pod" podUID="1d185732-cd6b-44c6-b4db-ee9ade00c683" pod="openshift-marketplace/community-operators-khr6h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-khr6h\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:37 crc kubenswrapper[4778]: I0312 13:14:37.255095 4778 status_manager.go:851] "Failed to get status for pod" podUID="c27afe2a-3402-49f9-b985-45fe67e40d22" pod="openshift-marketplace/certified-operators-l8n9b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l8n9b\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:37 crc kubenswrapper[4778]: I0312 13:14:37.255579 4778 status_manager.go:851] "Failed to get status for pod" podUID="de4557b4-7957-47a0-8c42-845be1fa0f32" pod="openshift-marketplace/redhat-marketplace-8xksl" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8xksl\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:37 crc kubenswrapper[4778]: I0312 13:14:37.256071 4778 status_manager.go:851] "Failed to get status for pod" podUID="b9bef112-9bef-4ce2-abd8-054b4d671658" pod="openshift-marketplace/redhat-marketplace-rtjz5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-rtjz5\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:37 crc kubenswrapper[4778]: I0312 13:14:37.256533 4778 status_manager.go:851] "Failed to get status for pod" podUID="34ecd758-517c-455a-939a-7eb6d3546854" pod="openshift-marketplace/redhat-operators-76s88" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-76s88\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:37 crc kubenswrapper[4778]: I0312 13:14:37.256824 4778 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:37 crc kubenswrapper[4778]: I0312 13:14:37.257116 4778 status_manager.go:851] "Failed to get status for pod" podUID="3b3fb69e-dd4f-4787-a207-4fe25106f9e7" pod="openshift-marketplace/community-operators-sjk9p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-sjk9p\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:37 crc kubenswrapper[4778]: I0312 13:14:37.257415 4778 status_manager.go:851] "Failed to get status for pod" podUID="a868c6a4-19ec-46be-a0af-be25b1049ff3" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:37 crc kubenswrapper[4778]: I0312 13:14:37.272931 4778 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d15dec8c-5c3e-4103-a5b1-6ee7ff5990ef" Mar 12 13:14:37 crc kubenswrapper[4778]: I0312 13:14:37.272958 4778 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d15dec8c-5c3e-4103-a5b1-6ee7ff5990ef" Mar 12 13:14:37 crc kubenswrapper[4778]: E0312 13:14:37.273571 4778 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.32:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 13:14:37 crc kubenswrapper[4778]: I0312 13:14:37.274496 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 13:14:37 crc kubenswrapper[4778]: W0312 13:14:37.293824 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-2024a3d05ac9cecb262ce575a431d92a7721c2423373b7acb5e57e5d294feecd WatchSource:0}: Error finding container 2024a3d05ac9cecb262ce575a431d92a7721c2423373b7acb5e57e5d294feecd: Status 404 returned error can't find the container with id 2024a3d05ac9cecb262ce575a431d92a7721c2423373b7acb5e57e5d294feecd Mar 12 13:14:37 crc kubenswrapper[4778]: I0312 13:14:37.680347 4778 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Mar 12 13:14:37 crc kubenswrapper[4778]: I0312 13:14:37.680784 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Mar 12 13:14:37 crc kubenswrapper[4778]: I0312 13:14:37.958007 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2024a3d05ac9cecb262ce575a431d92a7721c2423373b7acb5e57e5d294feecd"} Mar 12 13:14:37 crc kubenswrapper[4778]: I0312 13:14:37.961062 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 12 13:14:37 crc kubenswrapper[4778]: 
I0312 13:14:37.961951 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Mar 12 13:14:37 crc kubenswrapper[4778]: I0312 13:14:37.962094 4778 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="62d772ee1ff9d986b4311494a08c8763bd91704fda6cd9c6f067c98205a4067d" exitCode=1 Mar 12 13:14:37 crc kubenswrapper[4778]: I0312 13:14:37.962138 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"62d772ee1ff9d986b4311494a08c8763bd91704fda6cd9c6f067c98205a4067d"} Mar 12 13:14:37 crc kubenswrapper[4778]: I0312 13:14:37.963497 4778 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:37 crc kubenswrapper[4778]: I0312 13:14:37.963547 4778 scope.go:117] "RemoveContainer" containerID="62d772ee1ff9d986b4311494a08c8763bd91704fda6cd9c6f067c98205a4067d" Mar 12 13:14:37 crc kubenswrapper[4778]: I0312 13:14:37.963763 4778 status_manager.go:851] "Failed to get status for pod" podUID="3b3fb69e-dd4f-4787-a207-4fe25106f9e7" pod="openshift-marketplace/community-operators-sjk9p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-sjk9p\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:37 crc kubenswrapper[4778]: I0312 13:14:37.964016 4778 status_manager.go:851] "Failed to get status for pod" podUID="a868c6a4-19ec-46be-a0af-be25b1049ff3" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:37 crc kubenswrapper[4778]: I0312 13:14:37.964333 4778 status_manager.go:851] "Failed to get status for pod" podUID="651601bd-18fe-4ca1-9c61-481ca568d022" pod="openshift-marketplace/certified-operators-qx9d8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qx9d8\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:37 crc kubenswrapper[4778]: I0312 13:14:37.965018 4778 status_manager.go:851] "Failed to get status for pod" podUID="1d185732-cd6b-44c6-b4db-ee9ade00c683" pod="openshift-marketplace/community-operators-khr6h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-khr6h\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:37 crc kubenswrapper[4778]: I0312 13:14:37.965576 4778 status_manager.go:851] "Failed to get status for pod" podUID="c27afe2a-3402-49f9-b985-45fe67e40d22" pod="openshift-marketplace/certified-operators-l8n9b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l8n9b\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:37 crc kubenswrapper[4778]: I0312 13:14:37.966404 4778 status_manager.go:851] "Failed to get status for pod" podUID="de4557b4-7957-47a0-8c42-845be1fa0f32" pod="openshift-marketplace/redhat-marketplace-8xksl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8xksl\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:37 crc kubenswrapper[4778]: I0312 13:14:37.967121 4778 status_manager.go:851] "Failed to get status for pod" podUID="b9bef112-9bef-4ce2-abd8-054b4d671658" 
pod="openshift-marketplace/redhat-marketplace-rtjz5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-rtjz5\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:37 crc kubenswrapper[4778]: I0312 13:14:37.967748 4778 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:37 crc kubenswrapper[4778]: I0312 13:14:37.968159 4778 status_manager.go:851] "Failed to get status for pod" podUID="34ecd758-517c-455a-939a-7eb6d3546854" pod="openshift-marketplace/redhat-operators-76s88" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-76s88\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:38 crc kubenswrapper[4778]: I0312 13:14:38.420929 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 13:14:38 crc kubenswrapper[4778]: I0312 13:14:38.970457 4778 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="c3ba0ec0d045f651180121b0ffc75036762171c599bf61f1dbb06d5e91189e14" exitCode=0 Mar 12 13:14:38 crc kubenswrapper[4778]: I0312 13:14:38.970556 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"c3ba0ec0d045f651180121b0ffc75036762171c599bf61f1dbb06d5e91189e14"} Mar 12 13:14:38 crc kubenswrapper[4778]: I0312 13:14:38.970859 4778 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="d15dec8c-5c3e-4103-a5b1-6ee7ff5990ef" Mar 12 13:14:38 crc kubenswrapper[4778]: I0312 13:14:38.970874 4778 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d15dec8c-5c3e-4103-a5b1-6ee7ff5990ef" Mar 12 13:14:38 crc kubenswrapper[4778]: E0312 13:14:38.971605 4778 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.32:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 13:14:38 crc kubenswrapper[4778]: I0312 13:14:38.971881 4778 status_manager.go:851] "Failed to get status for pod" podUID="3b3fb69e-dd4f-4787-a207-4fe25106f9e7" pod="openshift-marketplace/community-operators-sjk9p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-sjk9p\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:38 crc kubenswrapper[4778]: I0312 13:14:38.972392 4778 status_manager.go:851] "Failed to get status for pod" podUID="a868c6a4-19ec-46be-a0af-be25b1049ff3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:38 crc kubenswrapper[4778]: I0312 13:14:38.972875 4778 status_manager.go:851] "Failed to get status for pod" podUID="651601bd-18fe-4ca1-9c61-481ca568d022" pod="openshift-marketplace/certified-operators-qx9d8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qx9d8\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:38 crc kubenswrapper[4778]: I0312 13:14:38.973156 4778 status_manager.go:851] "Failed to get status for pod" podUID="1d185732-cd6b-44c6-b4db-ee9ade00c683" pod="openshift-marketplace/community-operators-khr6h" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-khr6h\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:38 crc kubenswrapper[4778]: I0312 13:14:38.973459 4778 status_manager.go:851] "Failed to get status for pod" podUID="c27afe2a-3402-49f9-b985-45fe67e40d22" pod="openshift-marketplace/certified-operators-l8n9b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l8n9b\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:38 crc kubenswrapper[4778]: I0312 13:14:38.973830 4778 status_manager.go:851] "Failed to get status for pod" podUID="b9bef112-9bef-4ce2-abd8-054b4d671658" pod="openshift-marketplace/redhat-marketplace-rtjz5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-rtjz5\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:38 crc kubenswrapper[4778]: I0312 13:14:38.974588 4778 status_manager.go:851] "Failed to get status for pod" podUID="de4557b4-7957-47a0-8c42-845be1fa0f32" pod="openshift-marketplace/redhat-marketplace-8xksl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8xksl\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:38 crc kubenswrapper[4778]: I0312 13:14:38.974686 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 12 13:14:38 crc kubenswrapper[4778]: I0312 13:14:38.975008 4778 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 
38.129.56.32:6443: connect: connection refused" Mar 12 13:14:38 crc kubenswrapper[4778]: I0312 13:14:38.975245 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Mar 12 13:14:38 crc kubenswrapper[4778]: I0312 13:14:38.975292 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b062a3e9d7d1fd4f8252de53fc5e70585f0a129e0886c22103d39b30c3ecf110"} Mar 12 13:14:38 crc kubenswrapper[4778]: I0312 13:14:38.975369 4778 status_manager.go:851] "Failed to get status for pod" podUID="34ecd758-517c-455a-939a-7eb6d3546854" pod="openshift-marketplace/redhat-operators-76s88" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-76s88\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:38 crc kubenswrapper[4778]: I0312 13:14:38.975621 4778 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:38 crc kubenswrapper[4778]: I0312 13:14:38.975994 4778 status_manager.go:851] "Failed to get status for pod" podUID="651601bd-18fe-4ca1-9c61-481ca568d022" pod="openshift-marketplace/certified-operators-qx9d8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qx9d8\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:38 crc kubenswrapper[4778]: I0312 13:14:38.976298 4778 status_manager.go:851] "Failed to get status for pod" 
podUID="1d185732-cd6b-44c6-b4db-ee9ade00c683" pod="openshift-marketplace/community-operators-khr6h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-khr6h\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:38 crc kubenswrapper[4778]: I0312 13:14:38.976597 4778 status_manager.go:851] "Failed to get status for pod" podUID="c27afe2a-3402-49f9-b985-45fe67e40d22" pod="openshift-marketplace/certified-operators-l8n9b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-l8n9b\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:38 crc kubenswrapper[4778]: I0312 13:14:38.976931 4778 status_manager.go:851] "Failed to get status for pod" podUID="b9bef112-9bef-4ce2-abd8-054b4d671658" pod="openshift-marketplace/redhat-marketplace-rtjz5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-rtjz5\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:38 crc kubenswrapper[4778]: I0312 13:14:38.977233 4778 status_manager.go:851] "Failed to get status for pod" podUID="de4557b4-7957-47a0-8c42-845be1fa0f32" pod="openshift-marketplace/redhat-marketplace-8xksl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-8xksl\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:38 crc kubenswrapper[4778]: I0312 13:14:38.977528 4778 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:38 crc kubenswrapper[4778]: I0312 13:14:38.977764 4778 status_manager.go:851] "Failed to get status for 
pod" podUID="34ecd758-517c-455a-939a-7eb6d3546854" pod="openshift-marketplace/redhat-operators-76s88" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-76s88\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:38 crc kubenswrapper[4778]: I0312 13:14:38.978024 4778 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:38 crc kubenswrapper[4778]: I0312 13:14:38.978303 4778 status_manager.go:851] "Failed to get status for pod" podUID="3b3fb69e-dd4f-4787-a207-4fe25106f9e7" pod="openshift-marketplace/community-operators-sjk9p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-sjk9p\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:38 crc kubenswrapper[4778]: I0312 13:14:38.978643 4778 status_manager.go:851] "Failed to get status for pod" podUID="a868c6a4-19ec-46be-a0af-be25b1049ff3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.32:6443: connect: connection refused" Mar 12 13:14:39 crc kubenswrapper[4778]: E0312 13:14:39.115198 4778 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.32:6443: connect: connection refused" interval="7s" Mar 12 13:14:39 crc kubenswrapper[4778]: I0312 13:14:39.985423 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"19c733da1c83d3d6ba0682c73ec41046fadd8031ec4303ee9495e0db3e977a6d"} Mar 12 13:14:39 crc kubenswrapper[4778]: I0312 13:14:39.985485 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0f85a6b014ee32c1af2cc5c218ef1c7fb4a60d752368d33b696855f34eac6c5b"} Mar 12 13:14:39 crc kubenswrapper[4778]: I0312 13:14:39.985497 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b63f23fa8170e58a310dd20337f65adb16b8d5342fa6ba2ed14f4ddc1fbd544c"} Mar 12 13:14:40 crc kubenswrapper[4778]: I0312 13:14:40.997041 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e4feca0a2e0663b03318b7fdaf2f14c43fb0c91c647627501b7d93e076309212"} Mar 12 13:14:40 crc kubenswrapper[4778]: I0312 13:14:40.997505 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"51cac27142c0b53681a48ca8823e0010131e54f580c2880e477a076b3a7108e9"} Mar 12 13:14:40 crc kubenswrapper[4778]: I0312 13:14:40.997523 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 13:14:40 crc kubenswrapper[4778]: I0312 13:14:40.997286 4778 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d15dec8c-5c3e-4103-a5b1-6ee7ff5990ef" Mar 12 13:14:40 crc kubenswrapper[4778]: I0312 13:14:40.997542 4778 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="d15dec8c-5c3e-4103-a5b1-6ee7ff5990ef" Mar 12 13:14:41 crc kubenswrapper[4778]: I0312 13:14:41.850536 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-5kw4v" podUID="f36ec67c-df24-46ce-94b9-10619822c15a" containerName="oauth-openshift" containerID="cri-o://0fe97ea87ef2b2f3106d61689b8bc6549f4b603dd4e79e424ddbe8637587b2f3" gracePeriod=15 Mar 12 13:14:42 crc kubenswrapper[4778]: I0312 13:14:42.007305 4778 generic.go:334] "Generic (PLEG): container finished" podID="f36ec67c-df24-46ce-94b9-10619822c15a" containerID="0fe97ea87ef2b2f3106d61689b8bc6549f4b603dd4e79e424ddbe8637587b2f3" exitCode=0 Mar 12 13:14:42 crc kubenswrapper[4778]: I0312 13:14:42.007351 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5kw4v" event={"ID":"f36ec67c-df24-46ce-94b9-10619822c15a","Type":"ContainerDied","Data":"0fe97ea87ef2b2f3106d61689b8bc6549f4b603dd4e79e424ddbe8637587b2f3"} Mar 12 13:14:42 crc kubenswrapper[4778]: I0312 13:14:42.236780 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5kw4v" Mar 12 13:14:42 crc kubenswrapper[4778]: I0312 13:14:42.274868 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 13:14:42 crc kubenswrapper[4778]: I0312 13:14:42.274909 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 13:14:42 crc kubenswrapper[4778]: I0312 13:14:42.280095 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 13:14:42 crc kubenswrapper[4778]: I0312 13:14:42.299287 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-system-serving-cert\") pod \"f36ec67c-df24-46ce-94b9-10619822c15a\" (UID: \"f36ec67c-df24-46ce-94b9-10619822c15a\") " Mar 12 13:14:42 crc kubenswrapper[4778]: I0312 13:14:42.299331 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-user-template-provider-selection\") pod \"f36ec67c-df24-46ce-94b9-10619822c15a\" (UID: \"f36ec67c-df24-46ce-94b9-10619822c15a\") " Mar 12 13:14:42 crc kubenswrapper[4778]: I0312 13:14:42.299354 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-user-idp-0-file-data\") pod \"f36ec67c-df24-46ce-94b9-10619822c15a\" (UID: \"f36ec67c-df24-46ce-94b9-10619822c15a\") " Mar 12 13:14:42 crc kubenswrapper[4778]: I0312 13:14:42.299376 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-system-router-certs\") pod \"f36ec67c-df24-46ce-94b9-10619822c15a\" (UID: \"f36ec67c-df24-46ce-94b9-10619822c15a\") " Mar 12 13:14:42 crc kubenswrapper[4778]: I0312 13:14:42.299410 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-system-session\") pod \"f36ec67c-df24-46ce-94b9-10619822c15a\" (UID: \"f36ec67c-df24-46ce-94b9-10619822c15a\") " Mar 12 13:14:42 crc kubenswrapper[4778]: I0312 13:14:42.299442 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f36ec67c-df24-46ce-94b9-10619822c15a-audit-policies\") pod \"f36ec67c-df24-46ce-94b9-10619822c15a\" (UID: \"f36ec67c-df24-46ce-94b9-10619822c15a\") " Mar 12 13:14:42 crc kubenswrapper[4778]: I0312 13:14:42.299458 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-system-trusted-ca-bundle\") pod \"f36ec67c-df24-46ce-94b9-10619822c15a\" (UID: \"f36ec67c-df24-46ce-94b9-10619822c15a\") " Mar 12 13:14:42 crc kubenswrapper[4778]: I0312 13:14:42.299479 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f36ec67c-df24-46ce-94b9-10619822c15a-audit-dir\") pod \"f36ec67c-df24-46ce-94b9-10619822c15a\" (UID: \"f36ec67c-df24-46ce-94b9-10619822c15a\") " Mar 12 13:14:42 crc kubenswrapper[4778]: I0312 13:14:42.299505 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-user-template-login\") 
pod \"f36ec67c-df24-46ce-94b9-10619822c15a\" (UID: \"f36ec67c-df24-46ce-94b9-10619822c15a\") " Mar 12 13:14:42 crc kubenswrapper[4778]: I0312 13:14:42.299520 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-system-ocp-branding-template\") pod \"f36ec67c-df24-46ce-94b9-10619822c15a\" (UID: \"f36ec67c-df24-46ce-94b9-10619822c15a\") " Mar 12 13:14:42 crc kubenswrapper[4778]: I0312 13:14:42.299540 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-user-template-error\") pod \"f36ec67c-df24-46ce-94b9-10619822c15a\" (UID: \"f36ec67c-df24-46ce-94b9-10619822c15a\") " Mar 12 13:14:42 crc kubenswrapper[4778]: I0312 13:14:42.299556 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-system-cliconfig\") pod \"f36ec67c-df24-46ce-94b9-10619822c15a\" (UID: \"f36ec67c-df24-46ce-94b9-10619822c15a\") " Mar 12 13:14:42 crc kubenswrapper[4778]: I0312 13:14:42.299589 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrlvl\" (UniqueName: \"kubernetes.io/projected/f36ec67c-df24-46ce-94b9-10619822c15a-kube-api-access-xrlvl\") pod \"f36ec67c-df24-46ce-94b9-10619822c15a\" (UID: \"f36ec67c-df24-46ce-94b9-10619822c15a\") " Mar 12 13:14:42 crc kubenswrapper[4778]: I0312 13:14:42.299630 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-system-service-ca\") pod \"f36ec67c-df24-46ce-94b9-10619822c15a\" (UID: 
\"f36ec67c-df24-46ce-94b9-10619822c15a\") " Mar 12 13:14:42 crc kubenswrapper[4778]: I0312 13:14:42.301058 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "f36ec67c-df24-46ce-94b9-10619822c15a" (UID: "f36ec67c-df24-46ce-94b9-10619822c15a"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:14:42 crc kubenswrapper[4778]: I0312 13:14:42.301204 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "f36ec67c-df24-46ce-94b9-10619822c15a" (UID: "f36ec67c-df24-46ce-94b9-10619822c15a"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:14:42 crc kubenswrapper[4778]: I0312 13:14:42.301157 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "f36ec67c-df24-46ce-94b9-10619822c15a" (UID: "f36ec67c-df24-46ce-94b9-10619822c15a"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:14:42 crc kubenswrapper[4778]: I0312 13:14:42.301411 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f36ec67c-df24-46ce-94b9-10619822c15a-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f36ec67c-df24-46ce-94b9-10619822c15a" (UID: "f36ec67c-df24-46ce-94b9-10619822c15a"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 13:14:42 crc kubenswrapper[4778]: I0312 13:14:42.301519 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f36ec67c-df24-46ce-94b9-10619822c15a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f36ec67c-df24-46ce-94b9-10619822c15a" (UID: "f36ec67c-df24-46ce-94b9-10619822c15a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:14:42 crc kubenswrapper[4778]: I0312 13:14:42.305053 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f36ec67c-df24-46ce-94b9-10619822c15a-kube-api-access-xrlvl" (OuterVolumeSpecName: "kube-api-access-xrlvl") pod "f36ec67c-df24-46ce-94b9-10619822c15a" (UID: "f36ec67c-df24-46ce-94b9-10619822c15a"). InnerVolumeSpecName "kube-api-access-xrlvl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:14:42 crc kubenswrapper[4778]: I0312 13:14:42.305255 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "f36ec67c-df24-46ce-94b9-10619822c15a" (UID: "f36ec67c-df24-46ce-94b9-10619822c15a"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:14:42 crc kubenswrapper[4778]: I0312 13:14:42.305350 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "f36ec67c-df24-46ce-94b9-10619822c15a" (UID: "f36ec67c-df24-46ce-94b9-10619822c15a"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:14:42 crc kubenswrapper[4778]: I0312 13:14:42.305560 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "f36ec67c-df24-46ce-94b9-10619822c15a" (UID: "f36ec67c-df24-46ce-94b9-10619822c15a"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:14:42 crc kubenswrapper[4778]: I0312 13:14:42.305922 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "f36ec67c-df24-46ce-94b9-10619822c15a" (UID: "f36ec67c-df24-46ce-94b9-10619822c15a"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:14:42 crc kubenswrapper[4778]: I0312 13:14:42.306250 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "f36ec67c-df24-46ce-94b9-10619822c15a" (UID: "f36ec67c-df24-46ce-94b9-10619822c15a"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:14:42 crc kubenswrapper[4778]: I0312 13:14:42.306442 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "f36ec67c-df24-46ce-94b9-10619822c15a" (UID: "f36ec67c-df24-46ce-94b9-10619822c15a"). 
InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:14:42 crc kubenswrapper[4778]: I0312 13:14:42.306622 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "f36ec67c-df24-46ce-94b9-10619822c15a" (UID: "f36ec67c-df24-46ce-94b9-10619822c15a"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:14:42 crc kubenswrapper[4778]: I0312 13:14:42.311561 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "f36ec67c-df24-46ce-94b9-10619822c15a" (UID: "f36ec67c-df24-46ce-94b9-10619822c15a"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:14:42 crc kubenswrapper[4778]: I0312 13:14:42.400620 4778 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Mar 12 13:14:42 crc kubenswrapper[4778]: I0312 13:14:42.400648 4778 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Mar 12 13:14:42 crc kubenswrapper[4778]: I0312 13:14:42.400660 4778 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Mar 12 13:14:42 crc kubenswrapper[4778]: I0312 13:14:42.400670 4778 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f36ec67c-df24-46ce-94b9-10619822c15a-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 12 13:14:42 crc kubenswrapper[4778]: I0312 13:14:42.400679 4778 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:14:42 crc kubenswrapper[4778]: I0312 13:14:42.400688 4778 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f36ec67c-df24-46ce-94b9-10619822c15a-audit-dir\") on node \"crc\" DevicePath \"\"" Mar 12 13:14:42 crc kubenswrapper[4778]: I0312 13:14:42.400698 4778 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Mar 12 13:14:42 crc kubenswrapper[4778]: I0312 13:14:42.400707 4778 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Mar 12 13:14:42 crc kubenswrapper[4778]: I0312 13:14:42.400716 4778 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Mar 12 13:14:42 crc kubenswrapper[4778]: I0312 13:14:42.400724 4778 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Mar 12 13:14:42 crc kubenswrapper[4778]: I0312 13:14:42.400733 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrlvl\" (UniqueName: \"kubernetes.io/projected/f36ec67c-df24-46ce-94b9-10619822c15a-kube-api-access-xrlvl\") on node \"crc\" DevicePath \"\"" Mar 12 13:14:42 crc kubenswrapper[4778]: I0312 13:14:42.400741 4778 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Mar 12 13:14:42 crc kubenswrapper[4778]: I0312 13:14:42.400748 4778 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 13:14:42 crc kubenswrapper[4778]: I0312 13:14:42.400757 4778 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f36ec67c-df24-46ce-94b9-10619822c15a-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Mar 12 13:14:43 crc kubenswrapper[4778]: I0312 13:14:43.015047 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5kw4v" event={"ID":"f36ec67c-df24-46ce-94b9-10619822c15a","Type":"ContainerDied","Data":"5f7362fc7516f559081256deebf693613a994486c74f126dfda003689ad66bff"} Mar 12 13:14:43 crc kubenswrapper[4778]: I0312 13:14:43.016433 4778 scope.go:117] "RemoveContainer" containerID="0fe97ea87ef2b2f3106d61689b8bc6549f4b603dd4e79e424ddbe8637587b2f3" Mar 12 13:14:43 crc kubenswrapper[4778]: I0312 13:14:43.015098 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5kw4v" Mar 12 13:14:46 crc kubenswrapper[4778]: I0312 13:14:46.006224 4778 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 13:14:46 crc kubenswrapper[4778]: I0312 13:14:46.038181 4778 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d15dec8c-5c3e-4103-a5b1-6ee7ff5990ef" Mar 12 13:14:46 crc kubenswrapper[4778]: I0312 13:14:46.038231 4778 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d15dec8c-5c3e-4103-a5b1-6ee7ff5990ef" Mar 12 13:14:46 crc kubenswrapper[4778]: I0312 13:14:46.044380 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 13:14:46 crc kubenswrapper[4778]: I0312 13:14:46.264961 4778 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" 
oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="20ab8792-6c0f-49d4-993b-8fde07fc779e" Mar 12 13:14:46 crc kubenswrapper[4778]: E0312 13:14:46.911284 4778 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\": Failed to watch *v1.Secret: unknown (get secrets)" logger="UnhandledError" Mar 12 13:14:46 crc kubenswrapper[4778]: E0312 13:14:46.958618 4778 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\": Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError" Mar 12 13:14:47 crc kubenswrapper[4778]: I0312 13:14:47.052222 4778 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d15dec8c-5c3e-4103-a5b1-6ee7ff5990ef" Mar 12 13:14:47 crc kubenswrapper[4778]: I0312 13:14:47.052267 4778 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d15dec8c-5c3e-4103-a5b1-6ee7ff5990ef" Mar 12 13:14:47 crc kubenswrapper[4778]: I0312 13:14:47.057001 4778 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="20ab8792-6c0f-49d4-993b-8fde07fc779e" Mar 12 13:14:47 crc kubenswrapper[4778]: E0312 13:14:47.119607 4778 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError" Mar 12 13:14:47 crc kubenswrapper[4778]: I0312 13:14:47.203879 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 13:14:47 crc kubenswrapper[4778]: I0312 13:14:47.207396 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 
13:14:48 crc kubenswrapper[4778]: I0312 13:14:48.059795 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 13:14:48 crc kubenswrapper[4778]: I0312 13:14:48.067005 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 12 13:14:56 crc kubenswrapper[4778]: I0312 13:14:56.281341 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 12 13:14:56 crc kubenswrapper[4778]: I0312 13:14:56.473658 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 12 13:14:56 crc kubenswrapper[4778]: I0312 13:14:56.703230 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 12 13:14:56 crc kubenswrapper[4778]: I0312 13:14:56.872372 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 12 13:14:56 crc kubenswrapper[4778]: I0312 13:14:56.932759 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 12 13:14:57 crc kubenswrapper[4778]: I0312 13:14:57.048687 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 12 13:14:57 crc kubenswrapper[4778]: I0312 13:14:57.143745 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 12 13:14:57 crc kubenswrapper[4778]: I0312 13:14:57.427767 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 12 13:14:57 crc kubenswrapper[4778]: I0312 13:14:57.463891 4778 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 12 13:14:57 crc kubenswrapper[4778]: I0312 13:14:57.771172 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 12 13:14:57 crc kubenswrapper[4778]: I0312 13:14:57.849277 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 12 13:14:57 crc kubenswrapper[4778]: I0312 13:14:57.945423 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Mar 12 13:14:57 crc kubenswrapper[4778]: I0312 13:14:57.947703 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 12 13:14:58 crc kubenswrapper[4778]: I0312 13:14:58.004015 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 12 13:14:58 crc kubenswrapper[4778]: I0312 13:14:58.220041 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Mar 12 13:14:58 crc kubenswrapper[4778]: I0312 13:14:58.299932 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 12 13:14:58 crc kubenswrapper[4778]: I0312 13:14:58.674143 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Mar 12 13:14:58 crc kubenswrapper[4778]: I0312 13:14:58.770169 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Mar 12 13:14:58 crc kubenswrapper[4778]: I0312 13:14:58.891958 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 12 13:14:58 crc kubenswrapper[4778]: I0312 
13:14:58.894893 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 12 13:14:58 crc kubenswrapper[4778]: I0312 13:14:58.897052 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 12 13:14:59 crc kubenswrapper[4778]: I0312 13:14:59.028112 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 12 13:14:59 crc kubenswrapper[4778]: I0312 13:14:59.061656 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Mar 12 13:14:59 crc kubenswrapper[4778]: I0312 13:14:59.134006 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 12 13:14:59 crc kubenswrapper[4778]: I0312 13:14:59.157003 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 12 13:14:59 crc kubenswrapper[4778]: I0312 13:14:59.247125 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 12 13:14:59 crc kubenswrapper[4778]: I0312 13:14:59.282864 4778 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 12 13:14:59 crc kubenswrapper[4778]: I0312 13:14:59.323556 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 12 13:14:59 crc kubenswrapper[4778]: I0312 13:14:59.699876 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 12 13:14:59 crc kubenswrapper[4778]: I0312 13:14:59.796175 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 12 13:14:59 crc kubenswrapper[4778]: 
I0312 13:14:59.806427 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 12 13:14:59 crc kubenswrapper[4778]: I0312 13:14:59.829805 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 12 13:14:59 crc kubenswrapper[4778]: I0312 13:14:59.866175 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 12 13:14:59 crc kubenswrapper[4778]: I0312 13:14:59.937907 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 12 13:14:59 crc kubenswrapper[4778]: I0312 13:14:59.971970 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 12 13:14:59 crc kubenswrapper[4778]: I0312 13:14:59.993587 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Mar 12 13:15:00 crc kubenswrapper[4778]: I0312 13:15:00.049848 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 12 13:15:00 crc kubenswrapper[4778]: I0312 13:15:00.050593 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 12 13:15:00 crc kubenswrapper[4778]: I0312 13:15:00.062028 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 12 13:15:00 crc kubenswrapper[4778]: I0312 13:15:00.063271 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 12 13:15:00 crc kubenswrapper[4778]: I0312 13:15:00.105936 4778 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 12 13:15:00 crc kubenswrapper[4778]: I0312 13:15:00.164651 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 12 13:15:00 crc kubenswrapper[4778]: I0312 13:15:00.177436 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Mar 12 13:15:00 crc kubenswrapper[4778]: I0312 13:15:00.334284 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 12 13:15:00 crc kubenswrapper[4778]: I0312 13:15:00.493623 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Mar 12 13:15:00 crc kubenswrapper[4778]: I0312 13:15:00.593463 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 12 13:15:00 crc kubenswrapper[4778]: I0312 13:15:00.647337 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 12 13:15:00 crc kubenswrapper[4778]: I0312 13:15:00.662073 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 12 13:15:00 crc kubenswrapper[4778]: I0312 13:15:00.719570 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 12 13:15:00 crc kubenswrapper[4778]: I0312 13:15:00.875627 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 12 13:15:00 crc kubenswrapper[4778]: I0312 13:15:00.901346 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 12 13:15:00 crc kubenswrapper[4778]: I0312 13:15:00.932115 4778 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Mar 12 13:15:00 crc kubenswrapper[4778]: I0312 13:15:00.977587 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 12 13:15:01 crc kubenswrapper[4778]: I0312 13:15:01.038601 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 12 13:15:01 crc kubenswrapper[4778]: I0312 13:15:01.091131 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 12 13:15:01 crc kubenswrapper[4778]: I0312 13:15:01.102527 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Mar 12 13:15:01 crc kubenswrapper[4778]: I0312 13:15:01.160030 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 12 13:15:01 crc kubenswrapper[4778]: I0312 13:15:01.173701 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 12 13:15:01 crc kubenswrapper[4778]: I0312 13:15:01.214123 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Mar 12 13:15:01 crc kubenswrapper[4778]: I0312 13:15:01.268284 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 12 13:15:01 crc kubenswrapper[4778]: I0312 13:15:01.330272 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 12 13:15:01 crc kubenswrapper[4778]: I0312 13:15:01.432860 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 12 13:15:01 crc kubenswrapper[4778]: I0312 13:15:01.564158 4778 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Mar 12 13:15:01 crc kubenswrapper[4778]: I0312 13:15:01.583337 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 12 13:15:01 crc kubenswrapper[4778]: I0312 13:15:01.633617 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Mar 12 13:15:01 crc kubenswrapper[4778]: I0312 13:15:01.669482 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Mar 12 13:15:01 crc kubenswrapper[4778]: I0312 13:15:01.671506 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Mar 12 13:15:01 crc kubenswrapper[4778]: I0312 13:15:01.688210 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 12 13:15:01 crc kubenswrapper[4778]: I0312 13:15:01.717745 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 12 13:15:01 crc kubenswrapper[4778]: I0312 13:15:01.749837 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 12 13:15:01 crc kubenswrapper[4778]: I0312 13:15:01.760734 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Mar 12 13:15:01 crc kubenswrapper[4778]: I0312 13:15:01.940753 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Mar 12 13:15:02 crc kubenswrapper[4778]: I0312 13:15:02.070431 4778 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver"/"etcd-client" Mar 12 13:15:02 crc kubenswrapper[4778]: I0312 13:15:02.074584 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 12 13:15:02 crc kubenswrapper[4778]: I0312 13:15:02.160301 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Mar 12 13:15:02 crc kubenswrapper[4778]: I0312 13:15:02.229274 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 12 13:15:02 crc kubenswrapper[4778]: I0312 13:15:02.238227 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 12 13:15:02 crc kubenswrapper[4778]: I0312 13:15:02.299254 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 12 13:15:02 crc kubenswrapper[4778]: I0312 13:15:02.310097 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Mar 12 13:15:02 crc kubenswrapper[4778]: I0312 13:15:02.358884 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 12 13:15:02 crc kubenswrapper[4778]: I0312 13:15:02.442282 4778 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 12 13:15:02 crc kubenswrapper[4778]: I0312 13:15:02.443068 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rtjz5" podStartSLOduration=38.407977804 podStartE2EDuration="2m7.443046296s" podCreationTimestamp="2026-03-12 13:12:55 +0000 UTC" firstStartedPulling="2026-03-12 13:12:57.064590534 +0000 UTC m=+195.513285930" lastFinishedPulling="2026-03-12 
13:14:26.099658986 +0000 UTC m=+284.548354422" observedRunningTime="2026-03-12 13:14:46.061051499 +0000 UTC m=+304.509746915" watchObservedRunningTime="2026-03-12 13:15:02.443046296 +0000 UTC m=+320.891741692" Mar 12 13:15:02 crc kubenswrapper[4778]: I0312 13:15:02.444544 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qx9d8" podStartSLOduration=39.516571354 podStartE2EDuration="2m9.444533483s" podCreationTimestamp="2026-03-12 13:12:53 +0000 UTC" firstStartedPulling="2026-03-12 13:12:55.9649896 +0000 UTC m=+194.413684996" lastFinishedPulling="2026-03-12 13:14:25.892951739 +0000 UTC m=+284.341647125" observedRunningTime="2026-03-12 13:14:46.154097241 +0000 UTC m=+304.602792637" watchObservedRunningTime="2026-03-12 13:15:02.444533483 +0000 UTC m=+320.893228879" Mar 12 13:15:02 crc kubenswrapper[4778]: I0312 13:15:02.445794 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-l8n9b" podStartSLOduration=42.824742371 podStartE2EDuration="2m9.445784314s" podCreationTimestamp="2026-03-12 13:12:53 +0000 UTC" firstStartedPulling="2026-03-12 13:12:55.827683035 +0000 UTC m=+194.276378431" lastFinishedPulling="2026-03-12 13:14:22.448724978 +0000 UTC m=+280.897420374" observedRunningTime="2026-03-12 13:14:46.028791911 +0000 UTC m=+304.477487327" watchObservedRunningTime="2026-03-12 13:15:02.445784314 +0000 UTC m=+320.894479710" Mar 12 13:15:02 crc kubenswrapper[4778]: I0312 13:15:02.447586 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=38.447574871 podStartE2EDuration="38.447574871s" podCreationTimestamp="2026-03-12 13:14:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:14:46.100152366 +0000 UTC m=+304.548847782" 
watchObservedRunningTime="2026-03-12 13:15:02.447574871 +0000 UTC m=+320.896270267" Mar 12 13:15:02 crc kubenswrapper[4778]: I0312 13:15:02.448045 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sjk9p" podStartSLOduration=39.463259805 podStartE2EDuration="2m9.448038256s" podCreationTimestamp="2026-03-12 13:12:53 +0000 UTC" firstStartedPulling="2026-03-12 13:12:55.899088745 +0000 UTC m=+194.347784141" lastFinishedPulling="2026-03-12 13:14:25.883867196 +0000 UTC m=+284.332562592" observedRunningTime="2026-03-12 13:14:46.113514356 +0000 UTC m=+304.562209792" watchObservedRunningTime="2026-03-12 13:15:02.448038256 +0000 UTC m=+320.896733652" Mar 12 13:15:02 crc kubenswrapper[4778]: I0312 13:15:02.449559 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-khr6h" podStartSLOduration=46.815371593 podStartE2EDuration="2m9.449548285s" podCreationTimestamp="2026-03-12 13:12:53 +0000 UTC" firstStartedPulling="2026-03-12 13:12:54.777716279 +0000 UTC m=+193.226411675" lastFinishedPulling="2026-03-12 13:14:17.411892951 +0000 UTC m=+275.860588367" observedRunningTime="2026-03-12 13:14:46.192175526 +0000 UTC m=+304.640870922" watchObservedRunningTime="2026-03-12 13:15:02.449548285 +0000 UTC m=+320.898243681" Mar 12 13:15:02 crc kubenswrapper[4778]: I0312 13:15:02.450624 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-76s88","openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-5kw4v"] Mar 12 13:15:02 crc kubenswrapper[4778]: I0312 13:15:02.450697 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 12 13:15:02 crc kubenswrapper[4778]: I0312 13:15:02.459585 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 12 13:15:02 crc 
kubenswrapper[4778]: I0312 13:15:02.467853 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 12 13:15:02 crc kubenswrapper[4778]: I0312 13:15:02.474638 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=16.474618431 podStartE2EDuration="16.474618431s" podCreationTimestamp="2026-03-12 13:14:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:15:02.473065981 +0000 UTC m=+320.921761387" watchObservedRunningTime="2026-03-12 13:15:02.474618431 +0000 UTC m=+320.923313827" Mar 12 13:15:02 crc kubenswrapper[4778]: I0312 13:15:02.514171 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 12 13:15:02 crc kubenswrapper[4778]: I0312 13:15:02.528290 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 12 13:15:02 crc kubenswrapper[4778]: I0312 13:15:02.613572 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Mar 12 13:15:02 crc kubenswrapper[4778]: I0312 13:15:02.684864 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Mar 12 13:15:02 crc kubenswrapper[4778]: I0312 13:15:02.721527 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 12 13:15:02 crc kubenswrapper[4778]: I0312 13:15:02.741610 4778 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 12 13:15:02 crc kubenswrapper[4778]: I0312 13:15:02.753237 4778 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 12 13:15:02 crc kubenswrapper[4778]: I0312 13:15:02.756249 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 12 13:15:02 crc kubenswrapper[4778]: I0312 13:15:02.804623 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 12 13:15:02 crc kubenswrapper[4778]: I0312 13:15:02.887318 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 12 13:15:02 crc kubenswrapper[4778]: I0312 13:15:02.897856 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 12 13:15:03 crc kubenswrapper[4778]: I0312 13:15:03.124927 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 12 13:15:03 crc kubenswrapper[4778]: I0312 13:15:03.229333 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 12 13:15:03 crc kubenswrapper[4778]: I0312 13:15:03.273931 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 12 13:15:03 crc kubenswrapper[4778]: I0312 13:15:03.381129 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 12 13:15:03 crc kubenswrapper[4778]: I0312 13:15:03.451412 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 12 13:15:03 crc kubenswrapper[4778]: I0312 13:15:03.475908 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" 
Mar 12 13:15:03 crc kubenswrapper[4778]: I0312 13:15:03.483352 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Mar 12 13:15:03 crc kubenswrapper[4778]: I0312 13:15:03.591083 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 12 13:15:03 crc kubenswrapper[4778]: I0312 13:15:03.622649 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 12 13:15:03 crc kubenswrapper[4778]: I0312 13:15:03.914501 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 12 13:15:04 crc kubenswrapper[4778]: I0312 13:15:04.117501 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Mar 12 13:15:04 crc kubenswrapper[4778]: I0312 13:15:04.260033 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34ecd758-517c-455a-939a-7eb6d3546854" path="/var/lib/kubelet/pods/34ecd758-517c-455a-939a-7eb6d3546854/volumes" Mar 12 13:15:04 crc kubenswrapper[4778]: I0312 13:15:04.260690 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f36ec67c-df24-46ce-94b9-10619822c15a" path="/var/lib/kubelet/pods/f36ec67c-df24-46ce-94b9-10619822c15a/volumes" Mar 12 13:15:04 crc kubenswrapper[4778]: I0312 13:15:04.283265 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 12 13:15:04 crc kubenswrapper[4778]: I0312 13:15:04.316463 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 12 13:15:04 crc kubenswrapper[4778]: I0312 13:15:04.317149 4778 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 12 13:15:04 crc kubenswrapper[4778]: I0312 13:15:04.327975 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 12 13:15:04 crc kubenswrapper[4778]: I0312 13:15:04.336266 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Mar 12 13:15:04 crc kubenswrapper[4778]: I0312 13:15:04.500261 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 12 13:15:04 crc kubenswrapper[4778]: I0312 13:15:04.591923 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 12 13:15:04 crc kubenswrapper[4778]: I0312 13:15:04.641495 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 12 13:15:04 crc kubenswrapper[4778]: I0312 13:15:04.670605 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 12 13:15:04 crc kubenswrapper[4778]: I0312 13:15:04.672639 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 12 13:15:04 crc kubenswrapper[4778]: I0312 13:15:04.673939 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 12 13:15:04 crc kubenswrapper[4778]: I0312 13:15:04.695880 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Mar 12 13:15:04 crc kubenswrapper[4778]: I0312 13:15:04.945122 4778 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 12 13:15:04 crc kubenswrapper[4778]: I0312 13:15:04.985010 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 12 13:15:04 crc kubenswrapper[4778]: I0312 13:15:04.986520 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 12 13:15:04 crc kubenswrapper[4778]: I0312 13:15:04.987058 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555355-68226"] Mar 12 13:15:04 crc kubenswrapper[4778]: E0312 13:15:04.987263 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34ecd758-517c-455a-939a-7eb6d3546854" containerName="registry-server" Mar 12 13:15:04 crc kubenswrapper[4778]: I0312 13:15:04.987280 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="34ecd758-517c-455a-939a-7eb6d3546854" containerName="registry-server" Mar 12 13:15:04 crc kubenswrapper[4778]: E0312 13:15:04.987291 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34ecd758-517c-455a-939a-7eb6d3546854" containerName="extract-utilities" Mar 12 13:15:04 crc kubenswrapper[4778]: I0312 13:15:04.987297 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="34ecd758-517c-455a-939a-7eb6d3546854" containerName="extract-utilities" Mar 12 13:15:04 crc kubenswrapper[4778]: E0312 13:15:04.987305 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34ecd758-517c-455a-939a-7eb6d3546854" containerName="extract-content" Mar 12 13:15:04 crc kubenswrapper[4778]: I0312 13:15:04.987312 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="34ecd758-517c-455a-939a-7eb6d3546854" containerName="extract-content" Mar 12 13:15:04 crc kubenswrapper[4778]: E0312 13:15:04.987327 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f36ec67c-df24-46ce-94b9-10619822c15a" 
containerName="oauth-openshift" Mar 12 13:15:04 crc kubenswrapper[4778]: I0312 13:15:04.987333 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="f36ec67c-df24-46ce-94b9-10619822c15a" containerName="oauth-openshift" Mar 12 13:15:04 crc kubenswrapper[4778]: E0312 13:15:04.987343 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a868c6a4-19ec-46be-a0af-be25b1049ff3" containerName="installer" Mar 12 13:15:04 crc kubenswrapper[4778]: I0312 13:15:04.987348 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="a868c6a4-19ec-46be-a0af-be25b1049ff3" containerName="installer" Mar 12 13:15:04 crc kubenswrapper[4778]: I0312 13:15:04.987444 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="f36ec67c-df24-46ce-94b9-10619822c15a" containerName="oauth-openshift" Mar 12 13:15:04 crc kubenswrapper[4778]: I0312 13:15:04.987455 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="a868c6a4-19ec-46be-a0af-be25b1049ff3" containerName="installer" Mar 12 13:15:04 crc kubenswrapper[4778]: I0312 13:15:04.987462 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="34ecd758-517c-455a-939a-7eb6d3546854" containerName="registry-server" Mar 12 13:15:04 crc kubenswrapper[4778]: I0312 13:15:04.987767 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555355-68226" Mar 12 13:15:04 crc kubenswrapper[4778]: I0312 13:15:04.990045 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 12 13:15:04 crc kubenswrapper[4778]: I0312 13:15:04.990382 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 12 13:15:05 crc kubenswrapper[4778]: I0312 13:15:05.052860 4778 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Mar 12 13:15:05 crc kubenswrapper[4778]: I0312 13:15:05.086969 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Mar 12 13:15:05 crc kubenswrapper[4778]: I0312 13:15:05.116757 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d25sw\" (UniqueName: \"kubernetes.io/projected/6197b3a9-f02f-4e5d-8196-b617fffa467d-kube-api-access-d25sw\") pod \"collect-profiles-29555355-68226\" (UID: \"6197b3a9-f02f-4e5d-8196-b617fffa467d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555355-68226" Mar 12 13:15:05 crc kubenswrapper[4778]: I0312 13:15:05.116825 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6197b3a9-f02f-4e5d-8196-b617fffa467d-config-volume\") pod \"collect-profiles-29555355-68226\" (UID: \"6197b3a9-f02f-4e5d-8196-b617fffa467d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555355-68226" Mar 12 13:15:05 crc kubenswrapper[4778]: I0312 13:15:05.116883 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/6197b3a9-f02f-4e5d-8196-b617fffa467d-secret-volume\") pod \"collect-profiles-29555355-68226\" (UID: \"6197b3a9-f02f-4e5d-8196-b617fffa467d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555355-68226" Mar 12 13:15:05 crc kubenswrapper[4778]: I0312 13:15:05.123936 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Mar 12 13:15:05 crc kubenswrapper[4778]: I0312 13:15:05.182374 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Mar 12 13:15:05 crc kubenswrapper[4778]: I0312 13:15:05.182862 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 12 13:15:05 crc kubenswrapper[4778]: I0312 13:15:05.196230 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 12 13:15:05 crc kubenswrapper[4778]: I0312 13:15:05.217665 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d25sw\" (UniqueName: \"kubernetes.io/projected/6197b3a9-f02f-4e5d-8196-b617fffa467d-kube-api-access-d25sw\") pod \"collect-profiles-29555355-68226\" (UID: \"6197b3a9-f02f-4e5d-8196-b617fffa467d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555355-68226" Mar 12 13:15:05 crc kubenswrapper[4778]: I0312 13:15:05.217753 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6197b3a9-f02f-4e5d-8196-b617fffa467d-config-volume\") pod \"collect-profiles-29555355-68226\" (UID: \"6197b3a9-f02f-4e5d-8196-b617fffa467d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555355-68226" Mar 12 13:15:05 crc kubenswrapper[4778]: I0312 13:15:05.217819 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/6197b3a9-f02f-4e5d-8196-b617fffa467d-secret-volume\") pod \"collect-profiles-29555355-68226\" (UID: \"6197b3a9-f02f-4e5d-8196-b617fffa467d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555355-68226" Mar 12 13:15:05 crc kubenswrapper[4778]: I0312 13:15:05.219025 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6197b3a9-f02f-4e5d-8196-b617fffa467d-config-volume\") pod \"collect-profiles-29555355-68226\" (UID: \"6197b3a9-f02f-4e5d-8196-b617fffa467d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555355-68226" Mar 12 13:15:05 crc kubenswrapper[4778]: I0312 13:15:05.225574 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 12 13:15:05 crc kubenswrapper[4778]: I0312 13:15:05.227327 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 12 13:15:05 crc kubenswrapper[4778]: I0312 13:15:05.229566 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6197b3a9-f02f-4e5d-8196-b617fffa467d-secret-volume\") pod \"collect-profiles-29555355-68226\" (UID: \"6197b3a9-f02f-4e5d-8196-b617fffa467d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555355-68226" Mar 12 13:15:05 crc kubenswrapper[4778]: I0312 13:15:05.240780 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d25sw\" (UniqueName: \"kubernetes.io/projected/6197b3a9-f02f-4e5d-8196-b617fffa467d-kube-api-access-d25sw\") pod \"collect-profiles-29555355-68226\" (UID: \"6197b3a9-f02f-4e5d-8196-b617fffa467d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555355-68226" Mar 12 13:15:05 crc kubenswrapper[4778]: I0312 13:15:05.251212 4778 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Mar 12 13:15:05 crc kubenswrapper[4778]: I0312 13:15:05.262571 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 12 13:15:05 crc kubenswrapper[4778]: I0312 13:15:05.286413 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 12 13:15:05 crc kubenswrapper[4778]: I0312 13:15:05.306515 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555355-68226" Mar 12 13:15:05 crc kubenswrapper[4778]: I0312 13:15:05.314750 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 12 13:15:05 crc kubenswrapper[4778]: I0312 13:15:05.336764 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 12 13:15:05 crc kubenswrapper[4778]: I0312 13:15:05.454628 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 12 13:15:05 crc kubenswrapper[4778]: I0312 13:15:05.532713 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 12 13:15:05 crc kubenswrapper[4778]: I0312 13:15:05.534225 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Mar 12 13:15:05 crc kubenswrapper[4778]: I0312 13:15:05.547714 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 12 13:15:05 crc kubenswrapper[4778]: I0312 13:15:05.583227 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 12 13:15:05 crc kubenswrapper[4778]: 
I0312 13:15:05.607835 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 12 13:15:05 crc kubenswrapper[4778]: I0312 13:15:05.705421 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 12 13:15:05 crc kubenswrapper[4778]: I0312 13:15:05.709387 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 12 13:15:05 crc kubenswrapper[4778]: I0312 13:15:05.781586 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 12 13:15:05 crc kubenswrapper[4778]: I0312 13:15:05.851522 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 12 13:15:05 crc kubenswrapper[4778]: I0312 13:15:05.974170 4778 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 12 13:15:06 crc kubenswrapper[4778]: I0312 13:15:06.019117 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 12 13:15:06 crc kubenswrapper[4778]: I0312 13:15:06.090089 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 12 13:15:06 crc kubenswrapper[4778]: I0312 13:15:06.102665 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 12 13:15:06 crc kubenswrapper[4778]: I0312 13:15:06.132343 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 12 13:15:06 crc kubenswrapper[4778]: I0312 13:15:06.207073 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 12 13:15:06 crc kubenswrapper[4778]: I0312 
13:15:06.208023 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Mar 12 13:15:06 crc kubenswrapper[4778]: I0312 13:15:06.279168 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 12 13:15:06 crc kubenswrapper[4778]: I0312 13:15:06.366515 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 12 13:15:06 crc kubenswrapper[4778]: I0312 13:15:06.366754 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Mar 12 13:15:06 crc kubenswrapper[4778]: I0312 13:15:06.398644 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 12 13:15:06 crc kubenswrapper[4778]: I0312 13:15:06.474892 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 12 13:15:06 crc kubenswrapper[4778]: I0312 13:15:06.509130 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 12 13:15:06 crc kubenswrapper[4778]: I0312 13:15:06.513555 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Mar 12 13:15:06 crc kubenswrapper[4778]: I0312 13:15:06.513709 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 12 13:15:06 crc kubenswrapper[4778]: I0312 13:15:06.519691 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 12 13:15:06 crc kubenswrapper[4778]: I0312 13:15:06.593888 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 12 
13:15:06 crc kubenswrapper[4778]: I0312 13:15:06.841220 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 12 13:15:06 crc kubenswrapper[4778]: I0312 13:15:06.856310 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 12 13:15:06 crc kubenswrapper[4778]: I0312 13:15:06.940547 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.017275 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.035882 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.042266 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.302703 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.308866 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.321860 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.339412 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.340672 4778 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.355332 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.419250 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.457774 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.555636 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.599140 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.641649 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7d48c8fbd6-kk9gx"] Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.642338 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7d48c8fbd6-kk9gx" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.644240 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.644876 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.646628 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.646636 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.646905 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.647061 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.647688 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.647868 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.648168 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.648481 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 12 13:15:07 crc 
kubenswrapper[4778]: I0312 13:15:07.650462 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.650709 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.653498 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.656633 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.661304 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.695049 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.740056 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.760064 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/88f550ae-456c-496a-ae0c-e0e063022780-v4-0-config-user-template-error\") pod \"oauth-openshift-7d48c8fbd6-kk9gx\" (UID: \"88f550ae-456c-496a-ae0c-e0e063022780\") " pod="openshift-authentication/oauth-openshift-7d48c8fbd6-kk9gx" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.760123 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" 
(UniqueName: \"kubernetes.io/configmap/88f550ae-456c-496a-ae0c-e0e063022780-v4-0-config-system-service-ca\") pod \"oauth-openshift-7d48c8fbd6-kk9gx\" (UID: \"88f550ae-456c-496a-ae0c-e0e063022780\") " pod="openshift-authentication/oauth-openshift-7d48c8fbd6-kk9gx" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.760151 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/88f550ae-456c-496a-ae0c-e0e063022780-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7d48c8fbd6-kk9gx\" (UID: \"88f550ae-456c-496a-ae0c-e0e063022780\") " pod="openshift-authentication/oauth-openshift-7d48c8fbd6-kk9gx" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.760173 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/88f550ae-456c-496a-ae0c-e0e063022780-v4-0-config-system-router-certs\") pod \"oauth-openshift-7d48c8fbd6-kk9gx\" (UID: \"88f550ae-456c-496a-ae0c-e0e063022780\") " pod="openshift-authentication/oauth-openshift-7d48c8fbd6-kk9gx" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.760235 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/88f550ae-456c-496a-ae0c-e0e063022780-audit-dir\") pod \"oauth-openshift-7d48c8fbd6-kk9gx\" (UID: \"88f550ae-456c-496a-ae0c-e0e063022780\") " pod="openshift-authentication/oauth-openshift-7d48c8fbd6-kk9gx" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.760325 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/88f550ae-456c-496a-ae0c-e0e063022780-audit-policies\") pod \"oauth-openshift-7d48c8fbd6-kk9gx\" (UID: \"88f550ae-456c-496a-ae0c-e0e063022780\") " 
pod="openshift-authentication/oauth-openshift-7d48c8fbd6-kk9gx" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.760387 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/88f550ae-456c-496a-ae0c-e0e063022780-v4-0-config-user-template-login\") pod \"oauth-openshift-7d48c8fbd6-kk9gx\" (UID: \"88f550ae-456c-496a-ae0c-e0e063022780\") " pod="openshift-authentication/oauth-openshift-7d48c8fbd6-kk9gx" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.760524 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/88f550ae-456c-496a-ae0c-e0e063022780-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7d48c8fbd6-kk9gx\" (UID: \"88f550ae-456c-496a-ae0c-e0e063022780\") " pod="openshift-authentication/oauth-openshift-7d48c8fbd6-kk9gx" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.760564 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/88f550ae-456c-496a-ae0c-e0e063022780-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7d48c8fbd6-kk9gx\" (UID: \"88f550ae-456c-496a-ae0c-e0e063022780\") " pod="openshift-authentication/oauth-openshift-7d48c8fbd6-kk9gx" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.760599 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/88f550ae-456c-496a-ae0c-e0e063022780-v4-0-config-system-session\") pod \"oauth-openshift-7d48c8fbd6-kk9gx\" (UID: \"88f550ae-456c-496a-ae0c-e0e063022780\") " pod="openshift-authentication/oauth-openshift-7d48c8fbd6-kk9gx" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.760627 4778 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/88f550ae-456c-496a-ae0c-e0e063022780-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7d48c8fbd6-kk9gx\" (UID: \"88f550ae-456c-496a-ae0c-e0e063022780\") " pod="openshift-authentication/oauth-openshift-7d48c8fbd6-kk9gx" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.760672 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/88f550ae-456c-496a-ae0c-e0e063022780-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7d48c8fbd6-kk9gx\" (UID: \"88f550ae-456c-496a-ae0c-e0e063022780\") " pod="openshift-authentication/oauth-openshift-7d48c8fbd6-kk9gx" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.760726 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/88f550ae-456c-496a-ae0c-e0e063022780-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7d48c8fbd6-kk9gx\" (UID: \"88f550ae-456c-496a-ae0c-e0e063022780\") " pod="openshift-authentication/oauth-openshift-7d48c8fbd6-kk9gx" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.760765 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2pf8\" (UniqueName: \"kubernetes.io/projected/88f550ae-456c-496a-ae0c-e0e063022780-kube-api-access-f2pf8\") pod \"oauth-openshift-7d48c8fbd6-kk9gx\" (UID: \"88f550ae-456c-496a-ae0c-e0e063022780\") " pod="openshift-authentication/oauth-openshift-7d48c8fbd6-kk9gx" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.800367 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 12 13:15:07 
crc kubenswrapper[4778]: I0312 13:15:07.828387 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.830785 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.843474 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.861596 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.862243 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/88f550ae-456c-496a-ae0c-e0e063022780-v4-0-config-user-template-login\") pod \"oauth-openshift-7d48c8fbd6-kk9gx\" (UID: \"88f550ae-456c-496a-ae0c-e0e063022780\") " pod="openshift-authentication/oauth-openshift-7d48c8fbd6-kk9gx" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.862316 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/88f550ae-456c-496a-ae0c-e0e063022780-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7d48c8fbd6-kk9gx\" (UID: \"88f550ae-456c-496a-ae0c-e0e063022780\") " pod="openshift-authentication/oauth-openshift-7d48c8fbd6-kk9gx" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.862361 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/88f550ae-456c-496a-ae0c-e0e063022780-v4-0-config-system-ocp-branding-template\") pod 
\"oauth-openshift-7d48c8fbd6-kk9gx\" (UID: \"88f550ae-456c-496a-ae0c-e0e063022780\") " pod="openshift-authentication/oauth-openshift-7d48c8fbd6-kk9gx" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.862390 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/88f550ae-456c-496a-ae0c-e0e063022780-v4-0-config-system-session\") pod \"oauth-openshift-7d48c8fbd6-kk9gx\" (UID: \"88f550ae-456c-496a-ae0c-e0e063022780\") " pod="openshift-authentication/oauth-openshift-7d48c8fbd6-kk9gx" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.862414 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/88f550ae-456c-496a-ae0c-e0e063022780-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7d48c8fbd6-kk9gx\" (UID: \"88f550ae-456c-496a-ae0c-e0e063022780\") " pod="openshift-authentication/oauth-openshift-7d48c8fbd6-kk9gx" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.862446 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/88f550ae-456c-496a-ae0c-e0e063022780-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7d48c8fbd6-kk9gx\" (UID: \"88f550ae-456c-496a-ae0c-e0e063022780\") " pod="openshift-authentication/oauth-openshift-7d48c8fbd6-kk9gx" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.862473 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/88f550ae-456c-496a-ae0c-e0e063022780-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7d48c8fbd6-kk9gx\" (UID: \"88f550ae-456c-496a-ae0c-e0e063022780\") " pod="openshift-authentication/oauth-openshift-7d48c8fbd6-kk9gx" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 
13:15:07.862499 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2pf8\" (UniqueName: \"kubernetes.io/projected/88f550ae-456c-496a-ae0c-e0e063022780-kube-api-access-f2pf8\") pod \"oauth-openshift-7d48c8fbd6-kk9gx\" (UID: \"88f550ae-456c-496a-ae0c-e0e063022780\") " pod="openshift-authentication/oauth-openshift-7d48c8fbd6-kk9gx" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.862528 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/88f550ae-456c-496a-ae0c-e0e063022780-v4-0-config-user-template-error\") pod \"oauth-openshift-7d48c8fbd6-kk9gx\" (UID: \"88f550ae-456c-496a-ae0c-e0e063022780\") " pod="openshift-authentication/oauth-openshift-7d48c8fbd6-kk9gx" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.862572 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/88f550ae-456c-496a-ae0c-e0e063022780-v4-0-config-system-service-ca\") pod \"oauth-openshift-7d48c8fbd6-kk9gx\" (UID: \"88f550ae-456c-496a-ae0c-e0e063022780\") " pod="openshift-authentication/oauth-openshift-7d48c8fbd6-kk9gx" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.862598 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/88f550ae-456c-496a-ae0c-e0e063022780-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7d48c8fbd6-kk9gx\" (UID: \"88f550ae-456c-496a-ae0c-e0e063022780\") " pod="openshift-authentication/oauth-openshift-7d48c8fbd6-kk9gx" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.862622 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/88f550ae-456c-496a-ae0c-e0e063022780-v4-0-config-system-router-certs\") 
pod \"oauth-openshift-7d48c8fbd6-kk9gx\" (UID: \"88f550ae-456c-496a-ae0c-e0e063022780\") " pod="openshift-authentication/oauth-openshift-7d48c8fbd6-kk9gx" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.862649 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/88f550ae-456c-496a-ae0c-e0e063022780-audit-dir\") pod \"oauth-openshift-7d48c8fbd6-kk9gx\" (UID: \"88f550ae-456c-496a-ae0c-e0e063022780\") " pod="openshift-authentication/oauth-openshift-7d48c8fbd6-kk9gx" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.862681 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/88f550ae-456c-496a-ae0c-e0e063022780-audit-policies\") pod \"oauth-openshift-7d48c8fbd6-kk9gx\" (UID: \"88f550ae-456c-496a-ae0c-e0e063022780\") " pod="openshift-authentication/oauth-openshift-7d48c8fbd6-kk9gx" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.863450 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/88f550ae-456c-496a-ae0c-e0e063022780-audit-dir\") pod \"oauth-openshift-7d48c8fbd6-kk9gx\" (UID: \"88f550ae-456c-496a-ae0c-e0e063022780\") " pod="openshift-authentication/oauth-openshift-7d48c8fbd6-kk9gx" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.863653 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/88f550ae-456c-496a-ae0c-e0e063022780-audit-policies\") pod \"oauth-openshift-7d48c8fbd6-kk9gx\" (UID: \"88f550ae-456c-496a-ae0c-e0e063022780\") " pod="openshift-authentication/oauth-openshift-7d48c8fbd6-kk9gx" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.863802 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/88f550ae-456c-496a-ae0c-e0e063022780-v4-0-config-system-service-ca\") pod \"oauth-openshift-7d48c8fbd6-kk9gx\" (UID: \"88f550ae-456c-496a-ae0c-e0e063022780\") " pod="openshift-authentication/oauth-openshift-7d48c8fbd6-kk9gx" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.864298 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/88f550ae-456c-496a-ae0c-e0e063022780-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7d48c8fbd6-kk9gx\" (UID: \"88f550ae-456c-496a-ae0c-e0e063022780\") " pod="openshift-authentication/oauth-openshift-7d48c8fbd6-kk9gx" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.865361 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/88f550ae-456c-496a-ae0c-e0e063022780-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7d48c8fbd6-kk9gx\" (UID: \"88f550ae-456c-496a-ae0c-e0e063022780\") " pod="openshift-authentication/oauth-openshift-7d48c8fbd6-kk9gx" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.868864 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/88f550ae-456c-496a-ae0c-e0e063022780-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7d48c8fbd6-kk9gx\" (UID: \"88f550ae-456c-496a-ae0c-e0e063022780\") " pod="openshift-authentication/oauth-openshift-7d48c8fbd6-kk9gx" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.869287 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/88f550ae-456c-496a-ae0c-e0e063022780-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7d48c8fbd6-kk9gx\" (UID: \"88f550ae-456c-496a-ae0c-e0e063022780\") " 
pod="openshift-authentication/oauth-openshift-7d48c8fbd6-kk9gx" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.869477 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/88f550ae-456c-496a-ae0c-e0e063022780-v4-0-config-system-session\") pod \"oauth-openshift-7d48c8fbd6-kk9gx\" (UID: \"88f550ae-456c-496a-ae0c-e0e063022780\") " pod="openshift-authentication/oauth-openshift-7d48c8fbd6-kk9gx" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.869973 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/88f550ae-456c-496a-ae0c-e0e063022780-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7d48c8fbd6-kk9gx\" (UID: \"88f550ae-456c-496a-ae0c-e0e063022780\") " pod="openshift-authentication/oauth-openshift-7d48c8fbd6-kk9gx" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.870243 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/88f550ae-456c-496a-ae0c-e0e063022780-v4-0-config-system-router-certs\") pod \"oauth-openshift-7d48c8fbd6-kk9gx\" (UID: \"88f550ae-456c-496a-ae0c-e0e063022780\") " pod="openshift-authentication/oauth-openshift-7d48c8fbd6-kk9gx" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.874228 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/88f550ae-456c-496a-ae0c-e0e063022780-v4-0-config-user-template-login\") pod \"oauth-openshift-7d48c8fbd6-kk9gx\" (UID: \"88f550ae-456c-496a-ae0c-e0e063022780\") " pod="openshift-authentication/oauth-openshift-7d48c8fbd6-kk9gx" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.875275 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/88f550ae-456c-496a-ae0c-e0e063022780-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7d48c8fbd6-kk9gx\" (UID: \"88f550ae-456c-496a-ae0c-e0e063022780\") " pod="openshift-authentication/oauth-openshift-7d48c8fbd6-kk9gx" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.884458 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/88f550ae-456c-496a-ae0c-e0e063022780-v4-0-config-user-template-error\") pod \"oauth-openshift-7d48c8fbd6-kk9gx\" (UID: \"88f550ae-456c-496a-ae0c-e0e063022780\") " pod="openshift-authentication/oauth-openshift-7d48c8fbd6-kk9gx" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.885747 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2pf8\" (UniqueName: \"kubernetes.io/projected/88f550ae-456c-496a-ae0c-e0e063022780-kube-api-access-f2pf8\") pod \"oauth-openshift-7d48c8fbd6-kk9gx\" (UID: \"88f550ae-456c-496a-ae0c-e0e063022780\") " pod="openshift-authentication/oauth-openshift-7d48c8fbd6-kk9gx" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.919354 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.962481 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7d48c8fbd6-kk9gx" Mar 12 13:15:07 crc kubenswrapper[4778]: I0312 13:15:07.995505 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 12 13:15:08 crc kubenswrapper[4778]: I0312 13:15:08.001630 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 12 13:15:08 crc kubenswrapper[4778]: I0312 13:15:08.058630 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 12 13:15:08 crc kubenswrapper[4778]: I0312 13:15:08.117885 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Mar 12 13:15:08 crc kubenswrapper[4778]: I0312 13:15:08.117887 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 12 13:15:08 crc kubenswrapper[4778]: I0312 13:15:08.160870 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 12 13:15:08 crc kubenswrapper[4778]: I0312 13:15:08.177243 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 12 13:15:08 crc kubenswrapper[4778]: I0312 13:15:08.209722 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Mar 12 13:15:08 crc kubenswrapper[4778]: I0312 13:15:08.341782 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Mar 12 13:15:08 crc kubenswrapper[4778]: I0312 13:15:08.395466 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 12 13:15:08 crc kubenswrapper[4778]: I0312 13:15:08.438475 4778 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 12 13:15:08 crc kubenswrapper[4778]: I0312 13:15:08.468852 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 12 13:15:08 crc kubenswrapper[4778]: I0312 13:15:08.500986 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Mar 12 13:15:08 crc kubenswrapper[4778]: I0312 13:15:08.566111 4778 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 12 13:15:08 crc kubenswrapper[4778]: I0312 13:15:08.566578 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://3d740724bfd8227fe2e07ff4fe5fbe18790f3387faf339232729dc31b3dd39ae" gracePeriod=5 Mar 12 13:15:08 crc kubenswrapper[4778]: I0312 13:15:08.605007 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 12 13:15:08 crc kubenswrapper[4778]: I0312 13:15:08.631241 4778 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 12 13:15:08 crc kubenswrapper[4778]: I0312 13:15:08.682734 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 12 13:15:08 crc kubenswrapper[4778]: I0312 13:15:08.694905 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 12 13:15:08 crc kubenswrapper[4778]: I0312 13:15:08.734073 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 12 13:15:08 crc kubenswrapper[4778]: I0312 13:15:08.761100 4778 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Mar 12 13:15:08 crc kubenswrapper[4778]: I0312 13:15:08.908331 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Mar 12 13:15:08 crc kubenswrapper[4778]: I0312 13:15:08.928941 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Mar 12 13:15:08 crc kubenswrapper[4778]: I0312 13:15:08.963052 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 12 13:15:09 crc kubenswrapper[4778]: I0312 13:15:09.011794 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 12 13:15:09 crc kubenswrapper[4778]: I0312 13:15:09.194009 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 12 13:15:09 crc kubenswrapper[4778]: I0312 13:15:09.203312 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 12 13:15:09 crc kubenswrapper[4778]: I0312 13:15:09.374065 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 12 13:15:09 crc kubenswrapper[4778]: I0312 13:15:09.692534 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Mar 12 13:15:09 crc kubenswrapper[4778]: I0312 13:15:09.951335 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 12 13:15:09 crc kubenswrapper[4778]: I0312 13:15:09.983441 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Mar 12 13:15:10 crc 
kubenswrapper[4778]: I0312 13:15:10.344775 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 12 13:15:10 crc kubenswrapper[4778]: I0312 13:15:10.452657 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 12 13:15:10 crc kubenswrapper[4778]: I0312 13:15:10.469479 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Mar 12 13:15:10 crc kubenswrapper[4778]: I0312 13:15:10.588857 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 12 13:15:10 crc kubenswrapper[4778]: I0312 13:15:10.588912 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 12 13:15:10 crc kubenswrapper[4778]: I0312 13:15:10.693055 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 12 13:15:10 crc kubenswrapper[4778]: I0312 13:15:10.732288 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 12 13:15:10 crc kubenswrapper[4778]: I0312 13:15:10.767768 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7d48c8fbd6-kk9gx"] Mar 12 13:15:10 crc kubenswrapper[4778]: I0312 13:15:10.775824 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555355-68226"] Mar 12 13:15:10 crc kubenswrapper[4778]: I0312 13:15:10.862505 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 12 13:15:10 crc kubenswrapper[4778]: I0312 13:15:10.896597 4778 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-marketplace"/"kube-root-ca.crt" Mar 12 13:15:11 crc kubenswrapper[4778]: I0312 13:15:11.082119 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 12 13:15:11 crc kubenswrapper[4778]: I0312 13:15:11.208571 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555355-68226"] Mar 12 13:15:11 crc kubenswrapper[4778]: W0312 13:15:11.212133 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6197b3a9_f02f_4e5d_8196_b617fffa467d.slice/crio-9fef4c59e32339bdfc08c1427d6779038e13657c03fafb52c99c44018f2fa182 WatchSource:0}: Error finding container 9fef4c59e32339bdfc08c1427d6779038e13657c03fafb52c99c44018f2fa182: Status 404 returned error can't find the container with id 9fef4c59e32339bdfc08c1427d6779038e13657c03fafb52c99c44018f2fa182 Mar 12 13:15:11 crc kubenswrapper[4778]: I0312 13:15:11.255553 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Mar 12 13:15:11 crc kubenswrapper[4778]: I0312 13:15:11.263500 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7d48c8fbd6-kk9gx"] Mar 12 13:15:11 crc kubenswrapper[4778]: W0312 13:15:11.268656 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88f550ae_456c_496a_ae0c_e0e063022780.slice/crio-d11163752c2ad0bf906f93a493f49241d8588cc59428d3f435eb62ef394d3b28 WatchSource:0}: Error finding container d11163752c2ad0bf906f93a493f49241d8588cc59428d3f435eb62ef394d3b28: Status 404 returned error can't find the container with id d11163752c2ad0bf906f93a493f49241d8588cc59428d3f435eb62ef394d3b28 Mar 12 13:15:11 crc kubenswrapper[4778]: I0312 13:15:11.291536 4778 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 12 13:15:11 crc kubenswrapper[4778]: I0312 13:15:11.417160 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 12 13:15:11 crc kubenswrapper[4778]: I0312 13:15:11.444606 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Mar 12 13:15:11 crc kubenswrapper[4778]: I0312 13:15:11.611786 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Mar 12 13:15:11 crc kubenswrapper[4778]: I0312 13:15:11.706603 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 12 13:15:11 crc kubenswrapper[4778]: I0312 13:15:11.877221 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 12 13:15:11 crc kubenswrapper[4778]: I0312 13:15:11.891328 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Mar 12 13:15:11 crc kubenswrapper[4778]: I0312 13:15:11.933409 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Mar 12 13:15:12 crc kubenswrapper[4778]: I0312 13:15:12.063720 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 12 13:15:12 crc kubenswrapper[4778]: I0312 13:15:12.204887 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7d48c8fbd6-kk9gx" event={"ID":"88f550ae-456c-496a-ae0c-e0e063022780","Type":"ContainerStarted","Data":"bd045c8915afd2aafd5594b3924558bebda0349833512a1c5201d36a65cbcfed"} Mar 12 13:15:12 crc kubenswrapper[4778]: I0312 13:15:12.205127 4778 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7d48c8fbd6-kk9gx" event={"ID":"88f550ae-456c-496a-ae0c-e0e063022780","Type":"ContainerStarted","Data":"d11163752c2ad0bf906f93a493f49241d8588cc59428d3f435eb62ef394d3b28"} Mar 12 13:15:12 crc kubenswrapper[4778]: I0312 13:15:12.205156 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7d48c8fbd6-kk9gx" Mar 12 13:15:12 crc kubenswrapper[4778]: I0312 13:15:12.208274 4778 generic.go:334] "Generic (PLEG): container finished" podID="6197b3a9-f02f-4e5d-8196-b617fffa467d" containerID="3954f4afdb430b04a44fc16681134a45669f465399452c67b26950fbb78cb40a" exitCode=0 Mar 12 13:15:12 crc kubenswrapper[4778]: I0312 13:15:12.208343 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555355-68226" event={"ID":"6197b3a9-f02f-4e5d-8196-b617fffa467d","Type":"ContainerDied","Data":"3954f4afdb430b04a44fc16681134a45669f465399452c67b26950fbb78cb40a"} Mar 12 13:15:12 crc kubenswrapper[4778]: I0312 13:15:12.208387 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555355-68226" event={"ID":"6197b3a9-f02f-4e5d-8196-b617fffa467d","Type":"ContainerStarted","Data":"9fef4c59e32339bdfc08c1427d6779038e13657c03fafb52c99c44018f2fa182"} Mar 12 13:15:12 crc kubenswrapper[4778]: I0312 13:15:12.211367 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7d48c8fbd6-kk9gx" Mar 12 13:15:12 crc kubenswrapper[4778]: I0312 13:15:12.216548 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Mar 12 13:15:12 crc kubenswrapper[4778]: I0312 13:15:12.258787 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-authentication/oauth-openshift-7d48c8fbd6-kk9gx" podStartSLOduration=56.258768095 podStartE2EDuration="56.258768095s" podCreationTimestamp="2026-03-12 13:14:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:15:12.233578217 +0000 UTC m=+330.682273753" watchObservedRunningTime="2026-03-12 13:15:12.258768095 +0000 UTC m=+330.707463491" Mar 12 13:15:12 crc kubenswrapper[4778]: I0312 13:15:12.485658 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 12 13:15:13 crc kubenswrapper[4778]: I0312 13:15:13.437090 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555355-68226" Mar 12 13:15:13 crc kubenswrapper[4778]: I0312 13:15:13.514699 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Mar 12 13:15:13 crc kubenswrapper[4778]: I0312 13:15:13.537033 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d25sw\" (UniqueName: \"kubernetes.io/projected/6197b3a9-f02f-4e5d-8196-b617fffa467d-kube-api-access-d25sw\") pod \"6197b3a9-f02f-4e5d-8196-b617fffa467d\" (UID: \"6197b3a9-f02f-4e5d-8196-b617fffa467d\") " Mar 12 13:15:13 crc kubenswrapper[4778]: I0312 13:15:13.537366 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6197b3a9-f02f-4e5d-8196-b617fffa467d-secret-volume\") pod \"6197b3a9-f02f-4e5d-8196-b617fffa467d\" (UID: \"6197b3a9-f02f-4e5d-8196-b617fffa467d\") " Mar 12 13:15:13 crc kubenswrapper[4778]: I0312 13:15:13.537474 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/6197b3a9-f02f-4e5d-8196-b617fffa467d-config-volume\") pod \"6197b3a9-f02f-4e5d-8196-b617fffa467d\" (UID: \"6197b3a9-f02f-4e5d-8196-b617fffa467d\") " Mar 12 13:15:13 crc kubenswrapper[4778]: I0312 13:15:13.537938 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6197b3a9-f02f-4e5d-8196-b617fffa467d-config-volume" (OuterVolumeSpecName: "config-volume") pod "6197b3a9-f02f-4e5d-8196-b617fffa467d" (UID: "6197b3a9-f02f-4e5d-8196-b617fffa467d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:15:13 crc kubenswrapper[4778]: I0312 13:15:13.542173 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6197b3a9-f02f-4e5d-8196-b617fffa467d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6197b3a9-f02f-4e5d-8196-b617fffa467d" (UID: "6197b3a9-f02f-4e5d-8196-b617fffa467d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:15:13 crc kubenswrapper[4778]: I0312 13:15:13.550095 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6197b3a9-f02f-4e5d-8196-b617fffa467d-kube-api-access-d25sw" (OuterVolumeSpecName: "kube-api-access-d25sw") pod "6197b3a9-f02f-4e5d-8196-b617fffa467d" (UID: "6197b3a9-f02f-4e5d-8196-b617fffa467d"). InnerVolumeSpecName "kube-api-access-d25sw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:15:13 crc kubenswrapper[4778]: I0312 13:15:13.638317 4778 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6197b3a9-f02f-4e5d-8196-b617fffa467d-config-volume\") on node \"crc\" DevicePath \"\"" Mar 12 13:15:13 crc kubenswrapper[4778]: I0312 13:15:13.638365 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d25sw\" (UniqueName: \"kubernetes.io/projected/6197b3a9-f02f-4e5d-8196-b617fffa467d-kube-api-access-d25sw\") on node \"crc\" DevicePath \"\"" Mar 12 13:15:13 crc kubenswrapper[4778]: I0312 13:15:13.638381 4778 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6197b3a9-f02f-4e5d-8196-b617fffa467d-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 12 13:15:13 crc kubenswrapper[4778]: I0312 13:15:13.818326 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d44c8b88d-jx574"] Mar 12 13:15:13 crc kubenswrapper[4778]: I0312 13:15:13.818866 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-d44c8b88d-jx574" podUID="0fdd5690-0e80-4317-9e3a-8478f09ea1a8" containerName="controller-manager" containerID="cri-o://de9da8336c3c506af0ce9ebe2cdd9483aff7c9248c270e308d85726473f6d398" gracePeriod=30 Mar 12 13:15:13 crc kubenswrapper[4778]: I0312 13:15:13.926426 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f4dd5cc6-ppsx5"] Mar 12 13:15:13 crc kubenswrapper[4778]: I0312 13:15:13.928550 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6f4dd5cc6-ppsx5" podUID="badeb3df-9c56-4aa2-af6f-aba14c213fcc" containerName="route-controller-manager" 
containerID="cri-o://6edfc1174eae36c4699c23b09d94a6801a70e404d52b1e50d4350988d1f6d371" gracePeriod=30 Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.008590 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.115400 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.115641 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.212172 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d44c8b88d-jx574" Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.219692 4778 generic.go:334] "Generic (PLEG): container finished" podID="badeb3df-9c56-4aa2-af6f-aba14c213fcc" containerID="6edfc1174eae36c4699c23b09d94a6801a70e404d52b1e50d4350988d1f6d371" exitCode=0 Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.219796 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f4dd5cc6-ppsx5" event={"ID":"badeb3df-9c56-4aa2-af6f-aba14c213fcc","Type":"ContainerDied","Data":"6edfc1174eae36c4699c23b09d94a6801a70e404d52b1e50d4350988d1f6d371"} Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.221002 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555355-68226" event={"ID":"6197b3a9-f02f-4e5d-8196-b617fffa467d","Type":"ContainerDied","Data":"9fef4c59e32339bdfc08c1427d6779038e13657c03fafb52c99c44018f2fa182"} Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.221024 4778 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555355-68226" Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.221031 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9fef4c59e32339bdfc08c1427d6779038e13657c03fafb52c99c44018f2fa182" Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.222487 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.222567 4778 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="3d740724bfd8227fe2e07ff4fe5fbe18790f3387faf339232729dc31b3dd39ae" exitCode=137 Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.222642 4778 scope.go:117] "RemoveContainer" containerID="3d740724bfd8227fe2e07ff4fe5fbe18790f3387faf339232729dc31b3dd39ae" Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.222746 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.225821 4778 generic.go:334] "Generic (PLEG): container finished" podID="0fdd5690-0e80-4317-9e3a-8478f09ea1a8" containerID="de9da8336c3c506af0ce9ebe2cdd9483aff7c9248c270e308d85726473f6d398" exitCode=0 Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.225952 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d44c8b88d-jx574" event={"ID":"0fdd5690-0e80-4317-9e3a-8478f09ea1a8","Type":"ContainerDied","Data":"de9da8336c3c506af0ce9ebe2cdd9483aff7c9248c270e308d85726473f6d398"} Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.225993 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d44c8b88d-jx574" event={"ID":"0fdd5690-0e80-4317-9e3a-8478f09ea1a8","Type":"ContainerDied","Data":"dd0b16a4e92ddcc0e1151ae83fdba0245e8931b7997e602261eaa93e0a982440"} Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.226039 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d44c8b88d-jx574" Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.243414 4778 scope.go:117] "RemoveContainer" containerID="3d740724bfd8227fe2e07ff4fe5fbe18790f3387faf339232729dc31b3dd39ae" Mar 12 13:15:14 crc kubenswrapper[4778]: E0312 13:15:14.243914 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d740724bfd8227fe2e07ff4fe5fbe18790f3387faf339232729dc31b3dd39ae\": container with ID starting with 3d740724bfd8227fe2e07ff4fe5fbe18790f3387faf339232729dc31b3dd39ae not found: ID does not exist" containerID="3d740724bfd8227fe2e07ff4fe5fbe18790f3387faf339232729dc31b3dd39ae" Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.243951 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d740724bfd8227fe2e07ff4fe5fbe18790f3387faf339232729dc31b3dd39ae"} err="failed to get container status \"3d740724bfd8227fe2e07ff4fe5fbe18790f3387faf339232729dc31b3dd39ae\": rpc error: code = NotFound desc = could not find container \"3d740724bfd8227fe2e07ff4fe5fbe18790f3387faf339232729dc31b3dd39ae\": container with ID starting with 3d740724bfd8227fe2e07ff4fe5fbe18790f3387faf339232729dc31b3dd39ae not found: ID does not exist" Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.243976 4778 scope.go:117] "RemoveContainer" containerID="de9da8336c3c506af0ce9ebe2cdd9483aff7c9248c270e308d85726473f6d398" Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.246166 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.246235 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.246261 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0fdd5690-0e80-4317-9e3a-8478f09ea1a8-client-ca\") pod \"0fdd5690-0e80-4317-9e3a-8478f09ea1a8\" (UID: \"0fdd5690-0e80-4317-9e3a-8478f09ea1a8\") " Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.246287 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0fdd5690-0e80-4317-9e3a-8478f09ea1a8-serving-cert\") pod \"0fdd5690-0e80-4317-9e3a-8478f09ea1a8\" (UID: \"0fdd5690-0e80-4317-9e3a-8478f09ea1a8\") " Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.246307 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.246339 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0fdd5690-0e80-4317-9e3a-8478f09ea1a8-proxy-ca-bundles\") pod \"0fdd5690-0e80-4317-9e3a-8478f09ea1a8\" (UID: \"0fdd5690-0e80-4317-9e3a-8478f09ea1a8\") " Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.246370 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.246387 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.246408 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.246420 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.246429 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.246444 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fdd5690-0e80-4317-9e3a-8478f09ea1a8-config\") pod \"0fdd5690-0e80-4317-9e3a-8478f09ea1a8\" (UID: \"0fdd5690-0e80-4317-9e3a-8478f09ea1a8\") " Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.246469 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwdxl\" (UniqueName: \"kubernetes.io/projected/0fdd5690-0e80-4317-9e3a-8478f09ea1a8-kube-api-access-zwdxl\") pod \"0fdd5690-0e80-4317-9e3a-8478f09ea1a8\" (UID: \"0fdd5690-0e80-4317-9e3a-8478f09ea1a8\") " Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.246841 4778 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.246876 4778 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.246890 4778 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.247126 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fdd5690-0e80-4317-9e3a-8478f09ea1a8-client-ca" (OuterVolumeSpecName: "client-ca") pod "0fdd5690-0e80-4317-9e3a-8478f09ea1a8" (UID: "0fdd5690-0e80-4317-9e3a-8478f09ea1a8"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.248950 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.250859 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fdd5690-0e80-4317-9e3a-8478f09ea1a8-kube-api-access-zwdxl" (OuterVolumeSpecName: "kube-api-access-zwdxl") pod "0fdd5690-0e80-4317-9e3a-8478f09ea1a8" (UID: "0fdd5690-0e80-4317-9e3a-8478f09ea1a8"). InnerVolumeSpecName "kube-api-access-zwdxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.251105 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fdd5690-0e80-4317-9e3a-8478f09ea1a8-config" (OuterVolumeSpecName: "config") pod "0fdd5690-0e80-4317-9e3a-8478f09ea1a8" (UID: "0fdd5690-0e80-4317-9e3a-8478f09ea1a8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.251725 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fdd5690-0e80-4317-9e3a-8478f09ea1a8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0fdd5690-0e80-4317-9e3a-8478f09ea1a8" (UID: "0fdd5690-0e80-4317-9e3a-8478f09ea1a8"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.252149 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fdd5690-0e80-4317-9e3a-8478f09ea1a8-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "0fdd5690-0e80-4317-9e3a-8478f09ea1a8" (UID: "0fdd5690-0e80-4317-9e3a-8478f09ea1a8"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.255740 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f4dd5cc6-ppsx5" Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.256840 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.263510 4778 scope.go:117] "RemoveContainer" containerID="de9da8336c3c506af0ce9ebe2cdd9483aff7c9248c270e308d85726473f6d398" Mar 12 13:15:14 crc kubenswrapper[4778]: E0312 13:15:14.263928 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de9da8336c3c506af0ce9ebe2cdd9483aff7c9248c270e308d85726473f6d398\": container with ID starting with de9da8336c3c506af0ce9ebe2cdd9483aff7c9248c270e308d85726473f6d398 not found: ID does not exist" containerID="de9da8336c3c506af0ce9ebe2cdd9483aff7c9248c270e308d85726473f6d398" Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.263972 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de9da8336c3c506af0ce9ebe2cdd9483aff7c9248c270e308d85726473f6d398"} err="failed to get container status \"de9da8336c3c506af0ce9ebe2cdd9483aff7c9248c270e308d85726473f6d398\": rpc error: code = NotFound desc = could not find container \"de9da8336c3c506af0ce9ebe2cdd9483aff7c9248c270e308d85726473f6d398\": container with ID starting with de9da8336c3c506af0ce9ebe2cdd9483aff7c9248c270e308d85726473f6d398 not found: ID does not exist" Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.264714 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.265029 4778 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.288788 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.288829 4778 kubelet.go:2649] 
"Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="11f6fe78-f714-49b6-ba0c-e07eefedd97e" Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.291324 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.291372 4778 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="11f6fe78-f714-49b6-ba0c-e07eefedd97e" Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.347064 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/badeb3df-9c56-4aa2-af6f-aba14c213fcc-config\") pod \"badeb3df-9c56-4aa2-af6f-aba14c213fcc\" (UID: \"badeb3df-9c56-4aa2-af6f-aba14c213fcc\") " Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.347351 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/badeb3df-9c56-4aa2-af6f-aba14c213fcc-serving-cert\") pod \"badeb3df-9c56-4aa2-af6f-aba14c213fcc\" (UID: \"badeb3df-9c56-4aa2-af6f-aba14c213fcc\") " Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.347399 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/badeb3df-9c56-4aa2-af6f-aba14c213fcc-client-ca\") pod \"badeb3df-9c56-4aa2-af6f-aba14c213fcc\" (UID: \"badeb3df-9c56-4aa2-af6f-aba14c213fcc\") " Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.347426 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcsqr\" (UniqueName: \"kubernetes.io/projected/badeb3df-9c56-4aa2-af6f-aba14c213fcc-kube-api-access-tcsqr\") pod \"badeb3df-9c56-4aa2-af6f-aba14c213fcc\" (UID: 
\"badeb3df-9c56-4aa2-af6f-aba14c213fcc\") " Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.347680 4778 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.347695 4778 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.347707 4778 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fdd5690-0e80-4317-9e3a-8478f09ea1a8-config\") on node \"crc\" DevicePath \"\"" Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.347718 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwdxl\" (UniqueName: \"kubernetes.io/projected/0fdd5690-0e80-4317-9e3a-8478f09ea1a8-kube-api-access-zwdxl\") on node \"crc\" DevicePath \"\"" Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.347730 4778 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0fdd5690-0e80-4317-9e3a-8478f09ea1a8-client-ca\") on node \"crc\" DevicePath \"\"" Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.347742 4778 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0fdd5690-0e80-4317-9e3a-8478f09ea1a8-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.347753 4778 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0fdd5690-0e80-4317-9e3a-8478f09ea1a8-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.348639 4778 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/badeb3df-9c56-4aa2-af6f-aba14c213fcc-client-ca" (OuterVolumeSpecName: "client-ca") pod "badeb3df-9c56-4aa2-af6f-aba14c213fcc" (UID: "badeb3df-9c56-4aa2-af6f-aba14c213fcc"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.348664 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/badeb3df-9c56-4aa2-af6f-aba14c213fcc-config" (OuterVolumeSpecName: "config") pod "badeb3df-9c56-4aa2-af6f-aba14c213fcc" (UID: "badeb3df-9c56-4aa2-af6f-aba14c213fcc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.351436 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/badeb3df-9c56-4aa2-af6f-aba14c213fcc-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "badeb3df-9c56-4aa2-af6f-aba14c213fcc" (UID: "badeb3df-9c56-4aa2-af6f-aba14c213fcc"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.351548 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/badeb3df-9c56-4aa2-af6f-aba14c213fcc-kube-api-access-tcsqr" (OuterVolumeSpecName: "kube-api-access-tcsqr") pod "badeb3df-9c56-4aa2-af6f-aba14c213fcc" (UID: "badeb3df-9c56-4aa2-af6f-aba14c213fcc"). InnerVolumeSpecName "kube-api-access-tcsqr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.448723 4778 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/badeb3df-9c56-4aa2-af6f-aba14c213fcc-config\") on node \"crc\" DevicePath \"\"" Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.448780 4778 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/badeb3df-9c56-4aa2-af6f-aba14c213fcc-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.448796 4778 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/badeb3df-9c56-4aa2-af6f-aba14c213fcc-client-ca\") on node \"crc\" DevicePath \"\"" Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.448815 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcsqr\" (UniqueName: \"kubernetes.io/projected/badeb3df-9c56-4aa2-af6f-aba14c213fcc-kube-api-access-tcsqr\") on node \"crc\" DevicePath \"\"" Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.543375 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d44c8b88d-jx574"] Mar 12 13:15:14 crc kubenswrapper[4778]: I0312 13:15:14.547358 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-d44c8b88d-jx574"] Mar 12 13:15:15 crc kubenswrapper[4778]: I0312 13:15:15.232648 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f4dd5cc6-ppsx5" Mar 12 13:15:15 crc kubenswrapper[4778]: I0312 13:15:15.232608 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f4dd5cc6-ppsx5" event={"ID":"badeb3df-9c56-4aa2-af6f-aba14c213fcc","Type":"ContainerDied","Data":"ad682c14c40bcfdf45bab0f4aae014cbaeacf6b49ffe857ff368861fb7bbc412"} Mar 12 13:15:15 crc kubenswrapper[4778]: I0312 13:15:15.232913 4778 scope.go:117] "RemoveContainer" containerID="6edfc1174eae36c4699c23b09d94a6801a70e404d52b1e50d4350988d1f6d371" Mar 12 13:15:15 crc kubenswrapper[4778]: I0312 13:15:15.263451 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f4dd5cc6-ppsx5"] Mar 12 13:15:15 crc kubenswrapper[4778]: I0312 13:15:15.272531 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f4dd5cc6-ppsx5"] Mar 12 13:15:15 crc kubenswrapper[4778]: I0312 13:15:15.652529 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d7995cfc7-rwc4w"] Mar 12 13:15:15 crc kubenswrapper[4778]: E0312 13:15:15.652976 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="badeb3df-9c56-4aa2-af6f-aba14c213fcc" containerName="route-controller-manager" Mar 12 13:15:15 crc kubenswrapper[4778]: I0312 13:15:15.652987 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="badeb3df-9c56-4aa2-af6f-aba14c213fcc" containerName="route-controller-manager" Mar 12 13:15:15 crc kubenswrapper[4778]: E0312 13:15:15.653007 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fdd5690-0e80-4317-9e3a-8478f09ea1a8" containerName="controller-manager" Mar 12 13:15:15 crc kubenswrapper[4778]: I0312 13:15:15.653013 4778 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0fdd5690-0e80-4317-9e3a-8478f09ea1a8" containerName="controller-manager" Mar 12 13:15:15 crc kubenswrapper[4778]: E0312 13:15:15.653022 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 12 13:15:15 crc kubenswrapper[4778]: I0312 13:15:15.653029 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 12 13:15:15 crc kubenswrapper[4778]: E0312 13:15:15.653037 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6197b3a9-f02f-4e5d-8196-b617fffa467d" containerName="collect-profiles" Mar 12 13:15:15 crc kubenswrapper[4778]: I0312 13:15:15.653043 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="6197b3a9-f02f-4e5d-8196-b617fffa467d" containerName="collect-profiles" Mar 12 13:15:15 crc kubenswrapper[4778]: I0312 13:15:15.653151 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="6197b3a9-f02f-4e5d-8196-b617fffa467d" containerName="collect-profiles" Mar 12 13:15:15 crc kubenswrapper[4778]: I0312 13:15:15.653166 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 12 13:15:15 crc kubenswrapper[4778]: I0312 13:15:15.653173 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fdd5690-0e80-4317-9e3a-8478f09ea1a8" containerName="controller-manager" Mar 12 13:15:15 crc kubenswrapper[4778]: I0312 13:15:15.653196 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="badeb3df-9c56-4aa2-af6f-aba14c213fcc" containerName="route-controller-manager" Mar 12 13:15:15 crc kubenswrapper[4778]: I0312 13:15:15.653469 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7cd7bcbb47-r52bt"] Mar 12 13:15:15 crc kubenswrapper[4778]: I0312 13:15:15.653922 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7cd7bcbb47-r52bt" Mar 12 13:15:15 crc kubenswrapper[4778]: I0312 13:15:15.654282 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d7995cfc7-rwc4w" Mar 12 13:15:15 crc kubenswrapper[4778]: I0312 13:15:15.659972 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 12 13:15:15 crc kubenswrapper[4778]: I0312 13:15:15.661787 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 12 13:15:15 crc kubenswrapper[4778]: I0312 13:15:15.665477 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 12 13:15:15 crc kubenswrapper[4778]: I0312 13:15:15.665825 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 12 13:15:15 crc kubenswrapper[4778]: I0312 13:15:15.666046 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 12 13:15:15 crc kubenswrapper[4778]: I0312 13:15:15.668818 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 12 13:15:15 crc kubenswrapper[4778]: I0312 13:15:15.668948 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 12 13:15:15 crc kubenswrapper[4778]: I0312 13:15:15.669635 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 12 13:15:15 crc kubenswrapper[4778]: I0312 13:15:15.669710 4778 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 12 13:15:15 crc kubenswrapper[4778]: I0312 13:15:15.669908 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 12 13:15:15 crc kubenswrapper[4778]: I0312 13:15:15.670058 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 12 13:15:15 crc kubenswrapper[4778]: I0312 13:15:15.675053 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 12 13:15:15 crc kubenswrapper[4778]: I0312 13:15:15.684223 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 12 13:15:15 crc kubenswrapper[4778]: I0312 13:15:15.686065 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d7995cfc7-rwc4w"] Mar 12 13:15:15 crc kubenswrapper[4778]: I0312 13:15:15.692726 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7cd7bcbb47-r52bt"] Mar 12 13:15:15 crc kubenswrapper[4778]: I0312 13:15:15.765050 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e54ca63f-0568-4b0f-aa00-a726ead780cc-config\") pod \"controller-manager-7cd7bcbb47-r52bt\" (UID: \"e54ca63f-0568-4b0f-aa00-a726ead780cc\") " pod="openshift-controller-manager/controller-manager-7cd7bcbb47-r52bt" Mar 12 13:15:15 crc kubenswrapper[4778]: I0312 13:15:15.765106 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctmx8\" (UniqueName: \"kubernetes.io/projected/e54ca63f-0568-4b0f-aa00-a726ead780cc-kube-api-access-ctmx8\") pod \"controller-manager-7cd7bcbb47-r52bt\" (UID: 
\"e54ca63f-0568-4b0f-aa00-a726ead780cc\") " pod="openshift-controller-manager/controller-manager-7cd7bcbb47-r52bt" Mar 12 13:15:15 crc kubenswrapper[4778]: I0312 13:15:15.765131 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97d6bd11-6c53-46a2-a38f-65672b1cc83f-serving-cert\") pod \"route-controller-manager-5d7995cfc7-rwc4w\" (UID: \"97d6bd11-6c53-46a2-a38f-65672b1cc83f\") " pod="openshift-route-controller-manager/route-controller-manager-5d7995cfc7-rwc4w" Mar 12 13:15:15 crc kubenswrapper[4778]: I0312 13:15:15.765147 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e54ca63f-0568-4b0f-aa00-a726ead780cc-proxy-ca-bundles\") pod \"controller-manager-7cd7bcbb47-r52bt\" (UID: \"e54ca63f-0568-4b0f-aa00-a726ead780cc\") " pod="openshift-controller-manager/controller-manager-7cd7bcbb47-r52bt" Mar 12 13:15:15 crc kubenswrapper[4778]: I0312 13:15:15.765166 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e54ca63f-0568-4b0f-aa00-a726ead780cc-serving-cert\") pod \"controller-manager-7cd7bcbb47-r52bt\" (UID: \"e54ca63f-0568-4b0f-aa00-a726ead780cc\") " pod="openshift-controller-manager/controller-manager-7cd7bcbb47-r52bt" Mar 12 13:15:15 crc kubenswrapper[4778]: I0312 13:15:15.765266 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97d6bd11-6c53-46a2-a38f-65672b1cc83f-client-ca\") pod \"route-controller-manager-5d7995cfc7-rwc4w\" (UID: \"97d6bd11-6c53-46a2-a38f-65672b1cc83f\") " pod="openshift-route-controller-manager/route-controller-manager-5d7995cfc7-rwc4w" Mar 12 13:15:15 crc kubenswrapper[4778]: I0312 13:15:15.765287 4778 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97d6bd11-6c53-46a2-a38f-65672b1cc83f-config\") pod \"route-controller-manager-5d7995cfc7-rwc4w\" (UID: \"97d6bd11-6c53-46a2-a38f-65672b1cc83f\") " pod="openshift-route-controller-manager/route-controller-manager-5d7995cfc7-rwc4w" Mar 12 13:15:15 crc kubenswrapper[4778]: I0312 13:15:15.765462 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e54ca63f-0568-4b0f-aa00-a726ead780cc-client-ca\") pod \"controller-manager-7cd7bcbb47-r52bt\" (UID: \"e54ca63f-0568-4b0f-aa00-a726ead780cc\") " pod="openshift-controller-manager/controller-manager-7cd7bcbb47-r52bt" Mar 12 13:15:15 crc kubenswrapper[4778]: I0312 13:15:15.765555 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgrrw\" (UniqueName: \"kubernetes.io/projected/97d6bd11-6c53-46a2-a38f-65672b1cc83f-kube-api-access-cgrrw\") pod \"route-controller-manager-5d7995cfc7-rwc4w\" (UID: \"97d6bd11-6c53-46a2-a38f-65672b1cc83f\") " pod="openshift-route-controller-manager/route-controller-manager-5d7995cfc7-rwc4w" Mar 12 13:15:15 crc kubenswrapper[4778]: I0312 13:15:15.866573 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97d6bd11-6c53-46a2-a38f-65672b1cc83f-client-ca\") pod \"route-controller-manager-5d7995cfc7-rwc4w\" (UID: \"97d6bd11-6c53-46a2-a38f-65672b1cc83f\") " pod="openshift-route-controller-manager/route-controller-manager-5d7995cfc7-rwc4w" Mar 12 13:15:15 crc kubenswrapper[4778]: I0312 13:15:15.866632 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97d6bd11-6c53-46a2-a38f-65672b1cc83f-config\") pod \"route-controller-manager-5d7995cfc7-rwc4w\" (UID: 
\"97d6bd11-6c53-46a2-a38f-65672b1cc83f\") " pod="openshift-route-controller-manager/route-controller-manager-5d7995cfc7-rwc4w" Mar 12 13:15:15 crc kubenswrapper[4778]: I0312 13:15:15.866660 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e54ca63f-0568-4b0f-aa00-a726ead780cc-client-ca\") pod \"controller-manager-7cd7bcbb47-r52bt\" (UID: \"e54ca63f-0568-4b0f-aa00-a726ead780cc\") " pod="openshift-controller-manager/controller-manager-7cd7bcbb47-r52bt" Mar 12 13:15:15 crc kubenswrapper[4778]: I0312 13:15:15.866703 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgrrw\" (UniqueName: \"kubernetes.io/projected/97d6bd11-6c53-46a2-a38f-65672b1cc83f-kube-api-access-cgrrw\") pod \"route-controller-manager-5d7995cfc7-rwc4w\" (UID: \"97d6bd11-6c53-46a2-a38f-65672b1cc83f\") " pod="openshift-route-controller-manager/route-controller-manager-5d7995cfc7-rwc4w" Mar 12 13:15:15 crc kubenswrapper[4778]: I0312 13:15:15.866760 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e54ca63f-0568-4b0f-aa00-a726ead780cc-config\") pod \"controller-manager-7cd7bcbb47-r52bt\" (UID: \"e54ca63f-0568-4b0f-aa00-a726ead780cc\") " pod="openshift-controller-manager/controller-manager-7cd7bcbb47-r52bt" Mar 12 13:15:15 crc kubenswrapper[4778]: I0312 13:15:15.866819 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctmx8\" (UniqueName: \"kubernetes.io/projected/e54ca63f-0568-4b0f-aa00-a726ead780cc-kube-api-access-ctmx8\") pod \"controller-manager-7cd7bcbb47-r52bt\" (UID: \"e54ca63f-0568-4b0f-aa00-a726ead780cc\") " pod="openshift-controller-manager/controller-manager-7cd7bcbb47-r52bt" Mar 12 13:15:15 crc kubenswrapper[4778]: I0312 13:15:15.866854 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/97d6bd11-6c53-46a2-a38f-65672b1cc83f-serving-cert\") pod \"route-controller-manager-5d7995cfc7-rwc4w\" (UID: \"97d6bd11-6c53-46a2-a38f-65672b1cc83f\") " pod="openshift-route-controller-manager/route-controller-manager-5d7995cfc7-rwc4w" Mar 12 13:15:15 crc kubenswrapper[4778]: I0312 13:15:15.866907 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e54ca63f-0568-4b0f-aa00-a726ead780cc-proxy-ca-bundles\") pod \"controller-manager-7cd7bcbb47-r52bt\" (UID: \"e54ca63f-0568-4b0f-aa00-a726ead780cc\") " pod="openshift-controller-manager/controller-manager-7cd7bcbb47-r52bt" Mar 12 13:15:15 crc kubenswrapper[4778]: I0312 13:15:15.866937 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e54ca63f-0568-4b0f-aa00-a726ead780cc-serving-cert\") pod \"controller-manager-7cd7bcbb47-r52bt\" (UID: \"e54ca63f-0568-4b0f-aa00-a726ead780cc\") " pod="openshift-controller-manager/controller-manager-7cd7bcbb47-r52bt" Mar 12 13:15:15 crc kubenswrapper[4778]: I0312 13:15:15.867811 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97d6bd11-6c53-46a2-a38f-65672b1cc83f-client-ca\") pod \"route-controller-manager-5d7995cfc7-rwc4w\" (UID: \"97d6bd11-6c53-46a2-a38f-65672b1cc83f\") " pod="openshift-route-controller-manager/route-controller-manager-5d7995cfc7-rwc4w" Mar 12 13:15:15 crc kubenswrapper[4778]: I0312 13:15:15.868275 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e54ca63f-0568-4b0f-aa00-a726ead780cc-config\") pod \"controller-manager-7cd7bcbb47-r52bt\" (UID: \"e54ca63f-0568-4b0f-aa00-a726ead780cc\") " pod="openshift-controller-manager/controller-manager-7cd7bcbb47-r52bt" Mar 12 13:15:15 crc kubenswrapper[4778]: I0312 13:15:15.868645 
4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e54ca63f-0568-4b0f-aa00-a726ead780cc-proxy-ca-bundles\") pod \"controller-manager-7cd7bcbb47-r52bt\" (UID: \"e54ca63f-0568-4b0f-aa00-a726ead780cc\") " pod="openshift-controller-manager/controller-manager-7cd7bcbb47-r52bt" Mar 12 13:15:15 crc kubenswrapper[4778]: I0312 13:15:15.868954 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e54ca63f-0568-4b0f-aa00-a726ead780cc-client-ca\") pod \"controller-manager-7cd7bcbb47-r52bt\" (UID: \"e54ca63f-0568-4b0f-aa00-a726ead780cc\") " pod="openshift-controller-manager/controller-manager-7cd7bcbb47-r52bt" Mar 12 13:15:15 crc kubenswrapper[4778]: I0312 13:15:15.869408 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97d6bd11-6c53-46a2-a38f-65672b1cc83f-config\") pod \"route-controller-manager-5d7995cfc7-rwc4w\" (UID: \"97d6bd11-6c53-46a2-a38f-65672b1cc83f\") " pod="openshift-route-controller-manager/route-controller-manager-5d7995cfc7-rwc4w" Mar 12 13:15:15 crc kubenswrapper[4778]: I0312 13:15:15.874416 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97d6bd11-6c53-46a2-a38f-65672b1cc83f-serving-cert\") pod \"route-controller-manager-5d7995cfc7-rwc4w\" (UID: \"97d6bd11-6c53-46a2-a38f-65672b1cc83f\") " pod="openshift-route-controller-manager/route-controller-manager-5d7995cfc7-rwc4w" Mar 12 13:15:15 crc kubenswrapper[4778]: I0312 13:15:15.874859 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e54ca63f-0568-4b0f-aa00-a726ead780cc-serving-cert\") pod \"controller-manager-7cd7bcbb47-r52bt\" (UID: \"e54ca63f-0568-4b0f-aa00-a726ead780cc\") " 
pod="openshift-controller-manager/controller-manager-7cd7bcbb47-r52bt" Mar 12 13:15:15 crc kubenswrapper[4778]: I0312 13:15:15.884309 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctmx8\" (UniqueName: \"kubernetes.io/projected/e54ca63f-0568-4b0f-aa00-a726ead780cc-kube-api-access-ctmx8\") pod \"controller-manager-7cd7bcbb47-r52bt\" (UID: \"e54ca63f-0568-4b0f-aa00-a726ead780cc\") " pod="openshift-controller-manager/controller-manager-7cd7bcbb47-r52bt" Mar 12 13:15:15 crc kubenswrapper[4778]: I0312 13:15:15.885565 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgrrw\" (UniqueName: \"kubernetes.io/projected/97d6bd11-6c53-46a2-a38f-65672b1cc83f-kube-api-access-cgrrw\") pod \"route-controller-manager-5d7995cfc7-rwc4w\" (UID: \"97d6bd11-6c53-46a2-a38f-65672b1cc83f\") " pod="openshift-route-controller-manager/route-controller-manager-5d7995cfc7-rwc4w" Mar 12 13:15:15 crc kubenswrapper[4778]: I0312 13:15:15.975620 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7cd7bcbb47-r52bt" Mar 12 13:15:15 crc kubenswrapper[4778]: I0312 13:15:15.986840 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d7995cfc7-rwc4w" Mar 12 13:15:16 crc kubenswrapper[4778]: I0312 13:15:16.211760 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7cd7bcbb47-r52bt"] Mar 12 13:15:16 crc kubenswrapper[4778]: W0312 13:15:16.215855 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode54ca63f_0568_4b0f_aa00_a726ead780cc.slice/crio-e1f5fdc0207e5f640c33e64d7efe86ecc6a21b6930d23ea0075c4c50accc4ee6 WatchSource:0}: Error finding container e1f5fdc0207e5f640c33e64d7efe86ecc6a21b6930d23ea0075c4c50accc4ee6: Status 404 returned error can't find the container with id e1f5fdc0207e5f640c33e64d7efe86ecc6a21b6930d23ea0075c4c50accc4ee6 Mar 12 13:15:16 crc kubenswrapper[4778]: I0312 13:15:16.249777 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cd7bcbb47-r52bt" event={"ID":"e54ca63f-0568-4b0f-aa00-a726ead780cc","Type":"ContainerStarted","Data":"e1f5fdc0207e5f640c33e64d7efe86ecc6a21b6930d23ea0075c4c50accc4ee6"} Mar 12 13:15:16 crc kubenswrapper[4778]: W0312 13:15:16.256692 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97d6bd11_6c53_46a2_a38f_65672b1cc83f.slice/crio-5047efe3d4d74f46dcbcd8ccb18595521f1d36baca9c39a378fbb3fd6752f393 WatchSource:0}: Error finding container 5047efe3d4d74f46dcbcd8ccb18595521f1d36baca9c39a378fbb3fd6752f393: Status 404 returned error can't find the container with id 5047efe3d4d74f46dcbcd8ccb18595521f1d36baca9c39a378fbb3fd6752f393 Mar 12 13:15:16 crc kubenswrapper[4778]: I0312 13:15:16.259079 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fdd5690-0e80-4317-9e3a-8478f09ea1a8" path="/var/lib/kubelet/pods/0fdd5690-0e80-4317-9e3a-8478f09ea1a8/volumes" Mar 12 13:15:16 crc 
kubenswrapper[4778]: I0312 13:15:16.260034 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="badeb3df-9c56-4aa2-af6f-aba14c213fcc" path="/var/lib/kubelet/pods/badeb3df-9c56-4aa2-af6f-aba14c213fcc/volumes" Mar 12 13:15:16 crc kubenswrapper[4778]: I0312 13:15:16.260558 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d7995cfc7-rwc4w"] Mar 12 13:15:17 crc kubenswrapper[4778]: I0312 13:15:17.256913 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d7995cfc7-rwc4w" event={"ID":"97d6bd11-6c53-46a2-a38f-65672b1cc83f","Type":"ContainerStarted","Data":"3f3381b2bee5e33ad3fd2e7b670b2802903ebb1e5d72d9179b8c4f23bd13d27e"} Mar 12 13:15:17 crc kubenswrapper[4778]: I0312 13:15:17.256952 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d7995cfc7-rwc4w" event={"ID":"97d6bd11-6c53-46a2-a38f-65672b1cc83f","Type":"ContainerStarted","Data":"5047efe3d4d74f46dcbcd8ccb18595521f1d36baca9c39a378fbb3fd6752f393"} Mar 12 13:15:17 crc kubenswrapper[4778]: I0312 13:15:17.258020 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5d7995cfc7-rwc4w" Mar 12 13:15:17 crc kubenswrapper[4778]: I0312 13:15:17.261654 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cd7bcbb47-r52bt" event={"ID":"e54ca63f-0568-4b0f-aa00-a726ead780cc","Type":"ContainerStarted","Data":"32e23f7128debfc4cc11679362c8383eacd2674f3f8f68c5993038e873d31386"} Mar 12 13:15:17 crc kubenswrapper[4778]: I0312 13:15:17.262597 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7cd7bcbb47-r52bt" Mar 12 13:15:17 crc kubenswrapper[4778]: I0312 13:15:17.264017 4778 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5d7995cfc7-rwc4w" Mar 12 13:15:17 crc kubenswrapper[4778]: I0312 13:15:17.268680 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7cd7bcbb47-r52bt" Mar 12 13:15:17 crc kubenswrapper[4778]: I0312 13:15:17.286845 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5d7995cfc7-rwc4w" podStartSLOduration=4.286826355 podStartE2EDuration="4.286826355s" podCreationTimestamp="2026-03-12 13:15:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:15:17.284648918 +0000 UTC m=+335.733344334" watchObservedRunningTime="2026-03-12 13:15:17.286826355 +0000 UTC m=+335.735521771" Mar 12 13:15:17 crc kubenswrapper[4778]: I0312 13:15:17.300467 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7cd7bcbb47-r52bt" podStartSLOduration=4.30044934 podStartE2EDuration="4.30044934s" podCreationTimestamp="2026-03-12 13:15:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:15:17.300298775 +0000 UTC m=+335.748994171" watchObservedRunningTime="2026-03-12 13:15:17.30044934 +0000 UTC m=+335.749144736" Mar 12 13:15:28 crc kubenswrapper[4778]: I0312 13:15:28.195149 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 12 13:15:49 crc kubenswrapper[4778]: I0312 13:15:49.528698 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7cd7bcbb47-r52bt"] Mar 12 13:15:49 crc kubenswrapper[4778]: I0312 13:15:49.529440 4778 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7cd7bcbb47-r52bt" podUID="e54ca63f-0568-4b0f-aa00-a726ead780cc" containerName="controller-manager" containerID="cri-o://32e23f7128debfc4cc11679362c8383eacd2674f3f8f68c5993038e873d31386" gracePeriod=30 Mar 12 13:15:49 crc kubenswrapper[4778]: I0312 13:15:49.654008 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d7995cfc7-rwc4w"] Mar 12 13:15:49 crc kubenswrapper[4778]: I0312 13:15:49.654360 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5d7995cfc7-rwc4w" podUID="97d6bd11-6c53-46a2-a38f-65672b1cc83f" containerName="route-controller-manager" containerID="cri-o://3f3381b2bee5e33ad3fd2e7b670b2802903ebb1e5d72d9179b8c4f23bd13d27e" gracePeriod=30 Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.019121 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7cd7bcbb47-r52bt" Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.022816 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d7995cfc7-rwc4w" Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.066146 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctmx8\" (UniqueName: \"kubernetes.io/projected/e54ca63f-0568-4b0f-aa00-a726ead780cc-kube-api-access-ctmx8\") pod \"e54ca63f-0568-4b0f-aa00-a726ead780cc\" (UID: \"e54ca63f-0568-4b0f-aa00-a726ead780cc\") " Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.066251 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97d6bd11-6c53-46a2-a38f-65672b1cc83f-serving-cert\") pod \"97d6bd11-6c53-46a2-a38f-65672b1cc83f\" (UID: \"97d6bd11-6c53-46a2-a38f-65672b1cc83f\") " Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.066321 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgrrw\" (UniqueName: \"kubernetes.io/projected/97d6bd11-6c53-46a2-a38f-65672b1cc83f-kube-api-access-cgrrw\") pod \"97d6bd11-6c53-46a2-a38f-65672b1cc83f\" (UID: \"97d6bd11-6c53-46a2-a38f-65672b1cc83f\") " Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.066354 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97d6bd11-6c53-46a2-a38f-65672b1cc83f-config\") pod \"97d6bd11-6c53-46a2-a38f-65672b1cc83f\" (UID: \"97d6bd11-6c53-46a2-a38f-65672b1cc83f\") " Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.066400 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e54ca63f-0568-4b0f-aa00-a726ead780cc-serving-cert\") pod \"e54ca63f-0568-4b0f-aa00-a726ead780cc\" (UID: \"e54ca63f-0568-4b0f-aa00-a726ead780cc\") " Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.066428 4778 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e54ca63f-0568-4b0f-aa00-a726ead780cc-client-ca\") pod \"e54ca63f-0568-4b0f-aa00-a726ead780cc\" (UID: \"e54ca63f-0568-4b0f-aa00-a726ead780cc\") " Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.066471 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97d6bd11-6c53-46a2-a38f-65672b1cc83f-client-ca\") pod \"97d6bd11-6c53-46a2-a38f-65672b1cc83f\" (UID: \"97d6bd11-6c53-46a2-a38f-65672b1cc83f\") " Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.066533 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e54ca63f-0568-4b0f-aa00-a726ead780cc-proxy-ca-bundles\") pod \"e54ca63f-0568-4b0f-aa00-a726ead780cc\" (UID: \"e54ca63f-0568-4b0f-aa00-a726ead780cc\") " Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.066570 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e54ca63f-0568-4b0f-aa00-a726ead780cc-config\") pod \"e54ca63f-0568-4b0f-aa00-a726ead780cc\" (UID: \"e54ca63f-0568-4b0f-aa00-a726ead780cc\") " Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.067706 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e54ca63f-0568-4b0f-aa00-a726ead780cc-config" (OuterVolumeSpecName: "config") pod "e54ca63f-0568-4b0f-aa00-a726ead780cc" (UID: "e54ca63f-0568-4b0f-aa00-a726ead780cc"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.067967 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e54ca63f-0568-4b0f-aa00-a726ead780cc-client-ca" (OuterVolumeSpecName: "client-ca") pod "e54ca63f-0568-4b0f-aa00-a726ead780cc" (UID: "e54ca63f-0568-4b0f-aa00-a726ead780cc"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.068334 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97d6bd11-6c53-46a2-a38f-65672b1cc83f-client-ca" (OuterVolumeSpecName: "client-ca") pod "97d6bd11-6c53-46a2-a38f-65672b1cc83f" (UID: "97d6bd11-6c53-46a2-a38f-65672b1cc83f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.068729 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e54ca63f-0568-4b0f-aa00-a726ead780cc-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e54ca63f-0568-4b0f-aa00-a726ead780cc" (UID: "e54ca63f-0568-4b0f-aa00-a726ead780cc"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.068898 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97d6bd11-6c53-46a2-a38f-65672b1cc83f-config" (OuterVolumeSpecName: "config") pod "97d6bd11-6c53-46a2-a38f-65672b1cc83f" (UID: "97d6bd11-6c53-46a2-a38f-65672b1cc83f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.072030 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e54ca63f-0568-4b0f-aa00-a726ead780cc-kube-api-access-ctmx8" (OuterVolumeSpecName: "kube-api-access-ctmx8") pod "e54ca63f-0568-4b0f-aa00-a726ead780cc" (UID: "e54ca63f-0568-4b0f-aa00-a726ead780cc"). InnerVolumeSpecName "kube-api-access-ctmx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.073471 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e54ca63f-0568-4b0f-aa00-a726ead780cc-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e54ca63f-0568-4b0f-aa00-a726ead780cc" (UID: "e54ca63f-0568-4b0f-aa00-a726ead780cc"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.074237 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97d6bd11-6c53-46a2-a38f-65672b1cc83f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "97d6bd11-6c53-46a2-a38f-65672b1cc83f" (UID: "97d6bd11-6c53-46a2-a38f-65672b1cc83f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.074871 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97d6bd11-6c53-46a2-a38f-65672b1cc83f-kube-api-access-cgrrw" (OuterVolumeSpecName: "kube-api-access-cgrrw") pod "97d6bd11-6c53-46a2-a38f-65672b1cc83f" (UID: "97d6bd11-6c53-46a2-a38f-65672b1cc83f"). InnerVolumeSpecName "kube-api-access-cgrrw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.167849 4778 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e54ca63f-0568-4b0f-aa00-a726ead780cc-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.167890 4778 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e54ca63f-0568-4b0f-aa00-a726ead780cc-config\") on node \"crc\" DevicePath \"\"" Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.167904 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ctmx8\" (UniqueName: \"kubernetes.io/projected/e54ca63f-0568-4b0f-aa00-a726ead780cc-kube-api-access-ctmx8\") on node \"crc\" DevicePath \"\"" Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.167914 4778 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97d6bd11-6c53-46a2-a38f-65672b1cc83f-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.167925 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgrrw\" (UniqueName: \"kubernetes.io/projected/97d6bd11-6c53-46a2-a38f-65672b1cc83f-kube-api-access-cgrrw\") on node \"crc\" DevicePath \"\"" Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.167933 4778 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97d6bd11-6c53-46a2-a38f-65672b1cc83f-config\") on node \"crc\" DevicePath \"\"" Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.167942 4778 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e54ca63f-0568-4b0f-aa00-a726ead780cc-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.167951 4778 
reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e54ca63f-0568-4b0f-aa00-a726ead780cc-client-ca\") on node \"crc\" DevicePath \"\"" Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.167959 4778 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97d6bd11-6c53-46a2-a38f-65672b1cc83f-client-ca\") on node \"crc\" DevicePath \"\"" Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.446462 4778 generic.go:334] "Generic (PLEG): container finished" podID="e54ca63f-0568-4b0f-aa00-a726ead780cc" containerID="32e23f7128debfc4cc11679362c8383eacd2674f3f8f68c5993038e873d31386" exitCode=0 Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.446564 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7cd7bcbb47-r52bt" Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.447165 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cd7bcbb47-r52bt" event={"ID":"e54ca63f-0568-4b0f-aa00-a726ead780cc","Type":"ContainerDied","Data":"32e23f7128debfc4cc11679362c8383eacd2674f3f8f68c5993038e873d31386"} Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.447217 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cd7bcbb47-r52bt" event={"ID":"e54ca63f-0568-4b0f-aa00-a726ead780cc","Type":"ContainerDied","Data":"e1f5fdc0207e5f640c33e64d7efe86ecc6a21b6930d23ea0075c4c50accc4ee6"} Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.447251 4778 scope.go:117] "RemoveContainer" containerID="32e23f7128debfc4cc11679362c8383eacd2674f3f8f68c5993038e873d31386" Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.449912 4778 generic.go:334] "Generic (PLEG): container finished" podID="97d6bd11-6c53-46a2-a38f-65672b1cc83f" 
containerID="3f3381b2bee5e33ad3fd2e7b670b2802903ebb1e5d72d9179b8c4f23bd13d27e" exitCode=0 Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.449945 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d7995cfc7-rwc4w" event={"ID":"97d6bd11-6c53-46a2-a38f-65672b1cc83f","Type":"ContainerDied","Data":"3f3381b2bee5e33ad3fd2e7b670b2802903ebb1e5d72d9179b8c4f23bd13d27e"} Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.449968 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d7995cfc7-rwc4w" event={"ID":"97d6bd11-6c53-46a2-a38f-65672b1cc83f","Type":"ContainerDied","Data":"5047efe3d4d74f46dcbcd8ccb18595521f1d36baca9c39a378fbb3fd6752f393"} Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.450015 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d7995cfc7-rwc4w" Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.489910 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7cd7bcbb47-r52bt"] Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.494935 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7cd7bcbb47-r52bt"] Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.504621 4778 scope.go:117] "RemoveContainer" containerID="32e23f7128debfc4cc11679362c8383eacd2674f3f8f68c5993038e873d31386" Mar 12 13:15:50 crc kubenswrapper[4778]: E0312 13:15:50.505080 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32e23f7128debfc4cc11679362c8383eacd2674f3f8f68c5993038e873d31386\": container with ID starting with 32e23f7128debfc4cc11679362c8383eacd2674f3f8f68c5993038e873d31386 not found: ID does not exist" 
containerID="32e23f7128debfc4cc11679362c8383eacd2674f3f8f68c5993038e873d31386" Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.505129 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32e23f7128debfc4cc11679362c8383eacd2674f3f8f68c5993038e873d31386"} err="failed to get container status \"32e23f7128debfc4cc11679362c8383eacd2674f3f8f68c5993038e873d31386\": rpc error: code = NotFound desc = could not find container \"32e23f7128debfc4cc11679362c8383eacd2674f3f8f68c5993038e873d31386\": container with ID starting with 32e23f7128debfc4cc11679362c8383eacd2674f3f8f68c5993038e873d31386 not found: ID does not exist" Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.505162 4778 scope.go:117] "RemoveContainer" containerID="3f3381b2bee5e33ad3fd2e7b670b2802903ebb1e5d72d9179b8c4f23bd13d27e" Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.514297 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d7995cfc7-rwc4w"] Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.515059 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d7995cfc7-rwc4w"] Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.526598 4778 scope.go:117] "RemoveContainer" containerID="3f3381b2bee5e33ad3fd2e7b670b2802903ebb1e5d72d9179b8c4f23bd13d27e" Mar 12 13:15:50 crc kubenswrapper[4778]: E0312 13:15:50.527009 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f3381b2bee5e33ad3fd2e7b670b2802903ebb1e5d72d9179b8c4f23bd13d27e\": container with ID starting with 3f3381b2bee5e33ad3fd2e7b670b2802903ebb1e5d72d9179b8c4f23bd13d27e not found: ID does not exist" containerID="3f3381b2bee5e33ad3fd2e7b670b2802903ebb1e5d72d9179b8c4f23bd13d27e" Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.527053 4778 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f3381b2bee5e33ad3fd2e7b670b2802903ebb1e5d72d9179b8c4f23bd13d27e"} err="failed to get container status \"3f3381b2bee5e33ad3fd2e7b670b2802903ebb1e5d72d9179b8c4f23bd13d27e\": rpc error: code = NotFound desc = could not find container \"3f3381b2bee5e33ad3fd2e7b670b2802903ebb1e5d72d9179b8c4f23bd13d27e\": container with ID starting with 3f3381b2bee5e33ad3fd2e7b670b2802903ebb1e5d72d9179b8c4f23bd13d27e not found: ID does not exist" Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.676270 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-f8c4b6bf8-n4jk6"] Mar 12 13:15:50 crc kubenswrapper[4778]: E0312 13:15:50.676640 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97d6bd11-6c53-46a2-a38f-65672b1cc83f" containerName="route-controller-manager" Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.676671 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="97d6bd11-6c53-46a2-a38f-65672b1cc83f" containerName="route-controller-manager" Mar 12 13:15:50 crc kubenswrapper[4778]: E0312 13:15:50.676726 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e54ca63f-0568-4b0f-aa00-a726ead780cc" containerName="controller-manager" Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.676743 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="e54ca63f-0568-4b0f-aa00-a726ead780cc" containerName="controller-manager" Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.676962 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="97d6bd11-6c53-46a2-a38f-65672b1cc83f" containerName="route-controller-manager" Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.676989 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="e54ca63f-0568-4b0f-aa00-a726ead780cc" containerName="controller-manager" Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.677922 4778 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f8c4b6bf8-n4jk6" Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.680493 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.680592 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.680980 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.681091 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.682065 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.684989 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.685910 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f8c4b6bf8-n4jk6"] Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.689938 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.776362 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8083240-e16c-40da-9f87-9db45cebfafd-serving-cert\") pod \"controller-manager-f8c4b6bf8-n4jk6\" (UID: \"c8083240-e16c-40da-9f87-9db45cebfafd\") " 
pod="openshift-controller-manager/controller-manager-f8c4b6bf8-n4jk6" Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.776459 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c8083240-e16c-40da-9f87-9db45cebfafd-proxy-ca-bundles\") pod \"controller-manager-f8c4b6bf8-n4jk6\" (UID: \"c8083240-e16c-40da-9f87-9db45cebfafd\") " pod="openshift-controller-manager/controller-manager-f8c4b6bf8-n4jk6" Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.776500 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jktc4\" (UniqueName: \"kubernetes.io/projected/c8083240-e16c-40da-9f87-9db45cebfafd-kube-api-access-jktc4\") pod \"controller-manager-f8c4b6bf8-n4jk6\" (UID: \"c8083240-e16c-40da-9f87-9db45cebfafd\") " pod="openshift-controller-manager/controller-manager-f8c4b6bf8-n4jk6" Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.776530 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8083240-e16c-40da-9f87-9db45cebfafd-config\") pod \"controller-manager-f8c4b6bf8-n4jk6\" (UID: \"c8083240-e16c-40da-9f87-9db45cebfafd\") " pod="openshift-controller-manager/controller-manager-f8c4b6bf8-n4jk6" Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.776606 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c8083240-e16c-40da-9f87-9db45cebfafd-client-ca\") pod \"controller-manager-f8c4b6bf8-n4jk6\" (UID: \"c8083240-e16c-40da-9f87-9db45cebfafd\") " pod="openshift-controller-manager/controller-manager-f8c4b6bf8-n4jk6" Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.877674 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/c8083240-e16c-40da-9f87-9db45cebfafd-proxy-ca-bundles\") pod \"controller-manager-f8c4b6bf8-n4jk6\" (UID: \"c8083240-e16c-40da-9f87-9db45cebfafd\") " pod="openshift-controller-manager/controller-manager-f8c4b6bf8-n4jk6" Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.877728 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jktc4\" (UniqueName: \"kubernetes.io/projected/c8083240-e16c-40da-9f87-9db45cebfafd-kube-api-access-jktc4\") pod \"controller-manager-f8c4b6bf8-n4jk6\" (UID: \"c8083240-e16c-40da-9f87-9db45cebfafd\") " pod="openshift-controller-manager/controller-manager-f8c4b6bf8-n4jk6" Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.877752 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8083240-e16c-40da-9f87-9db45cebfafd-config\") pod \"controller-manager-f8c4b6bf8-n4jk6\" (UID: \"c8083240-e16c-40da-9f87-9db45cebfafd\") " pod="openshift-controller-manager/controller-manager-f8c4b6bf8-n4jk6" Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.877787 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c8083240-e16c-40da-9f87-9db45cebfafd-client-ca\") pod \"controller-manager-f8c4b6bf8-n4jk6\" (UID: \"c8083240-e16c-40da-9f87-9db45cebfafd\") " pod="openshift-controller-manager/controller-manager-f8c4b6bf8-n4jk6" Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.877830 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8083240-e16c-40da-9f87-9db45cebfafd-serving-cert\") pod \"controller-manager-f8c4b6bf8-n4jk6\" (UID: \"c8083240-e16c-40da-9f87-9db45cebfafd\") " pod="openshift-controller-manager/controller-manager-f8c4b6bf8-n4jk6" Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.878942 4778 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c8083240-e16c-40da-9f87-9db45cebfafd-client-ca\") pod \"controller-manager-f8c4b6bf8-n4jk6\" (UID: \"c8083240-e16c-40da-9f87-9db45cebfafd\") " pod="openshift-controller-manager/controller-manager-f8c4b6bf8-n4jk6" Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.879134 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8083240-e16c-40da-9f87-9db45cebfafd-config\") pod \"controller-manager-f8c4b6bf8-n4jk6\" (UID: \"c8083240-e16c-40da-9f87-9db45cebfafd\") " pod="openshift-controller-manager/controller-manager-f8c4b6bf8-n4jk6" Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.879856 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c8083240-e16c-40da-9f87-9db45cebfafd-proxy-ca-bundles\") pod \"controller-manager-f8c4b6bf8-n4jk6\" (UID: \"c8083240-e16c-40da-9f87-9db45cebfafd\") " pod="openshift-controller-manager/controller-manager-f8c4b6bf8-n4jk6" Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.883946 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8083240-e16c-40da-9f87-9db45cebfafd-serving-cert\") pod \"controller-manager-f8c4b6bf8-n4jk6\" (UID: \"c8083240-e16c-40da-9f87-9db45cebfafd\") " pod="openshift-controller-manager/controller-manager-f8c4b6bf8-n4jk6" Mar 12 13:15:50 crc kubenswrapper[4778]: I0312 13:15:50.895838 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jktc4\" (UniqueName: \"kubernetes.io/projected/c8083240-e16c-40da-9f87-9db45cebfafd-kube-api-access-jktc4\") pod \"controller-manager-f8c4b6bf8-n4jk6\" (UID: \"c8083240-e16c-40da-9f87-9db45cebfafd\") " pod="openshift-controller-manager/controller-manager-f8c4b6bf8-n4jk6" Mar 12 13:15:50 crc 
kubenswrapper[4778]: I0312 13:15:50.991693 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f8c4b6bf8-n4jk6" Mar 12 13:15:51 crc kubenswrapper[4778]: I0312 13:15:51.406977 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f8c4b6bf8-n4jk6"] Mar 12 13:15:51 crc kubenswrapper[4778]: I0312 13:15:51.461472 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f8c4b6bf8-n4jk6" event={"ID":"c8083240-e16c-40da-9f87-9db45cebfafd","Type":"ContainerStarted","Data":"610f9e9935be3de3cd47d5e81cba88f5a4f3f12d0d389042b1b5255df2fa7476"} Mar 12 13:15:51 crc kubenswrapper[4778]: I0312 13:15:51.689807 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d496c4846-ggb6r"] Mar 12 13:15:51 crc kubenswrapper[4778]: I0312 13:15:51.690721 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d496c4846-ggb6r" Mar 12 13:15:51 crc kubenswrapper[4778]: I0312 13:15:51.692738 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 12 13:15:51 crc kubenswrapper[4778]: I0312 13:15:51.692780 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 12 13:15:51 crc kubenswrapper[4778]: I0312 13:15:51.692784 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 12 13:15:51 crc kubenswrapper[4778]: I0312 13:15:51.695080 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d496c4846-ggb6r"] Mar 12 13:15:51 crc kubenswrapper[4778]: I0312 13:15:51.695558 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 12 13:15:51 crc kubenswrapper[4778]: I0312 13:15:51.695759 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 12 13:15:51 crc kubenswrapper[4778]: I0312 13:15:51.695778 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 12 13:15:51 crc kubenswrapper[4778]: I0312 13:15:51.792339 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6eebe99-8b82-4720-82c5-c940100859ad-serving-cert\") pod \"route-controller-manager-6d496c4846-ggb6r\" (UID: \"f6eebe99-8b82-4720-82c5-c940100859ad\") " pod="openshift-route-controller-manager/route-controller-manager-6d496c4846-ggb6r" Mar 12 13:15:51 crc kubenswrapper[4778]: I0312 13:15:51.792406 4778 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6eebe99-8b82-4720-82c5-c940100859ad-client-ca\") pod \"route-controller-manager-6d496c4846-ggb6r\" (UID: \"f6eebe99-8b82-4720-82c5-c940100859ad\") " pod="openshift-route-controller-manager/route-controller-manager-6d496c4846-ggb6r" Mar 12 13:15:51 crc kubenswrapper[4778]: I0312 13:15:51.792475 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6eebe99-8b82-4720-82c5-c940100859ad-config\") pod \"route-controller-manager-6d496c4846-ggb6r\" (UID: \"f6eebe99-8b82-4720-82c5-c940100859ad\") " pod="openshift-route-controller-manager/route-controller-manager-6d496c4846-ggb6r" Mar 12 13:15:51 crc kubenswrapper[4778]: I0312 13:15:51.792506 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc82s\" (UniqueName: \"kubernetes.io/projected/f6eebe99-8b82-4720-82c5-c940100859ad-kube-api-access-qc82s\") pod \"route-controller-manager-6d496c4846-ggb6r\" (UID: \"f6eebe99-8b82-4720-82c5-c940100859ad\") " pod="openshift-route-controller-manager/route-controller-manager-6d496c4846-ggb6r" Mar 12 13:15:51 crc kubenswrapper[4778]: I0312 13:15:51.893775 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6eebe99-8b82-4720-82c5-c940100859ad-serving-cert\") pod \"route-controller-manager-6d496c4846-ggb6r\" (UID: \"f6eebe99-8b82-4720-82c5-c940100859ad\") " pod="openshift-route-controller-manager/route-controller-manager-6d496c4846-ggb6r" Mar 12 13:15:51 crc kubenswrapper[4778]: I0312 13:15:51.893825 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6eebe99-8b82-4720-82c5-c940100859ad-client-ca\") pod 
\"route-controller-manager-6d496c4846-ggb6r\" (UID: \"f6eebe99-8b82-4720-82c5-c940100859ad\") " pod="openshift-route-controller-manager/route-controller-manager-6d496c4846-ggb6r" Mar 12 13:15:51 crc kubenswrapper[4778]: I0312 13:15:51.893875 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6eebe99-8b82-4720-82c5-c940100859ad-config\") pod \"route-controller-manager-6d496c4846-ggb6r\" (UID: \"f6eebe99-8b82-4720-82c5-c940100859ad\") " pod="openshift-route-controller-manager/route-controller-manager-6d496c4846-ggb6r" Mar 12 13:15:51 crc kubenswrapper[4778]: I0312 13:15:51.893903 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qc82s\" (UniqueName: \"kubernetes.io/projected/f6eebe99-8b82-4720-82c5-c940100859ad-kube-api-access-qc82s\") pod \"route-controller-manager-6d496c4846-ggb6r\" (UID: \"f6eebe99-8b82-4720-82c5-c940100859ad\") " pod="openshift-route-controller-manager/route-controller-manager-6d496c4846-ggb6r" Mar 12 13:15:51 crc kubenswrapper[4778]: I0312 13:15:51.894785 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6eebe99-8b82-4720-82c5-c940100859ad-client-ca\") pod \"route-controller-manager-6d496c4846-ggb6r\" (UID: \"f6eebe99-8b82-4720-82c5-c940100859ad\") " pod="openshift-route-controller-manager/route-controller-manager-6d496c4846-ggb6r" Mar 12 13:15:51 crc kubenswrapper[4778]: I0312 13:15:51.895310 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6eebe99-8b82-4720-82c5-c940100859ad-config\") pod \"route-controller-manager-6d496c4846-ggb6r\" (UID: \"f6eebe99-8b82-4720-82c5-c940100859ad\") " pod="openshift-route-controller-manager/route-controller-manager-6d496c4846-ggb6r" Mar 12 13:15:51 crc kubenswrapper[4778]: I0312 13:15:51.898773 4778 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6eebe99-8b82-4720-82c5-c940100859ad-serving-cert\") pod \"route-controller-manager-6d496c4846-ggb6r\" (UID: \"f6eebe99-8b82-4720-82c5-c940100859ad\") " pod="openshift-route-controller-manager/route-controller-manager-6d496c4846-ggb6r" Mar 12 13:15:51 crc kubenswrapper[4778]: I0312 13:15:51.916679 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qc82s\" (UniqueName: \"kubernetes.io/projected/f6eebe99-8b82-4720-82c5-c940100859ad-kube-api-access-qc82s\") pod \"route-controller-manager-6d496c4846-ggb6r\" (UID: \"f6eebe99-8b82-4720-82c5-c940100859ad\") " pod="openshift-route-controller-manager/route-controller-manager-6d496c4846-ggb6r" Mar 12 13:15:52 crc kubenswrapper[4778]: I0312 13:15:52.052879 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d496c4846-ggb6r" Mar 12 13:15:52 crc kubenswrapper[4778]: I0312 13:15:52.268487 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97d6bd11-6c53-46a2-a38f-65672b1cc83f" path="/var/lib/kubelet/pods/97d6bd11-6c53-46a2-a38f-65672b1cc83f/volumes" Mar 12 13:15:52 crc kubenswrapper[4778]: I0312 13:15:52.269725 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e54ca63f-0568-4b0f-aa00-a726ead780cc" path="/var/lib/kubelet/pods/e54ca63f-0568-4b0f-aa00-a726ead780cc/volumes" Mar 12 13:15:52 crc kubenswrapper[4778]: I0312 13:15:52.270420 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d496c4846-ggb6r"] Mar 12 13:15:52 crc kubenswrapper[4778]: W0312 13:15:52.270344 4778 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6eebe99_8b82_4720_82c5_c940100859ad.slice/crio-ef118b4d4b8450e0cd8d72c97dec0a0efe275f77e1d6ae4557340cc05a492700 WatchSource:0}: Error finding container ef118b4d4b8450e0cd8d72c97dec0a0efe275f77e1d6ae4557340cc05a492700: Status 404 returned error can't find the container with id ef118b4d4b8450e0cd8d72c97dec0a0efe275f77e1d6ae4557340cc05a492700 Mar 12 13:15:52 crc kubenswrapper[4778]: I0312 13:15:52.467241 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d496c4846-ggb6r" event={"ID":"f6eebe99-8b82-4720-82c5-c940100859ad","Type":"ContainerStarted","Data":"5e8409c01768716b0eb390d5602b10ae0c1d9381bd08aae1e8d64b1f9635a1eb"} Mar 12 13:15:52 crc kubenswrapper[4778]: I0312 13:15:52.467278 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d496c4846-ggb6r" event={"ID":"f6eebe99-8b82-4720-82c5-c940100859ad","Type":"ContainerStarted","Data":"ef118b4d4b8450e0cd8d72c97dec0a0efe275f77e1d6ae4557340cc05a492700"} Mar 12 13:15:52 crc kubenswrapper[4778]: I0312 13:15:52.468156 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6d496c4846-ggb6r" Mar 12 13:15:52 crc kubenswrapper[4778]: I0312 13:15:52.470507 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f8c4b6bf8-n4jk6" event={"ID":"c8083240-e16c-40da-9f87-9db45cebfafd","Type":"ContainerStarted","Data":"e41fa41e2ced748ad127ed8555daf5859b90c9e62957331490bce01447075c68"} Mar 12 13:15:52 crc kubenswrapper[4778]: I0312 13:15:52.471079 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-f8c4b6bf8-n4jk6" Mar 12 13:15:52 crc kubenswrapper[4778]: I0312 13:15:52.475170 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-controller-manager/controller-manager-f8c4b6bf8-n4jk6" Mar 12 13:15:52 crc kubenswrapper[4778]: I0312 13:15:52.491410 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6d496c4846-ggb6r" podStartSLOduration=3.4913423740000002 podStartE2EDuration="3.491342374s" podCreationTimestamp="2026-03-12 13:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:15:52.487304711 +0000 UTC m=+370.936000107" watchObservedRunningTime="2026-03-12 13:15:52.491342374 +0000 UTC m=+370.940037780" Mar 12 13:15:52 crc kubenswrapper[4778]: I0312 13:15:52.508367 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-f8c4b6bf8-n4jk6" podStartSLOduration=3.5083451820000002 podStartE2EDuration="3.508345182s" podCreationTimestamp="2026-03-12 13:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:15:52.503496104 +0000 UTC m=+370.952191500" watchObservedRunningTime="2026-03-12 13:15:52.508345182 +0000 UTC m=+370.957040578" Mar 12 13:15:52 crc kubenswrapper[4778]: I0312 13:15:52.811264 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6d496c4846-ggb6r" Mar 12 13:15:55 crc kubenswrapper[4778]: I0312 13:15:55.547997 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-f8c4b6bf8-n4jk6"] Mar 12 13:15:55 crc kubenswrapper[4778]: I0312 13:15:55.548594 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-f8c4b6bf8-n4jk6" podUID="c8083240-e16c-40da-9f87-9db45cebfafd" containerName="controller-manager" 
containerID="cri-o://e41fa41e2ced748ad127ed8555daf5859b90c9e62957331490bce01447075c68" gracePeriod=30 Mar 12 13:15:55 crc kubenswrapper[4778]: I0312 13:15:55.556126 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d496c4846-ggb6r"] Mar 12 13:15:55 crc kubenswrapper[4778]: I0312 13:15:55.556355 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6d496c4846-ggb6r" podUID="f6eebe99-8b82-4720-82c5-c940100859ad" containerName="route-controller-manager" containerID="cri-o://5e8409c01768716b0eb390d5602b10ae0c1d9381bd08aae1e8d64b1f9635a1eb" gracePeriod=30 Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.060640 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d496c4846-ggb6r" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.097105 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-f8c4b6bf8-n4jk6" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.146693 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qc82s\" (UniqueName: \"kubernetes.io/projected/f6eebe99-8b82-4720-82c5-c940100859ad-kube-api-access-qc82s\") pod \"f6eebe99-8b82-4720-82c5-c940100859ad\" (UID: \"f6eebe99-8b82-4720-82c5-c940100859ad\") " Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.146785 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6eebe99-8b82-4720-82c5-c940100859ad-config\") pod \"f6eebe99-8b82-4720-82c5-c940100859ad\" (UID: \"f6eebe99-8b82-4720-82c5-c940100859ad\") " Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.146852 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8083240-e16c-40da-9f87-9db45cebfafd-serving-cert\") pod \"c8083240-e16c-40da-9f87-9db45cebfafd\" (UID: \"c8083240-e16c-40da-9f87-9db45cebfafd\") " Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.146884 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6eebe99-8b82-4720-82c5-c940100859ad-client-ca\") pod \"f6eebe99-8b82-4720-82c5-c940100859ad\" (UID: \"f6eebe99-8b82-4720-82c5-c940100859ad\") " Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.146898 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6eebe99-8b82-4720-82c5-c940100859ad-serving-cert\") pod \"f6eebe99-8b82-4720-82c5-c940100859ad\" (UID: \"f6eebe99-8b82-4720-82c5-c940100859ad\") " Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.146932 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c8083240-e16c-40da-9f87-9db45cebfafd-proxy-ca-bundles\") pod \"c8083240-e16c-40da-9f87-9db45cebfafd\" (UID: \"c8083240-e16c-40da-9f87-9db45cebfafd\") " Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.146953 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8083240-e16c-40da-9f87-9db45cebfafd-config\") pod \"c8083240-e16c-40da-9f87-9db45cebfafd\" (UID: \"c8083240-e16c-40da-9f87-9db45cebfafd\") " Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.146997 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c8083240-e16c-40da-9f87-9db45cebfafd-client-ca\") pod \"c8083240-e16c-40da-9f87-9db45cebfafd\" (UID: \"c8083240-e16c-40da-9f87-9db45cebfafd\") " Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.147019 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jktc4\" (UniqueName: \"kubernetes.io/projected/c8083240-e16c-40da-9f87-9db45cebfafd-kube-api-access-jktc4\") pod \"c8083240-e16c-40da-9f87-9db45cebfafd\" (UID: \"c8083240-e16c-40da-9f87-9db45cebfafd\") " Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.147500 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6eebe99-8b82-4720-82c5-c940100859ad-client-ca" (OuterVolumeSpecName: "client-ca") pod "f6eebe99-8b82-4720-82c5-c940100859ad" (UID: "f6eebe99-8b82-4720-82c5-c940100859ad"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.147552 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6eebe99-8b82-4720-82c5-c940100859ad-config" (OuterVolumeSpecName: "config") pod "f6eebe99-8b82-4720-82c5-c940100859ad" (UID: "f6eebe99-8b82-4720-82c5-c940100859ad"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.147820 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8083240-e16c-40da-9f87-9db45cebfafd-config" (OuterVolumeSpecName: "config") pod "c8083240-e16c-40da-9f87-9db45cebfafd" (UID: "c8083240-e16c-40da-9f87-9db45cebfafd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.147970 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8083240-e16c-40da-9f87-9db45cebfafd-client-ca" (OuterVolumeSpecName: "client-ca") pod "c8083240-e16c-40da-9f87-9db45cebfafd" (UID: "c8083240-e16c-40da-9f87-9db45cebfafd"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.149336 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8083240-e16c-40da-9f87-9db45cebfafd-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c8083240-e16c-40da-9f87-9db45cebfafd" (UID: "c8083240-e16c-40da-9f87-9db45cebfafd"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.151602 4778 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c8083240-e16c-40da-9f87-9db45cebfafd-client-ca\") on node \"crc\" DevicePath \"\"" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.152460 4778 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6eebe99-8b82-4720-82c5-c940100859ad-config\") on node \"crc\" DevicePath \"\"" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.152471 4778 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6eebe99-8b82-4720-82c5-c940100859ad-client-ca\") on node \"crc\" DevicePath \"\"" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.152479 4778 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c8083240-e16c-40da-9f87-9db45cebfafd-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.152491 4778 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8083240-e16c-40da-9f87-9db45cebfafd-config\") on node \"crc\" DevicePath \"\"" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.151784 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8083240-e16c-40da-9f87-9db45cebfafd-kube-api-access-jktc4" (OuterVolumeSpecName: "kube-api-access-jktc4") pod "c8083240-e16c-40da-9f87-9db45cebfafd" (UID: "c8083240-e16c-40da-9f87-9db45cebfafd"). InnerVolumeSpecName "kube-api-access-jktc4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.151802 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6eebe99-8b82-4720-82c5-c940100859ad-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f6eebe99-8b82-4720-82c5-c940100859ad" (UID: "f6eebe99-8b82-4720-82c5-c940100859ad"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.151834 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6eebe99-8b82-4720-82c5-c940100859ad-kube-api-access-qc82s" (OuterVolumeSpecName: "kube-api-access-qc82s") pod "f6eebe99-8b82-4720-82c5-c940100859ad" (UID: "f6eebe99-8b82-4720-82c5-c940100859ad"). InnerVolumeSpecName "kube-api-access-qc82s". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.151973 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8083240-e16c-40da-9f87-9db45cebfafd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c8083240-e16c-40da-9f87-9db45cebfafd" (UID: "c8083240-e16c-40da-9f87-9db45cebfafd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.254060 4778 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6eebe99-8b82-4720-82c5-c940100859ad-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.254098 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jktc4\" (UniqueName: \"kubernetes.io/projected/c8083240-e16c-40da-9f87-9db45cebfafd-kube-api-access-jktc4\") on node \"crc\" DevicePath \"\"" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.254109 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qc82s\" (UniqueName: \"kubernetes.io/projected/f6eebe99-8b82-4720-82c5-c940100859ad-kube-api-access-qc82s\") on node \"crc\" DevicePath \"\"" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.254117 4778 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8083240-e16c-40da-9f87-9db45cebfafd-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.499164 4778 generic.go:334] "Generic (PLEG): container finished" podID="f6eebe99-8b82-4720-82c5-c940100859ad" containerID="5e8409c01768716b0eb390d5602b10ae0c1d9381bd08aae1e8d64b1f9635a1eb" exitCode=0 Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.499268 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d496c4846-ggb6r" event={"ID":"f6eebe99-8b82-4720-82c5-c940100859ad","Type":"ContainerDied","Data":"5e8409c01768716b0eb390d5602b10ae0c1d9381bd08aae1e8d64b1f9635a1eb"} Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.499284 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d496c4846-ggb6r" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.499307 4778 scope.go:117] "RemoveContainer" containerID="5e8409c01768716b0eb390d5602b10ae0c1d9381bd08aae1e8d64b1f9635a1eb" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.499296 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d496c4846-ggb6r" event={"ID":"f6eebe99-8b82-4720-82c5-c940100859ad","Type":"ContainerDied","Data":"ef118b4d4b8450e0cd8d72c97dec0a0efe275f77e1d6ae4557340cc05a492700"} Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.502010 4778 generic.go:334] "Generic (PLEG): container finished" podID="c8083240-e16c-40da-9f87-9db45cebfafd" containerID="e41fa41e2ced748ad127ed8555daf5859b90c9e62957331490bce01447075c68" exitCode=0 Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.502031 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f8c4b6bf8-n4jk6" event={"ID":"c8083240-e16c-40da-9f87-9db45cebfafd","Type":"ContainerDied","Data":"e41fa41e2ced748ad127ed8555daf5859b90c9e62957331490bce01447075c68"} Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.502045 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f8c4b6bf8-n4jk6" event={"ID":"c8083240-e16c-40da-9f87-9db45cebfafd","Type":"ContainerDied","Data":"610f9e9935be3de3cd47d5e81cba88f5a4f3f12d0d389042b1b5255df2fa7476"} Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.502101 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-f8c4b6bf8-n4jk6" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.524908 4778 scope.go:117] "RemoveContainer" containerID="5e8409c01768716b0eb390d5602b10ae0c1d9381bd08aae1e8d64b1f9635a1eb" Mar 12 13:15:56 crc kubenswrapper[4778]: E0312 13:15:56.525507 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e8409c01768716b0eb390d5602b10ae0c1d9381bd08aae1e8d64b1f9635a1eb\": container with ID starting with 5e8409c01768716b0eb390d5602b10ae0c1d9381bd08aae1e8d64b1f9635a1eb not found: ID does not exist" containerID="5e8409c01768716b0eb390d5602b10ae0c1d9381bd08aae1e8d64b1f9635a1eb" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.525558 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e8409c01768716b0eb390d5602b10ae0c1d9381bd08aae1e8d64b1f9635a1eb"} err="failed to get container status \"5e8409c01768716b0eb390d5602b10ae0c1d9381bd08aae1e8d64b1f9635a1eb\": rpc error: code = NotFound desc = could not find container \"5e8409c01768716b0eb390d5602b10ae0c1d9381bd08aae1e8d64b1f9635a1eb\": container with ID starting with 5e8409c01768716b0eb390d5602b10ae0c1d9381bd08aae1e8d64b1f9635a1eb not found: ID does not exist" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.525593 4778 scope.go:117] "RemoveContainer" containerID="e41fa41e2ced748ad127ed8555daf5859b90c9e62957331490bce01447075c68" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.527206 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-f8c4b6bf8-n4jk6"] Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.534298 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-f8c4b6bf8-n4jk6"] Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.537805 4778 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d496c4846-ggb6r"] Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.541515 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d496c4846-ggb6r"] Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.549560 4778 scope.go:117] "RemoveContainer" containerID="e41fa41e2ced748ad127ed8555daf5859b90c9e62957331490bce01447075c68" Mar 12 13:15:56 crc kubenswrapper[4778]: E0312 13:15:56.549988 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e41fa41e2ced748ad127ed8555daf5859b90c9e62957331490bce01447075c68\": container with ID starting with e41fa41e2ced748ad127ed8555daf5859b90c9e62957331490bce01447075c68 not found: ID does not exist" containerID="e41fa41e2ced748ad127ed8555daf5859b90c9e62957331490bce01447075c68" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.550032 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e41fa41e2ced748ad127ed8555daf5859b90c9e62957331490bce01447075c68"} err="failed to get container status \"e41fa41e2ced748ad127ed8555daf5859b90c9e62957331490bce01447075c68\": rpc error: code = NotFound desc = could not find container \"e41fa41e2ced748ad127ed8555daf5859b90c9e62957331490bce01447075c68\": container with ID starting with e41fa41e2ced748ad127ed8555daf5859b90c9e62957331490bce01447075c68 not found: ID does not exist" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.681573 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7f558b8664-wcwww"] Mar 12 13:15:56 crc kubenswrapper[4778]: E0312 13:15:56.682021 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6eebe99-8b82-4720-82c5-c940100859ad" containerName="route-controller-manager" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 
13:15:56.682047 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6eebe99-8b82-4720-82c5-c940100859ad" containerName="route-controller-manager" Mar 12 13:15:56 crc kubenswrapper[4778]: E0312 13:15:56.682074 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8083240-e16c-40da-9f87-9db45cebfafd" containerName="controller-manager" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.682091 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8083240-e16c-40da-9f87-9db45cebfafd" containerName="controller-manager" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.682335 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6eebe99-8b82-4720-82c5-c940100859ad" containerName="route-controller-manager" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.682374 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8083240-e16c-40da-9f87-9db45cebfafd" containerName="controller-manager" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.683242 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7f558b8664-wcwww" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.687170 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.688939 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.699043 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.699282 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.699478 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.700113 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.700891 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.709622 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7864ddbcd6-8t87n"] Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.711239 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7864ddbcd6-8t87n" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.715860 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.715912 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.716521 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.716533 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.716879 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.716931 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.718746 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7864ddbcd6-8t87n"] Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.730418 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f558b8664-wcwww"] Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.761414 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03ce725f-d022-4fe6-9fd8-d61f4bec2ad6-config\") pod \"route-controller-manager-7864ddbcd6-8t87n\" (UID: \"03ce725f-d022-4fe6-9fd8-d61f4bec2ad6\") 
" pod="openshift-route-controller-manager/route-controller-manager-7864ddbcd6-8t87n" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.761491 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81f2c27d-f47b-4bcb-81af-749dd8f6d053-config\") pod \"controller-manager-7f558b8664-wcwww\" (UID: \"81f2c27d-f47b-4bcb-81af-749dd8f6d053\") " pod="openshift-controller-manager/controller-manager-7f558b8664-wcwww" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.761524 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03ce725f-d022-4fe6-9fd8-d61f4bec2ad6-serving-cert\") pod \"route-controller-manager-7864ddbcd6-8t87n\" (UID: \"03ce725f-d022-4fe6-9fd8-d61f4bec2ad6\") " pod="openshift-route-controller-manager/route-controller-manager-7864ddbcd6-8t87n" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.761592 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/81f2c27d-f47b-4bcb-81af-749dd8f6d053-proxy-ca-bundles\") pod \"controller-manager-7f558b8664-wcwww\" (UID: \"81f2c27d-f47b-4bcb-81af-749dd8f6d053\") " pod="openshift-controller-manager/controller-manager-7f558b8664-wcwww" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.761618 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81f2c27d-f47b-4bcb-81af-749dd8f6d053-serving-cert\") pod \"controller-manager-7f558b8664-wcwww\" (UID: \"81f2c27d-f47b-4bcb-81af-749dd8f6d053\") " pod="openshift-controller-manager/controller-manager-7f558b8664-wcwww" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.761641 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-sm445\" (UniqueName: \"kubernetes.io/projected/03ce725f-d022-4fe6-9fd8-d61f4bec2ad6-kube-api-access-sm445\") pod \"route-controller-manager-7864ddbcd6-8t87n\" (UID: \"03ce725f-d022-4fe6-9fd8-d61f4bec2ad6\") " pod="openshift-route-controller-manager/route-controller-manager-7864ddbcd6-8t87n" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.761667 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnhvm\" (UniqueName: \"kubernetes.io/projected/81f2c27d-f47b-4bcb-81af-749dd8f6d053-kube-api-access-vnhvm\") pod \"controller-manager-7f558b8664-wcwww\" (UID: \"81f2c27d-f47b-4bcb-81af-749dd8f6d053\") " pod="openshift-controller-manager/controller-manager-7f558b8664-wcwww" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.761698 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81f2c27d-f47b-4bcb-81af-749dd8f6d053-client-ca\") pod \"controller-manager-7f558b8664-wcwww\" (UID: \"81f2c27d-f47b-4bcb-81af-749dd8f6d053\") " pod="openshift-controller-manager/controller-manager-7f558b8664-wcwww" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.761720 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/03ce725f-d022-4fe6-9fd8-d61f4bec2ad6-client-ca\") pod \"route-controller-manager-7864ddbcd6-8t87n\" (UID: \"03ce725f-d022-4fe6-9fd8-d61f4bec2ad6\") " pod="openshift-route-controller-manager/route-controller-manager-7864ddbcd6-8t87n" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.862513 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81f2c27d-f47b-4bcb-81af-749dd8f6d053-config\") pod \"controller-manager-7f558b8664-wcwww\" (UID: \"81f2c27d-f47b-4bcb-81af-749dd8f6d053\") " 
pod="openshift-controller-manager/controller-manager-7f558b8664-wcwww" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.862610 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03ce725f-d022-4fe6-9fd8-d61f4bec2ad6-serving-cert\") pod \"route-controller-manager-7864ddbcd6-8t87n\" (UID: \"03ce725f-d022-4fe6-9fd8-d61f4bec2ad6\") " pod="openshift-route-controller-manager/route-controller-manager-7864ddbcd6-8t87n" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.862690 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/81f2c27d-f47b-4bcb-81af-749dd8f6d053-proxy-ca-bundles\") pod \"controller-manager-7f558b8664-wcwww\" (UID: \"81f2c27d-f47b-4bcb-81af-749dd8f6d053\") " pod="openshift-controller-manager/controller-manager-7f558b8664-wcwww" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.862740 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81f2c27d-f47b-4bcb-81af-749dd8f6d053-serving-cert\") pod \"controller-manager-7f558b8664-wcwww\" (UID: \"81f2c27d-f47b-4bcb-81af-749dd8f6d053\") " pod="openshift-controller-manager/controller-manager-7f558b8664-wcwww" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.862797 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm445\" (UniqueName: \"kubernetes.io/projected/03ce725f-d022-4fe6-9fd8-d61f4bec2ad6-kube-api-access-sm445\") pod \"route-controller-manager-7864ddbcd6-8t87n\" (UID: \"03ce725f-d022-4fe6-9fd8-d61f4bec2ad6\") " pod="openshift-route-controller-manager/route-controller-manager-7864ddbcd6-8t87n" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.862857 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnhvm\" (UniqueName: 
\"kubernetes.io/projected/81f2c27d-f47b-4bcb-81af-749dd8f6d053-kube-api-access-vnhvm\") pod \"controller-manager-7f558b8664-wcwww\" (UID: \"81f2c27d-f47b-4bcb-81af-749dd8f6d053\") " pod="openshift-controller-manager/controller-manager-7f558b8664-wcwww" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.862921 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81f2c27d-f47b-4bcb-81af-749dd8f6d053-client-ca\") pod \"controller-manager-7f558b8664-wcwww\" (UID: \"81f2c27d-f47b-4bcb-81af-749dd8f6d053\") " pod="openshift-controller-manager/controller-manager-7f558b8664-wcwww" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.862970 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/03ce725f-d022-4fe6-9fd8-d61f4bec2ad6-client-ca\") pod \"route-controller-manager-7864ddbcd6-8t87n\" (UID: \"03ce725f-d022-4fe6-9fd8-d61f4bec2ad6\") " pod="openshift-route-controller-manager/route-controller-manager-7864ddbcd6-8t87n" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.863082 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03ce725f-d022-4fe6-9fd8-d61f4bec2ad6-config\") pod \"route-controller-manager-7864ddbcd6-8t87n\" (UID: \"03ce725f-d022-4fe6-9fd8-d61f4bec2ad6\") " pod="openshift-route-controller-manager/route-controller-manager-7864ddbcd6-8t87n" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.864776 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81f2c27d-f47b-4bcb-81af-749dd8f6d053-client-ca\") pod \"controller-manager-7f558b8664-wcwww\" (UID: \"81f2c27d-f47b-4bcb-81af-749dd8f6d053\") " pod="openshift-controller-manager/controller-manager-7f558b8664-wcwww" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.864992 4778 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/03ce725f-d022-4fe6-9fd8-d61f4bec2ad6-client-ca\") pod \"route-controller-manager-7864ddbcd6-8t87n\" (UID: \"03ce725f-d022-4fe6-9fd8-d61f4bec2ad6\") " pod="openshift-route-controller-manager/route-controller-manager-7864ddbcd6-8t87n" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.865438 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81f2c27d-f47b-4bcb-81af-749dd8f6d053-config\") pod \"controller-manager-7f558b8664-wcwww\" (UID: \"81f2c27d-f47b-4bcb-81af-749dd8f6d053\") " pod="openshift-controller-manager/controller-manager-7f558b8664-wcwww" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.865710 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03ce725f-d022-4fe6-9fd8-d61f4bec2ad6-config\") pod \"route-controller-manager-7864ddbcd6-8t87n\" (UID: \"03ce725f-d022-4fe6-9fd8-d61f4bec2ad6\") " pod="openshift-route-controller-manager/route-controller-manager-7864ddbcd6-8t87n" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.866347 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/81f2c27d-f47b-4bcb-81af-749dd8f6d053-proxy-ca-bundles\") pod \"controller-manager-7f558b8664-wcwww\" (UID: \"81f2c27d-f47b-4bcb-81af-749dd8f6d053\") " pod="openshift-controller-manager/controller-manager-7f558b8664-wcwww" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.868634 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03ce725f-d022-4fe6-9fd8-d61f4bec2ad6-serving-cert\") pod \"route-controller-manager-7864ddbcd6-8t87n\" (UID: \"03ce725f-d022-4fe6-9fd8-d61f4bec2ad6\") " 
pod="openshift-route-controller-manager/route-controller-manager-7864ddbcd6-8t87n" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.870545 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81f2c27d-f47b-4bcb-81af-749dd8f6d053-serving-cert\") pod \"controller-manager-7f558b8664-wcwww\" (UID: \"81f2c27d-f47b-4bcb-81af-749dd8f6d053\") " pod="openshift-controller-manager/controller-manager-7f558b8664-wcwww" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.886377 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm445\" (UniqueName: \"kubernetes.io/projected/03ce725f-d022-4fe6-9fd8-d61f4bec2ad6-kube-api-access-sm445\") pod \"route-controller-manager-7864ddbcd6-8t87n\" (UID: \"03ce725f-d022-4fe6-9fd8-d61f4bec2ad6\") " pod="openshift-route-controller-manager/route-controller-manager-7864ddbcd6-8t87n" Mar 12 13:15:56 crc kubenswrapper[4778]: I0312 13:15:56.886820 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnhvm\" (UniqueName: \"kubernetes.io/projected/81f2c27d-f47b-4bcb-81af-749dd8f6d053-kube-api-access-vnhvm\") pod \"controller-manager-7f558b8664-wcwww\" (UID: \"81f2c27d-f47b-4bcb-81af-749dd8f6d053\") " pod="openshift-controller-manager/controller-manager-7f558b8664-wcwww" Mar 12 13:15:57 crc kubenswrapper[4778]: I0312 13:15:57.033717 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f558b8664-wcwww" Mar 12 13:15:57 crc kubenswrapper[4778]: I0312 13:15:57.041823 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7864ddbcd6-8t87n" Mar 12 13:15:57 crc kubenswrapper[4778]: I0312 13:15:57.466003 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f558b8664-wcwww"] Mar 12 13:15:57 crc kubenswrapper[4778]: I0312 13:15:57.511303 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7864ddbcd6-8t87n"] Mar 12 13:15:57 crc kubenswrapper[4778]: W0312 13:15:57.513904 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03ce725f_d022_4fe6_9fd8_d61f4bec2ad6.slice/crio-f2dfa420c929336f07fb954275af02bba9b37d1b1afb3796a9bd8590cd2100e8 WatchSource:0}: Error finding container f2dfa420c929336f07fb954275af02bba9b37d1b1afb3796a9bd8590cd2100e8: Status 404 returned error can't find the container with id f2dfa420c929336f07fb954275af02bba9b37d1b1afb3796a9bd8590cd2100e8 Mar 12 13:15:57 crc kubenswrapper[4778]: I0312 13:15:57.519422 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f558b8664-wcwww" event={"ID":"81f2c27d-f47b-4bcb-81af-749dd8f6d053","Type":"ContainerStarted","Data":"6d515fcf6020e53c6189e977403cfe43fcda12d2f5abd6575280ee4b45363384"} Mar 12 13:15:58 crc kubenswrapper[4778]: I0312 13:15:58.264622 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8083240-e16c-40da-9f87-9db45cebfafd" path="/var/lib/kubelet/pods/c8083240-e16c-40da-9f87-9db45cebfafd/volumes" Mar 12 13:15:58 crc kubenswrapper[4778]: I0312 13:15:58.265781 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6eebe99-8b82-4720-82c5-c940100859ad" path="/var/lib/kubelet/pods/f6eebe99-8b82-4720-82c5-c940100859ad/volumes" Mar 12 13:15:58 crc kubenswrapper[4778]: I0312 13:15:58.526891 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-7f558b8664-wcwww" event={"ID":"81f2c27d-f47b-4bcb-81af-749dd8f6d053","Type":"ContainerStarted","Data":"f2e5031b27b99dfbb22f5b0690baf59159234c18ae5db2065cb1c9e1a7bfc783"} Mar 12 13:15:58 crc kubenswrapper[4778]: I0312 13:15:58.527319 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7f558b8664-wcwww" Mar 12 13:15:58 crc kubenswrapper[4778]: I0312 13:15:58.528958 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7864ddbcd6-8t87n" event={"ID":"03ce725f-d022-4fe6-9fd8-d61f4bec2ad6","Type":"ContainerStarted","Data":"2cab64649829ce76f15124685e98299c4b83afa38015be406f773626bc1243fe"} Mar 12 13:15:58 crc kubenswrapper[4778]: I0312 13:15:58.529047 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7864ddbcd6-8t87n" event={"ID":"03ce725f-d022-4fe6-9fd8-d61f4bec2ad6","Type":"ContainerStarted","Data":"f2dfa420c929336f07fb954275af02bba9b37d1b1afb3796a9bd8590cd2100e8"} Mar 12 13:15:58 crc kubenswrapper[4778]: I0312 13:15:58.529176 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7864ddbcd6-8t87n" Mar 12 13:15:58 crc kubenswrapper[4778]: I0312 13:15:58.533391 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7f558b8664-wcwww" Mar 12 13:15:58 crc kubenswrapper[4778]: I0312 13:15:58.534291 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7864ddbcd6-8t87n" Mar 12 13:15:58 crc kubenswrapper[4778]: I0312 13:15:58.549080 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7f558b8664-wcwww" 
podStartSLOduration=3.549055172 podStartE2EDuration="3.549055172s" podCreationTimestamp="2026-03-12 13:15:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:15:58.545585396 +0000 UTC m=+376.994280802" watchObservedRunningTime="2026-03-12 13:15:58.549055172 +0000 UTC m=+376.997750568" Mar 12 13:15:58 crc kubenswrapper[4778]: I0312 13:15:58.557993 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 13:15:58 crc kubenswrapper[4778]: I0312 13:15:58.558089 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 13:15:58 crc kubenswrapper[4778]: I0312 13:15:58.573252 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7864ddbcd6-8t87n" podStartSLOduration=3.573230929 podStartE2EDuration="3.573230929s" podCreationTimestamp="2026-03-12 13:15:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:15:58.572441195 +0000 UTC m=+377.021136591" watchObservedRunningTime="2026-03-12 13:15:58.573230929 +0000 UTC m=+377.021926325" Mar 12 13:16:00 crc kubenswrapper[4778]: I0312 13:16:00.137674 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555356-cdmcz"] Mar 12 13:16:00 crc kubenswrapper[4778]: I0312 13:16:00.138359 4778 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555356-cdmcz" Mar 12 13:16:00 crc kubenswrapper[4778]: I0312 13:16:00.144717 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 13:16:00 crc kubenswrapper[4778]: I0312 13:16:00.144886 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 13:16:00 crc kubenswrapper[4778]: I0312 13:16:00.147446 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 13:16:00 crc kubenswrapper[4778]: I0312 13:16:00.151506 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555356-cdmcz"] Mar 12 13:16:00 crc kubenswrapper[4778]: I0312 13:16:00.206787 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8btl\" (UniqueName: \"kubernetes.io/projected/c792e81a-8273-49a7-be95-c8c19cd2785b-kube-api-access-v8btl\") pod \"auto-csr-approver-29555356-cdmcz\" (UID: \"c792e81a-8273-49a7-be95-c8c19cd2785b\") " pod="openshift-infra/auto-csr-approver-29555356-cdmcz" Mar 12 13:16:00 crc kubenswrapper[4778]: I0312 13:16:00.308095 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8btl\" (UniqueName: \"kubernetes.io/projected/c792e81a-8273-49a7-be95-c8c19cd2785b-kube-api-access-v8btl\") pod \"auto-csr-approver-29555356-cdmcz\" (UID: \"c792e81a-8273-49a7-be95-c8c19cd2785b\") " pod="openshift-infra/auto-csr-approver-29555356-cdmcz" Mar 12 13:16:00 crc kubenswrapper[4778]: I0312 13:16:00.328114 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8btl\" (UniqueName: \"kubernetes.io/projected/c792e81a-8273-49a7-be95-c8c19cd2785b-kube-api-access-v8btl\") pod \"auto-csr-approver-29555356-cdmcz\" (UID: 
\"c792e81a-8273-49a7-be95-c8c19cd2785b\") " pod="openshift-infra/auto-csr-approver-29555356-cdmcz" Mar 12 13:16:00 crc kubenswrapper[4778]: I0312 13:16:00.461744 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555356-cdmcz" Mar 12 13:16:00 crc kubenswrapper[4778]: I0312 13:16:00.872550 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555356-cdmcz"] Mar 12 13:16:00 crc kubenswrapper[4778]: W0312 13:16:00.889405 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc792e81a_8273_49a7_be95_c8c19cd2785b.slice/crio-0325104c586ee885bb03aa8d7f2350ccf7dd9664fad4b2303bbd38426cc6f204 WatchSource:0}: Error finding container 0325104c586ee885bb03aa8d7f2350ccf7dd9664fad4b2303bbd38426cc6f204: Status 404 returned error can't find the container with id 0325104c586ee885bb03aa8d7f2350ccf7dd9664fad4b2303bbd38426cc6f204 Mar 12 13:16:01 crc kubenswrapper[4778]: I0312 13:16:01.548388 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555356-cdmcz" event={"ID":"c792e81a-8273-49a7-be95-c8c19cd2785b","Type":"ContainerStarted","Data":"0325104c586ee885bb03aa8d7f2350ccf7dd9664fad4b2303bbd38426cc6f204"} Mar 12 13:16:03 crc kubenswrapper[4778]: I0312 13:16:03.559158 4778 generic.go:334] "Generic (PLEG): container finished" podID="c792e81a-8273-49a7-be95-c8c19cd2785b" containerID="b6d55e4553c4a90b5714d39c88d9e361c3f3109a89cdbda1980233a5b1fade38" exitCode=0 Mar 12 13:16:03 crc kubenswrapper[4778]: I0312 13:16:03.559219 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555356-cdmcz" event={"ID":"c792e81a-8273-49a7-be95-c8c19cd2785b","Type":"ContainerDied","Data":"b6d55e4553c4a90b5714d39c88d9e361c3f3109a89cdbda1980233a5b1fade38"} Mar 12 13:16:04 crc kubenswrapper[4778]: I0312 13:16:04.879095 4778 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555356-cdmcz" Mar 12 13:16:05 crc kubenswrapper[4778]: I0312 13:16:05.068035 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8btl\" (UniqueName: \"kubernetes.io/projected/c792e81a-8273-49a7-be95-c8c19cd2785b-kube-api-access-v8btl\") pod \"c792e81a-8273-49a7-be95-c8c19cd2785b\" (UID: \"c792e81a-8273-49a7-be95-c8c19cd2785b\") " Mar 12 13:16:05 crc kubenswrapper[4778]: I0312 13:16:05.072640 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c792e81a-8273-49a7-be95-c8c19cd2785b-kube-api-access-v8btl" (OuterVolumeSpecName: "kube-api-access-v8btl") pod "c792e81a-8273-49a7-be95-c8c19cd2785b" (UID: "c792e81a-8273-49a7-be95-c8c19cd2785b"). InnerVolumeSpecName "kube-api-access-v8btl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:16:05 crc kubenswrapper[4778]: I0312 13:16:05.169578 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8btl\" (UniqueName: \"kubernetes.io/projected/c792e81a-8273-49a7-be95-c8c19cd2785b-kube-api-access-v8btl\") on node \"crc\" DevicePath \"\"" Mar 12 13:16:05 crc kubenswrapper[4778]: I0312 13:16:05.574763 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555356-cdmcz" event={"ID":"c792e81a-8273-49a7-be95-c8c19cd2785b","Type":"ContainerDied","Data":"0325104c586ee885bb03aa8d7f2350ccf7dd9664fad4b2303bbd38426cc6f204"} Mar 12 13:16:05 crc kubenswrapper[4778]: I0312 13:16:05.574814 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0325104c586ee885bb03aa8d7f2350ccf7dd9664fad4b2303bbd38426cc6f204" Mar 12 13:16:05 crc kubenswrapper[4778]: I0312 13:16:05.574904 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555356-cdmcz" Mar 12 13:16:19 crc kubenswrapper[4778]: I0312 13:16:19.029339 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l8n9b"] Mar 12 13:16:19 crc kubenswrapper[4778]: I0312 13:16:19.031347 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-l8n9b" podUID="c27afe2a-3402-49f9-b985-45fe67e40d22" containerName="registry-server" containerID="cri-o://3686a4e289950327029466c928723a8314f5dcaa797637ff0db63d9aa4aeb5db" gracePeriod=2 Mar 12 13:16:19 crc kubenswrapper[4778]: I0312 13:16:19.225938 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rtjz5"] Mar 12 13:16:19 crc kubenswrapper[4778]: I0312 13:16:19.226438 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rtjz5" podUID="b9bef112-9bef-4ce2-abd8-054b4d671658" containerName="registry-server" containerID="cri-o://3151ddc8cb64182fd7ccd241e4580f2e0243328e43f1e59366f60b980b160490" gracePeriod=2 Mar 12 13:16:19 crc kubenswrapper[4778]: I0312 13:16:19.664662 4778 generic.go:334] "Generic (PLEG): container finished" podID="b9bef112-9bef-4ce2-abd8-054b4d671658" containerID="3151ddc8cb64182fd7ccd241e4580f2e0243328e43f1e59366f60b980b160490" exitCode=0 Mar 12 13:16:19 crc kubenswrapper[4778]: I0312 13:16:19.664754 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rtjz5" event={"ID":"b9bef112-9bef-4ce2-abd8-054b4d671658","Type":"ContainerDied","Data":"3151ddc8cb64182fd7ccd241e4580f2e0243328e43f1e59366f60b980b160490"} Mar 12 13:16:19 crc kubenswrapper[4778]: I0312 13:16:19.670468 4778 generic.go:334] "Generic (PLEG): container finished" podID="c27afe2a-3402-49f9-b985-45fe67e40d22" containerID="3686a4e289950327029466c928723a8314f5dcaa797637ff0db63d9aa4aeb5db" exitCode=0 
Mar 12 13:16:19 crc kubenswrapper[4778]: I0312 13:16:19.670513 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l8n9b" event={"ID":"c27afe2a-3402-49f9-b985-45fe67e40d22","Type":"ContainerDied","Data":"3686a4e289950327029466c928723a8314f5dcaa797637ff0db63d9aa4aeb5db"} Mar 12 13:16:19 crc kubenswrapper[4778]: I0312 13:16:19.816757 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rtjz5" Mar 12 13:16:19 crc kubenswrapper[4778]: I0312 13:16:19.944135 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjm7s\" (UniqueName: \"kubernetes.io/projected/b9bef112-9bef-4ce2-abd8-054b4d671658-kube-api-access-gjm7s\") pod \"b9bef112-9bef-4ce2-abd8-054b4d671658\" (UID: \"b9bef112-9bef-4ce2-abd8-054b4d671658\") " Mar 12 13:16:19 crc kubenswrapper[4778]: I0312 13:16:19.944210 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9bef112-9bef-4ce2-abd8-054b4d671658-utilities\") pod \"b9bef112-9bef-4ce2-abd8-054b4d671658\" (UID: \"b9bef112-9bef-4ce2-abd8-054b4d671658\") " Mar 12 13:16:19 crc kubenswrapper[4778]: I0312 13:16:19.944277 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9bef112-9bef-4ce2-abd8-054b4d671658-catalog-content\") pod \"b9bef112-9bef-4ce2-abd8-054b4d671658\" (UID: \"b9bef112-9bef-4ce2-abd8-054b4d671658\") " Mar 12 13:16:19 crc kubenswrapper[4778]: I0312 13:16:19.945583 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9bef112-9bef-4ce2-abd8-054b4d671658-utilities" (OuterVolumeSpecName: "utilities") pod "b9bef112-9bef-4ce2-abd8-054b4d671658" (UID: "b9bef112-9bef-4ce2-abd8-054b4d671658"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:16:19 crc kubenswrapper[4778]: I0312 13:16:19.978382 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9bef112-9bef-4ce2-abd8-054b4d671658-kube-api-access-gjm7s" (OuterVolumeSpecName: "kube-api-access-gjm7s") pod "b9bef112-9bef-4ce2-abd8-054b4d671658" (UID: "b9bef112-9bef-4ce2-abd8-054b4d671658"). InnerVolumeSpecName "kube-api-access-gjm7s". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:16:20 crc kubenswrapper[4778]: I0312 13:16:20.002046 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9bef112-9bef-4ce2-abd8-054b4d671658-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b9bef112-9bef-4ce2-abd8-054b4d671658" (UID: "b9bef112-9bef-4ce2-abd8-054b4d671658"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:16:20 crc kubenswrapper[4778]: I0312 13:16:20.045540 4778 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9bef112-9bef-4ce2-abd8-054b4d671658-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 13:16:20 crc kubenswrapper[4778]: I0312 13:16:20.045574 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjm7s\" (UniqueName: \"kubernetes.io/projected/b9bef112-9bef-4ce2-abd8-054b4d671658-kube-api-access-gjm7s\") on node \"crc\" DevicePath \"\"" Mar 12 13:16:20 crc kubenswrapper[4778]: I0312 13:16:20.045588 4778 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9bef112-9bef-4ce2-abd8-054b4d671658-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 13:16:20 crc kubenswrapper[4778]: I0312 13:16:20.174599 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l8n9b" Mar 12 13:16:20 crc kubenswrapper[4778]: I0312 13:16:20.349419 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxz76\" (UniqueName: \"kubernetes.io/projected/c27afe2a-3402-49f9-b985-45fe67e40d22-kube-api-access-dxz76\") pod \"c27afe2a-3402-49f9-b985-45fe67e40d22\" (UID: \"c27afe2a-3402-49f9-b985-45fe67e40d22\") " Mar 12 13:16:20 crc kubenswrapper[4778]: I0312 13:16:20.349546 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c27afe2a-3402-49f9-b985-45fe67e40d22-utilities\") pod \"c27afe2a-3402-49f9-b985-45fe67e40d22\" (UID: \"c27afe2a-3402-49f9-b985-45fe67e40d22\") " Mar 12 13:16:20 crc kubenswrapper[4778]: I0312 13:16:20.349626 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c27afe2a-3402-49f9-b985-45fe67e40d22-catalog-content\") pod \"c27afe2a-3402-49f9-b985-45fe67e40d22\" (UID: \"c27afe2a-3402-49f9-b985-45fe67e40d22\") " Mar 12 13:16:20 crc kubenswrapper[4778]: I0312 13:16:20.350362 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c27afe2a-3402-49f9-b985-45fe67e40d22-utilities" (OuterVolumeSpecName: "utilities") pod "c27afe2a-3402-49f9-b985-45fe67e40d22" (UID: "c27afe2a-3402-49f9-b985-45fe67e40d22"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:16:20 crc kubenswrapper[4778]: I0312 13:16:20.352600 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c27afe2a-3402-49f9-b985-45fe67e40d22-kube-api-access-dxz76" (OuterVolumeSpecName: "kube-api-access-dxz76") pod "c27afe2a-3402-49f9-b985-45fe67e40d22" (UID: "c27afe2a-3402-49f9-b985-45fe67e40d22"). InnerVolumeSpecName "kube-api-access-dxz76". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:16:20 crc kubenswrapper[4778]: I0312 13:16:20.412923 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c27afe2a-3402-49f9-b985-45fe67e40d22-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c27afe2a-3402-49f9-b985-45fe67e40d22" (UID: "c27afe2a-3402-49f9-b985-45fe67e40d22"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:16:20 crc kubenswrapper[4778]: I0312 13:16:20.451210 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxz76\" (UniqueName: \"kubernetes.io/projected/c27afe2a-3402-49f9-b985-45fe67e40d22-kube-api-access-dxz76\") on node \"crc\" DevicePath \"\"" Mar 12 13:16:20 crc kubenswrapper[4778]: I0312 13:16:20.451254 4778 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c27afe2a-3402-49f9-b985-45fe67e40d22-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 13:16:20 crc kubenswrapper[4778]: I0312 13:16:20.451268 4778 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c27afe2a-3402-49f9-b985-45fe67e40d22-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 13:16:20 crc kubenswrapper[4778]: I0312 13:16:20.688498 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rtjz5" event={"ID":"b9bef112-9bef-4ce2-abd8-054b4d671658","Type":"ContainerDied","Data":"823af2a7e3b6063a4f30d49b66161c625efcb36bf067f9d539324e41889ea011"} Mar 12 13:16:20 crc kubenswrapper[4778]: I0312 13:16:20.688553 4778 scope.go:117] "RemoveContainer" containerID="3151ddc8cb64182fd7ccd241e4580f2e0243328e43f1e59366f60b980b160490" Mar 12 13:16:20 crc kubenswrapper[4778]: I0312 13:16:20.688705 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rtjz5" Mar 12 13:16:20 crc kubenswrapper[4778]: I0312 13:16:20.697080 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l8n9b" event={"ID":"c27afe2a-3402-49f9-b985-45fe67e40d22","Type":"ContainerDied","Data":"f3e464dc52992fdb0f0b53c632c09c98afcb767da1a2f76ffc34b25c53dcb6a3"} Mar 12 13:16:20 crc kubenswrapper[4778]: I0312 13:16:20.697207 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l8n9b" Mar 12 13:16:20 crc kubenswrapper[4778]: I0312 13:16:20.711861 4778 scope.go:117] "RemoveContainer" containerID="44212f253b9d8de159bf039fe64dd134b5f7beb71943da6aab7d4efc080466b3" Mar 12 13:16:20 crc kubenswrapper[4778]: I0312 13:16:20.718294 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rtjz5"] Mar 12 13:16:20 crc kubenswrapper[4778]: I0312 13:16:20.723487 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rtjz5"] Mar 12 13:16:20 crc kubenswrapper[4778]: I0312 13:16:20.724105 4778 scope.go:117] "RemoveContainer" containerID="3ee91beb1526d7d2135a66716b66577b22ca3756c6f18236717330ab9060a779" Mar 12 13:16:20 crc kubenswrapper[4778]: I0312 13:16:20.736230 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l8n9b"] Mar 12 13:16:20 crc kubenswrapper[4778]: I0312 13:16:20.740014 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-l8n9b"] Mar 12 13:16:20 crc kubenswrapper[4778]: I0312 13:16:20.743927 4778 scope.go:117] "RemoveContainer" containerID="3686a4e289950327029466c928723a8314f5dcaa797637ff0db63d9aa4aeb5db" Mar 12 13:16:20 crc kubenswrapper[4778]: I0312 13:16:20.767964 4778 scope.go:117] "RemoveContainer" 
containerID="517c2af638efb950196e9ef53f4578b28c6c02cc9d241b33a72ede0303af599d" Mar 12 13:16:20 crc kubenswrapper[4778]: I0312 13:16:20.781068 4778 scope.go:117] "RemoveContainer" containerID="beac9341cf9caf9b2899c0d3555998167e4413386821c255145cfe1b113c1402" Mar 12 13:16:21 crc kubenswrapper[4778]: I0312 13:16:21.627057 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sjk9p"] Mar 12 13:16:21 crc kubenswrapper[4778]: I0312 13:16:21.628016 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sjk9p" podUID="3b3fb69e-dd4f-4787-a207-4fe25106f9e7" containerName="registry-server" containerID="cri-o://7a538b433370f97911f22dbc738a9c42cbd5e516b7acdb71010394ade11cee06" gracePeriod=2 Mar 12 13:16:22 crc kubenswrapper[4778]: I0312 13:16:22.262666 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9bef112-9bef-4ce2-abd8-054b4d671658" path="/var/lib/kubelet/pods/b9bef112-9bef-4ce2-abd8-054b4d671658/volumes" Mar 12 13:16:22 crc kubenswrapper[4778]: I0312 13:16:22.264152 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c27afe2a-3402-49f9-b985-45fe67e40d22" path="/var/lib/kubelet/pods/c27afe2a-3402-49f9-b985-45fe67e40d22/volumes" Mar 12 13:16:22 crc kubenswrapper[4778]: I0312 13:16:22.670442 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sjk9p" Mar 12 13:16:22 crc kubenswrapper[4778]: I0312 13:16:22.711677 4778 generic.go:334] "Generic (PLEG): container finished" podID="3b3fb69e-dd4f-4787-a207-4fe25106f9e7" containerID="7a538b433370f97911f22dbc738a9c42cbd5e516b7acdb71010394ade11cee06" exitCode=0 Mar 12 13:16:22 crc kubenswrapper[4778]: I0312 13:16:22.711720 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sjk9p" event={"ID":"3b3fb69e-dd4f-4787-a207-4fe25106f9e7","Type":"ContainerDied","Data":"7a538b433370f97911f22dbc738a9c42cbd5e516b7acdb71010394ade11cee06"} Mar 12 13:16:22 crc kubenswrapper[4778]: I0312 13:16:22.711732 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sjk9p" Mar 12 13:16:22 crc kubenswrapper[4778]: I0312 13:16:22.711750 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sjk9p" event={"ID":"3b3fb69e-dd4f-4787-a207-4fe25106f9e7","Type":"ContainerDied","Data":"54d14e24e2014de0b1846a5aa684b84b3bf2783c8e0d47fb26e64cb9f10b0a8d"} Mar 12 13:16:22 crc kubenswrapper[4778]: I0312 13:16:22.711771 4778 scope.go:117] "RemoveContainer" containerID="7a538b433370f97911f22dbc738a9c42cbd5e516b7acdb71010394ade11cee06" Mar 12 13:16:22 crc kubenswrapper[4778]: I0312 13:16:22.745391 4778 scope.go:117] "RemoveContainer" containerID="fa00faf2580a0c0e9d72ea15f4cf1840ea1708c190198951e6018c60afdde268" Mar 12 13:16:22 crc kubenswrapper[4778]: I0312 13:16:22.775775 4778 scope.go:117] "RemoveContainer" containerID="abeebebb9ab695d88020f3373974a8763b6d3a7633ca84c98e6d48516351c961" Mar 12 13:16:22 crc kubenswrapper[4778]: I0312 13:16:22.789408 4778 scope.go:117] "RemoveContainer" containerID="7a538b433370f97911f22dbc738a9c42cbd5e516b7acdb71010394ade11cee06" Mar 12 13:16:22 crc kubenswrapper[4778]: E0312 13:16:22.791231 4778 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a538b433370f97911f22dbc738a9c42cbd5e516b7acdb71010394ade11cee06\": container with ID starting with 7a538b433370f97911f22dbc738a9c42cbd5e516b7acdb71010394ade11cee06 not found: ID does not exist" containerID="7a538b433370f97911f22dbc738a9c42cbd5e516b7acdb71010394ade11cee06" Mar 12 13:16:22 crc kubenswrapper[4778]: I0312 13:16:22.791283 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a538b433370f97911f22dbc738a9c42cbd5e516b7acdb71010394ade11cee06"} err="failed to get container status \"7a538b433370f97911f22dbc738a9c42cbd5e516b7acdb71010394ade11cee06\": rpc error: code = NotFound desc = could not find container \"7a538b433370f97911f22dbc738a9c42cbd5e516b7acdb71010394ade11cee06\": container with ID starting with 7a538b433370f97911f22dbc738a9c42cbd5e516b7acdb71010394ade11cee06 not found: ID does not exist" Mar 12 13:16:22 crc kubenswrapper[4778]: I0312 13:16:22.791315 4778 scope.go:117] "RemoveContainer" containerID="fa00faf2580a0c0e9d72ea15f4cf1840ea1708c190198951e6018c60afdde268" Mar 12 13:16:22 crc kubenswrapper[4778]: E0312 13:16:22.791702 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa00faf2580a0c0e9d72ea15f4cf1840ea1708c190198951e6018c60afdde268\": container with ID starting with fa00faf2580a0c0e9d72ea15f4cf1840ea1708c190198951e6018c60afdde268 not found: ID does not exist" containerID="fa00faf2580a0c0e9d72ea15f4cf1840ea1708c190198951e6018c60afdde268" Mar 12 13:16:22 crc kubenswrapper[4778]: I0312 13:16:22.791735 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa00faf2580a0c0e9d72ea15f4cf1840ea1708c190198951e6018c60afdde268"} err="failed to get container status \"fa00faf2580a0c0e9d72ea15f4cf1840ea1708c190198951e6018c60afdde268\": rpc error: code = NotFound desc = could not find container 
\"fa00faf2580a0c0e9d72ea15f4cf1840ea1708c190198951e6018c60afdde268\": container with ID starting with fa00faf2580a0c0e9d72ea15f4cf1840ea1708c190198951e6018c60afdde268 not found: ID does not exist" Mar 12 13:16:22 crc kubenswrapper[4778]: I0312 13:16:22.791757 4778 scope.go:117] "RemoveContainer" containerID="abeebebb9ab695d88020f3373974a8763b6d3a7633ca84c98e6d48516351c961" Mar 12 13:16:22 crc kubenswrapper[4778]: E0312 13:16:22.792149 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abeebebb9ab695d88020f3373974a8763b6d3a7633ca84c98e6d48516351c961\": container with ID starting with abeebebb9ab695d88020f3373974a8763b6d3a7633ca84c98e6d48516351c961 not found: ID does not exist" containerID="abeebebb9ab695d88020f3373974a8763b6d3a7633ca84c98e6d48516351c961" Mar 12 13:16:22 crc kubenswrapper[4778]: I0312 13:16:22.792227 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abeebebb9ab695d88020f3373974a8763b6d3a7633ca84c98e6d48516351c961"} err="failed to get container status \"abeebebb9ab695d88020f3373974a8763b6d3a7633ca84c98e6d48516351c961\": rpc error: code = NotFound desc = could not find container \"abeebebb9ab695d88020f3373974a8763b6d3a7633ca84c98e6d48516351c961\": container with ID starting with abeebebb9ab695d88020f3373974a8763b6d3a7633ca84c98e6d48516351c961 not found: ID does not exist" Mar 12 13:16:22 crc kubenswrapper[4778]: I0312 13:16:22.800597 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dq7jg\" (UniqueName: \"kubernetes.io/projected/3b3fb69e-dd4f-4787-a207-4fe25106f9e7-kube-api-access-dq7jg\") pod \"3b3fb69e-dd4f-4787-a207-4fe25106f9e7\" (UID: \"3b3fb69e-dd4f-4787-a207-4fe25106f9e7\") " Mar 12 13:16:22 crc kubenswrapper[4778]: I0312 13:16:22.800671 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/3b3fb69e-dd4f-4787-a207-4fe25106f9e7-utilities\") pod \"3b3fb69e-dd4f-4787-a207-4fe25106f9e7\" (UID: \"3b3fb69e-dd4f-4787-a207-4fe25106f9e7\") " Mar 12 13:16:22 crc kubenswrapper[4778]: I0312 13:16:22.800719 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b3fb69e-dd4f-4787-a207-4fe25106f9e7-catalog-content\") pod \"3b3fb69e-dd4f-4787-a207-4fe25106f9e7\" (UID: \"3b3fb69e-dd4f-4787-a207-4fe25106f9e7\") " Mar 12 13:16:22 crc kubenswrapper[4778]: I0312 13:16:22.801703 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b3fb69e-dd4f-4787-a207-4fe25106f9e7-utilities" (OuterVolumeSpecName: "utilities") pod "3b3fb69e-dd4f-4787-a207-4fe25106f9e7" (UID: "3b3fb69e-dd4f-4787-a207-4fe25106f9e7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:16:22 crc kubenswrapper[4778]: I0312 13:16:22.809379 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b3fb69e-dd4f-4787-a207-4fe25106f9e7-kube-api-access-dq7jg" (OuterVolumeSpecName: "kube-api-access-dq7jg") pod "3b3fb69e-dd4f-4787-a207-4fe25106f9e7" (UID: "3b3fb69e-dd4f-4787-a207-4fe25106f9e7"). InnerVolumeSpecName "kube-api-access-dq7jg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:16:22 crc kubenswrapper[4778]: I0312 13:16:22.850304 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b3fb69e-dd4f-4787-a207-4fe25106f9e7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3b3fb69e-dd4f-4787-a207-4fe25106f9e7" (UID: "3b3fb69e-dd4f-4787-a207-4fe25106f9e7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:16:22 crc kubenswrapper[4778]: I0312 13:16:22.901584 4778 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b3fb69e-dd4f-4787-a207-4fe25106f9e7-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 13:16:22 crc kubenswrapper[4778]: I0312 13:16:22.901618 4778 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b3fb69e-dd4f-4787-a207-4fe25106f9e7-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 13:16:22 crc kubenswrapper[4778]: I0312 13:16:22.901632 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dq7jg\" (UniqueName: \"kubernetes.io/projected/3b3fb69e-dd4f-4787-a207-4fe25106f9e7-kube-api-access-dq7jg\") on node \"crc\" DevicePath \"\"" Mar 12 13:16:23 crc kubenswrapper[4778]: I0312 13:16:23.051445 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sjk9p"] Mar 12 13:16:23 crc kubenswrapper[4778]: I0312 13:16:23.057082 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sjk9p"] Mar 12 13:16:24 crc kubenswrapper[4778]: I0312 13:16:24.260448 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b3fb69e-dd4f-4787-a207-4fe25106f9e7" path="/var/lib/kubelet/pods/3b3fb69e-dd4f-4787-a207-4fe25106f9e7/volumes" Mar 12 13:16:28 crc kubenswrapper[4778]: I0312 13:16:28.557811 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 13:16:28 crc kubenswrapper[4778]: I0312 13:16:28.558104 4778 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 13:16:33 crc kubenswrapper[4778]: I0312 13:16:33.896406 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7f558b8664-wcwww"] Mar 12 13:16:33 crc kubenswrapper[4778]: I0312 13:16:33.896915 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7f558b8664-wcwww" podUID="81f2c27d-f47b-4bcb-81af-749dd8f6d053" containerName="controller-manager" containerID="cri-o://f2e5031b27b99dfbb22f5b0690baf59159234c18ae5db2065cb1c9e1a7bfc783" gracePeriod=30 Mar 12 13:16:33 crc kubenswrapper[4778]: I0312 13:16:33.911871 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7864ddbcd6-8t87n"] Mar 12 13:16:33 crc kubenswrapper[4778]: I0312 13:16:33.912115 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7864ddbcd6-8t87n" podUID="03ce725f-d022-4fe6-9fd8-d61f4bec2ad6" containerName="route-controller-manager" containerID="cri-o://2cab64649829ce76f15124685e98299c4b83afa38015be406f773626bc1243fe" gracePeriod=30 Mar 12 13:16:34 crc kubenswrapper[4778]: I0312 13:16:34.398364 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7864ddbcd6-8t87n" Mar 12 13:16:34 crc kubenswrapper[4778]: I0312 13:16:34.500174 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7f558b8664-wcwww" Mar 12 13:16:34 crc kubenswrapper[4778]: I0312 13:16:34.548037 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/03ce725f-d022-4fe6-9fd8-d61f4bec2ad6-client-ca\") pod \"03ce725f-d022-4fe6-9fd8-d61f4bec2ad6\" (UID: \"03ce725f-d022-4fe6-9fd8-d61f4bec2ad6\") " Mar 12 13:16:34 crc kubenswrapper[4778]: I0312 13:16:34.548109 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03ce725f-d022-4fe6-9fd8-d61f4bec2ad6-config\") pod \"03ce725f-d022-4fe6-9fd8-d61f4bec2ad6\" (UID: \"03ce725f-d022-4fe6-9fd8-d61f4bec2ad6\") " Mar 12 13:16:34 crc kubenswrapper[4778]: I0312 13:16:34.548195 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03ce725f-d022-4fe6-9fd8-d61f4bec2ad6-serving-cert\") pod \"03ce725f-d022-4fe6-9fd8-d61f4bec2ad6\" (UID: \"03ce725f-d022-4fe6-9fd8-d61f4bec2ad6\") " Mar 12 13:16:34 crc kubenswrapper[4778]: I0312 13:16:34.548250 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sm445\" (UniqueName: \"kubernetes.io/projected/03ce725f-d022-4fe6-9fd8-d61f4bec2ad6-kube-api-access-sm445\") pod \"03ce725f-d022-4fe6-9fd8-d61f4bec2ad6\" (UID: \"03ce725f-d022-4fe6-9fd8-d61f4bec2ad6\") " Mar 12 13:16:34 crc kubenswrapper[4778]: I0312 13:16:34.548887 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03ce725f-d022-4fe6-9fd8-d61f4bec2ad6-client-ca" (OuterVolumeSpecName: "client-ca") pod "03ce725f-d022-4fe6-9fd8-d61f4bec2ad6" (UID: "03ce725f-d022-4fe6-9fd8-d61f4bec2ad6"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:16:34 crc kubenswrapper[4778]: I0312 13:16:34.548945 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03ce725f-d022-4fe6-9fd8-d61f4bec2ad6-config" (OuterVolumeSpecName: "config") pod "03ce725f-d022-4fe6-9fd8-d61f4bec2ad6" (UID: "03ce725f-d022-4fe6-9fd8-d61f4bec2ad6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:16:34 crc kubenswrapper[4778]: I0312 13:16:34.553353 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03ce725f-d022-4fe6-9fd8-d61f4bec2ad6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "03ce725f-d022-4fe6-9fd8-d61f4bec2ad6" (UID: "03ce725f-d022-4fe6-9fd8-d61f4bec2ad6"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:16:34 crc kubenswrapper[4778]: I0312 13:16:34.553588 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03ce725f-d022-4fe6-9fd8-d61f4bec2ad6-kube-api-access-sm445" (OuterVolumeSpecName: "kube-api-access-sm445") pod "03ce725f-d022-4fe6-9fd8-d61f4bec2ad6" (UID: "03ce725f-d022-4fe6-9fd8-d61f4bec2ad6"). InnerVolumeSpecName "kube-api-access-sm445". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:16:34 crc kubenswrapper[4778]: I0312 13:16:34.649479 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81f2c27d-f47b-4bcb-81af-749dd8f6d053-config\") pod \"81f2c27d-f47b-4bcb-81af-749dd8f6d053\" (UID: \"81f2c27d-f47b-4bcb-81af-749dd8f6d053\") " Mar 12 13:16:34 crc kubenswrapper[4778]: I0312 13:16:34.649579 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81f2c27d-f47b-4bcb-81af-749dd8f6d053-serving-cert\") pod \"81f2c27d-f47b-4bcb-81af-749dd8f6d053\" (UID: \"81f2c27d-f47b-4bcb-81af-749dd8f6d053\") " Mar 12 13:16:34 crc kubenswrapper[4778]: I0312 13:16:34.649605 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnhvm\" (UniqueName: \"kubernetes.io/projected/81f2c27d-f47b-4bcb-81af-749dd8f6d053-kube-api-access-vnhvm\") pod \"81f2c27d-f47b-4bcb-81af-749dd8f6d053\" (UID: \"81f2c27d-f47b-4bcb-81af-749dd8f6d053\") " Mar 12 13:16:34 crc kubenswrapper[4778]: I0312 13:16:34.649645 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81f2c27d-f47b-4bcb-81af-749dd8f6d053-client-ca\") pod \"81f2c27d-f47b-4bcb-81af-749dd8f6d053\" (UID: \"81f2c27d-f47b-4bcb-81af-749dd8f6d053\") " Mar 12 13:16:34 crc kubenswrapper[4778]: I0312 13:16:34.649766 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/81f2c27d-f47b-4bcb-81af-749dd8f6d053-proxy-ca-bundles\") pod \"81f2c27d-f47b-4bcb-81af-749dd8f6d053\" (UID: \"81f2c27d-f47b-4bcb-81af-749dd8f6d053\") " Mar 12 13:16:34 crc kubenswrapper[4778]: I0312 13:16:34.650017 4778 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/03ce725f-d022-4fe6-9fd8-d61f4bec2ad6-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 13:16:34 crc kubenswrapper[4778]: I0312 13:16:34.650034 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sm445\" (UniqueName: \"kubernetes.io/projected/03ce725f-d022-4fe6-9fd8-d61f4bec2ad6-kube-api-access-sm445\") on node \"crc\" DevicePath \"\"" Mar 12 13:16:34 crc kubenswrapper[4778]: I0312 13:16:34.650047 4778 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/03ce725f-d022-4fe6-9fd8-d61f4bec2ad6-client-ca\") on node \"crc\" DevicePath \"\"" Mar 12 13:16:34 crc kubenswrapper[4778]: I0312 13:16:34.650060 4778 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03ce725f-d022-4fe6-9fd8-d61f4bec2ad6-config\") on node \"crc\" DevicePath \"\"" Mar 12 13:16:34 crc kubenswrapper[4778]: I0312 13:16:34.650419 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81f2c27d-f47b-4bcb-81af-749dd8f6d053-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "81f2c27d-f47b-4bcb-81af-749dd8f6d053" (UID: "81f2c27d-f47b-4bcb-81af-749dd8f6d053"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:16:34 crc kubenswrapper[4778]: I0312 13:16:34.650453 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81f2c27d-f47b-4bcb-81af-749dd8f6d053-config" (OuterVolumeSpecName: "config") pod "81f2c27d-f47b-4bcb-81af-749dd8f6d053" (UID: "81f2c27d-f47b-4bcb-81af-749dd8f6d053"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:16:34 crc kubenswrapper[4778]: I0312 13:16:34.650544 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81f2c27d-f47b-4bcb-81af-749dd8f6d053-client-ca" (OuterVolumeSpecName: "client-ca") pod "81f2c27d-f47b-4bcb-81af-749dd8f6d053" (UID: "81f2c27d-f47b-4bcb-81af-749dd8f6d053"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:16:34 crc kubenswrapper[4778]: I0312 13:16:34.652703 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81f2c27d-f47b-4bcb-81af-749dd8f6d053-kube-api-access-vnhvm" (OuterVolumeSpecName: "kube-api-access-vnhvm") pod "81f2c27d-f47b-4bcb-81af-749dd8f6d053" (UID: "81f2c27d-f47b-4bcb-81af-749dd8f6d053"). InnerVolumeSpecName "kube-api-access-vnhvm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:16:34 crc kubenswrapper[4778]: I0312 13:16:34.653796 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81f2c27d-f47b-4bcb-81af-749dd8f6d053-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "81f2c27d-f47b-4bcb-81af-749dd8f6d053" (UID: "81f2c27d-f47b-4bcb-81af-749dd8f6d053"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:16:34 crc kubenswrapper[4778]: I0312 13:16:34.750810 4778 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/81f2c27d-f47b-4bcb-81af-749dd8f6d053-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 12 13:16:34 crc kubenswrapper[4778]: I0312 13:16:34.750853 4778 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81f2c27d-f47b-4bcb-81af-749dd8f6d053-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 13:16:34 crc kubenswrapper[4778]: I0312 13:16:34.750868 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnhvm\" (UniqueName: \"kubernetes.io/projected/81f2c27d-f47b-4bcb-81af-749dd8f6d053-kube-api-access-vnhvm\") on node \"crc\" DevicePath \"\"" Mar 12 13:16:34 crc kubenswrapper[4778]: I0312 13:16:34.750882 4778 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81f2c27d-f47b-4bcb-81af-749dd8f6d053-config\") on node \"crc\" DevicePath \"\"" Mar 12 13:16:34 crc kubenswrapper[4778]: I0312 13:16:34.750893 4778 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81f2c27d-f47b-4bcb-81af-749dd8f6d053-client-ca\") on node \"crc\" DevicePath \"\"" Mar 12 13:16:34 crc kubenswrapper[4778]: I0312 13:16:34.808395 4778 generic.go:334] "Generic (PLEG): container finished" podID="03ce725f-d022-4fe6-9fd8-d61f4bec2ad6" containerID="2cab64649829ce76f15124685e98299c4b83afa38015be406f773626bc1243fe" exitCode=0 Mar 12 13:16:34 crc kubenswrapper[4778]: I0312 13:16:34.808468 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7864ddbcd6-8t87n" event={"ID":"03ce725f-d022-4fe6-9fd8-d61f4bec2ad6","Type":"ContainerDied","Data":"2cab64649829ce76f15124685e98299c4b83afa38015be406f773626bc1243fe"} Mar 12 13:16:34 
crc kubenswrapper[4778]: I0312 13:16:34.808494 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7864ddbcd6-8t87n" event={"ID":"03ce725f-d022-4fe6-9fd8-d61f4bec2ad6","Type":"ContainerDied","Data":"f2dfa420c929336f07fb954275af02bba9b37d1b1afb3796a9bd8590cd2100e8"} Mar 12 13:16:34 crc kubenswrapper[4778]: I0312 13:16:34.808512 4778 scope.go:117] "RemoveContainer" containerID="2cab64649829ce76f15124685e98299c4b83afa38015be406f773626bc1243fe" Mar 12 13:16:34 crc kubenswrapper[4778]: I0312 13:16:34.808531 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7864ddbcd6-8t87n" Mar 12 13:16:34 crc kubenswrapper[4778]: I0312 13:16:34.815270 4778 generic.go:334] "Generic (PLEG): container finished" podID="81f2c27d-f47b-4bcb-81af-749dd8f6d053" containerID="f2e5031b27b99dfbb22f5b0690baf59159234c18ae5db2065cb1c9e1a7bfc783" exitCode=0 Mar 12 13:16:34 crc kubenswrapper[4778]: I0312 13:16:34.815332 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f558b8664-wcwww" event={"ID":"81f2c27d-f47b-4bcb-81af-749dd8f6d053","Type":"ContainerDied","Data":"f2e5031b27b99dfbb22f5b0690baf59159234c18ae5db2065cb1c9e1a7bfc783"} Mar 12 13:16:34 crc kubenswrapper[4778]: I0312 13:16:34.815371 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f558b8664-wcwww" event={"ID":"81f2c27d-f47b-4bcb-81af-749dd8f6d053","Type":"ContainerDied","Data":"6d515fcf6020e53c6189e977403cfe43fcda12d2f5abd6575280ee4b45363384"} Mar 12 13:16:34 crc kubenswrapper[4778]: I0312 13:16:34.815446 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7f558b8664-wcwww" Mar 12 13:16:34 crc kubenswrapper[4778]: I0312 13:16:34.837356 4778 scope.go:117] "RemoveContainer" containerID="2cab64649829ce76f15124685e98299c4b83afa38015be406f773626bc1243fe" Mar 12 13:16:34 crc kubenswrapper[4778]: E0312 13:16:34.838048 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cab64649829ce76f15124685e98299c4b83afa38015be406f773626bc1243fe\": container with ID starting with 2cab64649829ce76f15124685e98299c4b83afa38015be406f773626bc1243fe not found: ID does not exist" containerID="2cab64649829ce76f15124685e98299c4b83afa38015be406f773626bc1243fe" Mar 12 13:16:34 crc kubenswrapper[4778]: I0312 13:16:34.838082 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cab64649829ce76f15124685e98299c4b83afa38015be406f773626bc1243fe"} err="failed to get container status \"2cab64649829ce76f15124685e98299c4b83afa38015be406f773626bc1243fe\": rpc error: code = NotFound desc = could not find container \"2cab64649829ce76f15124685e98299c4b83afa38015be406f773626bc1243fe\": container with ID starting with 2cab64649829ce76f15124685e98299c4b83afa38015be406f773626bc1243fe not found: ID does not exist" Mar 12 13:16:34 crc kubenswrapper[4778]: I0312 13:16:34.838104 4778 scope.go:117] "RemoveContainer" containerID="f2e5031b27b99dfbb22f5b0690baf59159234c18ae5db2065cb1c9e1a7bfc783" Mar 12 13:16:34 crc kubenswrapper[4778]: I0312 13:16:34.864911 4778 scope.go:117] "RemoveContainer" containerID="f2e5031b27b99dfbb22f5b0690baf59159234c18ae5db2065cb1c9e1a7bfc783" Mar 12 13:16:34 crc kubenswrapper[4778]: E0312 13:16:34.865470 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2e5031b27b99dfbb22f5b0690baf59159234c18ae5db2065cb1c9e1a7bfc783\": container with ID starting with 
f2e5031b27b99dfbb22f5b0690baf59159234c18ae5db2065cb1c9e1a7bfc783 not found: ID does not exist" containerID="f2e5031b27b99dfbb22f5b0690baf59159234c18ae5db2065cb1c9e1a7bfc783" Mar 12 13:16:34 crc kubenswrapper[4778]: I0312 13:16:34.865554 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2e5031b27b99dfbb22f5b0690baf59159234c18ae5db2065cb1c9e1a7bfc783"} err="failed to get container status \"f2e5031b27b99dfbb22f5b0690baf59159234c18ae5db2065cb1c9e1a7bfc783\": rpc error: code = NotFound desc = could not find container \"f2e5031b27b99dfbb22f5b0690baf59159234c18ae5db2065cb1c9e1a7bfc783\": container with ID starting with f2e5031b27b99dfbb22f5b0690baf59159234c18ae5db2065cb1c9e1a7bfc783 not found: ID does not exist" Mar 12 13:16:34 crc kubenswrapper[4778]: I0312 13:16:34.866554 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7f558b8664-wcwww"] Mar 12 13:16:34 crc kubenswrapper[4778]: I0312 13:16:34.872061 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7f558b8664-wcwww"] Mar 12 13:16:34 crc kubenswrapper[4778]: I0312 13:16:34.876253 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7864ddbcd6-8t87n"] Mar 12 13:16:34 crc kubenswrapper[4778]: I0312 13:16:34.880049 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7864ddbcd6-8t87n"] Mar 12 13:16:35 crc kubenswrapper[4778]: I0312 13:16:35.698527 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d496c4846-wmjt6"] Mar 12 13:16:35 crc kubenswrapper[4778]: E0312 13:16:35.699294 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c27afe2a-3402-49f9-b985-45fe67e40d22" containerName="extract-utilities" Mar 12 13:16:35 crc kubenswrapper[4778]: 
I0312 13:16:35.699318 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="c27afe2a-3402-49f9-b985-45fe67e40d22" containerName="extract-utilities" Mar 12 13:16:35 crc kubenswrapper[4778]: E0312 13:16:35.699343 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9bef112-9bef-4ce2-abd8-054b4d671658" containerName="extract-utilities" Mar 12 13:16:35 crc kubenswrapper[4778]: I0312 13:16:35.699355 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9bef112-9bef-4ce2-abd8-054b4d671658" containerName="extract-utilities" Mar 12 13:16:35 crc kubenswrapper[4778]: E0312 13:16:35.699372 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b3fb69e-dd4f-4787-a207-4fe25106f9e7" containerName="extract-content" Mar 12 13:16:35 crc kubenswrapper[4778]: I0312 13:16:35.699383 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b3fb69e-dd4f-4787-a207-4fe25106f9e7" containerName="extract-content" Mar 12 13:16:35 crc kubenswrapper[4778]: E0312 13:16:35.699398 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81f2c27d-f47b-4bcb-81af-749dd8f6d053" containerName="controller-manager" Mar 12 13:16:35 crc kubenswrapper[4778]: I0312 13:16:35.699410 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="81f2c27d-f47b-4bcb-81af-749dd8f6d053" containerName="controller-manager" Mar 12 13:16:35 crc kubenswrapper[4778]: E0312 13:16:35.699429 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c27afe2a-3402-49f9-b985-45fe67e40d22" containerName="extract-content" Mar 12 13:16:35 crc kubenswrapper[4778]: I0312 13:16:35.699439 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="c27afe2a-3402-49f9-b985-45fe67e40d22" containerName="extract-content" Mar 12 13:16:35 crc kubenswrapper[4778]: E0312 13:16:35.699457 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c792e81a-8273-49a7-be95-c8c19cd2785b" containerName="oc" Mar 12 13:16:35 crc kubenswrapper[4778]: I0312 
13:16:35.699467 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="c792e81a-8273-49a7-be95-c8c19cd2785b" containerName="oc" Mar 12 13:16:35 crc kubenswrapper[4778]: E0312 13:16:35.699482 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03ce725f-d022-4fe6-9fd8-d61f4bec2ad6" containerName="route-controller-manager" Mar 12 13:16:35 crc kubenswrapper[4778]: I0312 13:16:35.699493 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="03ce725f-d022-4fe6-9fd8-d61f4bec2ad6" containerName="route-controller-manager" Mar 12 13:16:35 crc kubenswrapper[4778]: E0312 13:16:35.699509 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c27afe2a-3402-49f9-b985-45fe67e40d22" containerName="registry-server" Mar 12 13:16:35 crc kubenswrapper[4778]: I0312 13:16:35.699519 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="c27afe2a-3402-49f9-b985-45fe67e40d22" containerName="registry-server" Mar 12 13:16:35 crc kubenswrapper[4778]: E0312 13:16:35.699535 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b3fb69e-dd4f-4787-a207-4fe25106f9e7" containerName="registry-server" Mar 12 13:16:35 crc kubenswrapper[4778]: I0312 13:16:35.699545 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b3fb69e-dd4f-4787-a207-4fe25106f9e7" containerName="registry-server" Mar 12 13:16:35 crc kubenswrapper[4778]: E0312 13:16:35.699566 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9bef112-9bef-4ce2-abd8-054b4d671658" containerName="registry-server" Mar 12 13:16:35 crc kubenswrapper[4778]: I0312 13:16:35.699579 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9bef112-9bef-4ce2-abd8-054b4d671658" containerName="registry-server" Mar 12 13:16:35 crc kubenswrapper[4778]: E0312 13:16:35.699601 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9bef112-9bef-4ce2-abd8-054b4d671658" containerName="extract-content" Mar 12 13:16:35 crc kubenswrapper[4778]: I0312 
13:16:35.699612 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9bef112-9bef-4ce2-abd8-054b4d671658" containerName="extract-content" Mar 12 13:16:35 crc kubenswrapper[4778]: E0312 13:16:35.699626 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b3fb69e-dd4f-4787-a207-4fe25106f9e7" containerName="extract-utilities" Mar 12 13:16:35 crc kubenswrapper[4778]: I0312 13:16:35.699637 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b3fb69e-dd4f-4787-a207-4fe25106f9e7" containerName="extract-utilities" Mar 12 13:16:35 crc kubenswrapper[4778]: I0312 13:16:35.699803 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="81f2c27d-f47b-4bcb-81af-749dd8f6d053" containerName="controller-manager" Mar 12 13:16:35 crc kubenswrapper[4778]: I0312 13:16:35.699823 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="c792e81a-8273-49a7-be95-c8c19cd2785b" containerName="oc" Mar 12 13:16:35 crc kubenswrapper[4778]: I0312 13:16:35.699839 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b3fb69e-dd4f-4787-a207-4fe25106f9e7" containerName="registry-server" Mar 12 13:16:35 crc kubenswrapper[4778]: I0312 13:16:35.699857 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="03ce725f-d022-4fe6-9fd8-d61f4bec2ad6" containerName="route-controller-manager" Mar 12 13:16:35 crc kubenswrapper[4778]: I0312 13:16:35.699870 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="c27afe2a-3402-49f9-b985-45fe67e40d22" containerName="registry-server" Mar 12 13:16:35 crc kubenswrapper[4778]: I0312 13:16:35.699884 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9bef112-9bef-4ce2-abd8-054b4d671658" containerName="registry-server" Mar 12 13:16:35 crc kubenswrapper[4778]: I0312 13:16:35.700492 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d496c4846-wmjt6" Mar 12 13:16:35 crc kubenswrapper[4778]: I0312 13:16:35.705799 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 12 13:16:35 crc kubenswrapper[4778]: I0312 13:16:35.705979 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 12 13:16:35 crc kubenswrapper[4778]: I0312 13:16:35.706270 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 12 13:16:35 crc kubenswrapper[4778]: I0312 13:16:35.706397 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 12 13:16:35 crc kubenswrapper[4778]: I0312 13:16:35.706447 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 12 13:16:35 crc kubenswrapper[4778]: I0312 13:16:35.706447 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 12 13:16:35 crc kubenswrapper[4778]: I0312 13:16:35.709810 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-f8c4b6bf8-888fq"] Mar 12 13:16:35 crc kubenswrapper[4778]: I0312 13:16:35.710483 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-f8c4b6bf8-888fq" Mar 12 13:16:35 crc kubenswrapper[4778]: I0312 13:16:35.713140 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 12 13:16:35 crc kubenswrapper[4778]: I0312 13:16:35.713875 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 12 13:16:35 crc kubenswrapper[4778]: I0312 13:16:35.713912 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 12 13:16:35 crc kubenswrapper[4778]: I0312 13:16:35.713937 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 12 13:16:35 crc kubenswrapper[4778]: I0312 13:16:35.714259 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 12 13:16:35 crc kubenswrapper[4778]: I0312 13:16:35.714847 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 12 13:16:35 crc kubenswrapper[4778]: I0312 13:16:35.718677 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d496c4846-wmjt6"] Mar 12 13:16:35 crc kubenswrapper[4778]: I0312 13:16:35.723079 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 12 13:16:35 crc kubenswrapper[4778]: I0312 13:16:35.727299 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f8c4b6bf8-888fq"] Mar 12 13:16:35 crc kubenswrapper[4778]: I0312 13:16:35.764353 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/6e76627c-6dee-4814-bc83-672b4350b105-client-ca\") pod \"controller-manager-f8c4b6bf8-888fq\" (UID: \"6e76627c-6dee-4814-bc83-672b4350b105\") " pod="openshift-controller-manager/controller-manager-f8c4b6bf8-888fq" Mar 12 13:16:35 crc kubenswrapper[4778]: I0312 13:16:35.764400 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c3b72a7-abe5-4c24-b14b-04ae34b28816-serving-cert\") pod \"route-controller-manager-6d496c4846-wmjt6\" (UID: \"5c3b72a7-abe5-4c24-b14b-04ae34b28816\") " pod="openshift-route-controller-manager/route-controller-manager-6d496c4846-wmjt6" Mar 12 13:16:35 crc kubenswrapper[4778]: I0312 13:16:35.764425 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2rdp\" (UniqueName: \"kubernetes.io/projected/5c3b72a7-abe5-4c24-b14b-04ae34b28816-kube-api-access-n2rdp\") pod \"route-controller-manager-6d496c4846-wmjt6\" (UID: \"5c3b72a7-abe5-4c24-b14b-04ae34b28816\") " pod="openshift-route-controller-manager/route-controller-manager-6d496c4846-wmjt6" Mar 12 13:16:35 crc kubenswrapper[4778]: I0312 13:16:35.764443 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e76627c-6dee-4814-bc83-672b4350b105-serving-cert\") pod \"controller-manager-f8c4b6bf8-888fq\" (UID: \"6e76627c-6dee-4814-bc83-672b4350b105\") " pod="openshift-controller-manager/controller-manager-f8c4b6bf8-888fq" Mar 12 13:16:35 crc kubenswrapper[4778]: I0312 13:16:35.764522 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e76627c-6dee-4814-bc83-672b4350b105-config\") pod \"controller-manager-f8c4b6bf8-888fq\" (UID: \"6e76627c-6dee-4814-bc83-672b4350b105\") " 
pod="openshift-controller-manager/controller-manager-f8c4b6bf8-888fq" Mar 12 13:16:35 crc kubenswrapper[4778]: I0312 13:16:35.764585 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6e76627c-6dee-4814-bc83-672b4350b105-proxy-ca-bundles\") pod \"controller-manager-f8c4b6bf8-888fq\" (UID: \"6e76627c-6dee-4814-bc83-672b4350b105\") " pod="openshift-controller-manager/controller-manager-f8c4b6bf8-888fq" Mar 12 13:16:35 crc kubenswrapper[4778]: I0312 13:16:35.764621 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9gbx\" (UniqueName: \"kubernetes.io/projected/6e76627c-6dee-4814-bc83-672b4350b105-kube-api-access-v9gbx\") pod \"controller-manager-f8c4b6bf8-888fq\" (UID: \"6e76627c-6dee-4814-bc83-672b4350b105\") " pod="openshift-controller-manager/controller-manager-f8c4b6bf8-888fq" Mar 12 13:16:35 crc kubenswrapper[4778]: I0312 13:16:35.764643 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5c3b72a7-abe5-4c24-b14b-04ae34b28816-client-ca\") pod \"route-controller-manager-6d496c4846-wmjt6\" (UID: \"5c3b72a7-abe5-4c24-b14b-04ae34b28816\") " pod="openshift-route-controller-manager/route-controller-manager-6d496c4846-wmjt6" Mar 12 13:16:35 crc kubenswrapper[4778]: I0312 13:16:35.764668 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c3b72a7-abe5-4c24-b14b-04ae34b28816-config\") pod \"route-controller-manager-6d496c4846-wmjt6\" (UID: \"5c3b72a7-abe5-4c24-b14b-04ae34b28816\") " pod="openshift-route-controller-manager/route-controller-manager-6d496c4846-wmjt6" Mar 12 13:16:35 crc kubenswrapper[4778]: I0312 13:16:35.865880 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6e76627c-6dee-4814-bc83-672b4350b105-proxy-ca-bundles\") pod \"controller-manager-f8c4b6bf8-888fq\" (UID: \"6e76627c-6dee-4814-bc83-672b4350b105\") " pod="openshift-controller-manager/controller-manager-f8c4b6bf8-888fq" Mar 12 13:16:35 crc kubenswrapper[4778]: I0312 13:16:35.865948 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9gbx\" (UniqueName: \"kubernetes.io/projected/6e76627c-6dee-4814-bc83-672b4350b105-kube-api-access-v9gbx\") pod \"controller-manager-f8c4b6bf8-888fq\" (UID: \"6e76627c-6dee-4814-bc83-672b4350b105\") " pod="openshift-controller-manager/controller-manager-f8c4b6bf8-888fq" Mar 12 13:16:35 crc kubenswrapper[4778]: I0312 13:16:35.865987 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5c3b72a7-abe5-4c24-b14b-04ae34b28816-client-ca\") pod \"route-controller-manager-6d496c4846-wmjt6\" (UID: \"5c3b72a7-abe5-4c24-b14b-04ae34b28816\") " pod="openshift-route-controller-manager/route-controller-manager-6d496c4846-wmjt6" Mar 12 13:16:35 crc kubenswrapper[4778]: I0312 13:16:35.866018 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c3b72a7-abe5-4c24-b14b-04ae34b28816-config\") pod \"route-controller-manager-6d496c4846-wmjt6\" (UID: \"5c3b72a7-abe5-4c24-b14b-04ae34b28816\") " pod="openshift-route-controller-manager/route-controller-manager-6d496c4846-wmjt6" Mar 12 13:16:35 crc kubenswrapper[4778]: I0312 13:16:35.866076 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6e76627c-6dee-4814-bc83-672b4350b105-client-ca\") pod \"controller-manager-f8c4b6bf8-888fq\" (UID: \"6e76627c-6dee-4814-bc83-672b4350b105\") " pod="openshift-controller-manager/controller-manager-f8c4b6bf8-888fq" Mar 12 
13:16:35 crc kubenswrapper[4778]: I0312 13:16:35.866111 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c3b72a7-abe5-4c24-b14b-04ae34b28816-serving-cert\") pod \"route-controller-manager-6d496c4846-wmjt6\" (UID: \"5c3b72a7-abe5-4c24-b14b-04ae34b28816\") " pod="openshift-route-controller-manager/route-controller-manager-6d496c4846-wmjt6" Mar 12 13:16:35 crc kubenswrapper[4778]: I0312 13:16:35.866141 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2rdp\" (UniqueName: \"kubernetes.io/projected/5c3b72a7-abe5-4c24-b14b-04ae34b28816-kube-api-access-n2rdp\") pod \"route-controller-manager-6d496c4846-wmjt6\" (UID: \"5c3b72a7-abe5-4c24-b14b-04ae34b28816\") " pod="openshift-route-controller-manager/route-controller-manager-6d496c4846-wmjt6" Mar 12 13:16:35 crc kubenswrapper[4778]: I0312 13:16:35.866163 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e76627c-6dee-4814-bc83-672b4350b105-serving-cert\") pod \"controller-manager-f8c4b6bf8-888fq\" (UID: \"6e76627c-6dee-4814-bc83-672b4350b105\") " pod="openshift-controller-manager/controller-manager-f8c4b6bf8-888fq" Mar 12 13:16:35 crc kubenswrapper[4778]: I0312 13:16:35.866251 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e76627c-6dee-4814-bc83-672b4350b105-config\") pod \"controller-manager-f8c4b6bf8-888fq\" (UID: \"6e76627c-6dee-4814-bc83-672b4350b105\") " pod="openshift-controller-manager/controller-manager-f8c4b6bf8-888fq" Mar 12 13:16:35 crc kubenswrapper[4778]: I0312 13:16:35.867235 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6e76627c-6dee-4814-bc83-672b4350b105-client-ca\") pod \"controller-manager-f8c4b6bf8-888fq\" (UID: 
\"6e76627c-6dee-4814-bc83-672b4350b105\") " pod="openshift-controller-manager/controller-manager-f8c4b6bf8-888fq" Mar 12 13:16:35 crc kubenswrapper[4778]: I0312 13:16:35.867235 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6e76627c-6dee-4814-bc83-672b4350b105-proxy-ca-bundles\") pod \"controller-manager-f8c4b6bf8-888fq\" (UID: \"6e76627c-6dee-4814-bc83-672b4350b105\") " pod="openshift-controller-manager/controller-manager-f8c4b6bf8-888fq" Mar 12 13:16:35 crc kubenswrapper[4778]: I0312 13:16:35.868114 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5c3b72a7-abe5-4c24-b14b-04ae34b28816-client-ca\") pod \"route-controller-manager-6d496c4846-wmjt6\" (UID: \"5c3b72a7-abe5-4c24-b14b-04ae34b28816\") " pod="openshift-route-controller-manager/route-controller-manager-6d496c4846-wmjt6" Mar 12 13:16:35 crc kubenswrapper[4778]: I0312 13:16:35.868286 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e76627c-6dee-4814-bc83-672b4350b105-config\") pod \"controller-manager-f8c4b6bf8-888fq\" (UID: \"6e76627c-6dee-4814-bc83-672b4350b105\") " pod="openshift-controller-manager/controller-manager-f8c4b6bf8-888fq" Mar 12 13:16:35 crc kubenswrapper[4778]: I0312 13:16:35.869427 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c3b72a7-abe5-4c24-b14b-04ae34b28816-config\") pod \"route-controller-manager-6d496c4846-wmjt6\" (UID: \"5c3b72a7-abe5-4c24-b14b-04ae34b28816\") " pod="openshift-route-controller-manager/route-controller-manager-6d496c4846-wmjt6" Mar 12 13:16:35 crc kubenswrapper[4778]: I0312 13:16:35.872361 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6e76627c-6dee-4814-bc83-672b4350b105-serving-cert\") pod \"controller-manager-f8c4b6bf8-888fq\" (UID: \"6e76627c-6dee-4814-bc83-672b4350b105\") " pod="openshift-controller-manager/controller-manager-f8c4b6bf8-888fq" Mar 12 13:16:35 crc kubenswrapper[4778]: I0312 13:16:35.873377 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c3b72a7-abe5-4c24-b14b-04ae34b28816-serving-cert\") pod \"route-controller-manager-6d496c4846-wmjt6\" (UID: \"5c3b72a7-abe5-4c24-b14b-04ae34b28816\") " pod="openshift-route-controller-manager/route-controller-manager-6d496c4846-wmjt6" Mar 12 13:16:35 crc kubenswrapper[4778]: I0312 13:16:35.883298 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9gbx\" (UniqueName: \"kubernetes.io/projected/6e76627c-6dee-4814-bc83-672b4350b105-kube-api-access-v9gbx\") pod \"controller-manager-f8c4b6bf8-888fq\" (UID: \"6e76627c-6dee-4814-bc83-672b4350b105\") " pod="openshift-controller-manager/controller-manager-f8c4b6bf8-888fq" Mar 12 13:16:35 crc kubenswrapper[4778]: I0312 13:16:35.885956 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2rdp\" (UniqueName: \"kubernetes.io/projected/5c3b72a7-abe5-4c24-b14b-04ae34b28816-kube-api-access-n2rdp\") pod \"route-controller-manager-6d496c4846-wmjt6\" (UID: \"5c3b72a7-abe5-4c24-b14b-04ae34b28816\") " pod="openshift-route-controller-manager/route-controller-manager-6d496c4846-wmjt6" Mar 12 13:16:36 crc kubenswrapper[4778]: I0312 13:16:36.032175 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d496c4846-wmjt6" Mar 12 13:16:36 crc kubenswrapper[4778]: I0312 13:16:36.051630 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-f8c4b6bf8-888fq" Mar 12 13:16:36 crc kubenswrapper[4778]: I0312 13:16:36.260267 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03ce725f-d022-4fe6-9fd8-d61f4bec2ad6" path="/var/lib/kubelet/pods/03ce725f-d022-4fe6-9fd8-d61f4bec2ad6/volumes" Mar 12 13:16:36 crc kubenswrapper[4778]: I0312 13:16:36.261071 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81f2c27d-f47b-4bcb-81af-749dd8f6d053" path="/var/lib/kubelet/pods/81f2c27d-f47b-4bcb-81af-749dd8f6d053/volumes" Mar 12 13:16:36 crc kubenswrapper[4778]: I0312 13:16:36.464075 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d496c4846-wmjt6"] Mar 12 13:16:36 crc kubenswrapper[4778]: I0312 13:16:36.522634 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f8c4b6bf8-888fq"] Mar 12 13:16:36 crc kubenswrapper[4778]: W0312 13:16:36.526686 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e76627c_6dee_4814_bc83_672b4350b105.slice/crio-eb191a09f1ca4d1870d02902ac25b6f70f145d331329db420860af9c43f6f227 WatchSource:0}: Error finding container eb191a09f1ca4d1870d02902ac25b6f70f145d331329db420860af9c43f6f227: Status 404 returned error can't find the container with id eb191a09f1ca4d1870d02902ac25b6f70f145d331329db420860af9c43f6f227 Mar 12 13:16:36 crc kubenswrapper[4778]: I0312 13:16:36.831579 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f8c4b6bf8-888fq" event={"ID":"6e76627c-6dee-4814-bc83-672b4350b105","Type":"ContainerStarted","Data":"bc6fea73d079c9b176968cc59a0eb92caf386273ff3cf70afe4c4b8e1092c6e2"} Mar 12 13:16:36 crc kubenswrapper[4778]: I0312 13:16:36.831880 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-f8c4b6bf8-888fq" event={"ID":"6e76627c-6dee-4814-bc83-672b4350b105","Type":"ContainerStarted","Data":"eb191a09f1ca4d1870d02902ac25b6f70f145d331329db420860af9c43f6f227"} Mar 12 13:16:36 crc kubenswrapper[4778]: I0312 13:16:36.831895 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-f8c4b6bf8-888fq" Mar 12 13:16:36 crc kubenswrapper[4778]: I0312 13:16:36.834121 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d496c4846-wmjt6" event={"ID":"5c3b72a7-abe5-4c24-b14b-04ae34b28816","Type":"ContainerStarted","Data":"105ee0ee9503b4985a14e9c66aefc21057c6fad2e578ad711e30cd2d47d5bc36"} Mar 12 13:16:36 crc kubenswrapper[4778]: I0312 13:16:36.834174 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d496c4846-wmjt6" event={"ID":"5c3b72a7-abe5-4c24-b14b-04ae34b28816","Type":"ContainerStarted","Data":"7af6ed9ee3cdaae266963c648e0a3b0d6683dc449c0a4b975b488b7e3ca35367"} Mar 12 13:16:36 crc kubenswrapper[4778]: I0312 13:16:36.834246 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6d496c4846-wmjt6" Mar 12 13:16:36 crc kubenswrapper[4778]: I0312 13:16:36.844724 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-f8c4b6bf8-888fq" Mar 12 13:16:36 crc kubenswrapper[4778]: I0312 13:16:36.855489 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-f8c4b6bf8-888fq" podStartSLOduration=3.855470967 podStartE2EDuration="3.855470967s" podCreationTimestamp="2026-03-12 13:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-12 13:16:36.855152917 +0000 UTC m=+415.303848323" watchObservedRunningTime="2026-03-12 13:16:36.855470967 +0000 UTC m=+415.304166363" Mar 12 13:16:36 crc kubenswrapper[4778]: I0312 13:16:36.918219 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6d496c4846-wmjt6" podStartSLOduration=2.918177365 podStartE2EDuration="2.918177365s" podCreationTimestamp="2026-03-12 13:16:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:16:36.90298566 +0000 UTC m=+415.351681056" watchObservedRunningTime="2026-03-12 13:16:36.918177365 +0000 UTC m=+415.366872761" Mar 12 13:16:37 crc kubenswrapper[4778]: I0312 13:16:37.176507 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6d496c4846-wmjt6" Mar 12 13:16:38 crc kubenswrapper[4778]: I0312 13:16:38.402757 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-rt8dj"] Mar 12 13:16:38 crc kubenswrapper[4778]: I0312 13:16:38.403760 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-rt8dj" Mar 12 13:16:38 crc kubenswrapper[4778]: I0312 13:16:38.424195 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-rt8dj"] Mar 12 13:16:38 crc kubenswrapper[4778]: I0312 13:16:38.600262 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-rt8dj\" (UID: \"279e578a-c571-4915-994d-588cf930abe6\") " pod="openshift-image-registry/image-registry-66df7c8f76-rt8dj" Mar 12 13:16:38 crc kubenswrapper[4778]: I0312 13:16:38.600313 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/279e578a-c571-4915-994d-588cf930abe6-installation-pull-secrets\") pod \"image-registry-66df7c8f76-rt8dj\" (UID: \"279e578a-c571-4915-994d-588cf930abe6\") " pod="openshift-image-registry/image-registry-66df7c8f76-rt8dj" Mar 12 13:16:38 crc kubenswrapper[4778]: I0312 13:16:38.600421 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/279e578a-c571-4915-994d-588cf930abe6-registry-certificates\") pod \"image-registry-66df7c8f76-rt8dj\" (UID: \"279e578a-c571-4915-994d-588cf930abe6\") " pod="openshift-image-registry/image-registry-66df7c8f76-rt8dj" Mar 12 13:16:38 crc kubenswrapper[4778]: I0312 13:16:38.600520 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk88l\" (UniqueName: \"kubernetes.io/projected/279e578a-c571-4915-994d-588cf930abe6-kube-api-access-nk88l\") pod \"image-registry-66df7c8f76-rt8dj\" (UID: \"279e578a-c571-4915-994d-588cf930abe6\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-rt8dj" Mar 12 13:16:38 crc kubenswrapper[4778]: I0312 13:16:38.600576 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/279e578a-c571-4915-994d-588cf930abe6-ca-trust-extracted\") pod \"image-registry-66df7c8f76-rt8dj\" (UID: \"279e578a-c571-4915-994d-588cf930abe6\") " pod="openshift-image-registry/image-registry-66df7c8f76-rt8dj" Mar 12 13:16:38 crc kubenswrapper[4778]: I0312 13:16:38.600677 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/279e578a-c571-4915-994d-588cf930abe6-bound-sa-token\") pod \"image-registry-66df7c8f76-rt8dj\" (UID: \"279e578a-c571-4915-994d-588cf930abe6\") " pod="openshift-image-registry/image-registry-66df7c8f76-rt8dj" Mar 12 13:16:38 crc kubenswrapper[4778]: I0312 13:16:38.600798 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/279e578a-c571-4915-994d-588cf930abe6-trusted-ca\") pod \"image-registry-66df7c8f76-rt8dj\" (UID: \"279e578a-c571-4915-994d-588cf930abe6\") " pod="openshift-image-registry/image-registry-66df7c8f76-rt8dj" Mar 12 13:16:38 crc kubenswrapper[4778]: I0312 13:16:38.600901 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/279e578a-c571-4915-994d-588cf930abe6-registry-tls\") pod \"image-registry-66df7c8f76-rt8dj\" (UID: \"279e578a-c571-4915-994d-588cf930abe6\") " pod="openshift-image-registry/image-registry-66df7c8f76-rt8dj" Mar 12 13:16:38 crc kubenswrapper[4778]: I0312 13:16:38.622434 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-rt8dj\" (UID: \"279e578a-c571-4915-994d-588cf930abe6\") " pod="openshift-image-registry/image-registry-66df7c8f76-rt8dj" Mar 12 13:16:38 crc kubenswrapper[4778]: I0312 13:16:38.702385 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/279e578a-c571-4915-994d-588cf930abe6-ca-trust-extracted\") pod \"image-registry-66df7c8f76-rt8dj\" (UID: \"279e578a-c571-4915-994d-588cf930abe6\") " pod="openshift-image-registry/image-registry-66df7c8f76-rt8dj" Mar 12 13:16:38 crc kubenswrapper[4778]: I0312 13:16:38.702425 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/279e578a-c571-4915-994d-588cf930abe6-bound-sa-token\") pod \"image-registry-66df7c8f76-rt8dj\" (UID: \"279e578a-c571-4915-994d-588cf930abe6\") " pod="openshift-image-registry/image-registry-66df7c8f76-rt8dj" Mar 12 13:16:38 crc kubenswrapper[4778]: I0312 13:16:38.702448 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/279e578a-c571-4915-994d-588cf930abe6-trusted-ca\") pod \"image-registry-66df7c8f76-rt8dj\" (UID: \"279e578a-c571-4915-994d-588cf930abe6\") " pod="openshift-image-registry/image-registry-66df7c8f76-rt8dj" Mar 12 13:16:38 crc kubenswrapper[4778]: I0312 13:16:38.702473 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/279e578a-c571-4915-994d-588cf930abe6-registry-tls\") pod \"image-registry-66df7c8f76-rt8dj\" (UID: \"279e578a-c571-4915-994d-588cf930abe6\") " pod="openshift-image-registry/image-registry-66df7c8f76-rt8dj" Mar 12 13:16:38 crc kubenswrapper[4778]: I0312 13:16:38.702505 4778 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/279e578a-c571-4915-994d-588cf930abe6-installation-pull-secrets\") pod \"image-registry-66df7c8f76-rt8dj\" (UID: \"279e578a-c571-4915-994d-588cf930abe6\") " pod="openshift-image-registry/image-registry-66df7c8f76-rt8dj" Mar 12 13:16:38 crc kubenswrapper[4778]: I0312 13:16:38.702533 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/279e578a-c571-4915-994d-588cf930abe6-registry-certificates\") pod \"image-registry-66df7c8f76-rt8dj\" (UID: \"279e578a-c571-4915-994d-588cf930abe6\") " pod="openshift-image-registry/image-registry-66df7c8f76-rt8dj" Mar 12 13:16:38 crc kubenswrapper[4778]: I0312 13:16:38.702552 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nk88l\" (UniqueName: \"kubernetes.io/projected/279e578a-c571-4915-994d-588cf930abe6-kube-api-access-nk88l\") pod \"image-registry-66df7c8f76-rt8dj\" (UID: \"279e578a-c571-4915-994d-588cf930abe6\") " pod="openshift-image-registry/image-registry-66df7c8f76-rt8dj" Mar 12 13:16:38 crc kubenswrapper[4778]: I0312 13:16:38.703197 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/279e578a-c571-4915-994d-588cf930abe6-ca-trust-extracted\") pod \"image-registry-66df7c8f76-rt8dj\" (UID: \"279e578a-c571-4915-994d-588cf930abe6\") " pod="openshift-image-registry/image-registry-66df7c8f76-rt8dj" Mar 12 13:16:38 crc kubenswrapper[4778]: I0312 13:16:38.703611 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/279e578a-c571-4915-994d-588cf930abe6-trusted-ca\") pod \"image-registry-66df7c8f76-rt8dj\" (UID: \"279e578a-c571-4915-994d-588cf930abe6\") " pod="openshift-image-registry/image-registry-66df7c8f76-rt8dj" Mar 12 
13:16:38 crc kubenswrapper[4778]: I0312 13:16:38.703798 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/279e578a-c571-4915-994d-588cf930abe6-registry-certificates\") pod \"image-registry-66df7c8f76-rt8dj\" (UID: \"279e578a-c571-4915-994d-588cf930abe6\") " pod="openshift-image-registry/image-registry-66df7c8f76-rt8dj" Mar 12 13:16:38 crc kubenswrapper[4778]: I0312 13:16:38.708774 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/279e578a-c571-4915-994d-588cf930abe6-registry-tls\") pod \"image-registry-66df7c8f76-rt8dj\" (UID: \"279e578a-c571-4915-994d-588cf930abe6\") " pod="openshift-image-registry/image-registry-66df7c8f76-rt8dj" Mar 12 13:16:38 crc kubenswrapper[4778]: I0312 13:16:38.719886 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/279e578a-c571-4915-994d-588cf930abe6-installation-pull-secrets\") pod \"image-registry-66df7c8f76-rt8dj\" (UID: \"279e578a-c571-4915-994d-588cf930abe6\") " pod="openshift-image-registry/image-registry-66df7c8f76-rt8dj" Mar 12 13:16:38 crc kubenswrapper[4778]: I0312 13:16:38.720911 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nk88l\" (UniqueName: \"kubernetes.io/projected/279e578a-c571-4915-994d-588cf930abe6-kube-api-access-nk88l\") pod \"image-registry-66df7c8f76-rt8dj\" (UID: \"279e578a-c571-4915-994d-588cf930abe6\") " pod="openshift-image-registry/image-registry-66df7c8f76-rt8dj" Mar 12 13:16:38 crc kubenswrapper[4778]: I0312 13:16:38.721492 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/279e578a-c571-4915-994d-588cf930abe6-bound-sa-token\") pod \"image-registry-66df7c8f76-rt8dj\" (UID: \"279e578a-c571-4915-994d-588cf930abe6\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-rt8dj" Mar 12 13:16:39 crc kubenswrapper[4778]: I0312 13:16:39.051520 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-rt8dj" Mar 12 13:16:39 crc kubenswrapper[4778]: I0312 13:16:39.637818 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-rt8dj"] Mar 12 13:16:39 crc kubenswrapper[4778]: W0312 13:16:39.644591 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod279e578a_c571_4915_994d_588cf930abe6.slice/crio-1ee4a31ad528857f01145fbf2df7d6e11f61b0f976cb3844e2d01a2e372d66c3 WatchSource:0}: Error finding container 1ee4a31ad528857f01145fbf2df7d6e11f61b0f976cb3844e2d01a2e372d66c3: Status 404 returned error can't find the container with id 1ee4a31ad528857f01145fbf2df7d6e11f61b0f976cb3844e2d01a2e372d66c3 Mar 12 13:16:39 crc kubenswrapper[4778]: I0312 13:16:39.850639 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-rt8dj" event={"ID":"279e578a-c571-4915-994d-588cf930abe6","Type":"ContainerStarted","Data":"4e507aaa9f1e3645b83dbe1661a940ee2905ea4f684ba9048e16d660a87d550b"} Mar 12 13:16:39 crc kubenswrapper[4778]: I0312 13:16:39.850686 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-rt8dj" event={"ID":"279e578a-c571-4915-994d-588cf930abe6","Type":"ContainerStarted","Data":"1ee4a31ad528857f01145fbf2df7d6e11f61b0f976cb3844e2d01a2e372d66c3"} Mar 12 13:16:39 crc kubenswrapper[4778]: I0312 13:16:39.851385 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-rt8dj" Mar 12 13:16:39 crc kubenswrapper[4778]: I0312 13:16:39.872246 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-image-registry/image-registry-66df7c8f76-rt8dj" podStartSLOduration=1.872223076 podStartE2EDuration="1.872223076s" podCreationTimestamp="2026-03-12 13:16:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:16:39.868028278 +0000 UTC m=+418.316723684" watchObservedRunningTime="2026-03-12 13:16:39.872223076 +0000 UTC m=+418.320918472" Mar 12 13:16:58 crc kubenswrapper[4778]: I0312 13:16:58.557600 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 13:16:58 crc kubenswrapper[4778]: I0312 13:16:58.558289 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 13:16:58 crc kubenswrapper[4778]: I0312 13:16:58.558364 4778 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" Mar 12 13:16:58 crc kubenswrapper[4778]: I0312 13:16:58.559133 4778 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dcabd48eda797c052967d086d455193bf30a1f05151385a52352d733c58148f7"} pod="openshift-machine-config-operator/machine-config-daemon-2qx88" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 13:16:58 crc kubenswrapper[4778]: I0312 13:16:58.559237 4778 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" containerID="cri-o://dcabd48eda797c052967d086d455193bf30a1f05151385a52352d733c58148f7" gracePeriod=600 Mar 12 13:16:58 crc kubenswrapper[4778]: I0312 13:16:58.969719 4778 generic.go:334] "Generic (PLEG): container finished" podID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerID="dcabd48eda797c052967d086d455193bf30a1f05151385a52352d733c58148f7" exitCode=0 Mar 12 13:16:58 crc kubenswrapper[4778]: I0312 13:16:58.970004 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" event={"ID":"24438fc6-dab0-4a9e-8b97-2532da76d9cd","Type":"ContainerDied","Data":"dcabd48eda797c052967d086d455193bf30a1f05151385a52352d733c58148f7"} Mar 12 13:16:58 crc kubenswrapper[4778]: I0312 13:16:58.970031 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" event={"ID":"24438fc6-dab0-4a9e-8b97-2532da76d9cd","Type":"ContainerStarted","Data":"e50690e6aff1fd408e6201d1eee1240e692ce04bc21873dbbe85a5f2d638d704"} Mar 12 13:16:58 crc kubenswrapper[4778]: I0312 13:16:58.970046 4778 scope.go:117] "RemoveContainer" containerID="14daba92184fca91c6930d5b3e821f88408e0fd40a7793f2d70f82df7c9444ce" Mar 12 13:16:59 crc kubenswrapper[4778]: I0312 13:16:59.059566 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-rt8dj" Mar 12 13:16:59 crc kubenswrapper[4778]: I0312 13:16:59.125151 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fxrx4"] Mar 12 13:17:01 crc kubenswrapper[4778]: I0312 13:17:01.122024 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qx9d8"] Mar 12 13:17:01 crc kubenswrapper[4778]: I0312 13:17:01.122680 4778 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qx9d8" podUID="651601bd-18fe-4ca1-9c61-481ca568d022" containerName="registry-server" containerID="cri-o://13189da41e0fb30fa7cca9718222038a2b578d40c4f21c5e350b74e753b85587" gracePeriod=30 Mar 12 13:17:01 crc kubenswrapper[4778]: I0312 13:17:01.128427 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-khr6h"] Mar 12 13:17:01 crc kubenswrapper[4778]: I0312 13:17:01.128685 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-khr6h" podUID="1d185732-cd6b-44c6-b4db-ee9ade00c683" containerName="registry-server" containerID="cri-o://b352e6584b478e7228a408cc5d6c8b18473e75a0de7be819c32ae9b98a707a4e" gracePeriod=30 Mar 12 13:17:01 crc kubenswrapper[4778]: I0312 13:17:01.133283 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2wqm5"] Mar 12 13:17:01 crc kubenswrapper[4778]: I0312 13:17:01.133511 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-2wqm5" podUID="24f4aaf5-c17b-4cd8-9284-6df37f1c2f2d" containerName="marketplace-operator" containerID="cri-o://013c13acbd136a9ae3c6c39b9470a59aa4ab705637939d6af761af9e92e81b9c" gracePeriod=30 Mar 12 13:17:01 crc kubenswrapper[4778]: I0312 13:17:01.144668 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hvmk8"] Mar 12 13:17:01 crc kubenswrapper[4778]: I0312 13:17:01.145346 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hvmk8" Mar 12 13:17:01 crc kubenswrapper[4778]: I0312 13:17:01.160675 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8xksl"] Mar 12 13:17:01 crc kubenswrapper[4778]: I0312 13:17:01.160997 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8xksl" podUID="de4557b4-7957-47a0-8c42-845be1fa0f32" containerName="registry-server" containerID="cri-o://1dc3137ddc227e6024fccc0afbe6f1d93623b9e53c63a937c6719203e66ee592" gracePeriod=30 Mar 12 13:17:01 crc kubenswrapper[4778]: I0312 13:17:01.172222 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hvmk8"] Mar 12 13:17:01 crc kubenswrapper[4778]: I0312 13:17:01.182327 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5s5vs"] Mar 12 13:17:01 crc kubenswrapper[4778]: I0312 13:17:01.182616 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5s5vs" podUID="f438f2a3-60c0-4554-a49b-030545f8139c" containerName="registry-server" containerID="cri-o://06e3e529ea6d479f93a4c0f8dc62611d7db0ca000158fd5d08aba4b4784ec2bd" gracePeriod=30 Mar 12 13:17:01 crc kubenswrapper[4778]: I0312 13:17:01.290837 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3b062c23-5acd-430d-aa6c-24b48a725594-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hvmk8\" (UID: \"3b062c23-5acd-430d-aa6c-24b48a725594\") " pod="openshift-marketplace/marketplace-operator-79b997595-hvmk8" Mar 12 13:17:01 crc kubenswrapper[4778]: I0312 13:17:01.290911 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2m4b\" 
(UniqueName: \"kubernetes.io/projected/3b062c23-5acd-430d-aa6c-24b48a725594-kube-api-access-m2m4b\") pod \"marketplace-operator-79b997595-hvmk8\" (UID: \"3b062c23-5acd-430d-aa6c-24b48a725594\") " pod="openshift-marketplace/marketplace-operator-79b997595-hvmk8" Mar 12 13:17:01 crc kubenswrapper[4778]: I0312 13:17:01.290945 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3b062c23-5acd-430d-aa6c-24b48a725594-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hvmk8\" (UID: \"3b062c23-5acd-430d-aa6c-24b48a725594\") " pod="openshift-marketplace/marketplace-operator-79b997595-hvmk8" Mar 12 13:17:01 crc kubenswrapper[4778]: I0312 13:17:01.392472 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3b062c23-5acd-430d-aa6c-24b48a725594-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hvmk8\" (UID: \"3b062c23-5acd-430d-aa6c-24b48a725594\") " pod="openshift-marketplace/marketplace-operator-79b997595-hvmk8" Mar 12 13:17:01 crc kubenswrapper[4778]: I0312 13:17:01.392854 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2m4b\" (UniqueName: \"kubernetes.io/projected/3b062c23-5acd-430d-aa6c-24b48a725594-kube-api-access-m2m4b\") pod \"marketplace-operator-79b997595-hvmk8\" (UID: \"3b062c23-5acd-430d-aa6c-24b48a725594\") " pod="openshift-marketplace/marketplace-operator-79b997595-hvmk8" Mar 12 13:17:01 crc kubenswrapper[4778]: I0312 13:17:01.393231 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3b062c23-5acd-430d-aa6c-24b48a725594-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hvmk8\" (UID: \"3b062c23-5acd-430d-aa6c-24b48a725594\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-hvmk8" Mar 12 13:17:01 crc kubenswrapper[4778]: I0312 13:17:01.393636 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3b062c23-5acd-430d-aa6c-24b48a725594-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hvmk8\" (UID: \"3b062c23-5acd-430d-aa6c-24b48a725594\") " pod="openshift-marketplace/marketplace-operator-79b997595-hvmk8" Mar 12 13:17:01 crc kubenswrapper[4778]: I0312 13:17:01.408340 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3b062c23-5acd-430d-aa6c-24b48a725594-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hvmk8\" (UID: \"3b062c23-5acd-430d-aa6c-24b48a725594\") " pod="openshift-marketplace/marketplace-operator-79b997595-hvmk8" Mar 12 13:17:01 crc kubenswrapper[4778]: I0312 13:17:01.412890 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2m4b\" (UniqueName: \"kubernetes.io/projected/3b062c23-5acd-430d-aa6c-24b48a725594-kube-api-access-m2m4b\") pod \"marketplace-operator-79b997595-hvmk8\" (UID: \"3b062c23-5acd-430d-aa6c-24b48a725594\") " pod="openshift-marketplace/marketplace-operator-79b997595-hvmk8" Mar 12 13:17:01 crc kubenswrapper[4778]: I0312 13:17:01.481388 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hvmk8" Mar 12 13:17:01 crc kubenswrapper[4778]: I0312 13:17:01.701388 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8xksl" Mar 12 13:17:01 crc kubenswrapper[4778]: I0312 13:17:01.781323 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5s5vs" Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:01.806013 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de4557b4-7957-47a0-8c42-845be1fa0f32-utilities\") pod \"de4557b4-7957-47a0-8c42-845be1fa0f32\" (UID: \"de4557b4-7957-47a0-8c42-845be1fa0f32\") " Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:01.806116 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de4557b4-7957-47a0-8c42-845be1fa0f32-catalog-content\") pod \"de4557b4-7957-47a0-8c42-845be1fa0f32\" (UID: \"de4557b4-7957-47a0-8c42-845be1fa0f32\") " Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:01.806215 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kch8z\" (UniqueName: \"kubernetes.io/projected/de4557b4-7957-47a0-8c42-845be1fa0f32-kube-api-access-kch8z\") pod \"de4557b4-7957-47a0-8c42-845be1fa0f32\" (UID: \"de4557b4-7957-47a0-8c42-845be1fa0f32\") " Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:01.807380 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de4557b4-7957-47a0-8c42-845be1fa0f32-utilities" (OuterVolumeSpecName: "utilities") pod "de4557b4-7957-47a0-8c42-845be1fa0f32" (UID: "de4557b4-7957-47a0-8c42-845be1fa0f32"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:01.814389 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de4557b4-7957-47a0-8c42-845be1fa0f32-kube-api-access-kch8z" (OuterVolumeSpecName: "kube-api-access-kch8z") pod "de4557b4-7957-47a0-8c42-845be1fa0f32" (UID: "de4557b4-7957-47a0-8c42-845be1fa0f32"). InnerVolumeSpecName "kube-api-access-kch8z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:01.835539 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de4557b4-7957-47a0-8c42-845be1fa0f32-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "de4557b4-7957-47a0-8c42-845be1fa0f32" (UID: "de4557b4-7957-47a0-8c42-845be1fa0f32"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:01.907900 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f438f2a3-60c0-4554-a49b-030545f8139c-utilities\") pod \"f438f2a3-60c0-4554-a49b-030545f8139c\" (UID: \"f438f2a3-60c0-4554-a49b-030545f8139c\") " Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:01.908071 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f438f2a3-60c0-4554-a49b-030545f8139c-catalog-content\") pod \"f438f2a3-60c0-4554-a49b-030545f8139c\" (UID: \"f438f2a3-60c0-4554-a49b-030545f8139c\") " Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:01.908826 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f438f2a3-60c0-4554-a49b-030545f8139c-utilities" (OuterVolumeSpecName: "utilities") pod "f438f2a3-60c0-4554-a49b-030545f8139c" (UID: "f438f2a3-60c0-4554-a49b-030545f8139c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:01.912292 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f438f2a3-60c0-4554-a49b-030545f8139c-kube-api-access-mpfz2" (OuterVolumeSpecName: "kube-api-access-mpfz2") pod "f438f2a3-60c0-4554-a49b-030545f8139c" (UID: "f438f2a3-60c0-4554-a49b-030545f8139c"). 
InnerVolumeSpecName "kube-api-access-mpfz2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:01.915808 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hvmk8"] Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:01.908100 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpfz2\" (UniqueName: \"kubernetes.io/projected/f438f2a3-60c0-4554-a49b-030545f8139c-kube-api-access-mpfz2\") pod \"f438f2a3-60c0-4554-a49b-030545f8139c\" (UID: \"f438f2a3-60c0-4554-a49b-030545f8139c\") " Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:01.917720 4778 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de4557b4-7957-47a0-8c42-845be1fa0f32-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:01.917735 4778 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de4557b4-7957-47a0-8c42-845be1fa0f32-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:01.917745 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpfz2\" (UniqueName: \"kubernetes.io/projected/f438f2a3-60c0-4554-a49b-030545f8139c-kube-api-access-mpfz2\") on node \"crc\" DevicePath \"\"" Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:01.917754 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kch8z\" (UniqueName: \"kubernetes.io/projected/de4557b4-7957-47a0-8c42-845be1fa0f32-kube-api-access-kch8z\") on node \"crc\" DevicePath \"\"" Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:01.917762 4778 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f438f2a3-60c0-4554-a49b-030545f8139c-utilities\") on node \"crc\" DevicePath 
\"\""
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:01.998676 4778 generic.go:334] "Generic (PLEG): container finished" podID="de4557b4-7957-47a0-8c42-845be1fa0f32" containerID="1dc3137ddc227e6024fccc0afbe6f1d93623b9e53c63a937c6719203e66ee592" exitCode=0
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:01.998748 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8xksl" event={"ID":"de4557b4-7957-47a0-8c42-845be1fa0f32","Type":"ContainerDied","Data":"1dc3137ddc227e6024fccc0afbe6f1d93623b9e53c63a937c6719203e66ee592"}
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:01.998780 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8xksl" event={"ID":"de4557b4-7957-47a0-8c42-845be1fa0f32","Type":"ContainerDied","Data":"775a67dbf14a4aa00ee320f14ee688f2689c34e66ee23b796f0166af1618f55f"}
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:01.998800 4778 scope.go:117] "RemoveContainer" containerID="1dc3137ddc227e6024fccc0afbe6f1d93623b9e53c63a937c6719203e66ee592"
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:01.998923 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8xksl"
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.010464 4778 generic.go:334] "Generic (PLEG): container finished" podID="f438f2a3-60c0-4554-a49b-030545f8139c" containerID="06e3e529ea6d479f93a4c0f8dc62611d7db0ca000158fd5d08aba4b4784ec2bd" exitCode=0
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.010535 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5s5vs" event={"ID":"f438f2a3-60c0-4554-a49b-030545f8139c","Type":"ContainerDied","Data":"06e3e529ea6d479f93a4c0f8dc62611d7db0ca000158fd5d08aba4b4784ec2bd"}
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.010564 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5s5vs" event={"ID":"f438f2a3-60c0-4554-a49b-030545f8139c","Type":"ContainerDied","Data":"c5e7e785f566d6c012fb07b0778c4b6c15691ef04836f8607417e605e9c6feb5"}
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.010564 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5s5vs"
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.013802 4778 generic.go:334] "Generic (PLEG): container finished" podID="1d185732-cd6b-44c6-b4db-ee9ade00c683" containerID="b352e6584b478e7228a408cc5d6c8b18473e75a0de7be819c32ae9b98a707a4e" exitCode=0
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.013925 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-khr6h" event={"ID":"1d185732-cd6b-44c6-b4db-ee9ade00c683","Type":"ContainerDied","Data":"b352e6584b478e7228a408cc5d6c8b18473e75a0de7be819c32ae9b98a707a4e"}
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.015230 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hvmk8" event={"ID":"3b062c23-5acd-430d-aa6c-24b48a725594","Type":"ContainerStarted","Data":"fbcdce17a0d9bdb5efce73cec35011a42fcb30d115a6cbf1a834fe9c4ac95c31"}
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.017500 4778 generic.go:334] "Generic (PLEG): container finished" podID="24f4aaf5-c17b-4cd8-9284-6df37f1c2f2d" containerID="013c13acbd136a9ae3c6c39b9470a59aa4ab705637939d6af761af9e92e81b9c" exitCode=0
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.017583 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2wqm5" event={"ID":"24f4aaf5-c17b-4cd8-9284-6df37f1c2f2d","Type":"ContainerDied","Data":"013c13acbd136a9ae3c6c39b9470a59aa4ab705637939d6af761af9e92e81b9c"}
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.023327 4778 generic.go:334] "Generic (PLEG): container finished" podID="651601bd-18fe-4ca1-9c61-481ca568d022" containerID="13189da41e0fb30fa7cca9718222038a2b578d40c4f21c5e350b74e753b85587" exitCode=0
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.023532 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qx9d8" event={"ID":"651601bd-18fe-4ca1-9c61-481ca568d022","Type":"ContainerDied","Data":"13189da41e0fb30fa7cca9718222038a2b578d40c4f21c5e350b74e753b85587"}
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.042560 4778 scope.go:117] "RemoveContainer" containerID="718ec5d1f6755df76f8300b916ef0eb0663019d9610ddce44e4b950ef7dec3ba"
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.042741 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8xksl"]
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.045098 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8xksl"]
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.058593 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f438f2a3-60c0-4554-a49b-030545f8139c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f438f2a3-60c0-4554-a49b-030545f8139c" (UID: "f438f2a3-60c0-4554-a49b-030545f8139c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.086019 4778 scope.go:117] "RemoveContainer" containerID="167b98bcb75be92dcb64515712bdd5c31feb59c13d9a61d37d29e56c03f4a252"
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.107411 4778 scope.go:117] "RemoveContainer" containerID="1dc3137ddc227e6024fccc0afbe6f1d93623b9e53c63a937c6719203e66ee592"
Mar 12 13:17:02 crc kubenswrapper[4778]: E0312 13:17:02.107957 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1dc3137ddc227e6024fccc0afbe6f1d93623b9e53c63a937c6719203e66ee592\": container with ID starting with 1dc3137ddc227e6024fccc0afbe6f1d93623b9e53c63a937c6719203e66ee592 not found: ID does not exist" containerID="1dc3137ddc227e6024fccc0afbe6f1d93623b9e53c63a937c6719203e66ee592"
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.107988 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1dc3137ddc227e6024fccc0afbe6f1d93623b9e53c63a937c6719203e66ee592"} err="failed to get container status \"1dc3137ddc227e6024fccc0afbe6f1d93623b9e53c63a937c6719203e66ee592\": rpc error: code = NotFound desc = could not find container \"1dc3137ddc227e6024fccc0afbe6f1d93623b9e53c63a937c6719203e66ee592\": container with ID starting with 1dc3137ddc227e6024fccc0afbe6f1d93623b9e53c63a937c6719203e66ee592 not found: ID does not exist"
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.108012 4778 scope.go:117] "RemoveContainer" containerID="718ec5d1f6755df76f8300b916ef0eb0663019d9610ddce44e4b950ef7dec3ba"
Mar 12 13:17:02 crc kubenswrapper[4778]: E0312 13:17:02.108346 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"718ec5d1f6755df76f8300b916ef0eb0663019d9610ddce44e4b950ef7dec3ba\": container with ID starting with 718ec5d1f6755df76f8300b916ef0eb0663019d9610ddce44e4b950ef7dec3ba not found: ID does not exist" containerID="718ec5d1f6755df76f8300b916ef0eb0663019d9610ddce44e4b950ef7dec3ba"
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.108371 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"718ec5d1f6755df76f8300b916ef0eb0663019d9610ddce44e4b950ef7dec3ba"} err="failed to get container status \"718ec5d1f6755df76f8300b916ef0eb0663019d9610ddce44e4b950ef7dec3ba\": rpc error: code = NotFound desc = could not find container \"718ec5d1f6755df76f8300b916ef0eb0663019d9610ddce44e4b950ef7dec3ba\": container with ID starting with 718ec5d1f6755df76f8300b916ef0eb0663019d9610ddce44e4b950ef7dec3ba not found: ID does not exist"
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.108384 4778 scope.go:117] "RemoveContainer" containerID="167b98bcb75be92dcb64515712bdd5c31feb59c13d9a61d37d29e56c03f4a252"
Mar 12 13:17:02 crc kubenswrapper[4778]: E0312 13:17:02.108634 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"167b98bcb75be92dcb64515712bdd5c31feb59c13d9a61d37d29e56c03f4a252\": container with ID starting with 167b98bcb75be92dcb64515712bdd5c31feb59c13d9a61d37d29e56c03f4a252 not found: ID does not exist" containerID="167b98bcb75be92dcb64515712bdd5c31feb59c13d9a61d37d29e56c03f4a252"
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.108653 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"167b98bcb75be92dcb64515712bdd5c31feb59c13d9a61d37d29e56c03f4a252"} err="failed to get container status \"167b98bcb75be92dcb64515712bdd5c31feb59c13d9a61d37d29e56c03f4a252\": rpc error: code = NotFound desc = could not find container \"167b98bcb75be92dcb64515712bdd5c31feb59c13d9a61d37d29e56c03f4a252\": container with ID starting with 167b98bcb75be92dcb64515712bdd5c31feb59c13d9a61d37d29e56c03f4a252 not found: ID does not exist"
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.108667 4778 scope.go:117] "RemoveContainer" containerID="06e3e529ea6d479f93a4c0f8dc62611d7db0ca000158fd5d08aba4b4784ec2bd"
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.119868 4778 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f438f2a3-60c0-4554-a49b-030545f8139c-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.122884 4778 scope.go:117] "RemoveContainer" containerID="ac4be4ba9c0f65056e92751bc6e83a1871b4710d28a0b4f32b544fe6c70e1354"
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.162702 4778 scope.go:117] "RemoveContainer" containerID="9727ee6f8e8c78a7a69962ee912839b2519b88f461321e8f43bb35e450713d1e"
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.186592 4778 scope.go:117] "RemoveContainer" containerID="06e3e529ea6d479f93a4c0f8dc62611d7db0ca000158fd5d08aba4b4784ec2bd"
Mar 12 13:17:02 crc kubenswrapper[4778]: E0312 13:17:02.186979 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06e3e529ea6d479f93a4c0f8dc62611d7db0ca000158fd5d08aba4b4784ec2bd\": container with ID starting with 06e3e529ea6d479f93a4c0f8dc62611d7db0ca000158fd5d08aba4b4784ec2bd not found: ID does not exist" containerID="06e3e529ea6d479f93a4c0f8dc62611d7db0ca000158fd5d08aba4b4784ec2bd"
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.187017 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06e3e529ea6d479f93a4c0f8dc62611d7db0ca000158fd5d08aba4b4784ec2bd"} err="failed to get container status \"06e3e529ea6d479f93a4c0f8dc62611d7db0ca000158fd5d08aba4b4784ec2bd\": rpc error: code = NotFound desc = could not find container \"06e3e529ea6d479f93a4c0f8dc62611d7db0ca000158fd5d08aba4b4784ec2bd\": container with ID starting with 06e3e529ea6d479f93a4c0f8dc62611d7db0ca000158fd5d08aba4b4784ec2bd not found: ID does not exist"
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.187042 4778 scope.go:117] "RemoveContainer" containerID="ac4be4ba9c0f65056e92751bc6e83a1871b4710d28a0b4f32b544fe6c70e1354"
Mar 12 13:17:02 crc kubenswrapper[4778]: E0312 13:17:02.187423 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac4be4ba9c0f65056e92751bc6e83a1871b4710d28a0b4f32b544fe6c70e1354\": container with ID starting with ac4be4ba9c0f65056e92751bc6e83a1871b4710d28a0b4f32b544fe6c70e1354 not found: ID does not exist" containerID="ac4be4ba9c0f65056e92751bc6e83a1871b4710d28a0b4f32b544fe6c70e1354"
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.187438 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac4be4ba9c0f65056e92751bc6e83a1871b4710d28a0b4f32b544fe6c70e1354"} err="failed to get container status \"ac4be4ba9c0f65056e92751bc6e83a1871b4710d28a0b4f32b544fe6c70e1354\": rpc error: code = NotFound desc = could not find container \"ac4be4ba9c0f65056e92751bc6e83a1871b4710d28a0b4f32b544fe6c70e1354\": container with ID starting with ac4be4ba9c0f65056e92751bc6e83a1871b4710d28a0b4f32b544fe6c70e1354 not found: ID does not exist"
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.187450 4778 scope.go:117] "RemoveContainer" containerID="9727ee6f8e8c78a7a69962ee912839b2519b88f461321e8f43bb35e450713d1e"
Mar 12 13:17:02 crc kubenswrapper[4778]: E0312 13:17:02.187776 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9727ee6f8e8c78a7a69962ee912839b2519b88f461321e8f43bb35e450713d1e\": container with ID starting with 9727ee6f8e8c78a7a69962ee912839b2519b88f461321e8f43bb35e450713d1e not found: ID does not exist" containerID="9727ee6f8e8c78a7a69962ee912839b2519b88f461321e8f43bb35e450713d1e"
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.187791 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9727ee6f8e8c78a7a69962ee912839b2519b88f461321e8f43bb35e450713d1e"} err="failed to get container status \"9727ee6f8e8c78a7a69962ee912839b2519b88f461321e8f43bb35e450713d1e\": rpc error: code = NotFound desc = could not find container \"9727ee6f8e8c78a7a69962ee912839b2519b88f461321e8f43bb35e450713d1e\": container with ID starting with 9727ee6f8e8c78a7a69962ee912839b2519b88f461321e8f43bb35e450713d1e not found: ID does not exist"
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.260049 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de4557b4-7957-47a0-8c42-845be1fa0f32" path="/var/lib/kubelet/pods/de4557b4-7957-47a0-8c42-845be1fa0f32/volumes"
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.367209 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2wqm5"
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.382602 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5s5vs"]
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.382896 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qx9d8"
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.386595 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5s5vs"]
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.391520 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-khr6h"
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.524100 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d185732-cd6b-44c6-b4db-ee9ade00c683-utilities\") pod \"1d185732-cd6b-44c6-b4db-ee9ade00c683\" (UID: \"1d185732-cd6b-44c6-b4db-ee9ade00c683\") "
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.524550 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/24f4aaf5-c17b-4cd8-9284-6df37f1c2f2d-marketplace-operator-metrics\") pod \"24f4aaf5-c17b-4cd8-9284-6df37f1c2f2d\" (UID: \"24f4aaf5-c17b-4cd8-9284-6df37f1c2f2d\") "
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.524584 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6mx5\" (UniqueName: \"kubernetes.io/projected/651601bd-18fe-4ca1-9c61-481ca568d022-kube-api-access-n6mx5\") pod \"651601bd-18fe-4ca1-9c61-481ca568d022\" (UID: \"651601bd-18fe-4ca1-9c61-481ca568d022\") "
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.524617 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzdp9\" (UniqueName: \"kubernetes.io/projected/1d185732-cd6b-44c6-b4db-ee9ade00c683-kube-api-access-zzdp9\") pod \"1d185732-cd6b-44c6-b4db-ee9ade00c683\" (UID: \"1d185732-cd6b-44c6-b4db-ee9ade00c683\") "
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.524645 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/24f4aaf5-c17b-4cd8-9284-6df37f1c2f2d-marketplace-trusted-ca\") pod \"24f4aaf5-c17b-4cd8-9284-6df37f1c2f2d\" (UID: \"24f4aaf5-c17b-4cd8-9284-6df37f1c2f2d\") "
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.524663 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/651601bd-18fe-4ca1-9c61-481ca568d022-catalog-content\") pod \"651601bd-18fe-4ca1-9c61-481ca568d022\" (UID: \"651601bd-18fe-4ca1-9c61-481ca568d022\") "
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.524702 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/651601bd-18fe-4ca1-9c61-481ca568d022-utilities\") pod \"651601bd-18fe-4ca1-9c61-481ca568d022\" (UID: \"651601bd-18fe-4ca1-9c61-481ca568d022\") "
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.524733 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmb4r\" (UniqueName: \"kubernetes.io/projected/24f4aaf5-c17b-4cd8-9284-6df37f1c2f2d-kube-api-access-pmb4r\") pod \"24f4aaf5-c17b-4cd8-9284-6df37f1c2f2d\" (UID: \"24f4aaf5-c17b-4cd8-9284-6df37f1c2f2d\") "
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.524760 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d185732-cd6b-44c6-b4db-ee9ade00c683-catalog-content\") pod \"1d185732-cd6b-44c6-b4db-ee9ade00c683\" (UID: \"1d185732-cd6b-44c6-b4db-ee9ade00c683\") "
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.525214 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d185732-cd6b-44c6-b4db-ee9ade00c683-utilities" (OuterVolumeSpecName: "utilities") pod "1d185732-cd6b-44c6-b4db-ee9ade00c683" (UID: "1d185732-cd6b-44c6-b4db-ee9ade00c683"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.526127 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24f4aaf5-c17b-4cd8-9284-6df37f1c2f2d-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "24f4aaf5-c17b-4cd8-9284-6df37f1c2f2d" (UID: "24f4aaf5-c17b-4cd8-9284-6df37f1c2f2d"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.526688 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/651601bd-18fe-4ca1-9c61-481ca568d022-utilities" (OuterVolumeSpecName: "utilities") pod "651601bd-18fe-4ca1-9c61-481ca568d022" (UID: "651601bd-18fe-4ca1-9c61-481ca568d022"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.527326 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/651601bd-18fe-4ca1-9c61-481ca568d022-kube-api-access-n6mx5" (OuterVolumeSpecName: "kube-api-access-n6mx5") pod "651601bd-18fe-4ca1-9c61-481ca568d022" (UID: "651601bd-18fe-4ca1-9c61-481ca568d022"). InnerVolumeSpecName "kube-api-access-n6mx5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.528616 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24f4aaf5-c17b-4cd8-9284-6df37f1c2f2d-kube-api-access-pmb4r" (OuterVolumeSpecName: "kube-api-access-pmb4r") pod "24f4aaf5-c17b-4cd8-9284-6df37f1c2f2d" (UID: "24f4aaf5-c17b-4cd8-9284-6df37f1c2f2d"). InnerVolumeSpecName "kube-api-access-pmb4r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.528661 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d185732-cd6b-44c6-b4db-ee9ade00c683-kube-api-access-zzdp9" (OuterVolumeSpecName: "kube-api-access-zzdp9") pod "1d185732-cd6b-44c6-b4db-ee9ade00c683" (UID: "1d185732-cd6b-44c6-b4db-ee9ade00c683"). InnerVolumeSpecName "kube-api-access-zzdp9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.529444 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24f4aaf5-c17b-4cd8-9284-6df37f1c2f2d-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "24f4aaf5-c17b-4cd8-9284-6df37f1c2f2d" (UID: "24f4aaf5-c17b-4cd8-9284-6df37f1c2f2d"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.587232 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/651601bd-18fe-4ca1-9c61-481ca568d022-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "651601bd-18fe-4ca1-9c61-481ca568d022" (UID: "651601bd-18fe-4ca1-9c61-481ca568d022"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.599439 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d185732-cd6b-44c6-b4db-ee9ade00c683-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d185732-cd6b-44c6-b4db-ee9ade00c683" (UID: "1d185732-cd6b-44c6-b4db-ee9ade00c683"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.626142 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmb4r\" (UniqueName: \"kubernetes.io/projected/24f4aaf5-c17b-4cd8-9284-6df37f1c2f2d-kube-api-access-pmb4r\") on node \"crc\" DevicePath \"\""
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.626176 4778 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d185732-cd6b-44c6-b4db-ee9ade00c683-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.626197 4778 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d185732-cd6b-44c6-b4db-ee9ade00c683-utilities\") on node \"crc\" DevicePath \"\""
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.626206 4778 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/24f4aaf5-c17b-4cd8-9284-6df37f1c2f2d-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.626216 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6mx5\" (UniqueName: \"kubernetes.io/projected/651601bd-18fe-4ca1-9c61-481ca568d022-kube-api-access-n6mx5\") on node \"crc\" DevicePath \"\""
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.626229 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzdp9\" (UniqueName: \"kubernetes.io/projected/1d185732-cd6b-44c6-b4db-ee9ade00c683-kube-api-access-zzdp9\") on node \"crc\" DevicePath \"\""
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.626241 4778 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/24f4aaf5-c17b-4cd8-9284-6df37f1c2f2d-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.626252 4778 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/651601bd-18fe-4ca1-9c61-481ca568d022-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 12 13:17:02 crc kubenswrapper[4778]: I0312 13:17:02.626259 4778 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/651601bd-18fe-4ca1-9c61-481ca568d022-utilities\") on node \"crc\" DevicePath \"\""
Mar 12 13:17:03 crc kubenswrapper[4778]: I0312 13:17:03.031901 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qx9d8"
Mar 12 13:17:03 crc kubenswrapper[4778]: I0312 13:17:03.031919 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qx9d8" event={"ID":"651601bd-18fe-4ca1-9c61-481ca568d022","Type":"ContainerDied","Data":"592ed663fa0a363547ba9675a7740b1982ac31820675fa1bc6b541164ee13dff"}
Mar 12 13:17:03 crc kubenswrapper[4778]: I0312 13:17:03.032072 4778 scope.go:117] "RemoveContainer" containerID="13189da41e0fb30fa7cca9718222038a2b578d40c4f21c5e350b74e753b85587"
Mar 12 13:17:03 crc kubenswrapper[4778]: I0312 13:17:03.038819 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-khr6h" event={"ID":"1d185732-cd6b-44c6-b4db-ee9ade00c683","Type":"ContainerDied","Data":"f4257f2b5ae0b8d1695cb20eed3d7af4ca3c14b5f906e52fd4e46f8237158ff5"}
Mar 12 13:17:03 crc kubenswrapper[4778]: I0312 13:17:03.039039 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-khr6h"
Mar 12 13:17:03 crc kubenswrapper[4778]: I0312 13:17:03.040765 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hvmk8" event={"ID":"3b062c23-5acd-430d-aa6c-24b48a725594","Type":"ContainerStarted","Data":"d197f824483b8e30c9baf11e1ffc2173dbd46d1ed31a6528325bfce5da893206"}
Mar 12 13:17:03 crc kubenswrapper[4778]: I0312 13:17:03.040984 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-hvmk8"
Mar 12 13:17:03 crc kubenswrapper[4778]: I0312 13:17:03.043438 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2wqm5" event={"ID":"24f4aaf5-c17b-4cd8-9284-6df37f1c2f2d","Type":"ContainerDied","Data":"9ea9ce91a5458d09f7e543bf678a01cfeb2e8462d6860a8c5523bea49359f807"}
Mar 12 13:17:03 crc kubenswrapper[4778]: I0312 13:17:03.043567 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2wqm5"
Mar 12 13:17:03 crc kubenswrapper[4778]: I0312 13:17:03.044259 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-hvmk8"
Mar 12 13:17:03 crc kubenswrapper[4778]: I0312 13:17:03.067614 4778 scope.go:117] "RemoveContainer" containerID="777dcb7d13b3c9f17ff760e883a8a2c8d277b3c6622f9924b38301e80f9b85e9"
Mar 12 13:17:03 crc kubenswrapper[4778]: I0312 13:17:03.070057 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-hvmk8" podStartSLOduration=2.070018221 podStartE2EDuration="2.070018221s" podCreationTimestamp="2026-03-12 13:17:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:17:03.061955474 +0000 UTC m=+441.510650900" watchObservedRunningTime="2026-03-12 13:17:03.070018221 +0000 UTC m=+441.518713617"
Mar 12 13:17:03 crc kubenswrapper[4778]: I0312 13:17:03.102996 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2wqm5"]
Mar 12 13:17:03 crc kubenswrapper[4778]: I0312 13:17:03.107491 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2wqm5"]
Mar 12 13:17:03 crc kubenswrapper[4778]: I0312 13:17:03.116785 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qx9d8"]
Mar 12 13:17:03 crc kubenswrapper[4778]: I0312 13:17:03.130634 4778 scope.go:117] "RemoveContainer" containerID="768c08538cc35f7dca92094b0ee56f8d00acc523e23bc32165393cb6d17f7cd2"
Mar 12 13:17:03 crc kubenswrapper[4778]: I0312 13:17:03.132650 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qx9d8"]
Mar 12 13:17:03 crc kubenswrapper[4778]: I0312 13:17:03.135849 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-khr6h"]
Mar 12 13:17:03 crc kubenswrapper[4778]: I0312 13:17:03.139090 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-khr6h"]
Mar 12 13:17:03 crc kubenswrapper[4778]: I0312 13:17:03.145683 4778 scope.go:117] "RemoveContainer" containerID="b352e6584b478e7228a408cc5d6c8b18473e75a0de7be819c32ae9b98a707a4e"
Mar 12 13:17:03 crc kubenswrapper[4778]: I0312 13:17:03.160817 4778 scope.go:117] "RemoveContainer" containerID="84fe3c954d7e0d1d6303467d2621bf3b31d896882603252deb19491a2fa354ed"
Mar 12 13:17:03 crc kubenswrapper[4778]: I0312 13:17:03.176835 4778 scope.go:117] "RemoveContainer" containerID="05d961ad3b7bd74a33e24a693f2775dd8f5c4483b25df2fe323f0e88cb5ff934"
Mar 12 13:17:03 crc kubenswrapper[4778]: I0312 13:17:03.192303 4778 scope.go:117] "RemoveContainer" containerID="013c13acbd136a9ae3c6c39b9470a59aa4ab705637939d6af761af9e92e81b9c"
Mar 12 13:17:04 crc kubenswrapper[4778]: I0312 13:17:04.262073 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d185732-cd6b-44c6-b4db-ee9ade00c683" path="/var/lib/kubelet/pods/1d185732-cd6b-44c6-b4db-ee9ade00c683/volumes"
Mar 12 13:17:04 crc kubenswrapper[4778]: I0312 13:17:04.263117 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24f4aaf5-c17b-4cd8-9284-6df37f1c2f2d" path="/var/lib/kubelet/pods/24f4aaf5-c17b-4cd8-9284-6df37f1c2f2d/volumes"
Mar 12 13:17:04 crc kubenswrapper[4778]: I0312 13:17:04.263906 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="651601bd-18fe-4ca1-9c61-481ca568d022" path="/var/lib/kubelet/pods/651601bd-18fe-4ca1-9c61-481ca568d022/volumes"
Mar 12 13:17:04 crc kubenswrapper[4778]: I0312 13:17:04.265600 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f438f2a3-60c0-4554-a49b-030545f8139c" path="/var/lib/kubelet/pods/f438f2a3-60c0-4554-a49b-030545f8139c/volumes"
Mar 12 13:17:05 crc kubenswrapper[4778]: I0312 13:17:05.331211 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-k57lm"]
Mar 12 13:17:05 crc kubenswrapper[4778]: E0312 13:17:05.331503 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="651601bd-18fe-4ca1-9c61-481ca568d022" containerName="extract-utilities"
Mar 12 13:17:05 crc kubenswrapper[4778]: I0312 13:17:05.331522 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="651601bd-18fe-4ca1-9c61-481ca568d022" containerName="extract-utilities"
Mar 12 13:17:05 crc kubenswrapper[4778]: E0312 13:17:05.331540 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d185732-cd6b-44c6-b4db-ee9ade00c683" containerName="registry-server"
Mar 12 13:17:05 crc kubenswrapper[4778]: I0312 13:17:05.331552 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d185732-cd6b-44c6-b4db-ee9ade00c683" containerName="registry-server"
Mar 12 13:17:05 crc kubenswrapper[4778]: E0312 13:17:05.331568 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f438f2a3-60c0-4554-a49b-030545f8139c" containerName="extract-utilities"
Mar 12 13:17:05 crc kubenswrapper[4778]: I0312 13:17:05.331579 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="f438f2a3-60c0-4554-a49b-030545f8139c" containerName="extract-utilities"
Mar 12 13:17:05 crc kubenswrapper[4778]: E0312 13:17:05.331595 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de4557b4-7957-47a0-8c42-845be1fa0f32" containerName="registry-server"
Mar 12 13:17:05 crc kubenswrapper[4778]: I0312 13:17:05.331606 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="de4557b4-7957-47a0-8c42-845be1fa0f32" containerName="registry-server"
Mar 12 13:17:05 crc kubenswrapper[4778]: E0312 13:17:05.331619 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="651601bd-18fe-4ca1-9c61-481ca568d022" containerName="registry-server"
Mar 12 13:17:05 crc kubenswrapper[4778]: I0312 13:17:05.331630 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="651601bd-18fe-4ca1-9c61-481ca568d022" containerName="registry-server"
Mar 12 13:17:05 crc kubenswrapper[4778]: E0312 13:17:05.331643 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="651601bd-18fe-4ca1-9c61-481ca568d022" containerName="extract-content"
Mar 12 13:17:05 crc kubenswrapper[4778]: I0312 13:17:05.331653 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="651601bd-18fe-4ca1-9c61-481ca568d022" containerName="extract-content"
Mar 12 13:17:05 crc kubenswrapper[4778]: E0312 13:17:05.331670 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de4557b4-7957-47a0-8c42-845be1fa0f32" containerName="extract-content"
Mar 12 13:17:05 crc kubenswrapper[4778]: I0312 13:17:05.331680 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="de4557b4-7957-47a0-8c42-845be1fa0f32" containerName="extract-content"
Mar 12 13:17:05 crc kubenswrapper[4778]: E0312 13:17:05.331694 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24f4aaf5-c17b-4cd8-9284-6df37f1c2f2d" containerName="marketplace-operator"
Mar 12 13:17:05 crc kubenswrapper[4778]: I0312 13:17:05.331704 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="24f4aaf5-c17b-4cd8-9284-6df37f1c2f2d" containerName="marketplace-operator"
Mar 12 13:17:05 crc kubenswrapper[4778]: E0312 13:17:05.331721 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de4557b4-7957-47a0-8c42-845be1fa0f32" containerName="extract-utilities"
Mar 12 13:17:05 crc kubenswrapper[4778]: I0312 13:17:05.331732 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="de4557b4-7957-47a0-8c42-845be1fa0f32" containerName="extract-utilities"
Mar 12 13:17:05 crc kubenswrapper[4778]: E0312 13:17:05.331746 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f438f2a3-60c0-4554-a49b-030545f8139c" containerName="registry-server"
Mar 12 13:17:05 crc kubenswrapper[4778]: I0312 13:17:05.331757 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="f438f2a3-60c0-4554-a49b-030545f8139c" containerName="registry-server"
Mar 12 13:17:05 crc kubenswrapper[4778]: E0312 13:17:05.331772 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d185732-cd6b-44c6-b4db-ee9ade00c683" containerName="extract-content"
Mar 12 13:17:05 crc kubenswrapper[4778]: I0312 13:17:05.331783 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d185732-cd6b-44c6-b4db-ee9ade00c683" containerName="extract-content"
Mar 12 13:17:05 crc kubenswrapper[4778]: E0312 13:17:05.331803 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f438f2a3-60c0-4554-a49b-030545f8139c" containerName="extract-content"
Mar 12 13:17:05 crc kubenswrapper[4778]: I0312 13:17:05.331814 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="f438f2a3-60c0-4554-a49b-030545f8139c" containerName="extract-content"
Mar 12 13:17:05 crc kubenswrapper[4778]: E0312 13:17:05.331833 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d185732-cd6b-44c6-b4db-ee9ade00c683" containerName="extract-utilities"
Mar 12 13:17:05 crc kubenswrapper[4778]: I0312 13:17:05.331844 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d185732-cd6b-44c6-b4db-ee9ade00c683" containerName="extract-utilities"
Mar 12 13:17:05 crc kubenswrapper[4778]: I0312 13:17:05.331988 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="651601bd-18fe-4ca1-9c61-481ca568d022" containerName="registry-server"
Mar 12 13:17:05 crc kubenswrapper[4778]: I0312 13:17:05.332008 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="24f4aaf5-c17b-4cd8-9284-6df37f1c2f2d" containerName="marketplace-operator"
Mar 12 13:17:05 crc kubenswrapper[4778]: I0312 13:17:05.332027 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="f438f2a3-60c0-4554-a49b-030545f8139c" containerName="registry-server"
Mar 12 13:17:05 crc kubenswrapper[4778]: I0312 13:17:05.332040 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d185732-cd6b-44c6-b4db-ee9ade00c683" containerName="registry-server"
Mar 12 13:17:05 crc kubenswrapper[4778]: I0312 13:17:05.332063 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="de4557b4-7957-47a0-8c42-845be1fa0f32" containerName="registry-server"
Mar 12 13:17:05 crc kubenswrapper[4778]: I0312 13:17:05.335298 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k57lm"
Mar 12 13:17:05 crc kubenswrapper[4778]: I0312 13:17:05.339984 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Mar 12 13:17:05 crc kubenswrapper[4778]: I0312 13:17:05.349095 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k57lm"]
Mar 12 13:17:05 crc kubenswrapper[4778]: I0312 13:17:05.370473 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d67fa18-822d-4685-a7a1-5b8b8c39c96a-utilities\") pod \"redhat-marketplace-k57lm\" (UID: \"1d67fa18-822d-4685-a7a1-5b8b8c39c96a\") " pod="openshift-marketplace/redhat-marketplace-k57lm"
Mar 12 13:17:05 crc kubenswrapper[4778]: I0312 13:17:05.370547 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqdwm\" (UniqueName: \"kubernetes.io/projected/1d67fa18-822d-4685-a7a1-5b8b8c39c96a-kube-api-access-jqdwm\") pod \"redhat-marketplace-k57lm\" (UID: \"1d67fa18-822d-4685-a7a1-5b8b8c39c96a\") " pod="openshift-marketplace/redhat-marketplace-k57lm"
Mar 12 13:17:05 crc kubenswrapper[4778]: I0312 13:17:05.370661 4778 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d67fa18-822d-4685-a7a1-5b8b8c39c96a-catalog-content\") pod \"redhat-marketplace-k57lm\" (UID: \"1d67fa18-822d-4685-a7a1-5b8b8c39c96a\") " pod="openshift-marketplace/redhat-marketplace-k57lm" Mar 12 13:17:05 crc kubenswrapper[4778]: I0312 13:17:05.471113 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d67fa18-822d-4685-a7a1-5b8b8c39c96a-utilities\") pod \"redhat-marketplace-k57lm\" (UID: \"1d67fa18-822d-4685-a7a1-5b8b8c39c96a\") " pod="openshift-marketplace/redhat-marketplace-k57lm" Mar 12 13:17:05 crc kubenswrapper[4778]: I0312 13:17:05.471227 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqdwm\" (UniqueName: \"kubernetes.io/projected/1d67fa18-822d-4685-a7a1-5b8b8c39c96a-kube-api-access-jqdwm\") pod \"redhat-marketplace-k57lm\" (UID: \"1d67fa18-822d-4685-a7a1-5b8b8c39c96a\") " pod="openshift-marketplace/redhat-marketplace-k57lm" Mar 12 13:17:05 crc kubenswrapper[4778]: I0312 13:17:05.471340 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d67fa18-822d-4685-a7a1-5b8b8c39c96a-catalog-content\") pod \"redhat-marketplace-k57lm\" (UID: \"1d67fa18-822d-4685-a7a1-5b8b8c39c96a\") " pod="openshift-marketplace/redhat-marketplace-k57lm" Mar 12 13:17:05 crc kubenswrapper[4778]: I0312 13:17:05.471843 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d67fa18-822d-4685-a7a1-5b8b8c39c96a-utilities\") pod \"redhat-marketplace-k57lm\" (UID: \"1d67fa18-822d-4685-a7a1-5b8b8c39c96a\") " pod="openshift-marketplace/redhat-marketplace-k57lm" Mar 12 13:17:05 crc kubenswrapper[4778]: I0312 13:17:05.472205 4778 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d67fa18-822d-4685-a7a1-5b8b8c39c96a-catalog-content\") pod \"redhat-marketplace-k57lm\" (UID: \"1d67fa18-822d-4685-a7a1-5b8b8c39c96a\") " pod="openshift-marketplace/redhat-marketplace-k57lm" Mar 12 13:17:05 crc kubenswrapper[4778]: I0312 13:17:05.495517 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqdwm\" (UniqueName: \"kubernetes.io/projected/1d67fa18-822d-4685-a7a1-5b8b8c39c96a-kube-api-access-jqdwm\") pod \"redhat-marketplace-k57lm\" (UID: \"1d67fa18-822d-4685-a7a1-5b8b8c39c96a\") " pod="openshift-marketplace/redhat-marketplace-k57lm" Mar 12 13:17:05 crc kubenswrapper[4778]: I0312 13:17:05.528375 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-r99nz"] Mar 12 13:17:05 crc kubenswrapper[4778]: I0312 13:17:05.529313 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r99nz" Mar 12 13:17:05 crc kubenswrapper[4778]: I0312 13:17:05.531564 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 12 13:17:05 crc kubenswrapper[4778]: I0312 13:17:05.548407 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r99nz"] Mar 12 13:17:05 crc kubenswrapper[4778]: I0312 13:17:05.667595 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k57lm" Mar 12 13:17:05 crc kubenswrapper[4778]: I0312 13:17:05.672692 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mccx\" (UniqueName: \"kubernetes.io/projected/89b39891-5207-4289-807f-57d00acb2937-kube-api-access-7mccx\") pod \"redhat-operators-r99nz\" (UID: \"89b39891-5207-4289-807f-57d00acb2937\") " pod="openshift-marketplace/redhat-operators-r99nz" Mar 12 13:17:05 crc kubenswrapper[4778]: I0312 13:17:05.672762 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89b39891-5207-4289-807f-57d00acb2937-utilities\") pod \"redhat-operators-r99nz\" (UID: \"89b39891-5207-4289-807f-57d00acb2937\") " pod="openshift-marketplace/redhat-operators-r99nz" Mar 12 13:17:05 crc kubenswrapper[4778]: I0312 13:17:05.672809 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89b39891-5207-4289-807f-57d00acb2937-catalog-content\") pod \"redhat-operators-r99nz\" (UID: \"89b39891-5207-4289-807f-57d00acb2937\") " pod="openshift-marketplace/redhat-operators-r99nz" Mar 12 13:17:05 crc kubenswrapper[4778]: I0312 13:17:05.774087 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mccx\" (UniqueName: \"kubernetes.io/projected/89b39891-5207-4289-807f-57d00acb2937-kube-api-access-7mccx\") pod \"redhat-operators-r99nz\" (UID: \"89b39891-5207-4289-807f-57d00acb2937\") " pod="openshift-marketplace/redhat-operators-r99nz" Mar 12 13:17:05 crc kubenswrapper[4778]: I0312 13:17:05.774214 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89b39891-5207-4289-807f-57d00acb2937-utilities\") pod \"redhat-operators-r99nz\" (UID: 
\"89b39891-5207-4289-807f-57d00acb2937\") " pod="openshift-marketplace/redhat-operators-r99nz" Mar 12 13:17:05 crc kubenswrapper[4778]: I0312 13:17:05.774264 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89b39891-5207-4289-807f-57d00acb2937-catalog-content\") pod \"redhat-operators-r99nz\" (UID: \"89b39891-5207-4289-807f-57d00acb2937\") " pod="openshift-marketplace/redhat-operators-r99nz" Mar 12 13:17:05 crc kubenswrapper[4778]: I0312 13:17:05.774853 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89b39891-5207-4289-807f-57d00acb2937-catalog-content\") pod \"redhat-operators-r99nz\" (UID: \"89b39891-5207-4289-807f-57d00acb2937\") " pod="openshift-marketplace/redhat-operators-r99nz" Mar 12 13:17:05 crc kubenswrapper[4778]: I0312 13:17:05.775158 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89b39891-5207-4289-807f-57d00acb2937-utilities\") pod \"redhat-operators-r99nz\" (UID: \"89b39891-5207-4289-807f-57d00acb2937\") " pod="openshift-marketplace/redhat-operators-r99nz" Mar 12 13:17:05 crc kubenswrapper[4778]: I0312 13:17:05.800766 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mccx\" (UniqueName: \"kubernetes.io/projected/89b39891-5207-4289-807f-57d00acb2937-kube-api-access-7mccx\") pod \"redhat-operators-r99nz\" (UID: \"89b39891-5207-4289-807f-57d00acb2937\") " pod="openshift-marketplace/redhat-operators-r99nz" Mar 12 13:17:05 crc kubenswrapper[4778]: I0312 13:17:05.847742 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-r99nz" Mar 12 13:17:06 crc kubenswrapper[4778]: I0312 13:17:06.090467 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k57lm"] Mar 12 13:17:06 crc kubenswrapper[4778]: W0312 13:17:06.097299 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d67fa18_822d_4685_a7a1_5b8b8c39c96a.slice/crio-e915930a5cfb75a4b06093def3f48df6ba60a7c8483b7b284c420a1c0656b5df WatchSource:0}: Error finding container e915930a5cfb75a4b06093def3f48df6ba60a7c8483b7b284c420a1c0656b5df: Status 404 returned error can't find the container with id e915930a5cfb75a4b06093def3f48df6ba60a7c8483b7b284c420a1c0656b5df Mar 12 13:17:06 crc kubenswrapper[4778]: I0312 13:17:06.213931 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r99nz"] Mar 12 13:17:07 crc kubenswrapper[4778]: I0312 13:17:07.083838 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k57lm" event={"ID":"1d67fa18-822d-4685-a7a1-5b8b8c39c96a","Type":"ContainerDied","Data":"b163b5c1861d00221f3bbf389e3409b5816c14f6a5e512a09b8553e7bfb2f484"} Mar 12 13:17:07 crc kubenswrapper[4778]: I0312 13:17:07.083646 4778 generic.go:334] "Generic (PLEG): container finished" podID="1d67fa18-822d-4685-a7a1-5b8b8c39c96a" containerID="b163b5c1861d00221f3bbf389e3409b5816c14f6a5e512a09b8553e7bfb2f484" exitCode=0 Mar 12 13:17:07 crc kubenswrapper[4778]: I0312 13:17:07.085293 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k57lm" event={"ID":"1d67fa18-822d-4685-a7a1-5b8b8c39c96a","Type":"ContainerStarted","Data":"e915930a5cfb75a4b06093def3f48df6ba60a7c8483b7b284c420a1c0656b5df"} Mar 12 13:17:07 crc kubenswrapper[4778]: I0312 13:17:07.088910 4778 generic.go:334] "Generic (PLEG): container finished" 
podID="89b39891-5207-4289-807f-57d00acb2937" containerID="984fb3456eea71c9cd7483dfcdb8376d81e856bd79aef84a885b305c1615885b" exitCode=0 Mar 12 13:17:07 crc kubenswrapper[4778]: I0312 13:17:07.088943 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r99nz" event={"ID":"89b39891-5207-4289-807f-57d00acb2937","Type":"ContainerDied","Data":"984fb3456eea71c9cd7483dfcdb8376d81e856bd79aef84a885b305c1615885b"} Mar 12 13:17:07 crc kubenswrapper[4778]: I0312 13:17:07.088964 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r99nz" event={"ID":"89b39891-5207-4289-807f-57d00acb2937","Type":"ContainerStarted","Data":"d44d2bee1e2b4ddf99f45277d9dc014b3b21712ebf48e47cb48538e60ac5ff80"} Mar 12 13:17:07 crc kubenswrapper[4778]: I0312 13:17:07.729663 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fhcbf"] Mar 12 13:17:07 crc kubenswrapper[4778]: I0312 13:17:07.730805 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fhcbf" Mar 12 13:17:07 crc kubenswrapper[4778]: I0312 13:17:07.733774 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Mar 12 13:17:07 crc kubenswrapper[4778]: I0312 13:17:07.743964 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fhcbf"] Mar 12 13:17:07 crc kubenswrapper[4778]: I0312 13:17:07.898897 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5b1dff9-c32b-4a91-863c-10b5ea4bc4ef-catalog-content\") pod \"certified-operators-fhcbf\" (UID: \"b5b1dff9-c32b-4a91-863c-10b5ea4bc4ef\") " pod="openshift-marketplace/certified-operators-fhcbf" Mar 12 13:17:07 crc kubenswrapper[4778]: I0312 13:17:07.898951 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2v5x\" (UniqueName: \"kubernetes.io/projected/b5b1dff9-c32b-4a91-863c-10b5ea4bc4ef-kube-api-access-s2v5x\") pod \"certified-operators-fhcbf\" (UID: \"b5b1dff9-c32b-4a91-863c-10b5ea4bc4ef\") " pod="openshift-marketplace/certified-operators-fhcbf" Mar 12 13:17:07 crc kubenswrapper[4778]: I0312 13:17:07.899519 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5b1dff9-c32b-4a91-863c-10b5ea4bc4ef-utilities\") pod \"certified-operators-fhcbf\" (UID: \"b5b1dff9-c32b-4a91-863c-10b5ea4bc4ef\") " pod="openshift-marketplace/certified-operators-fhcbf" Mar 12 13:17:07 crc kubenswrapper[4778]: I0312 13:17:07.933789 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-scbxn"] Mar 12 13:17:07 crc kubenswrapper[4778]: I0312 13:17:07.935089 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-scbxn" Mar 12 13:17:07 crc kubenswrapper[4778]: I0312 13:17:07.937681 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 12 13:17:07 crc kubenswrapper[4778]: I0312 13:17:07.943053 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-scbxn"] Mar 12 13:17:08 crc kubenswrapper[4778]: I0312 13:17:08.000695 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5b1dff9-c32b-4a91-863c-10b5ea4bc4ef-catalog-content\") pod \"certified-operators-fhcbf\" (UID: \"b5b1dff9-c32b-4a91-863c-10b5ea4bc4ef\") " pod="openshift-marketplace/certified-operators-fhcbf" Mar 12 13:17:08 crc kubenswrapper[4778]: I0312 13:17:08.000746 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2v5x\" (UniqueName: \"kubernetes.io/projected/b5b1dff9-c32b-4a91-863c-10b5ea4bc4ef-kube-api-access-s2v5x\") pod \"certified-operators-fhcbf\" (UID: \"b5b1dff9-c32b-4a91-863c-10b5ea4bc4ef\") " pod="openshift-marketplace/certified-operators-fhcbf" Mar 12 13:17:08 crc kubenswrapper[4778]: I0312 13:17:08.000936 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5b1dff9-c32b-4a91-863c-10b5ea4bc4ef-utilities\") pod \"certified-operators-fhcbf\" (UID: \"b5b1dff9-c32b-4a91-863c-10b5ea4bc4ef\") " pod="openshift-marketplace/certified-operators-fhcbf" Mar 12 13:17:08 crc kubenswrapper[4778]: I0312 13:17:08.001163 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5b1dff9-c32b-4a91-863c-10b5ea4bc4ef-catalog-content\") pod \"certified-operators-fhcbf\" (UID: \"b5b1dff9-c32b-4a91-863c-10b5ea4bc4ef\") " 
pod="openshift-marketplace/certified-operators-fhcbf" Mar 12 13:17:08 crc kubenswrapper[4778]: I0312 13:17:08.001551 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5b1dff9-c32b-4a91-863c-10b5ea4bc4ef-utilities\") pod \"certified-operators-fhcbf\" (UID: \"b5b1dff9-c32b-4a91-863c-10b5ea4bc4ef\") " pod="openshift-marketplace/certified-operators-fhcbf" Mar 12 13:17:08 crc kubenswrapper[4778]: I0312 13:17:08.020504 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2v5x\" (UniqueName: \"kubernetes.io/projected/b5b1dff9-c32b-4a91-863c-10b5ea4bc4ef-kube-api-access-s2v5x\") pod \"certified-operators-fhcbf\" (UID: \"b5b1dff9-c32b-4a91-863c-10b5ea4bc4ef\") " pod="openshift-marketplace/certified-operators-fhcbf" Mar 12 13:17:08 crc kubenswrapper[4778]: I0312 13:17:08.063413 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fhcbf" Mar 12 13:17:08 crc kubenswrapper[4778]: I0312 13:17:08.101831 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2f91915-3841-4662-88e4-82a22df0b131-catalog-content\") pod \"community-operators-scbxn\" (UID: \"f2f91915-3841-4662-88e4-82a22df0b131\") " pod="openshift-marketplace/community-operators-scbxn" Mar 12 13:17:08 crc kubenswrapper[4778]: I0312 13:17:08.102133 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpxbz\" (UniqueName: \"kubernetes.io/projected/f2f91915-3841-4662-88e4-82a22df0b131-kube-api-access-xpxbz\") pod \"community-operators-scbxn\" (UID: \"f2f91915-3841-4662-88e4-82a22df0b131\") " pod="openshift-marketplace/community-operators-scbxn" Mar 12 13:17:08 crc kubenswrapper[4778]: I0312 13:17:08.102158 4778 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2f91915-3841-4662-88e4-82a22df0b131-utilities\") pod \"community-operators-scbxn\" (UID: \"f2f91915-3841-4662-88e4-82a22df0b131\") " pod="openshift-marketplace/community-operators-scbxn" Mar 12 13:17:08 crc kubenswrapper[4778]: I0312 13:17:08.202943 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2f91915-3841-4662-88e4-82a22df0b131-catalog-content\") pod \"community-operators-scbxn\" (UID: \"f2f91915-3841-4662-88e4-82a22df0b131\") " pod="openshift-marketplace/community-operators-scbxn" Mar 12 13:17:08 crc kubenswrapper[4778]: I0312 13:17:08.203298 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpxbz\" (UniqueName: \"kubernetes.io/projected/f2f91915-3841-4662-88e4-82a22df0b131-kube-api-access-xpxbz\") pod \"community-operators-scbxn\" (UID: \"f2f91915-3841-4662-88e4-82a22df0b131\") " pod="openshift-marketplace/community-operators-scbxn" Mar 12 13:17:08 crc kubenswrapper[4778]: I0312 13:17:08.203325 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2f91915-3841-4662-88e4-82a22df0b131-utilities\") pod \"community-operators-scbxn\" (UID: \"f2f91915-3841-4662-88e4-82a22df0b131\") " pod="openshift-marketplace/community-operators-scbxn" Mar 12 13:17:08 crc kubenswrapper[4778]: I0312 13:17:08.203548 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2f91915-3841-4662-88e4-82a22df0b131-catalog-content\") pod \"community-operators-scbxn\" (UID: \"f2f91915-3841-4662-88e4-82a22df0b131\") " pod="openshift-marketplace/community-operators-scbxn" Mar 12 13:17:08 crc kubenswrapper[4778]: I0312 13:17:08.203697 4778 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2f91915-3841-4662-88e4-82a22df0b131-utilities\") pod \"community-operators-scbxn\" (UID: \"f2f91915-3841-4662-88e4-82a22df0b131\") " pod="openshift-marketplace/community-operators-scbxn" Mar 12 13:17:08 crc kubenswrapper[4778]: I0312 13:17:08.239585 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpxbz\" (UniqueName: \"kubernetes.io/projected/f2f91915-3841-4662-88e4-82a22df0b131-kube-api-access-xpxbz\") pod \"community-operators-scbxn\" (UID: \"f2f91915-3841-4662-88e4-82a22df0b131\") " pod="openshift-marketplace/community-operators-scbxn" Mar 12 13:17:08 crc kubenswrapper[4778]: I0312 13:17:08.248921 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-scbxn" Mar 12 13:17:08 crc kubenswrapper[4778]: I0312 13:17:08.500891 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fhcbf"] Mar 12 13:17:08 crc kubenswrapper[4778]: W0312 13:17:08.510998 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5b1dff9_c32b_4a91_863c_10b5ea4bc4ef.slice/crio-b17915a7a54db4e4374c5e1c27b5ab3e9dfdfb65aa22034620c6839ba47335e8 WatchSource:0}: Error finding container b17915a7a54db4e4374c5e1c27b5ab3e9dfdfb65aa22034620c6839ba47335e8: Status 404 returned error can't find the container with id b17915a7a54db4e4374c5e1c27b5ab3e9dfdfb65aa22034620c6839ba47335e8 Mar 12 13:17:08 crc kubenswrapper[4778]: I0312 13:17:08.637581 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-scbxn"] Mar 12 13:17:08 crc kubenswrapper[4778]: W0312 13:17:08.687898 4778 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2f91915_3841_4662_88e4_82a22df0b131.slice/crio-a19d957b7ce97a07ce6c0132cc4944c7bac635fad459f95aeb77803a9db2f905 WatchSource:0}: Error finding container a19d957b7ce97a07ce6c0132cc4944c7bac635fad459f95aeb77803a9db2f905: Status 404 returned error can't find the container with id a19d957b7ce97a07ce6c0132cc4944c7bac635fad459f95aeb77803a9db2f905 Mar 12 13:17:09 crc kubenswrapper[4778]: I0312 13:17:09.109325 4778 generic.go:334] "Generic (PLEG): container finished" podID="89b39891-5207-4289-807f-57d00acb2937" containerID="805951d35b64df6e3a5d2f522d8ca4fce31a3962c15f8b2c7f8cc07a84f8dc1e" exitCode=0 Mar 12 13:17:09 crc kubenswrapper[4778]: I0312 13:17:09.109893 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r99nz" event={"ID":"89b39891-5207-4289-807f-57d00acb2937","Type":"ContainerDied","Data":"805951d35b64df6e3a5d2f522d8ca4fce31a3962c15f8b2c7f8cc07a84f8dc1e"} Mar 12 13:17:09 crc kubenswrapper[4778]: I0312 13:17:09.115172 4778 generic.go:334] "Generic (PLEG): container finished" podID="b5b1dff9-c32b-4a91-863c-10b5ea4bc4ef" containerID="cbc2b1582a7a0850032095cce0edc23160709ce6369efa26cb488729d09bf744" exitCode=0 Mar 12 13:17:09 crc kubenswrapper[4778]: I0312 13:17:09.115759 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fhcbf" event={"ID":"b5b1dff9-c32b-4a91-863c-10b5ea4bc4ef","Type":"ContainerDied","Data":"cbc2b1582a7a0850032095cce0edc23160709ce6369efa26cb488729d09bf744"} Mar 12 13:17:09 crc kubenswrapper[4778]: I0312 13:17:09.115794 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fhcbf" event={"ID":"b5b1dff9-c32b-4a91-863c-10b5ea4bc4ef","Type":"ContainerStarted","Data":"b17915a7a54db4e4374c5e1c27b5ab3e9dfdfb65aa22034620c6839ba47335e8"} Mar 12 13:17:09 crc kubenswrapper[4778]: I0312 13:17:09.117724 4778 generic.go:334] "Generic (PLEG): container 
finished" podID="f2f91915-3841-4662-88e4-82a22df0b131" containerID="10d3561207dda57207dda1824471a89613d2f3cdc7e885ebdb851a3821e79c62" exitCode=0 Mar 12 13:17:09 crc kubenswrapper[4778]: I0312 13:17:09.117751 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-scbxn" event={"ID":"f2f91915-3841-4662-88e4-82a22df0b131","Type":"ContainerDied","Data":"10d3561207dda57207dda1824471a89613d2f3cdc7e885ebdb851a3821e79c62"} Mar 12 13:17:09 crc kubenswrapper[4778]: I0312 13:17:09.117776 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-scbxn" event={"ID":"f2f91915-3841-4662-88e4-82a22df0b131","Type":"ContainerStarted","Data":"a19d957b7ce97a07ce6c0132cc4944c7bac635fad459f95aeb77803a9db2f905"} Mar 12 13:17:10 crc kubenswrapper[4778]: I0312 13:17:10.136839 4778 generic.go:334] "Generic (PLEG): container finished" podID="1d67fa18-822d-4685-a7a1-5b8b8c39c96a" containerID="a12f9320b89f1cff13da61cda5d0d3172add5ff7c7b42ca5c292f58ca0c519d2" exitCode=0 Mar 12 13:17:10 crc kubenswrapper[4778]: I0312 13:17:10.136995 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k57lm" event={"ID":"1d67fa18-822d-4685-a7a1-5b8b8c39c96a","Type":"ContainerDied","Data":"a12f9320b89f1cff13da61cda5d0d3172add5ff7c7b42ca5c292f58ca0c519d2"} Mar 12 13:17:10 crc kubenswrapper[4778]: I0312 13:17:10.141743 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-scbxn" event={"ID":"f2f91915-3841-4662-88e4-82a22df0b131","Type":"ContainerStarted","Data":"814289091d5196e8cb90a27db5c8b7b0001e258e89f3f752a355b1aba2fbc07e"} Mar 12 13:17:10 crc kubenswrapper[4778]: I0312 13:17:10.147173 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r99nz" 
event={"ID":"89b39891-5207-4289-807f-57d00acb2937","Type":"ContainerStarted","Data":"712082342de67a11034de3ce859863eaaf1f71a829333a77a36e1df98eea2e31"} Mar 12 13:17:10 crc kubenswrapper[4778]: I0312 13:17:10.196711 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-r99nz" podStartSLOduration=2.715893827 podStartE2EDuration="5.196693017s" podCreationTimestamp="2026-03-12 13:17:05 +0000 UTC" firstStartedPulling="2026-03-12 13:17:07.090877234 +0000 UTC m=+445.539572630" lastFinishedPulling="2026-03-12 13:17:09.571676414 +0000 UTC m=+448.020371820" observedRunningTime="2026-03-12 13:17:10.178524391 +0000 UTC m=+448.627219787" watchObservedRunningTime="2026-03-12 13:17:10.196693017 +0000 UTC m=+448.645388433" Mar 12 13:17:11 crc kubenswrapper[4778]: I0312 13:17:11.154778 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k57lm" event={"ID":"1d67fa18-822d-4685-a7a1-5b8b8c39c96a","Type":"ContainerStarted","Data":"344d2b901b2303d0a19f1c46054188f6685e0524047e61f31c15e844234ba822"} Mar 12 13:17:11 crc kubenswrapper[4778]: I0312 13:17:11.157709 4778 generic.go:334] "Generic (PLEG): container finished" podID="b5b1dff9-c32b-4a91-863c-10b5ea4bc4ef" containerID="4e89930cc55f42c1b9c607ed232d72bcdba978dafb0fafb2584a55b43f8b32a4" exitCode=0 Mar 12 13:17:11 crc kubenswrapper[4778]: I0312 13:17:11.157803 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fhcbf" event={"ID":"b5b1dff9-c32b-4a91-863c-10b5ea4bc4ef","Type":"ContainerDied","Data":"4e89930cc55f42c1b9c607ed232d72bcdba978dafb0fafb2584a55b43f8b32a4"} Mar 12 13:17:11 crc kubenswrapper[4778]: I0312 13:17:11.160980 4778 generic.go:334] "Generic (PLEG): container finished" podID="f2f91915-3841-4662-88e4-82a22df0b131" containerID="814289091d5196e8cb90a27db5c8b7b0001e258e89f3f752a355b1aba2fbc07e" exitCode=0 Mar 12 13:17:11 crc kubenswrapper[4778]: I0312 13:17:11.161538 
4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-scbxn" event={"ID":"f2f91915-3841-4662-88e4-82a22df0b131","Type":"ContainerDied","Data":"814289091d5196e8cb90a27db5c8b7b0001e258e89f3f752a355b1aba2fbc07e"} Mar 12 13:17:11 crc kubenswrapper[4778]: I0312 13:17:11.182397 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-k57lm" podStartSLOduration=2.688240841 podStartE2EDuration="6.182379198s" podCreationTimestamp="2026-03-12 13:17:05 +0000 UTC" firstStartedPulling="2026-03-12 13:17:07.087509701 +0000 UTC m=+445.536205117" lastFinishedPulling="2026-03-12 13:17:10.581648078 +0000 UTC m=+449.030343474" observedRunningTime="2026-03-12 13:17:11.181479741 +0000 UTC m=+449.630175137" watchObservedRunningTime="2026-03-12 13:17:11.182379198 +0000 UTC m=+449.631074594" Mar 12 13:17:12 crc kubenswrapper[4778]: I0312 13:17:12.179710 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fhcbf" event={"ID":"b5b1dff9-c32b-4a91-863c-10b5ea4bc4ef","Type":"ContainerStarted","Data":"30cfba142743b15f9002d2a0931af09174d76a9d4edab7d4eecf93d5cb1c7403"} Mar 12 13:17:12 crc kubenswrapper[4778]: I0312 13:17:12.194453 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-scbxn" event={"ID":"f2f91915-3841-4662-88e4-82a22df0b131","Type":"ContainerStarted","Data":"b5fb35881f73fed40f0d045e20b298d8e653b8e77b271e499bac524ab74b5a76"} Mar 12 13:17:12 crc kubenswrapper[4778]: I0312 13:17:12.221661 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fhcbf" podStartSLOduration=2.772273599 podStartE2EDuration="5.221641498s" podCreationTimestamp="2026-03-12 13:17:07 +0000 UTC" firstStartedPulling="2026-03-12 13:17:09.116589479 +0000 UTC m=+447.565284875" lastFinishedPulling="2026-03-12 13:17:11.565957378 +0000 UTC 
m=+450.014652774" observedRunningTime="2026-03-12 13:17:12.202162843 +0000 UTC m=+450.650858259" watchObservedRunningTime="2026-03-12 13:17:12.221641498 +0000 UTC m=+450.670336904" Mar 12 13:17:12 crc kubenswrapper[4778]: I0312 13:17:12.222620 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-scbxn" podStartSLOduration=2.779431728 podStartE2EDuration="5.222610168s" podCreationTimestamp="2026-03-12 13:17:07 +0000 UTC" firstStartedPulling="2026-03-12 13:17:09.119699824 +0000 UTC m=+447.568395220" lastFinishedPulling="2026-03-12 13:17:11.562878264 +0000 UTC m=+450.011573660" observedRunningTime="2026-03-12 13:17:12.219551814 +0000 UTC m=+450.668247210" watchObservedRunningTime="2026-03-12 13:17:12.222610168 +0000 UTC m=+450.671305584" Mar 12 13:17:15 crc kubenswrapper[4778]: I0312 13:17:15.668835 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-k57lm" Mar 12 13:17:15 crc kubenswrapper[4778]: I0312 13:17:15.670739 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-k57lm" Mar 12 13:17:15 crc kubenswrapper[4778]: I0312 13:17:15.715003 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-k57lm" Mar 12 13:17:15 crc kubenswrapper[4778]: I0312 13:17:15.849062 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-r99nz" Mar 12 13:17:15 crc kubenswrapper[4778]: I0312 13:17:15.849383 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-r99nz" Mar 12 13:17:16 crc kubenswrapper[4778]: I0312 13:17:16.278977 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-k57lm" Mar 12 13:17:16 crc kubenswrapper[4778]: I0312 
13:17:16.895441 4778 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-r99nz" podUID="89b39891-5207-4289-807f-57d00acb2937" containerName="registry-server" probeResult="failure" output=< Mar 12 13:17:16 crc kubenswrapper[4778]: timeout: failed to connect service ":50051" within 1s Mar 12 13:17:16 crc kubenswrapper[4778]: > Mar 12 13:17:18 crc kubenswrapper[4778]: I0312 13:17:18.064491 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fhcbf" Mar 12 13:17:18 crc kubenswrapper[4778]: I0312 13:17:18.064545 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fhcbf" Mar 12 13:17:18 crc kubenswrapper[4778]: I0312 13:17:18.119259 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fhcbf" Mar 12 13:17:18 crc kubenswrapper[4778]: I0312 13:17:18.250252 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-scbxn" Mar 12 13:17:18 crc kubenswrapper[4778]: I0312 13:17:18.251464 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-scbxn" Mar 12 13:17:18 crc kubenswrapper[4778]: I0312 13:17:18.281474 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fhcbf" Mar 12 13:17:18 crc kubenswrapper[4778]: I0312 13:17:18.313139 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-scbxn" Mar 12 13:17:19 crc kubenswrapper[4778]: I0312 13:17:19.295977 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-scbxn" Mar 12 13:17:24 crc kubenswrapper[4778]: I0312 13:17:24.177704 4778 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" podUID="51ee714f-fb23-4420-9e70-1b3134eea18e" containerName="registry" containerID="cri-o://29df7c95c025412716ae854e04324e43fa3cc12e2e3e9061ce1a3a4518451111" gracePeriod=30 Mar 12 13:17:24 crc kubenswrapper[4778]: I0312 13:17:24.586179 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" Mar 12 13:17:24 crc kubenswrapper[4778]: I0312 13:17:24.653090 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/51ee714f-fb23-4420-9e70-1b3134eea18e-ca-trust-extracted\") pod \"51ee714f-fb23-4420-9e70-1b3134eea18e\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " Mar 12 13:17:24 crc kubenswrapper[4778]: I0312 13:17:24.653209 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/51ee714f-fb23-4420-9e70-1b3134eea18e-bound-sa-token\") pod \"51ee714f-fb23-4420-9e70-1b3134eea18e\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " Mar 12 13:17:24 crc kubenswrapper[4778]: I0312 13:17:24.653242 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtgbd\" (UniqueName: \"kubernetes.io/projected/51ee714f-fb23-4420-9e70-1b3134eea18e-kube-api-access-mtgbd\") pod \"51ee714f-fb23-4420-9e70-1b3134eea18e\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " Mar 12 13:17:24 crc kubenswrapper[4778]: I0312 13:17:24.653301 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/51ee714f-fb23-4420-9e70-1b3134eea18e-installation-pull-secrets\") pod \"51ee714f-fb23-4420-9e70-1b3134eea18e\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " Mar 12 13:17:24 crc kubenswrapper[4778]: I0312 13:17:24.653336 4778 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/51ee714f-fb23-4420-9e70-1b3134eea18e-trusted-ca\") pod \"51ee714f-fb23-4420-9e70-1b3134eea18e\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " Mar 12 13:17:24 crc kubenswrapper[4778]: I0312 13:17:24.653378 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/51ee714f-fb23-4420-9e70-1b3134eea18e-registry-tls\") pod \"51ee714f-fb23-4420-9e70-1b3134eea18e\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " Mar 12 13:17:24 crc kubenswrapper[4778]: I0312 13:17:24.653647 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"51ee714f-fb23-4420-9e70-1b3134eea18e\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " Mar 12 13:17:24 crc kubenswrapper[4778]: I0312 13:17:24.653698 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/51ee714f-fb23-4420-9e70-1b3134eea18e-registry-certificates\") pod \"51ee714f-fb23-4420-9e70-1b3134eea18e\" (UID: \"51ee714f-fb23-4420-9e70-1b3134eea18e\") " Mar 12 13:17:24 crc kubenswrapper[4778]: I0312 13:17:24.654219 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51ee714f-fb23-4420-9e70-1b3134eea18e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "51ee714f-fb23-4420-9e70-1b3134eea18e" (UID: "51ee714f-fb23-4420-9e70-1b3134eea18e"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:17:24 crc kubenswrapper[4778]: I0312 13:17:24.654464 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51ee714f-fb23-4420-9e70-1b3134eea18e-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "51ee714f-fb23-4420-9e70-1b3134eea18e" (UID: "51ee714f-fb23-4420-9e70-1b3134eea18e"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:17:24 crc kubenswrapper[4778]: I0312 13:17:24.660279 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51ee714f-fb23-4420-9e70-1b3134eea18e-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "51ee714f-fb23-4420-9e70-1b3134eea18e" (UID: "51ee714f-fb23-4420-9e70-1b3134eea18e"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:17:24 crc kubenswrapper[4778]: I0312 13:17:24.662364 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51ee714f-fb23-4420-9e70-1b3134eea18e-kube-api-access-mtgbd" (OuterVolumeSpecName: "kube-api-access-mtgbd") pod "51ee714f-fb23-4420-9e70-1b3134eea18e" (UID: "51ee714f-fb23-4420-9e70-1b3134eea18e"). InnerVolumeSpecName "kube-api-access-mtgbd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:17:24 crc kubenswrapper[4778]: I0312 13:17:24.663752 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51ee714f-fb23-4420-9e70-1b3134eea18e-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "51ee714f-fb23-4420-9e70-1b3134eea18e" (UID: "51ee714f-fb23-4420-9e70-1b3134eea18e"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:17:24 crc kubenswrapper[4778]: I0312 13:17:24.664214 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51ee714f-fb23-4420-9e70-1b3134eea18e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "51ee714f-fb23-4420-9e70-1b3134eea18e" (UID: "51ee714f-fb23-4420-9e70-1b3134eea18e"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:17:24 crc kubenswrapper[4778]: I0312 13:17:24.667696 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "51ee714f-fb23-4420-9e70-1b3134eea18e" (UID: "51ee714f-fb23-4420-9e70-1b3134eea18e"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 12 13:17:24 crc kubenswrapper[4778]: I0312 13:17:24.668774 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51ee714f-fb23-4420-9e70-1b3134eea18e-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "51ee714f-fb23-4420-9e70-1b3134eea18e" (UID: "51ee714f-fb23-4420-9e70-1b3134eea18e"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:17:24 crc kubenswrapper[4778]: I0312 13:17:24.755725 4778 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/51ee714f-fb23-4420-9e70-1b3134eea18e-registry-certificates\") on node \"crc\" DevicePath \"\"" Mar 12 13:17:24 crc kubenswrapper[4778]: I0312 13:17:24.755780 4778 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/51ee714f-fb23-4420-9e70-1b3134eea18e-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Mar 12 13:17:24 crc kubenswrapper[4778]: I0312 13:17:24.755796 4778 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/51ee714f-fb23-4420-9e70-1b3134eea18e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 12 13:17:24 crc kubenswrapper[4778]: I0312 13:17:24.755805 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtgbd\" (UniqueName: \"kubernetes.io/projected/51ee714f-fb23-4420-9e70-1b3134eea18e-kube-api-access-mtgbd\") on node \"crc\" DevicePath \"\"" Mar 12 13:17:24 crc kubenswrapper[4778]: I0312 13:17:24.755818 4778 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/51ee714f-fb23-4420-9e70-1b3134eea18e-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Mar 12 13:17:24 crc kubenswrapper[4778]: I0312 13:17:24.755829 4778 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/51ee714f-fb23-4420-9e70-1b3134eea18e-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 12 13:17:24 crc kubenswrapper[4778]: I0312 13:17:24.755839 4778 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/51ee714f-fb23-4420-9e70-1b3134eea18e-registry-tls\") on node \"crc\" DevicePath \"\"" Mar 12 13:17:25 crc 
kubenswrapper[4778]: I0312 13:17:25.272880 4778 generic.go:334] "Generic (PLEG): container finished" podID="51ee714f-fb23-4420-9e70-1b3134eea18e" containerID="29df7c95c025412716ae854e04324e43fa3cc12e2e3e9061ce1a3a4518451111" exitCode=0 Mar 12 13:17:25 crc kubenswrapper[4778]: I0312 13:17:25.272916 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" event={"ID":"51ee714f-fb23-4420-9e70-1b3134eea18e","Type":"ContainerDied","Data":"29df7c95c025412716ae854e04324e43fa3cc12e2e3e9061ce1a3a4518451111"} Mar 12 13:17:25 crc kubenswrapper[4778]: I0312 13:17:25.272947 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" event={"ID":"51ee714f-fb23-4420-9e70-1b3134eea18e","Type":"ContainerDied","Data":"9fdf9bc3368d582f75afb64ef1bd7b59c9e3cd5fe63a9b2265425474dba3a3b4"} Mar 12 13:17:25 crc kubenswrapper[4778]: I0312 13:17:25.272967 4778 scope.go:117] "RemoveContainer" containerID="29df7c95c025412716ae854e04324e43fa3cc12e2e3e9061ce1a3a4518451111" Mar 12 13:17:25 crc kubenswrapper[4778]: I0312 13:17:25.272969 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fxrx4" Mar 12 13:17:25 crc kubenswrapper[4778]: I0312 13:17:25.302806 4778 scope.go:117] "RemoveContainer" containerID="29df7c95c025412716ae854e04324e43fa3cc12e2e3e9061ce1a3a4518451111" Mar 12 13:17:25 crc kubenswrapper[4778]: I0312 13:17:25.305137 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fxrx4"] Mar 12 13:17:25 crc kubenswrapper[4778]: E0312 13:17:25.306087 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29df7c95c025412716ae854e04324e43fa3cc12e2e3e9061ce1a3a4518451111\": container with ID starting with 29df7c95c025412716ae854e04324e43fa3cc12e2e3e9061ce1a3a4518451111 not found: ID does not exist" containerID="29df7c95c025412716ae854e04324e43fa3cc12e2e3e9061ce1a3a4518451111" Mar 12 13:17:25 crc kubenswrapper[4778]: I0312 13:17:25.306144 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29df7c95c025412716ae854e04324e43fa3cc12e2e3e9061ce1a3a4518451111"} err="failed to get container status \"29df7c95c025412716ae854e04324e43fa3cc12e2e3e9061ce1a3a4518451111\": rpc error: code = NotFound desc = could not find container \"29df7c95c025412716ae854e04324e43fa3cc12e2e3e9061ce1a3a4518451111\": container with ID starting with 29df7c95c025412716ae854e04324e43fa3cc12e2e3e9061ce1a3a4518451111 not found: ID does not exist" Mar 12 13:17:25 crc kubenswrapper[4778]: I0312 13:17:25.311238 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fxrx4"] Mar 12 13:17:25 crc kubenswrapper[4778]: I0312 13:17:25.888062 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-r99nz" Mar 12 13:17:25 crc kubenswrapper[4778]: I0312 13:17:25.928359 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/redhat-operators-r99nz" Mar 12 13:17:26 crc kubenswrapper[4778]: I0312 13:17:26.260238 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51ee714f-fb23-4420-9e70-1b3134eea18e" path="/var/lib/kubelet/pods/51ee714f-fb23-4420-9e70-1b3134eea18e/volumes" Mar 12 13:17:32 crc kubenswrapper[4778]: I0312 13:17:32.355871 4778 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","podf438f2a3-60c0-4554-a49b-030545f8139c"] err="unable to destroy cgroup paths for cgroup [kubepods burstable podf438f2a3-60c0-4554-a49b-030545f8139c] : Timed out while waiting for systemd to remove kubepods-burstable-podf438f2a3_60c0_4554_a49b_030545f8139c.slice" Mar 12 13:18:00 crc kubenswrapper[4778]: I0312 13:18:00.132330 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555358-txmvp"] Mar 12 13:18:00 crc kubenswrapper[4778]: E0312 13:18:00.133044 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51ee714f-fb23-4420-9e70-1b3134eea18e" containerName="registry" Mar 12 13:18:00 crc kubenswrapper[4778]: I0312 13:18:00.133059 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="51ee714f-fb23-4420-9e70-1b3134eea18e" containerName="registry" Mar 12 13:18:00 crc kubenswrapper[4778]: I0312 13:18:00.133206 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="51ee714f-fb23-4420-9e70-1b3134eea18e" containerName="registry" Mar 12 13:18:00 crc kubenswrapper[4778]: I0312 13:18:00.133649 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555358-txmvp" Mar 12 13:18:00 crc kubenswrapper[4778]: I0312 13:18:00.137619 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 13:18:00 crc kubenswrapper[4778]: I0312 13:18:00.137855 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 13:18:00 crc kubenswrapper[4778]: I0312 13:18:00.137866 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 13:18:00 crc kubenswrapper[4778]: I0312 13:18:00.142879 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555358-txmvp"] Mar 12 13:18:00 crc kubenswrapper[4778]: I0312 13:18:00.282220 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcrf5\" (UniqueName: \"kubernetes.io/projected/61c6485d-2d53-47d9-866a-31eb90ac254e-kube-api-access-mcrf5\") pod \"auto-csr-approver-29555358-txmvp\" (UID: \"61c6485d-2d53-47d9-866a-31eb90ac254e\") " pod="openshift-infra/auto-csr-approver-29555358-txmvp" Mar 12 13:18:00 crc kubenswrapper[4778]: I0312 13:18:00.384094 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcrf5\" (UniqueName: \"kubernetes.io/projected/61c6485d-2d53-47d9-866a-31eb90ac254e-kube-api-access-mcrf5\") pod \"auto-csr-approver-29555358-txmvp\" (UID: \"61c6485d-2d53-47d9-866a-31eb90ac254e\") " pod="openshift-infra/auto-csr-approver-29555358-txmvp" Mar 12 13:18:00 crc kubenswrapper[4778]: I0312 13:18:00.405159 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcrf5\" (UniqueName: \"kubernetes.io/projected/61c6485d-2d53-47d9-866a-31eb90ac254e-kube-api-access-mcrf5\") pod \"auto-csr-approver-29555358-txmvp\" (UID: \"61c6485d-2d53-47d9-866a-31eb90ac254e\") " 
pod="openshift-infra/auto-csr-approver-29555358-txmvp" Mar 12 13:18:00 crc kubenswrapper[4778]: I0312 13:18:00.452826 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555358-txmvp" Mar 12 13:18:00 crc kubenswrapper[4778]: I0312 13:18:00.841493 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555358-txmvp"] Mar 12 13:18:00 crc kubenswrapper[4778]: I0312 13:18:00.852771 4778 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 13:18:01 crc kubenswrapper[4778]: I0312 13:18:01.490046 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555358-txmvp" event={"ID":"61c6485d-2d53-47d9-866a-31eb90ac254e","Type":"ContainerStarted","Data":"4310735102e019aa75c378b665ecbac2727e33dd70292149fff77566559719ef"} Mar 12 13:18:03 crc kubenswrapper[4778]: I0312 13:18:03.507840 4778 generic.go:334] "Generic (PLEG): container finished" podID="61c6485d-2d53-47d9-866a-31eb90ac254e" containerID="42a7fef965fea72fd4ae8fcc7e99e6b821d3626af8cb88a527c7193c956003a6" exitCode=0 Mar 12 13:18:03 crc kubenswrapper[4778]: I0312 13:18:03.507904 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555358-txmvp" event={"ID":"61c6485d-2d53-47d9-866a-31eb90ac254e","Type":"ContainerDied","Data":"42a7fef965fea72fd4ae8fcc7e99e6b821d3626af8cb88a527c7193c956003a6"} Mar 12 13:18:04 crc kubenswrapper[4778]: I0312 13:18:04.828727 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555358-txmvp" Mar 12 13:18:04 crc kubenswrapper[4778]: I0312 13:18:04.936596 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcrf5\" (UniqueName: \"kubernetes.io/projected/61c6485d-2d53-47d9-866a-31eb90ac254e-kube-api-access-mcrf5\") pod \"61c6485d-2d53-47d9-866a-31eb90ac254e\" (UID: \"61c6485d-2d53-47d9-866a-31eb90ac254e\") " Mar 12 13:18:04 crc kubenswrapper[4778]: I0312 13:18:04.941530 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61c6485d-2d53-47d9-866a-31eb90ac254e-kube-api-access-mcrf5" (OuterVolumeSpecName: "kube-api-access-mcrf5") pod "61c6485d-2d53-47d9-866a-31eb90ac254e" (UID: "61c6485d-2d53-47d9-866a-31eb90ac254e"). InnerVolumeSpecName "kube-api-access-mcrf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:18:05 crc kubenswrapper[4778]: I0312 13:18:05.037854 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcrf5\" (UniqueName: \"kubernetes.io/projected/61c6485d-2d53-47d9-866a-31eb90ac254e-kube-api-access-mcrf5\") on node \"crc\" DevicePath \"\"" Mar 12 13:18:05 crc kubenswrapper[4778]: I0312 13:18:05.524945 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555358-txmvp" event={"ID":"61c6485d-2d53-47d9-866a-31eb90ac254e","Type":"ContainerDied","Data":"4310735102e019aa75c378b665ecbac2727e33dd70292149fff77566559719ef"} Mar 12 13:18:05 crc kubenswrapper[4778]: I0312 13:18:05.524997 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4310735102e019aa75c378b665ecbac2727e33dd70292149fff77566559719ef" Mar 12 13:18:05 crc kubenswrapper[4778]: I0312 13:18:05.525013 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555358-txmvp" Mar 12 13:18:05 crc kubenswrapper[4778]: I0312 13:18:05.904377 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555352-q7fvr"] Mar 12 13:18:05 crc kubenswrapper[4778]: I0312 13:18:05.912418 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555352-q7fvr"] Mar 12 13:18:06 crc kubenswrapper[4778]: I0312 13:18:06.259582 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f210efd-2ac0-4b67-89c5-fcd9f52f6e01" path="/var/lib/kubelet/pods/9f210efd-2ac0-4b67-89c5-fcd9f52f6e01/volumes" Mar 12 13:18:58 crc kubenswrapper[4778]: I0312 13:18:58.557387 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 13:18:58 crc kubenswrapper[4778]: I0312 13:18:58.557984 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 13:19:28 crc kubenswrapper[4778]: I0312 13:19:28.557675 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 13:19:28 crc kubenswrapper[4778]: I0312 13:19:28.558263 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" 
podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 13:19:46 crc kubenswrapper[4778]: I0312 13:19:46.568178 4778 scope.go:117] "RemoveContainer" containerID="5ab6ab1e87e3d9a4f7941a7ab56868950f541c7821fdd08fb7b7e95206f0cb25" Mar 12 13:19:58 crc kubenswrapper[4778]: I0312 13:19:58.558075 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 13:19:58 crc kubenswrapper[4778]: I0312 13:19:58.558521 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 13:19:58 crc kubenswrapper[4778]: I0312 13:19:58.558597 4778 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" Mar 12 13:19:58 crc kubenswrapper[4778]: I0312 13:19:58.559449 4778 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e50690e6aff1fd408e6201d1eee1240e692ce04bc21873dbbe85a5f2d638d704"} pod="openshift-machine-config-operator/machine-config-daemon-2qx88" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 13:19:58 crc kubenswrapper[4778]: I0312 13:19:58.559542 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" 
podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" containerID="cri-o://e50690e6aff1fd408e6201d1eee1240e692ce04bc21873dbbe85a5f2d638d704" gracePeriod=600 Mar 12 13:19:59 crc kubenswrapper[4778]: I0312 13:19:59.212926 4778 generic.go:334] "Generic (PLEG): container finished" podID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerID="e50690e6aff1fd408e6201d1eee1240e692ce04bc21873dbbe85a5f2d638d704" exitCode=0 Mar 12 13:19:59 crc kubenswrapper[4778]: I0312 13:19:59.213179 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" event={"ID":"24438fc6-dab0-4a9e-8b97-2532da76d9cd","Type":"ContainerDied","Data":"e50690e6aff1fd408e6201d1eee1240e692ce04bc21873dbbe85a5f2d638d704"} Mar 12 13:19:59 crc kubenswrapper[4778]: I0312 13:19:59.213256 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" event={"ID":"24438fc6-dab0-4a9e-8b97-2532da76d9cd","Type":"ContainerStarted","Data":"dfcc37339849724c4aacca3262255dd43897a2284c2172380a90cc97f52e3a46"} Mar 12 13:19:59 crc kubenswrapper[4778]: I0312 13:19:59.213317 4778 scope.go:117] "RemoveContainer" containerID="dcabd48eda797c052967d086d455193bf30a1f05151385a52352d733c58148f7" Mar 12 13:20:00 crc kubenswrapper[4778]: I0312 13:20:00.142770 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555360-vwflx"] Mar 12 13:20:00 crc kubenswrapper[4778]: E0312 13:20:00.143031 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61c6485d-2d53-47d9-866a-31eb90ac254e" containerName="oc" Mar 12 13:20:00 crc kubenswrapper[4778]: I0312 13:20:00.143046 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="61c6485d-2d53-47d9-866a-31eb90ac254e" containerName="oc" Mar 12 13:20:00 crc kubenswrapper[4778]: I0312 13:20:00.143173 4778 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="61c6485d-2d53-47d9-866a-31eb90ac254e" containerName="oc" Mar 12 13:20:00 crc kubenswrapper[4778]: I0312 13:20:00.143648 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555360-vwflx" Mar 12 13:20:00 crc kubenswrapper[4778]: I0312 13:20:00.146701 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 13:20:00 crc kubenswrapper[4778]: I0312 13:20:00.147172 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 13:20:00 crc kubenswrapper[4778]: I0312 13:20:00.147465 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 13:20:00 crc kubenswrapper[4778]: I0312 13:20:00.153584 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555360-vwflx"] Mar 12 13:20:00 crc kubenswrapper[4778]: I0312 13:20:00.228877 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzwxv\" (UniqueName: \"kubernetes.io/projected/4c617404-7840-495c-80da-593af33f77d6-kube-api-access-kzwxv\") pod \"auto-csr-approver-29555360-vwflx\" (UID: \"4c617404-7840-495c-80da-593af33f77d6\") " pod="openshift-infra/auto-csr-approver-29555360-vwflx" Mar 12 13:20:00 crc kubenswrapper[4778]: I0312 13:20:00.330203 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzwxv\" (UniqueName: \"kubernetes.io/projected/4c617404-7840-495c-80da-593af33f77d6-kube-api-access-kzwxv\") pod \"auto-csr-approver-29555360-vwflx\" (UID: \"4c617404-7840-495c-80da-593af33f77d6\") " pod="openshift-infra/auto-csr-approver-29555360-vwflx" Mar 12 13:20:00 crc kubenswrapper[4778]: I0312 13:20:00.354070 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzwxv\" (UniqueName: 
\"kubernetes.io/projected/4c617404-7840-495c-80da-593af33f77d6-kube-api-access-kzwxv\") pod \"auto-csr-approver-29555360-vwflx\" (UID: \"4c617404-7840-495c-80da-593af33f77d6\") " pod="openshift-infra/auto-csr-approver-29555360-vwflx" Mar 12 13:20:00 crc kubenswrapper[4778]: I0312 13:20:00.512695 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555360-vwflx" Mar 12 13:20:00 crc kubenswrapper[4778]: I0312 13:20:00.711454 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555360-vwflx"] Mar 12 13:20:01 crc kubenswrapper[4778]: I0312 13:20:01.229064 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555360-vwflx" event={"ID":"4c617404-7840-495c-80da-593af33f77d6","Type":"ContainerStarted","Data":"0351e8b2cc8d22e69dee8334ff2e34916d4f30011467286d86f3b0031f613676"} Mar 12 13:20:02 crc kubenswrapper[4778]: I0312 13:20:02.234920 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555360-vwflx" event={"ID":"4c617404-7840-495c-80da-593af33f77d6","Type":"ContainerStarted","Data":"97b3a747ac158c0518500113b5af025bff04e06faaee081df03d1a06860f190f"} Mar 12 13:20:02 crc kubenswrapper[4778]: I0312 13:20:02.250110 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555360-vwflx" podStartSLOduration=1.049887438 podStartE2EDuration="2.250087831s" podCreationTimestamp="2026-03-12 13:20:00 +0000 UTC" firstStartedPulling="2026-03-12 13:20:00.716846248 +0000 UTC m=+619.165541644" lastFinishedPulling="2026-03-12 13:20:01.917046601 +0000 UTC m=+620.365742037" observedRunningTime="2026-03-12 13:20:02.247946508 +0000 UTC m=+620.696641964" watchObservedRunningTime="2026-03-12 13:20:02.250087831 +0000 UTC m=+620.698783267" Mar 12 13:20:03 crc kubenswrapper[4778]: I0312 13:20:03.242445 4778 generic.go:334] "Generic (PLEG): container 
finished" podID="4c617404-7840-495c-80da-593af33f77d6" containerID="97b3a747ac158c0518500113b5af025bff04e06faaee081df03d1a06860f190f" exitCode=0 Mar 12 13:20:03 crc kubenswrapper[4778]: I0312 13:20:03.242542 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555360-vwflx" event={"ID":"4c617404-7840-495c-80da-593af33f77d6","Type":"ContainerDied","Data":"97b3a747ac158c0518500113b5af025bff04e06faaee081df03d1a06860f190f"} Mar 12 13:20:04 crc kubenswrapper[4778]: I0312 13:20:04.488747 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555360-vwflx" Mar 12 13:20:04 crc kubenswrapper[4778]: I0312 13:20:04.585034 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzwxv\" (UniqueName: \"kubernetes.io/projected/4c617404-7840-495c-80da-593af33f77d6-kube-api-access-kzwxv\") pod \"4c617404-7840-495c-80da-593af33f77d6\" (UID: \"4c617404-7840-495c-80da-593af33f77d6\") " Mar 12 13:20:04 crc kubenswrapper[4778]: I0312 13:20:04.591698 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c617404-7840-495c-80da-593af33f77d6-kube-api-access-kzwxv" (OuterVolumeSpecName: "kube-api-access-kzwxv") pod "4c617404-7840-495c-80da-593af33f77d6" (UID: "4c617404-7840-495c-80da-593af33f77d6"). InnerVolumeSpecName "kube-api-access-kzwxv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:20:04 crc kubenswrapper[4778]: I0312 13:20:04.686364 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzwxv\" (UniqueName: \"kubernetes.io/projected/4c617404-7840-495c-80da-593af33f77d6-kube-api-access-kzwxv\") on node \"crc\" DevicePath \"\"" Mar 12 13:20:05 crc kubenswrapper[4778]: I0312 13:20:05.256695 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555360-vwflx" event={"ID":"4c617404-7840-495c-80da-593af33f77d6","Type":"ContainerDied","Data":"0351e8b2cc8d22e69dee8334ff2e34916d4f30011467286d86f3b0031f613676"} Mar 12 13:20:05 crc kubenswrapper[4778]: I0312 13:20:05.256958 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0351e8b2cc8d22e69dee8334ff2e34916d4f30011467286d86f3b0031f613676" Mar 12 13:20:05 crc kubenswrapper[4778]: I0312 13:20:05.256779 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555360-vwflx" Mar 12 13:20:05 crc kubenswrapper[4778]: I0312 13:20:05.306946 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555354-n6zvc"] Mar 12 13:20:05 crc kubenswrapper[4778]: I0312 13:20:05.311727 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555354-n6zvc"] Mar 12 13:20:06 crc kubenswrapper[4778]: I0312 13:20:06.263814 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f91620d9-a95e-4e74-ab13-531d5e040b50" path="/var/lib/kubelet/pods/f91620d9-a95e-4e74-ab13-531d5e040b50/volumes" Mar 12 13:20:46 crc kubenswrapper[4778]: I0312 13:20:46.602740 4778 scope.go:117] "RemoveContainer" containerID="d6a4e00222817c0335bb85eb95073d869a129a695fed4bc12743392acf13e251" Mar 12 13:20:46 crc kubenswrapper[4778]: I0312 13:20:46.633549 4778 scope.go:117] "RemoveContainer" 
containerID="6a586e8ffe815ea410f687edd18208ce93300b26a8a15a7f7b6bd8396c76c788" Mar 12 13:20:46 crc kubenswrapper[4778]: I0312 13:20:46.664713 4778 scope.go:117] "RemoveContainer" containerID="1e77f31cb8ac97bbace99ce9835f811074e891b28dabf061e7039bfab7607d57" Mar 12 13:20:46 crc kubenswrapper[4778]: I0312 13:20:46.681801 4778 scope.go:117] "RemoveContainer" containerID="e6857324d1a49d08837ab795e083cf8ed33ad61f45f62f385bd7494ef38b2514" Mar 12 13:21:58 crc kubenswrapper[4778]: I0312 13:21:58.558485 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 13:21:58 crc kubenswrapper[4778]: I0312 13:21:58.559457 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 13:22:00 crc kubenswrapper[4778]: I0312 13:22:00.140486 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555362-hlj7f"] Mar 12 13:22:00 crc kubenswrapper[4778]: E0312 13:22:00.140799 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c617404-7840-495c-80da-593af33f77d6" containerName="oc" Mar 12 13:22:00 crc kubenswrapper[4778]: I0312 13:22:00.140815 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c617404-7840-495c-80da-593af33f77d6" containerName="oc" Mar 12 13:22:00 crc kubenswrapper[4778]: I0312 13:22:00.140917 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c617404-7840-495c-80da-593af33f77d6" containerName="oc" Mar 12 13:22:00 crc kubenswrapper[4778]: I0312 13:22:00.141467 4778 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555362-hlj7f" Mar 12 13:22:00 crc kubenswrapper[4778]: I0312 13:22:00.144619 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 13:22:00 crc kubenswrapper[4778]: I0312 13:22:00.144806 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 13:22:00 crc kubenswrapper[4778]: I0312 13:22:00.147671 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555362-hlj7f"] Mar 12 13:22:00 crc kubenswrapper[4778]: I0312 13:22:00.148557 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 13:22:00 crc kubenswrapper[4778]: I0312 13:22:00.211715 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqtjq\" (UniqueName: \"kubernetes.io/projected/9da11ea2-3173-4f25-8f0e-3ccc5a0ca18b-kube-api-access-dqtjq\") pod \"auto-csr-approver-29555362-hlj7f\" (UID: \"9da11ea2-3173-4f25-8f0e-3ccc5a0ca18b\") " pod="openshift-infra/auto-csr-approver-29555362-hlj7f" Mar 12 13:22:00 crc kubenswrapper[4778]: I0312 13:22:00.313667 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqtjq\" (UniqueName: \"kubernetes.io/projected/9da11ea2-3173-4f25-8f0e-3ccc5a0ca18b-kube-api-access-dqtjq\") pod \"auto-csr-approver-29555362-hlj7f\" (UID: \"9da11ea2-3173-4f25-8f0e-3ccc5a0ca18b\") " pod="openshift-infra/auto-csr-approver-29555362-hlj7f" Mar 12 13:22:00 crc kubenswrapper[4778]: I0312 13:22:00.336629 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqtjq\" (UniqueName: \"kubernetes.io/projected/9da11ea2-3173-4f25-8f0e-3ccc5a0ca18b-kube-api-access-dqtjq\") pod \"auto-csr-approver-29555362-hlj7f\" (UID: 
\"9da11ea2-3173-4f25-8f0e-3ccc5a0ca18b\") " pod="openshift-infra/auto-csr-approver-29555362-hlj7f" Mar 12 13:22:00 crc kubenswrapper[4778]: I0312 13:22:00.456862 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555362-hlj7f" Mar 12 13:22:00 crc kubenswrapper[4778]: I0312 13:22:00.886320 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555362-hlj7f"] Mar 12 13:22:00 crc kubenswrapper[4778]: W0312 13:22:00.900063 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9da11ea2_3173_4f25_8f0e_3ccc5a0ca18b.slice/crio-4cadda00557bd5590431db7bfe006b025eff40be2272cd2a46765ffa91761117 WatchSource:0}: Error finding container 4cadda00557bd5590431db7bfe006b025eff40be2272cd2a46765ffa91761117: Status 404 returned error can't find the container with id 4cadda00557bd5590431db7bfe006b025eff40be2272cd2a46765ffa91761117 Mar 12 13:22:01 crc kubenswrapper[4778]: I0312 13:22:01.315780 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555362-hlj7f" event={"ID":"9da11ea2-3173-4f25-8f0e-3ccc5a0ca18b","Type":"ContainerStarted","Data":"4cadda00557bd5590431db7bfe006b025eff40be2272cd2a46765ffa91761117"} Mar 12 13:22:03 crc kubenswrapper[4778]: I0312 13:22:03.331411 4778 generic.go:334] "Generic (PLEG): container finished" podID="9da11ea2-3173-4f25-8f0e-3ccc5a0ca18b" containerID="1c6932f83080c12204b2bc10f63ca97fbee0fb238358dc69be9a27d4fc46a8a5" exitCode=0 Mar 12 13:22:03 crc kubenswrapper[4778]: I0312 13:22:03.331489 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555362-hlj7f" event={"ID":"9da11ea2-3173-4f25-8f0e-3ccc5a0ca18b","Type":"ContainerDied","Data":"1c6932f83080c12204b2bc10f63ca97fbee0fb238358dc69be9a27d4fc46a8a5"} Mar 12 13:22:04 crc kubenswrapper[4778]: I0312 13:22:04.591734 4778 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555362-hlj7f" Mar 12 13:22:04 crc kubenswrapper[4778]: I0312 13:22:04.685038 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqtjq\" (UniqueName: \"kubernetes.io/projected/9da11ea2-3173-4f25-8f0e-3ccc5a0ca18b-kube-api-access-dqtjq\") pod \"9da11ea2-3173-4f25-8f0e-3ccc5a0ca18b\" (UID: \"9da11ea2-3173-4f25-8f0e-3ccc5a0ca18b\") " Mar 12 13:22:04 crc kubenswrapper[4778]: I0312 13:22:04.692080 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9da11ea2-3173-4f25-8f0e-3ccc5a0ca18b-kube-api-access-dqtjq" (OuterVolumeSpecName: "kube-api-access-dqtjq") pod "9da11ea2-3173-4f25-8f0e-3ccc5a0ca18b" (UID: "9da11ea2-3173-4f25-8f0e-3ccc5a0ca18b"). InnerVolumeSpecName "kube-api-access-dqtjq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:22:04 crc kubenswrapper[4778]: I0312 13:22:04.786001 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqtjq\" (UniqueName: \"kubernetes.io/projected/9da11ea2-3173-4f25-8f0e-3ccc5a0ca18b-kube-api-access-dqtjq\") on node \"crc\" DevicePath \"\"" Mar 12 13:22:05 crc kubenswrapper[4778]: I0312 13:22:05.342773 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555362-hlj7f" event={"ID":"9da11ea2-3173-4f25-8f0e-3ccc5a0ca18b","Type":"ContainerDied","Data":"4cadda00557bd5590431db7bfe006b025eff40be2272cd2a46765ffa91761117"} Mar 12 13:22:05 crc kubenswrapper[4778]: I0312 13:22:05.342810 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555362-hlj7f" Mar 12 13:22:05 crc kubenswrapper[4778]: I0312 13:22:05.342821 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4cadda00557bd5590431db7bfe006b025eff40be2272cd2a46765ffa91761117" Mar 12 13:22:05 crc kubenswrapper[4778]: I0312 13:22:05.647853 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555356-cdmcz"] Mar 12 13:22:05 crc kubenswrapper[4778]: I0312 13:22:05.651001 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555356-cdmcz"] Mar 12 13:22:06 crc kubenswrapper[4778]: I0312 13:22:06.266690 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c792e81a-8273-49a7-be95-c8c19cd2785b" path="/var/lib/kubelet/pods/c792e81a-8273-49a7-be95-c8c19cd2785b/volumes" Mar 12 13:22:28 crc kubenswrapper[4778]: I0312 13:22:28.557917 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 13:22:28 crc kubenswrapper[4778]: I0312 13:22:28.558412 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 13:22:46 crc kubenswrapper[4778]: I0312 13:22:46.755456 4778 scope.go:117] "RemoveContainer" containerID="b6d55e4553c4a90b5714d39c88d9e361c3f3109a89cdbda1980233a5b1fade38" Mar 12 13:22:58 crc kubenswrapper[4778]: I0312 13:22:58.558374 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 13:22:58 crc kubenswrapper[4778]: I0312 13:22:58.559247 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 13:22:58 crc kubenswrapper[4778]: I0312 13:22:58.559329 4778 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" Mar 12 13:22:58 crc kubenswrapper[4778]: I0312 13:22:58.560062 4778 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dfcc37339849724c4aacca3262255dd43897a2284c2172380a90cc97f52e3a46"} pod="openshift-machine-config-operator/machine-config-daemon-2qx88" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 13:22:58 crc kubenswrapper[4778]: I0312 13:22:58.560151 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" containerID="cri-o://dfcc37339849724c4aacca3262255dd43897a2284c2172380a90cc97f52e3a46" gracePeriod=600 Mar 12 13:22:59 crc kubenswrapper[4778]: I0312 13:22:59.689390 4778 generic.go:334] "Generic (PLEG): container finished" podID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerID="dfcc37339849724c4aacca3262255dd43897a2284c2172380a90cc97f52e3a46" exitCode=0 Mar 12 13:22:59 crc kubenswrapper[4778]: I0312 13:22:59.689464 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-2qx88" event={"ID":"24438fc6-dab0-4a9e-8b97-2532da76d9cd","Type":"ContainerDied","Data":"dfcc37339849724c4aacca3262255dd43897a2284c2172380a90cc97f52e3a46"} Mar 12 13:22:59 crc kubenswrapper[4778]: I0312 13:22:59.689807 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" event={"ID":"24438fc6-dab0-4a9e-8b97-2532da76d9cd","Type":"ContainerStarted","Data":"b65e287d42eea6146877a35b0789c26ac0ef9f5d251a760b59f08b3fef055d65"} Mar 12 13:22:59 crc kubenswrapper[4778]: I0312 13:22:59.689830 4778 scope.go:117] "RemoveContainer" containerID="e50690e6aff1fd408e6201d1eee1240e692ce04bc21873dbbe85a5f2d638d704" Mar 12 13:23:04 crc kubenswrapper[4778]: I0312 13:23:04.877216 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-jxs4g"] Mar 12 13:23:04 crc kubenswrapper[4778]: E0312 13:23:04.877984 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9da11ea2-3173-4f25-8f0e-3ccc5a0ca18b" containerName="oc" Mar 12 13:23:04 crc kubenswrapper[4778]: I0312 13:23:04.878001 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="9da11ea2-3173-4f25-8f0e-3ccc5a0ca18b" containerName="oc" Mar 12 13:23:04 crc kubenswrapper[4778]: I0312 13:23:04.878129 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="9da11ea2-3173-4f25-8f0e-3ccc5a0ca18b" containerName="oc" Mar 12 13:23:04 crc kubenswrapper[4778]: I0312 13:23:04.878586 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-jxs4g" Mar 12 13:23:04 crc kubenswrapper[4778]: I0312 13:23:04.881240 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Mar 12 13:23:04 crc kubenswrapper[4778]: I0312 13:23:04.881686 4778 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-x4ptm" Mar 12 13:23:04 crc kubenswrapper[4778]: I0312 13:23:04.881995 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Mar 12 13:23:04 crc kubenswrapper[4778]: I0312 13:23:04.887074 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-2774s"] Mar 12 13:23:04 crc kubenswrapper[4778]: I0312 13:23:04.887985 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-2774s" Mar 12 13:23:04 crc kubenswrapper[4778]: I0312 13:23:04.889895 4778 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-r9jbf" Mar 12 13:23:04 crc kubenswrapper[4778]: I0312 13:23:04.908480 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-2774s"] Mar 12 13:23:04 crc kubenswrapper[4778]: I0312 13:23:04.913500 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-jxs4g"] Mar 12 13:23:04 crc kubenswrapper[4778]: I0312 13:23:04.933424 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-ffh2x"] Mar 12 13:23:04 crc kubenswrapper[4778]: I0312 13:23:04.935642 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-ffh2x" Mar 12 13:23:04 crc kubenswrapper[4778]: I0312 13:23:04.937666 4778 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-6zmz6" Mar 12 13:23:04 crc kubenswrapper[4778]: I0312 13:23:04.942193 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-ffh2x"] Mar 12 13:23:05 crc kubenswrapper[4778]: I0312 13:23:05.032666 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmkwl\" (UniqueName: \"kubernetes.io/projected/804d0b09-6fab-4277-936a-5e0324d76b3e-kube-api-access-xmkwl\") pod \"cert-manager-cainjector-cf98fcc89-jxs4g\" (UID: \"804d0b09-6fab-4277-936a-5e0324d76b3e\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-jxs4g" Mar 12 13:23:05 crc kubenswrapper[4778]: I0312 13:23:05.032741 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-668fd\" (UniqueName: \"kubernetes.io/projected/45da07c5-bccb-4433-aa38-d9d2894f1b09-kube-api-access-668fd\") pod \"cert-manager-webhook-687f57d79b-ffh2x\" (UID: \"45da07c5-bccb-4433-aa38-d9d2894f1b09\") " pod="cert-manager/cert-manager-webhook-687f57d79b-ffh2x" Mar 12 13:23:05 crc kubenswrapper[4778]: I0312 13:23:05.032777 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m52wz\" (UniqueName: \"kubernetes.io/projected/92b29110-f478-42b5-9a5f-c9330a3973b2-kube-api-access-m52wz\") pod \"cert-manager-858654f9db-2774s\" (UID: \"92b29110-f478-42b5-9a5f-c9330a3973b2\") " pod="cert-manager/cert-manager-858654f9db-2774s" Mar 12 13:23:05 crc kubenswrapper[4778]: I0312 13:23:05.134312 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmkwl\" (UniqueName: 
\"kubernetes.io/projected/804d0b09-6fab-4277-936a-5e0324d76b3e-kube-api-access-xmkwl\") pod \"cert-manager-cainjector-cf98fcc89-jxs4g\" (UID: \"804d0b09-6fab-4277-936a-5e0324d76b3e\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-jxs4g" Mar 12 13:23:05 crc kubenswrapper[4778]: I0312 13:23:05.134388 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-668fd\" (UniqueName: \"kubernetes.io/projected/45da07c5-bccb-4433-aa38-d9d2894f1b09-kube-api-access-668fd\") pod \"cert-manager-webhook-687f57d79b-ffh2x\" (UID: \"45da07c5-bccb-4433-aa38-d9d2894f1b09\") " pod="cert-manager/cert-manager-webhook-687f57d79b-ffh2x" Mar 12 13:23:05 crc kubenswrapper[4778]: I0312 13:23:05.134419 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m52wz\" (UniqueName: \"kubernetes.io/projected/92b29110-f478-42b5-9a5f-c9330a3973b2-kube-api-access-m52wz\") pod \"cert-manager-858654f9db-2774s\" (UID: \"92b29110-f478-42b5-9a5f-c9330a3973b2\") " pod="cert-manager/cert-manager-858654f9db-2774s" Mar 12 13:23:05 crc kubenswrapper[4778]: I0312 13:23:05.153157 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m52wz\" (UniqueName: \"kubernetes.io/projected/92b29110-f478-42b5-9a5f-c9330a3973b2-kube-api-access-m52wz\") pod \"cert-manager-858654f9db-2774s\" (UID: \"92b29110-f478-42b5-9a5f-c9330a3973b2\") " pod="cert-manager/cert-manager-858654f9db-2774s" Mar 12 13:23:05 crc kubenswrapper[4778]: I0312 13:23:05.153735 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmkwl\" (UniqueName: \"kubernetes.io/projected/804d0b09-6fab-4277-936a-5e0324d76b3e-kube-api-access-xmkwl\") pod \"cert-manager-cainjector-cf98fcc89-jxs4g\" (UID: \"804d0b09-6fab-4277-936a-5e0324d76b3e\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-jxs4g" Mar 12 13:23:05 crc kubenswrapper[4778]: I0312 13:23:05.158435 4778 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-668fd\" (UniqueName: \"kubernetes.io/projected/45da07c5-bccb-4433-aa38-d9d2894f1b09-kube-api-access-668fd\") pod \"cert-manager-webhook-687f57d79b-ffh2x\" (UID: \"45da07c5-bccb-4433-aa38-d9d2894f1b09\") " pod="cert-manager/cert-manager-webhook-687f57d79b-ffh2x" Mar 12 13:23:05 crc kubenswrapper[4778]: I0312 13:23:05.191702 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-jxs4g" Mar 12 13:23:05 crc kubenswrapper[4778]: I0312 13:23:05.209546 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-2774s" Mar 12 13:23:05 crc kubenswrapper[4778]: I0312 13:23:05.252442 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-ffh2x" Mar 12 13:23:05 crc kubenswrapper[4778]: I0312 13:23:05.431441 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-jxs4g"] Mar 12 13:23:05 crc kubenswrapper[4778]: I0312 13:23:05.444745 4778 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 13:23:05 crc kubenswrapper[4778]: I0312 13:23:05.468303 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-2774s"] Mar 12 13:23:05 crc kubenswrapper[4778]: W0312 13:23:05.472393 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92b29110_f478_42b5_9a5f_c9330a3973b2.slice/crio-7649f3a93dee22f65d3b47331535f4c516f53f9ee9e71de7aa8bd889a816a4b2 WatchSource:0}: Error finding container 7649f3a93dee22f65d3b47331535f4c516f53f9ee9e71de7aa8bd889a816a4b2: Status 404 returned error can't find the container with id 7649f3a93dee22f65d3b47331535f4c516f53f9ee9e71de7aa8bd889a816a4b2 Mar 12 13:23:05 crc kubenswrapper[4778]: I0312 
13:23:05.520824 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-ffh2x"] Mar 12 13:23:05 crc kubenswrapper[4778]: W0312 13:23:05.532385 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45da07c5_bccb_4433_aa38_d9d2894f1b09.slice/crio-02763cee9bdd368e2df4e6001abd9f67d450f69519235734d269a72af5e041e2 WatchSource:0}: Error finding container 02763cee9bdd368e2df4e6001abd9f67d450f69519235734d269a72af5e041e2: Status 404 returned error can't find the container with id 02763cee9bdd368e2df4e6001abd9f67d450f69519235734d269a72af5e041e2 Mar 12 13:23:05 crc kubenswrapper[4778]: I0312 13:23:05.724232 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-2774s" event={"ID":"92b29110-f478-42b5-9a5f-c9330a3973b2","Type":"ContainerStarted","Data":"7649f3a93dee22f65d3b47331535f4c516f53f9ee9e71de7aa8bd889a816a4b2"} Mar 12 13:23:05 crc kubenswrapper[4778]: I0312 13:23:05.725442 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-ffh2x" event={"ID":"45da07c5-bccb-4433-aa38-d9d2894f1b09","Type":"ContainerStarted","Data":"02763cee9bdd368e2df4e6001abd9f67d450f69519235734d269a72af5e041e2"} Mar 12 13:23:05 crc kubenswrapper[4778]: I0312 13:23:05.726834 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-jxs4g" event={"ID":"804d0b09-6fab-4277-936a-5e0324d76b3e","Type":"ContainerStarted","Data":"10510c6ad52b31596235e39038f9df629faf2cb933dd3c897d53107b62a0addb"} Mar 12 13:23:12 crc kubenswrapper[4778]: I0312 13:23:12.779424 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-2774s" event={"ID":"92b29110-f478-42b5-9a5f-c9330a3973b2","Type":"ContainerStarted","Data":"11efd69e962344516667a5b7b415e9dcfff2f837949701433b301278c93e8b43"} Mar 12 13:23:12 crc 
kubenswrapper[4778]: I0312 13:23:12.781956 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-ffh2x" event={"ID":"45da07c5-bccb-4433-aa38-d9d2894f1b09","Type":"ContainerStarted","Data":"d813356578c5e1f7bab8605e1f5ad4fb6a22081d657c6f914c75bb9d59e890ef"} Mar 12 13:23:12 crc kubenswrapper[4778]: I0312 13:23:12.782079 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-ffh2x" Mar 12 13:23:12 crc kubenswrapper[4778]: I0312 13:23:12.783728 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-jxs4g" event={"ID":"804d0b09-6fab-4277-936a-5e0324d76b3e","Type":"ContainerStarted","Data":"725cb330b7b3d6799189ae3a295da601d5d574bfb6cffd99984951cfd40f6425"} Mar 12 13:23:12 crc kubenswrapper[4778]: I0312 13:23:12.798347 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-2774s" podStartSLOduration=2.624663656 podStartE2EDuration="8.798309642s" podCreationTimestamp="2026-03-12 13:23:04 +0000 UTC" firstStartedPulling="2026-03-12 13:23:05.475227653 +0000 UTC m=+803.923923049" lastFinishedPulling="2026-03-12 13:23:11.648873639 +0000 UTC m=+810.097569035" observedRunningTime="2026-03-12 13:23:12.795417129 +0000 UTC m=+811.244112555" watchObservedRunningTime="2026-03-12 13:23:12.798309642 +0000 UTC m=+811.247005048" Mar 12 13:23:12 crc kubenswrapper[4778]: I0312 13:23:12.814540 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-ffh2x" podStartSLOduration=2.56655417 podStartE2EDuration="8.814501966s" podCreationTimestamp="2026-03-12 13:23:04 +0000 UTC" firstStartedPulling="2026-03-12 13:23:05.535444108 +0000 UTC m=+803.984139514" lastFinishedPulling="2026-03-12 13:23:11.783391914 +0000 UTC m=+810.232087310" observedRunningTime="2026-03-12 13:23:12.810839521 +0000 UTC m=+811.259534917" 
watchObservedRunningTime="2026-03-12 13:23:12.814501966 +0000 UTC m=+811.263197362" Mar 12 13:23:12 crc kubenswrapper[4778]: I0312 13:23:12.843791 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-jxs4g" podStartSLOduration=2.632979375 podStartE2EDuration="8.843772055s" podCreationTimestamp="2026-03-12 13:23:04 +0000 UTC" firstStartedPulling="2026-03-12 13:23:05.443829903 +0000 UTC m=+803.892525299" lastFinishedPulling="2026-03-12 13:23:11.654622583 +0000 UTC m=+810.103317979" observedRunningTime="2026-03-12 13:23:12.842630852 +0000 UTC m=+811.291326248" watchObservedRunningTime="2026-03-12 13:23:12.843772055 +0000 UTC m=+811.292467461" Mar 12 13:23:14 crc kubenswrapper[4778]: I0312 13:23:14.732152 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8bcc9"] Mar 12 13:23:14 crc kubenswrapper[4778]: I0312 13:23:14.733898 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" podUID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" containerName="ovn-controller" containerID="cri-o://b22f94192e4eee991a699c32f338a8d452d8fc0ce5dfa1197694a237697c4500" gracePeriod=30 Mar 12 13:23:14 crc kubenswrapper[4778]: I0312 13:23:14.734122 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" podUID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" containerName="northd" containerID="cri-o://2818aadc56c24df41309aa63fddf44dac870f041f206d59db4d8b8f88f728483" gracePeriod=30 Mar 12 13:23:14 crc kubenswrapper[4778]: I0312 13:23:14.734223 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" podUID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://1325a4d5784843f5ef2dc629d3410d154ea07de5cd70cfae54d7111fe3e1ea3e" gracePeriod=30 Mar 
12 13:23:14 crc kubenswrapper[4778]: I0312 13:23:14.734094 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" podUID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" containerName="sbdb" containerID="cri-o://6bc4107e3fb5708a2acf2664ce876af4c682377a9b4ee393230b78d5b021552d" gracePeriod=30
Mar 12 13:23:14 crc kubenswrapper[4778]: I0312 13:23:14.734273 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" podUID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" containerName="ovn-acl-logging" containerID="cri-o://1d8f2c481e6f3f8845b5e195a7fde8ae6415d25fb1701c25d93be7af1f4ef8f6" gracePeriod=30
Mar 12 13:23:14 crc kubenswrapper[4778]: I0312 13:23:14.733990 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" podUID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" containerName="nbdb" containerID="cri-o://78da788cc1d96e866afcf18edff24c064972e022b1b4c3f5bec3175da5e989fa" gracePeriod=30
Mar 12 13:23:14 crc kubenswrapper[4778]: I0312 13:23:14.734107 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" podUID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" containerName="kube-rbac-proxy-node" containerID="cri-o://8f7250fa81a99a607a87f5f2fe4a85fcc2afbf7bfc12845bec4b5cbbed7784c4" gracePeriod=30
Mar 12 13:23:14 crc kubenswrapper[4778]: I0312 13:23:14.779349 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" podUID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" containerName="ovnkube-controller" containerID="cri-o://9afb5c8d21c64a6b41dbded768a82ec790fb6f2f6a21efa119251504eb0c3a8d" gracePeriod=30
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.549092 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8bcc9_65cd795e-eb6e-4995-a4c1-9dea6f425ac5/ovnkube-controller/3.log"
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.552704 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8bcc9_65cd795e-eb6e-4995-a4c1-9dea6f425ac5/ovn-acl-logging/0.log"
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.553519 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8bcc9_65cd795e-eb6e-4995-a4c1-9dea6f425ac5/ovn-controller/0.log"
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.554225 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9"
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.618989 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-b8gxm"]
Mar 12 13:23:15 crc kubenswrapper[4778]: E0312 13:23:15.619197 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" containerName="kube-rbac-proxy-node"
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.619209 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" containerName="kube-rbac-proxy-node"
Mar 12 13:23:15 crc kubenswrapper[4778]: E0312 13:23:15.619221 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" containerName="kubecfg-setup"
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.619262 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" containerName="kubecfg-setup"
Mar 12 13:23:15 crc kubenswrapper[4778]: E0312 13:23:15.619271 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" containerName="ovn-acl-logging"
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.619276 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" containerName="ovn-acl-logging"
Mar 12 13:23:15 crc kubenswrapper[4778]: E0312 13:23:15.619287 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" containerName="nbdb"
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.619292 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" containerName="nbdb"
Mar 12 13:23:15 crc kubenswrapper[4778]: E0312 13:23:15.619299 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" containerName="ovnkube-controller"
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.619305 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" containerName="ovnkube-controller"
Mar 12 13:23:15 crc kubenswrapper[4778]: E0312 13:23:15.619313 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" containerName="ovnkube-controller"
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.619318 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" containerName="ovnkube-controller"
Mar 12 13:23:15 crc kubenswrapper[4778]: E0312 13:23:15.619328 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" containerName="sbdb"
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.619334 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" containerName="sbdb"
Mar 12 13:23:15 crc kubenswrapper[4778]: E0312 13:23:15.619344 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" containerName="ovn-controller"
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.619349 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" containerName="ovn-controller"
Mar 12 13:23:15 crc kubenswrapper[4778]: E0312 13:23:15.619355 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" containerName="ovnkube-controller"
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.619361 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" containerName="ovnkube-controller"
Mar 12 13:23:15 crc kubenswrapper[4778]: E0312 13:23:15.619368 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" containerName="ovnkube-controller"
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.619373 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" containerName="ovnkube-controller"
Mar 12 13:23:15 crc kubenswrapper[4778]: E0312 13:23:15.619382 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" containerName="northd"
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.619388 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" containerName="northd"
Mar 12 13:23:15 crc kubenswrapper[4778]: E0312 13:23:15.619393 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" containerName="ovnkube-controller"
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.619399 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" containerName="ovnkube-controller"
Mar 12 13:23:15 crc kubenswrapper[4778]: E0312 13:23:15.619408 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" containerName="kube-rbac-proxy-ovn-metrics"
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.619414 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" containerName="kube-rbac-proxy-ovn-metrics"
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.619500 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" containerName="ovn-acl-logging"
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.619512 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" containerName="ovnkube-controller"
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.619520 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" containerName="nbdb"
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.619530 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" containerName="kube-rbac-proxy-node"
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.619536 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" containerName="northd"
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.619543 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" containerName="sbdb"
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.619550 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" containerName="kube-rbac-proxy-ovn-metrics"
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.619558 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" containerName="ovn-controller"
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.619565 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" containerName="ovnkube-controller"
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.619571 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" containerName="ovnkube-controller"
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.619577 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" containerName="ovnkube-controller"
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.619737 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" containerName="ovnkube-controller"
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.621350 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm"
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.680761 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-log-socket\") pod \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") "
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.680794 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-host-run-netns\") pod \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") "
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.680825 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-ovnkube-config\") pod \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") "
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.680849 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-host-slash\") pod \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") "
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.680874 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-host-kubelet\") pod \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") "
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.680914 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-etc-openvswitch\") pod \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") "
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.680933 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-host-run-ovn-kubernetes\") pod \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") "
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.680947 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-run-ovn\") pod \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") "
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.680978 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-ovn-node-metrics-cert\") pod \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") "
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.680998 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-run-systemd\") pod \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") "
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.681011 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-node-log\") pod \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") "
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.681029 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") "
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.681048 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-ovnkube-script-lib\") pod \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") "
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.681063 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-var-lib-openvswitch\") pod \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") "
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.681081 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-run-openvswitch\") pod \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") "
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.681100 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-host-cni-bin\") pod \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") "
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.681113 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-host-cni-netd\") pod \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") "
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.681130 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-env-overrides\") pod \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") "
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.681150 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-schvw\" (UniqueName: \"kubernetes.io/projected/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-kube-api-access-schvw\") pod \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") "
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.681166 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-systemd-units\") pod \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\" (UID: \"65cd795e-eb6e-4995-a4c1-9dea6f425ac5\") "
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.681378 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "65cd795e-eb6e-4995-a4c1-9dea6f425ac5" (UID: "65cd795e-eb6e-4995-a4c1-9dea6f425ac5"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.681408 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-log-socket" (OuterVolumeSpecName: "log-socket") pod "65cd795e-eb6e-4995-a4c1-9dea6f425ac5" (UID: "65cd795e-eb6e-4995-a4c1-9dea6f425ac5"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.681424 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "65cd795e-eb6e-4995-a4c1-9dea6f425ac5" (UID: "65cd795e-eb6e-4995-a4c1-9dea6f425ac5"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.681806 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "65cd795e-eb6e-4995-a4c1-9dea6f425ac5" (UID: "65cd795e-eb6e-4995-a4c1-9dea6f425ac5"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.681831 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-host-slash" (OuterVolumeSpecName: "host-slash") pod "65cd795e-eb6e-4995-a4c1-9dea6f425ac5" (UID: "65cd795e-eb6e-4995-a4c1-9dea6f425ac5"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.681847 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "65cd795e-eb6e-4995-a4c1-9dea6f425ac5" (UID: "65cd795e-eb6e-4995-a4c1-9dea6f425ac5"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.681863 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "65cd795e-eb6e-4995-a4c1-9dea6f425ac5" (UID: "65cd795e-eb6e-4995-a4c1-9dea6f425ac5"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.681878 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "65cd795e-eb6e-4995-a4c1-9dea6f425ac5" (UID: "65cd795e-eb6e-4995-a4c1-9dea6f425ac5"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.681895 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "65cd795e-eb6e-4995-a4c1-9dea6f425ac5" (UID: "65cd795e-eb6e-4995-a4c1-9dea6f425ac5"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.682663 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "65cd795e-eb6e-4995-a4c1-9dea6f425ac5" (UID: "65cd795e-eb6e-4995-a4c1-9dea6f425ac5"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.682701 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-node-log" (OuterVolumeSpecName: "node-log") pod "65cd795e-eb6e-4995-a4c1-9dea6f425ac5" (UID: "65cd795e-eb6e-4995-a4c1-9dea6f425ac5"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.682726 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "65cd795e-eb6e-4995-a4c1-9dea6f425ac5" (UID: "65cd795e-eb6e-4995-a4c1-9dea6f425ac5"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.682754 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "65cd795e-eb6e-4995-a4c1-9dea6f425ac5" (UID: "65cd795e-eb6e-4995-a4c1-9dea6f425ac5"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.682820 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "65cd795e-eb6e-4995-a4c1-9dea6f425ac5" (UID: "65cd795e-eb6e-4995-a4c1-9dea6f425ac5"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.682850 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "65cd795e-eb6e-4995-a4c1-9dea6f425ac5" (UID: "65cd795e-eb6e-4995-a4c1-9dea6f425ac5"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.683002 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "65cd795e-eb6e-4995-a4c1-9dea6f425ac5" (UID: "65cd795e-eb6e-4995-a4c1-9dea6f425ac5"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.683307 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "65cd795e-eb6e-4995-a4c1-9dea6f425ac5" (UID: "65cd795e-eb6e-4995-a4c1-9dea6f425ac5"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.687340 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-kube-api-access-schvw" (OuterVolumeSpecName: "kube-api-access-schvw") pod "65cd795e-eb6e-4995-a4c1-9dea6f425ac5" (UID: "65cd795e-eb6e-4995-a4c1-9dea6f425ac5"). InnerVolumeSpecName "kube-api-access-schvw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.688061 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "65cd795e-eb6e-4995-a4c1-9dea6f425ac5" (UID: "65cd795e-eb6e-4995-a4c1-9dea6f425ac5"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.696728 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "65cd795e-eb6e-4995-a4c1-9dea6f425ac5" (UID: "65cd795e-eb6e-4995-a4c1-9dea6f425ac5"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.782273 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1664bb45-1e97-4371-9dbc-5e27e778ee0b-log-socket\") pod \"ovnkube-node-b8gxm\" (UID: \"1664bb45-1e97-4371-9dbc-5e27e778ee0b\") " pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm"
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.783522 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1664bb45-1e97-4371-9dbc-5e27e778ee0b-ovn-node-metrics-cert\") pod \"ovnkube-node-b8gxm\" (UID: \"1664bb45-1e97-4371-9dbc-5e27e778ee0b\") " pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm"
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.783677 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1664bb45-1e97-4371-9dbc-5e27e778ee0b-env-overrides\") pod \"ovnkube-node-b8gxm\" (UID: \"1664bb45-1e97-4371-9dbc-5e27e778ee0b\") " pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm"
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.783803 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfssb\" (UniqueName: \"kubernetes.io/projected/1664bb45-1e97-4371-9dbc-5e27e778ee0b-kube-api-access-cfssb\") pod \"ovnkube-node-b8gxm\" (UID: \"1664bb45-1e97-4371-9dbc-5e27e778ee0b\") " pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm"
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.783931 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1664bb45-1e97-4371-9dbc-5e27e778ee0b-run-ovn\") pod \"ovnkube-node-b8gxm\" (UID: \"1664bb45-1e97-4371-9dbc-5e27e778ee0b\") " pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm"
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.784040 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1664bb45-1e97-4371-9dbc-5e27e778ee0b-host-cni-bin\") pod \"ovnkube-node-b8gxm\" (UID: \"1664bb45-1e97-4371-9dbc-5e27e778ee0b\") " pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm"
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.784175 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1664bb45-1e97-4371-9dbc-5e27e778ee0b-run-openvswitch\") pod \"ovnkube-node-b8gxm\" (UID: \"1664bb45-1e97-4371-9dbc-5e27e778ee0b\") " pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm"
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.784294 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1664bb45-1e97-4371-9dbc-5e27e778ee0b-host-cni-netd\") pod \"ovnkube-node-b8gxm\" (UID: \"1664bb45-1e97-4371-9dbc-5e27e778ee0b\") " pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm"
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.784441 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1664bb45-1e97-4371-9dbc-5e27e778ee0b-ovnkube-script-lib\") pod \"ovnkube-node-b8gxm\" (UID: \"1664bb45-1e97-4371-9dbc-5e27e778ee0b\") " pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm"
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.784558 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1664bb45-1e97-4371-9dbc-5e27e778ee0b-host-slash\") pod \"ovnkube-node-b8gxm\" (UID: \"1664bb45-1e97-4371-9dbc-5e27e778ee0b\") " pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm"
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.784679 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1664bb45-1e97-4371-9dbc-5e27e778ee0b-systemd-units\") pod \"ovnkube-node-b8gxm\" (UID: \"1664bb45-1e97-4371-9dbc-5e27e778ee0b\") " pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm"
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.784814 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1664bb45-1e97-4371-9dbc-5e27e778ee0b-var-lib-openvswitch\") pod \"ovnkube-node-b8gxm\" (UID: \"1664bb45-1e97-4371-9dbc-5e27e778ee0b\") " pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm"
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.785092 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1664bb45-1e97-4371-9dbc-5e27e778ee0b-host-run-netns\") pod \"ovnkube-node-b8gxm\" (UID: \"1664bb45-1e97-4371-9dbc-5e27e778ee0b\") " pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm"
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.785232 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1664bb45-1e97-4371-9dbc-5e27e778ee0b-run-systemd\") pod \"ovnkube-node-b8gxm\" (UID: \"1664bb45-1e97-4371-9dbc-5e27e778ee0b\") " pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm"
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.785355 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1664bb45-1e97-4371-9dbc-5e27e778ee0b-ovnkube-config\") pod \"ovnkube-node-b8gxm\" (UID: \"1664bb45-1e97-4371-9dbc-5e27e778ee0b\") " pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm"
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.785561 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1664bb45-1e97-4371-9dbc-5e27e778ee0b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-b8gxm\" (UID: \"1664bb45-1e97-4371-9dbc-5e27e778ee0b\") " pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm"
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.785671 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1664bb45-1e97-4371-9dbc-5e27e778ee0b-node-log\") pod \"ovnkube-node-b8gxm\" (UID: \"1664bb45-1e97-4371-9dbc-5e27e778ee0b\") " pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm"
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.785794 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1664bb45-1e97-4371-9dbc-5e27e778ee0b-etc-openvswitch\") pod \"ovnkube-node-b8gxm\" (UID: \"1664bb45-1e97-4371-9dbc-5e27e778ee0b\") " pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm"
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.785891 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1664bb45-1e97-4371-9dbc-5e27e778ee0b-host-run-ovn-kubernetes\") pod \"ovnkube-node-b8gxm\" (UID: \"1664bb45-1e97-4371-9dbc-5e27e778ee0b\") " pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm"
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.786019 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1664bb45-1e97-4371-9dbc-5e27e778ee0b-host-kubelet\") pod \"ovnkube-node-b8gxm\" (UID: \"1664bb45-1e97-4371-9dbc-5e27e778ee0b\") " pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm"
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.786234 4778 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-host-cni-bin\") on node \"crc\" DevicePath \"\""
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.786323 4778 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-host-cni-netd\") on node \"crc\" DevicePath \"\""
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.786398 4778 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-env-overrides\") on node \"crc\" DevicePath \"\""
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.786471 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-schvw\" (UniqueName: \"kubernetes.io/projected/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-kube-api-access-schvw\") on node \"crc\" DevicePath \"\""
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.786544 4778 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-systemd-units\") on node \"crc\" DevicePath \"\""
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.786622 4778 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-log-socket\") on node \"crc\" DevicePath \"\""
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.786688 4778 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-host-run-netns\") on node \"crc\" DevicePath \"\""
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.786752 4778 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-ovnkube-config\") on node \"crc\" DevicePath \"\""
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.786854 4778 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-host-slash\") on node \"crc\" DevicePath \"\""
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.786930 4778 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-host-kubelet\") on node \"crc\" DevicePath \"\""
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.786998 4778 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.787073 4778 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.787177 4778 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-run-ovn\") on node \"crc\" DevicePath \"\""
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.787270 4778 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.787374 4778 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-run-systemd\") on node \"crc\" DevicePath \"\""
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.787439 4778 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-node-log\") on node \"crc\" DevicePath \"\""
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.787494 4778 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.787554 4778 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.787607 4778 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.787661 4778 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/65cd795e-eb6e-4995-a4c1-9dea6f425ac5-run-openvswitch\") on node \"crc\" DevicePath \"\""
Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.804076 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fhcz6_1e7037a8-a966-4df0-9f94-fe2dd3e2de6e/kube-multus/1.log"
Mar 12 
13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.804762 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fhcz6_1e7037a8-a966-4df0-9f94-fe2dd3e2de6e/kube-multus/0.log" Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.804897 4778 generic.go:334] "Generic (PLEG): container finished" podID="1e7037a8-a966-4df0-9f94-fe2dd3e2de6e" containerID="44a3c76b2249ac9c24848e6b3a9fc08aef2d2bca3d170ce28b0f9384e3a8271e" exitCode=2 Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.805040 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fhcz6" event={"ID":"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e","Type":"ContainerDied","Data":"44a3c76b2249ac9c24848e6b3a9fc08aef2d2bca3d170ce28b0f9384e3a8271e"} Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.805212 4778 scope.go:117] "RemoveContainer" containerID="5da98f94c85e3a8cd05c447fb097a078968eea25419a2b22f8abe956ef1dbaac" Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.805917 4778 scope.go:117] "RemoveContainer" containerID="44a3c76b2249ac9c24848e6b3a9fc08aef2d2bca3d170ce28b0f9384e3a8271e" Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.811181 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8bcc9_65cd795e-eb6e-4995-a4c1-9dea6f425ac5/ovnkube-controller/3.log" Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.815138 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8bcc9_65cd795e-eb6e-4995-a4c1-9dea6f425ac5/ovn-acl-logging/0.log" Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.815776 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8bcc9_65cd795e-eb6e-4995-a4c1-9dea6f425ac5/ovn-controller/0.log" Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.816227 4778 generic.go:334] "Generic (PLEG): container finished" podID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" 
containerID="9afb5c8d21c64a6b41dbded768a82ec790fb6f2f6a21efa119251504eb0c3a8d" exitCode=0 Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.816251 4778 generic.go:334] "Generic (PLEG): container finished" podID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" containerID="6bc4107e3fb5708a2acf2664ce876af4c682377a9b4ee393230b78d5b021552d" exitCode=0 Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.816260 4778 generic.go:334] "Generic (PLEG): container finished" podID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" containerID="78da788cc1d96e866afcf18edff24c064972e022b1b4c3f5bec3175da5e989fa" exitCode=0 Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.816269 4778 generic.go:334] "Generic (PLEG): container finished" podID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" containerID="2818aadc56c24df41309aa63fddf44dac870f041f206d59db4d8b8f88f728483" exitCode=0 Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.816277 4778 generic.go:334] "Generic (PLEG): container finished" podID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" containerID="1325a4d5784843f5ef2dc629d3410d154ea07de5cd70cfae54d7111fe3e1ea3e" exitCode=0 Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.816285 4778 generic.go:334] "Generic (PLEG): container finished" podID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" containerID="8f7250fa81a99a607a87f5f2fe4a85fcc2afbf7bfc12845bec4b5cbbed7784c4" exitCode=0 Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.816293 4778 generic.go:334] "Generic (PLEG): container finished" podID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" containerID="1d8f2c481e6f3f8845b5e195a7fde8ae6415d25fb1701c25d93be7af1f4ef8f6" exitCode=143 Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.816254 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" event={"ID":"65cd795e-eb6e-4995-a4c1-9dea6f425ac5","Type":"ContainerDied","Data":"9afb5c8d21c64a6b41dbded768a82ec790fb6f2f6a21efa119251504eb0c3a8d"} Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 
13:23:15.816380 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" event={"ID":"65cd795e-eb6e-4995-a4c1-9dea6f425ac5","Type":"ContainerDied","Data":"6bc4107e3fb5708a2acf2664ce876af4c682377a9b4ee393230b78d5b021552d"} Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.816300 4778 generic.go:334] "Generic (PLEG): container finished" podID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" containerID="b22f94192e4eee991a699c32f338a8d452d8fc0ce5dfa1197694a237697c4500" exitCode=143 Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.816408 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" event={"ID":"65cd795e-eb6e-4995-a4c1-9dea6f425ac5","Type":"ContainerDied","Data":"78da788cc1d96e866afcf18edff24c064972e022b1b4c3f5bec3175da5e989fa"} Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.816424 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" event={"ID":"65cd795e-eb6e-4995-a4c1-9dea6f425ac5","Type":"ContainerDied","Data":"2818aadc56c24df41309aa63fddf44dac870f041f206d59db4d8b8f88f728483"} Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.816471 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" event={"ID":"65cd795e-eb6e-4995-a4c1-9dea6f425ac5","Type":"ContainerDied","Data":"1325a4d5784843f5ef2dc629d3410d154ea07de5cd70cfae54d7111fe3e1ea3e"} Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.816485 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" event={"ID":"65cd795e-eb6e-4995-a4c1-9dea6f425ac5","Type":"ContainerDied","Data":"8f7250fa81a99a607a87f5f2fe4a85fcc2afbf7bfc12845bec4b5cbbed7784c4"} Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.816498 4778 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"9afb5c8d21c64a6b41dbded768a82ec790fb6f2f6a21efa119251504eb0c3a8d"} Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.816511 4778 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5d6da6dba0e8cadf9b1073620c4856adeb6b776ae3757d420c016d25b4f98001"} Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.816545 4778 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6bc4107e3fb5708a2acf2664ce876af4c682377a9b4ee393230b78d5b021552d"} Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.816554 4778 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"78da788cc1d96e866afcf18edff24c064972e022b1b4c3f5bec3175da5e989fa"} Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.816561 4778 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2818aadc56c24df41309aa63fddf44dac870f041f206d59db4d8b8f88f728483"} Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.816568 4778 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1325a4d5784843f5ef2dc629d3410d154ea07de5cd70cfae54d7111fe3e1ea3e"} Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.816575 4778 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8f7250fa81a99a607a87f5f2fe4a85fcc2afbf7bfc12845bec4b5cbbed7784c4"} Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.816583 4778 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1d8f2c481e6f3f8845b5e195a7fde8ae6415d25fb1701c25d93be7af1f4ef8f6"} Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.816590 4778 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"b22f94192e4eee991a699c32f338a8d452d8fc0ce5dfa1197694a237697c4500"} Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.816597 4778 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e"} Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.816632 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" event={"ID":"65cd795e-eb6e-4995-a4c1-9dea6f425ac5","Type":"ContainerDied","Data":"1d8f2c481e6f3f8845b5e195a7fde8ae6415d25fb1701c25d93be7af1f4ef8f6"} Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.816647 4778 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9afb5c8d21c64a6b41dbded768a82ec790fb6f2f6a21efa119251504eb0c3a8d"} Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.816655 4778 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5d6da6dba0e8cadf9b1073620c4856adeb6b776ae3757d420c016d25b4f98001"} Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.816664 4778 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6bc4107e3fb5708a2acf2664ce876af4c682377a9b4ee393230b78d5b021552d"} Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.816671 4778 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"78da788cc1d96e866afcf18edff24c064972e022b1b4c3f5bec3175da5e989fa"} Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.816678 4778 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2818aadc56c24df41309aa63fddf44dac870f041f206d59db4d8b8f88f728483"} Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.816710 4778 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1325a4d5784843f5ef2dc629d3410d154ea07de5cd70cfae54d7111fe3e1ea3e"} Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.816719 4778 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8f7250fa81a99a607a87f5f2fe4a85fcc2afbf7bfc12845bec4b5cbbed7784c4"} Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.816726 4778 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1d8f2c481e6f3f8845b5e195a7fde8ae6415d25fb1701c25d93be7af1f4ef8f6"} Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.816734 4778 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b22f94192e4eee991a699c32f338a8d452d8fc0ce5dfa1197694a237697c4500"} Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.816741 4778 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e"} Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.816752 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" event={"ID":"65cd795e-eb6e-4995-a4c1-9dea6f425ac5","Type":"ContainerDied","Data":"b22f94192e4eee991a699c32f338a8d452d8fc0ce5dfa1197694a237697c4500"} Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.816764 4778 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9afb5c8d21c64a6b41dbded768a82ec790fb6f2f6a21efa119251504eb0c3a8d"} Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.816799 4778 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5d6da6dba0e8cadf9b1073620c4856adeb6b776ae3757d420c016d25b4f98001"} Mar 12 
13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.816807 4778 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6bc4107e3fb5708a2acf2664ce876af4c682377a9b4ee393230b78d5b021552d"} Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.816814 4778 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"78da788cc1d96e866afcf18edff24c064972e022b1b4c3f5bec3175da5e989fa"} Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.816821 4778 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2818aadc56c24df41309aa63fddf44dac870f041f206d59db4d8b8f88f728483"} Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.816828 4778 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1325a4d5784843f5ef2dc629d3410d154ea07de5cd70cfae54d7111fe3e1ea3e"} Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.816836 4778 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8f7250fa81a99a607a87f5f2fe4a85fcc2afbf7bfc12845bec4b5cbbed7784c4"} Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.816843 4778 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1d8f2c481e6f3f8845b5e195a7fde8ae6415d25fb1701c25d93be7af1f4ef8f6"} Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.816876 4778 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b22f94192e4eee991a699c32f338a8d452d8fc0ce5dfa1197694a237697c4500"} Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.816884 4778 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e"} Mar 12 
13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.816895 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" event={"ID":"65cd795e-eb6e-4995-a4c1-9dea6f425ac5","Type":"ContainerDied","Data":"591e87d9e47004fc9c6fc7b24484cec488177d8e0820b4787eb9618d9e5051df"} Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.816907 4778 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9afb5c8d21c64a6b41dbded768a82ec790fb6f2f6a21efa119251504eb0c3a8d"} Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.816915 4778 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5d6da6dba0e8cadf9b1073620c4856adeb6b776ae3757d420c016d25b4f98001"} Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.816923 4778 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6bc4107e3fb5708a2acf2664ce876af4c682377a9b4ee393230b78d5b021552d"} Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.816954 4778 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"78da788cc1d96e866afcf18edff24c064972e022b1b4c3f5bec3175da5e989fa"} Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.816964 4778 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2818aadc56c24df41309aa63fddf44dac870f041f206d59db4d8b8f88f728483"} Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.816971 4778 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1325a4d5784843f5ef2dc629d3410d154ea07de5cd70cfae54d7111fe3e1ea3e"} Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.816979 4778 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"8f7250fa81a99a607a87f5f2fe4a85fcc2afbf7bfc12845bec4b5cbbed7784c4"} Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.816986 4778 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1d8f2c481e6f3f8845b5e195a7fde8ae6415d25fb1701c25d93be7af1f4ef8f6"} Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.817028 4778 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b22f94192e4eee991a699c32f338a8d452d8fc0ce5dfa1197694a237697c4500"} Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.817036 4778 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e"} Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.818328 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8bcc9" Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.863227 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8bcc9"] Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.869132 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8bcc9"] Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.889412 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1664bb45-1e97-4371-9dbc-5e27e778ee0b-var-lib-openvswitch\") pod \"ovnkube-node-b8gxm\" (UID: \"1664bb45-1e97-4371-9dbc-5e27e778ee0b\") " pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm" Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.889523 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/1664bb45-1e97-4371-9dbc-5e27e778ee0b-host-run-netns\") pod \"ovnkube-node-b8gxm\" (UID: \"1664bb45-1e97-4371-9dbc-5e27e778ee0b\") " pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm" Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.889553 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1664bb45-1e97-4371-9dbc-5e27e778ee0b-run-systemd\") pod \"ovnkube-node-b8gxm\" (UID: \"1664bb45-1e97-4371-9dbc-5e27e778ee0b\") " pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm" Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.889573 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1664bb45-1e97-4371-9dbc-5e27e778ee0b-ovnkube-config\") pod \"ovnkube-node-b8gxm\" (UID: \"1664bb45-1e97-4371-9dbc-5e27e778ee0b\") " pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm" Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.889600 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1664bb45-1e97-4371-9dbc-5e27e778ee0b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-b8gxm\" (UID: \"1664bb45-1e97-4371-9dbc-5e27e778ee0b\") " pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm" Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.889605 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1664bb45-1e97-4371-9dbc-5e27e778ee0b-var-lib-openvswitch\") pod \"ovnkube-node-b8gxm\" (UID: \"1664bb45-1e97-4371-9dbc-5e27e778ee0b\") " pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm" Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.889622 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/1664bb45-1e97-4371-9dbc-5e27e778ee0b-node-log\") pod \"ovnkube-node-b8gxm\" (UID: \"1664bb45-1e97-4371-9dbc-5e27e778ee0b\") " pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm" Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.889677 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1664bb45-1e97-4371-9dbc-5e27e778ee0b-host-run-netns\") pod \"ovnkube-node-b8gxm\" (UID: \"1664bb45-1e97-4371-9dbc-5e27e778ee0b\") " pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm" Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.889713 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1664bb45-1e97-4371-9dbc-5e27e778ee0b-run-systemd\") pod \"ovnkube-node-b8gxm\" (UID: \"1664bb45-1e97-4371-9dbc-5e27e778ee0b\") " pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm" Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.889691 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1664bb45-1e97-4371-9dbc-5e27e778ee0b-node-log\") pod \"ovnkube-node-b8gxm\" (UID: \"1664bb45-1e97-4371-9dbc-5e27e778ee0b\") " pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm" Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.889713 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1664bb45-1e97-4371-9dbc-5e27e778ee0b-etc-openvswitch\") pod \"ovnkube-node-b8gxm\" (UID: \"1664bb45-1e97-4371-9dbc-5e27e778ee0b\") " pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm" Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.889742 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1664bb45-1e97-4371-9dbc-5e27e778ee0b-etc-openvswitch\") pod \"ovnkube-node-b8gxm\" (UID: 
\"1664bb45-1e97-4371-9dbc-5e27e778ee0b\") " pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm" Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.889767 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1664bb45-1e97-4371-9dbc-5e27e778ee0b-host-run-ovn-kubernetes\") pod \"ovnkube-node-b8gxm\" (UID: \"1664bb45-1e97-4371-9dbc-5e27e778ee0b\") " pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm" Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.889794 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1664bb45-1e97-4371-9dbc-5e27e778ee0b-host-run-ovn-kubernetes\") pod \"ovnkube-node-b8gxm\" (UID: \"1664bb45-1e97-4371-9dbc-5e27e778ee0b\") " pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm" Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.889770 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1664bb45-1e97-4371-9dbc-5e27e778ee0b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-b8gxm\" (UID: \"1664bb45-1e97-4371-9dbc-5e27e778ee0b\") " pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm" Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.889846 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1664bb45-1e97-4371-9dbc-5e27e778ee0b-host-kubelet\") pod \"ovnkube-node-b8gxm\" (UID: \"1664bb45-1e97-4371-9dbc-5e27e778ee0b\") " pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm" Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.889890 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1664bb45-1e97-4371-9dbc-5e27e778ee0b-log-socket\") pod \"ovnkube-node-b8gxm\" (UID: 
\"1664bb45-1e97-4371-9dbc-5e27e778ee0b\") " pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm" Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.889916 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1664bb45-1e97-4371-9dbc-5e27e778ee0b-ovn-node-metrics-cert\") pod \"ovnkube-node-b8gxm\" (UID: \"1664bb45-1e97-4371-9dbc-5e27e778ee0b\") " pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm" Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.889939 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1664bb45-1e97-4371-9dbc-5e27e778ee0b-log-socket\") pod \"ovnkube-node-b8gxm\" (UID: \"1664bb45-1e97-4371-9dbc-5e27e778ee0b\") " pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm" Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.889945 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1664bb45-1e97-4371-9dbc-5e27e778ee0b-env-overrides\") pod \"ovnkube-node-b8gxm\" (UID: \"1664bb45-1e97-4371-9dbc-5e27e778ee0b\") " pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm" Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.890008 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfssb\" (UniqueName: \"kubernetes.io/projected/1664bb45-1e97-4371-9dbc-5e27e778ee0b-kube-api-access-cfssb\") pod \"ovnkube-node-b8gxm\" (UID: \"1664bb45-1e97-4371-9dbc-5e27e778ee0b\") " pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm" Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.890041 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1664bb45-1e97-4371-9dbc-5e27e778ee0b-run-ovn\") pod \"ovnkube-node-b8gxm\" (UID: \"1664bb45-1e97-4371-9dbc-5e27e778ee0b\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm" Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.890073 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1664bb45-1e97-4371-9dbc-5e27e778ee0b-host-cni-bin\") pod \"ovnkube-node-b8gxm\" (UID: \"1664bb45-1e97-4371-9dbc-5e27e778ee0b\") " pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm" Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.890122 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1664bb45-1e97-4371-9dbc-5e27e778ee0b-run-openvswitch\") pod \"ovnkube-node-b8gxm\" (UID: \"1664bb45-1e97-4371-9dbc-5e27e778ee0b\") " pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm" Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.890159 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1664bb45-1e97-4371-9dbc-5e27e778ee0b-host-cni-netd\") pod \"ovnkube-node-b8gxm\" (UID: \"1664bb45-1e97-4371-9dbc-5e27e778ee0b\") " pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm" Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.890226 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1664bb45-1e97-4371-9dbc-5e27e778ee0b-ovnkube-script-lib\") pod \"ovnkube-node-b8gxm\" (UID: \"1664bb45-1e97-4371-9dbc-5e27e778ee0b\") " pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm" Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.890250 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1664bb45-1e97-4371-9dbc-5e27e778ee0b-host-slash\") pod \"ovnkube-node-b8gxm\" (UID: \"1664bb45-1e97-4371-9dbc-5e27e778ee0b\") " pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm" Mar 12 13:23:15 crc 
kubenswrapper[4778]: I0312 13:23:15.890272 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1664bb45-1e97-4371-9dbc-5e27e778ee0b-systemd-units\") pod \"ovnkube-node-b8gxm\" (UID: \"1664bb45-1e97-4371-9dbc-5e27e778ee0b\") " pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm" Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.890369 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1664bb45-1e97-4371-9dbc-5e27e778ee0b-systemd-units\") pod \"ovnkube-node-b8gxm\" (UID: \"1664bb45-1e97-4371-9dbc-5e27e778ee0b\") " pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm" Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.890509 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1664bb45-1e97-4371-9dbc-5e27e778ee0b-env-overrides\") pod \"ovnkube-node-b8gxm\" (UID: \"1664bb45-1e97-4371-9dbc-5e27e778ee0b\") " pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm" Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.890697 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1664bb45-1e97-4371-9dbc-5e27e778ee0b-run-ovn\") pod \"ovnkube-node-b8gxm\" (UID: \"1664bb45-1e97-4371-9dbc-5e27e778ee0b\") " pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm" Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.890732 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1664bb45-1e97-4371-9dbc-5e27e778ee0b-host-cni-bin\") pod \"ovnkube-node-b8gxm\" (UID: \"1664bb45-1e97-4371-9dbc-5e27e778ee0b\") " pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm" Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.890760 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/1664bb45-1e97-4371-9dbc-5e27e778ee0b-run-openvswitch\") pod \"ovnkube-node-b8gxm\" (UID: \"1664bb45-1e97-4371-9dbc-5e27e778ee0b\") " pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm" Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.890789 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1664bb45-1e97-4371-9dbc-5e27e778ee0b-host-cni-netd\") pod \"ovnkube-node-b8gxm\" (UID: \"1664bb45-1e97-4371-9dbc-5e27e778ee0b\") " pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm" Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.890930 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1664bb45-1e97-4371-9dbc-5e27e778ee0b-host-slash\") pod \"ovnkube-node-b8gxm\" (UID: \"1664bb45-1e97-4371-9dbc-5e27e778ee0b\") " pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm" Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.889913 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1664bb45-1e97-4371-9dbc-5e27e778ee0b-host-kubelet\") pod \"ovnkube-node-b8gxm\" (UID: \"1664bb45-1e97-4371-9dbc-5e27e778ee0b\") " pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm" Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.891455 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1664bb45-1e97-4371-9dbc-5e27e778ee0b-ovnkube-script-lib\") pod \"ovnkube-node-b8gxm\" (UID: \"1664bb45-1e97-4371-9dbc-5e27e778ee0b\") " pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm" Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.895748 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1664bb45-1e97-4371-9dbc-5e27e778ee0b-ovn-node-metrics-cert\") pod 
\"ovnkube-node-b8gxm\" (UID: \"1664bb45-1e97-4371-9dbc-5e27e778ee0b\") " pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm" Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.896539 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1664bb45-1e97-4371-9dbc-5e27e778ee0b-ovnkube-config\") pod \"ovnkube-node-b8gxm\" (UID: \"1664bb45-1e97-4371-9dbc-5e27e778ee0b\") " pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm" Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.909170 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfssb\" (UniqueName: \"kubernetes.io/projected/1664bb45-1e97-4371-9dbc-5e27e778ee0b-kube-api-access-cfssb\") pod \"ovnkube-node-b8gxm\" (UID: \"1664bb45-1e97-4371-9dbc-5e27e778ee0b\") " pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm" Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.940100 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm" Mar 12 13:23:15 crc kubenswrapper[4778]: W0312 13:23:15.957888 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1664bb45_1e97_4371_9dbc_5e27e778ee0b.slice/crio-01d8ef80533ea4fb98aabf7de84e5f17da4cb625b7e47d4badaaa85038ae83f5 WatchSource:0}: Error finding container 01d8ef80533ea4fb98aabf7de84e5f17da4cb625b7e47d4badaaa85038ae83f5: Status 404 returned error can't find the container with id 01d8ef80533ea4fb98aabf7de84e5f17da4cb625b7e47d4badaaa85038ae83f5 Mar 12 13:23:15 crc kubenswrapper[4778]: I0312 13:23:15.992896 4778 scope.go:117] "RemoveContainer" containerID="9afb5c8d21c64a6b41dbded768a82ec790fb6f2f6a21efa119251504eb0c3a8d" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.010747 4778 scope.go:117] "RemoveContainer" containerID="5d6da6dba0e8cadf9b1073620c4856adeb6b776ae3757d420c016d25b4f98001" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.026394 4778 scope.go:117] "RemoveContainer" containerID="6bc4107e3fb5708a2acf2664ce876af4c682377a9b4ee393230b78d5b021552d" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.045002 4778 scope.go:117] "RemoveContainer" containerID="78da788cc1d96e866afcf18edff24c064972e022b1b4c3f5bec3175da5e989fa" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.057847 4778 scope.go:117] "RemoveContainer" containerID="2818aadc56c24df41309aa63fddf44dac870f041f206d59db4d8b8f88f728483" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.070895 4778 scope.go:117] "RemoveContainer" containerID="1325a4d5784843f5ef2dc629d3410d154ea07de5cd70cfae54d7111fe3e1ea3e" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.097976 4778 scope.go:117] "RemoveContainer" containerID="8f7250fa81a99a607a87f5f2fe4a85fcc2afbf7bfc12845bec4b5cbbed7784c4" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.111228 4778 scope.go:117] "RemoveContainer" 
containerID="1d8f2c481e6f3f8845b5e195a7fde8ae6415d25fb1701c25d93be7af1f4ef8f6" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.126507 4778 scope.go:117] "RemoveContainer" containerID="b22f94192e4eee991a699c32f338a8d452d8fc0ce5dfa1197694a237697c4500" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.138130 4778 scope.go:117] "RemoveContainer" containerID="ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.150704 4778 scope.go:117] "RemoveContainer" containerID="9afb5c8d21c64a6b41dbded768a82ec790fb6f2f6a21efa119251504eb0c3a8d" Mar 12 13:23:16 crc kubenswrapper[4778]: E0312 13:23:16.151203 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9afb5c8d21c64a6b41dbded768a82ec790fb6f2f6a21efa119251504eb0c3a8d\": container with ID starting with 9afb5c8d21c64a6b41dbded768a82ec790fb6f2f6a21efa119251504eb0c3a8d not found: ID does not exist" containerID="9afb5c8d21c64a6b41dbded768a82ec790fb6f2f6a21efa119251504eb0c3a8d" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.151250 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9afb5c8d21c64a6b41dbded768a82ec790fb6f2f6a21efa119251504eb0c3a8d"} err="failed to get container status \"9afb5c8d21c64a6b41dbded768a82ec790fb6f2f6a21efa119251504eb0c3a8d\": rpc error: code = NotFound desc = could not find container \"9afb5c8d21c64a6b41dbded768a82ec790fb6f2f6a21efa119251504eb0c3a8d\": container with ID starting with 9afb5c8d21c64a6b41dbded768a82ec790fb6f2f6a21efa119251504eb0c3a8d not found: ID does not exist" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.151273 4778 scope.go:117] "RemoveContainer" containerID="5d6da6dba0e8cadf9b1073620c4856adeb6b776ae3757d420c016d25b4f98001" Mar 12 13:23:16 crc kubenswrapper[4778]: E0312 13:23:16.151590 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"5d6da6dba0e8cadf9b1073620c4856adeb6b776ae3757d420c016d25b4f98001\": container with ID starting with 5d6da6dba0e8cadf9b1073620c4856adeb6b776ae3757d420c016d25b4f98001 not found: ID does not exist" containerID="5d6da6dba0e8cadf9b1073620c4856adeb6b776ae3757d420c016d25b4f98001" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.151638 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d6da6dba0e8cadf9b1073620c4856adeb6b776ae3757d420c016d25b4f98001"} err="failed to get container status \"5d6da6dba0e8cadf9b1073620c4856adeb6b776ae3757d420c016d25b4f98001\": rpc error: code = NotFound desc = could not find container \"5d6da6dba0e8cadf9b1073620c4856adeb6b776ae3757d420c016d25b4f98001\": container with ID starting with 5d6da6dba0e8cadf9b1073620c4856adeb6b776ae3757d420c016d25b4f98001 not found: ID does not exist" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.151674 4778 scope.go:117] "RemoveContainer" containerID="6bc4107e3fb5708a2acf2664ce876af4c682377a9b4ee393230b78d5b021552d" Mar 12 13:23:16 crc kubenswrapper[4778]: E0312 13:23:16.152039 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bc4107e3fb5708a2acf2664ce876af4c682377a9b4ee393230b78d5b021552d\": container with ID starting with 6bc4107e3fb5708a2acf2664ce876af4c682377a9b4ee393230b78d5b021552d not found: ID does not exist" containerID="6bc4107e3fb5708a2acf2664ce876af4c682377a9b4ee393230b78d5b021552d" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.152073 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bc4107e3fb5708a2acf2664ce876af4c682377a9b4ee393230b78d5b021552d"} err="failed to get container status \"6bc4107e3fb5708a2acf2664ce876af4c682377a9b4ee393230b78d5b021552d\": rpc error: code = NotFound desc = could not find container 
\"6bc4107e3fb5708a2acf2664ce876af4c682377a9b4ee393230b78d5b021552d\": container with ID starting with 6bc4107e3fb5708a2acf2664ce876af4c682377a9b4ee393230b78d5b021552d not found: ID does not exist" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.152092 4778 scope.go:117] "RemoveContainer" containerID="78da788cc1d96e866afcf18edff24c064972e022b1b4c3f5bec3175da5e989fa" Mar 12 13:23:16 crc kubenswrapper[4778]: E0312 13:23:16.152362 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78da788cc1d96e866afcf18edff24c064972e022b1b4c3f5bec3175da5e989fa\": container with ID starting with 78da788cc1d96e866afcf18edff24c064972e022b1b4c3f5bec3175da5e989fa not found: ID does not exist" containerID="78da788cc1d96e866afcf18edff24c064972e022b1b4c3f5bec3175da5e989fa" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.152388 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78da788cc1d96e866afcf18edff24c064972e022b1b4c3f5bec3175da5e989fa"} err="failed to get container status \"78da788cc1d96e866afcf18edff24c064972e022b1b4c3f5bec3175da5e989fa\": rpc error: code = NotFound desc = could not find container \"78da788cc1d96e866afcf18edff24c064972e022b1b4c3f5bec3175da5e989fa\": container with ID starting with 78da788cc1d96e866afcf18edff24c064972e022b1b4c3f5bec3175da5e989fa not found: ID does not exist" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.152405 4778 scope.go:117] "RemoveContainer" containerID="2818aadc56c24df41309aa63fddf44dac870f041f206d59db4d8b8f88f728483" Mar 12 13:23:16 crc kubenswrapper[4778]: E0312 13:23:16.152760 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2818aadc56c24df41309aa63fddf44dac870f041f206d59db4d8b8f88f728483\": container with ID starting with 2818aadc56c24df41309aa63fddf44dac870f041f206d59db4d8b8f88f728483 not found: ID does not exist" 
containerID="2818aadc56c24df41309aa63fddf44dac870f041f206d59db4d8b8f88f728483" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.152786 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2818aadc56c24df41309aa63fddf44dac870f041f206d59db4d8b8f88f728483"} err="failed to get container status \"2818aadc56c24df41309aa63fddf44dac870f041f206d59db4d8b8f88f728483\": rpc error: code = NotFound desc = could not find container \"2818aadc56c24df41309aa63fddf44dac870f041f206d59db4d8b8f88f728483\": container with ID starting with 2818aadc56c24df41309aa63fddf44dac870f041f206d59db4d8b8f88f728483 not found: ID does not exist" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.152805 4778 scope.go:117] "RemoveContainer" containerID="1325a4d5784843f5ef2dc629d3410d154ea07de5cd70cfae54d7111fe3e1ea3e" Mar 12 13:23:16 crc kubenswrapper[4778]: E0312 13:23:16.153065 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1325a4d5784843f5ef2dc629d3410d154ea07de5cd70cfae54d7111fe3e1ea3e\": container with ID starting with 1325a4d5784843f5ef2dc629d3410d154ea07de5cd70cfae54d7111fe3e1ea3e not found: ID does not exist" containerID="1325a4d5784843f5ef2dc629d3410d154ea07de5cd70cfae54d7111fe3e1ea3e" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.153098 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1325a4d5784843f5ef2dc629d3410d154ea07de5cd70cfae54d7111fe3e1ea3e"} err="failed to get container status \"1325a4d5784843f5ef2dc629d3410d154ea07de5cd70cfae54d7111fe3e1ea3e\": rpc error: code = NotFound desc = could not find container \"1325a4d5784843f5ef2dc629d3410d154ea07de5cd70cfae54d7111fe3e1ea3e\": container with ID starting with 1325a4d5784843f5ef2dc629d3410d154ea07de5cd70cfae54d7111fe3e1ea3e not found: ID does not exist" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.153118 4778 scope.go:117] 
"RemoveContainer" containerID="8f7250fa81a99a607a87f5f2fe4a85fcc2afbf7bfc12845bec4b5cbbed7784c4" Mar 12 13:23:16 crc kubenswrapper[4778]: E0312 13:23:16.153364 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f7250fa81a99a607a87f5f2fe4a85fcc2afbf7bfc12845bec4b5cbbed7784c4\": container with ID starting with 8f7250fa81a99a607a87f5f2fe4a85fcc2afbf7bfc12845bec4b5cbbed7784c4 not found: ID does not exist" containerID="8f7250fa81a99a607a87f5f2fe4a85fcc2afbf7bfc12845bec4b5cbbed7784c4" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.153387 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f7250fa81a99a607a87f5f2fe4a85fcc2afbf7bfc12845bec4b5cbbed7784c4"} err="failed to get container status \"8f7250fa81a99a607a87f5f2fe4a85fcc2afbf7bfc12845bec4b5cbbed7784c4\": rpc error: code = NotFound desc = could not find container \"8f7250fa81a99a607a87f5f2fe4a85fcc2afbf7bfc12845bec4b5cbbed7784c4\": container with ID starting with 8f7250fa81a99a607a87f5f2fe4a85fcc2afbf7bfc12845bec4b5cbbed7784c4 not found: ID does not exist" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.153405 4778 scope.go:117] "RemoveContainer" containerID="1d8f2c481e6f3f8845b5e195a7fde8ae6415d25fb1701c25d93be7af1f4ef8f6" Mar 12 13:23:16 crc kubenswrapper[4778]: E0312 13:23:16.153652 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d8f2c481e6f3f8845b5e195a7fde8ae6415d25fb1701c25d93be7af1f4ef8f6\": container with ID starting with 1d8f2c481e6f3f8845b5e195a7fde8ae6415d25fb1701c25d93be7af1f4ef8f6 not found: ID does not exist" containerID="1d8f2c481e6f3f8845b5e195a7fde8ae6415d25fb1701c25d93be7af1f4ef8f6" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.153682 4778 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1d8f2c481e6f3f8845b5e195a7fde8ae6415d25fb1701c25d93be7af1f4ef8f6"} err="failed to get container status \"1d8f2c481e6f3f8845b5e195a7fde8ae6415d25fb1701c25d93be7af1f4ef8f6\": rpc error: code = NotFound desc = could not find container \"1d8f2c481e6f3f8845b5e195a7fde8ae6415d25fb1701c25d93be7af1f4ef8f6\": container with ID starting with 1d8f2c481e6f3f8845b5e195a7fde8ae6415d25fb1701c25d93be7af1f4ef8f6 not found: ID does not exist" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.153701 4778 scope.go:117] "RemoveContainer" containerID="b22f94192e4eee991a699c32f338a8d452d8fc0ce5dfa1197694a237697c4500" Mar 12 13:23:16 crc kubenswrapper[4778]: E0312 13:23:16.153958 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b22f94192e4eee991a699c32f338a8d452d8fc0ce5dfa1197694a237697c4500\": container with ID starting with b22f94192e4eee991a699c32f338a8d452d8fc0ce5dfa1197694a237697c4500 not found: ID does not exist" containerID="b22f94192e4eee991a699c32f338a8d452d8fc0ce5dfa1197694a237697c4500" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.153981 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b22f94192e4eee991a699c32f338a8d452d8fc0ce5dfa1197694a237697c4500"} err="failed to get container status \"b22f94192e4eee991a699c32f338a8d452d8fc0ce5dfa1197694a237697c4500\": rpc error: code = NotFound desc = could not find container \"b22f94192e4eee991a699c32f338a8d452d8fc0ce5dfa1197694a237697c4500\": container with ID starting with b22f94192e4eee991a699c32f338a8d452d8fc0ce5dfa1197694a237697c4500 not found: ID does not exist" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.153995 4778 scope.go:117] "RemoveContainer" containerID="ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e" Mar 12 13:23:16 crc kubenswrapper[4778]: E0312 13:23:16.154229 4778 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\": container with ID starting with ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e not found: ID does not exist" containerID="ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.154254 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e"} err="failed to get container status \"ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\": rpc error: code = NotFound desc = could not find container \"ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\": container with ID starting with ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e not found: ID does not exist" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.154266 4778 scope.go:117] "RemoveContainer" containerID="9afb5c8d21c64a6b41dbded768a82ec790fb6f2f6a21efa119251504eb0c3a8d" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.154665 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9afb5c8d21c64a6b41dbded768a82ec790fb6f2f6a21efa119251504eb0c3a8d"} err="failed to get container status \"9afb5c8d21c64a6b41dbded768a82ec790fb6f2f6a21efa119251504eb0c3a8d\": rpc error: code = NotFound desc = could not find container \"9afb5c8d21c64a6b41dbded768a82ec790fb6f2f6a21efa119251504eb0c3a8d\": container with ID starting with 9afb5c8d21c64a6b41dbded768a82ec790fb6f2f6a21efa119251504eb0c3a8d not found: ID does not exist" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.154691 4778 scope.go:117] "RemoveContainer" containerID="5d6da6dba0e8cadf9b1073620c4856adeb6b776ae3757d420c016d25b4f98001" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.154923 4778 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d6da6dba0e8cadf9b1073620c4856adeb6b776ae3757d420c016d25b4f98001"} err="failed to get container status \"5d6da6dba0e8cadf9b1073620c4856adeb6b776ae3757d420c016d25b4f98001\": rpc error: code = NotFound desc = could not find container \"5d6da6dba0e8cadf9b1073620c4856adeb6b776ae3757d420c016d25b4f98001\": container with ID starting with 5d6da6dba0e8cadf9b1073620c4856adeb6b776ae3757d420c016d25b4f98001 not found: ID does not exist" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.154949 4778 scope.go:117] "RemoveContainer" containerID="6bc4107e3fb5708a2acf2664ce876af4c682377a9b4ee393230b78d5b021552d" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.155195 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bc4107e3fb5708a2acf2664ce876af4c682377a9b4ee393230b78d5b021552d"} err="failed to get container status \"6bc4107e3fb5708a2acf2664ce876af4c682377a9b4ee393230b78d5b021552d\": rpc error: code = NotFound desc = could not find container \"6bc4107e3fb5708a2acf2664ce876af4c682377a9b4ee393230b78d5b021552d\": container with ID starting with 6bc4107e3fb5708a2acf2664ce876af4c682377a9b4ee393230b78d5b021552d not found: ID does not exist" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.155221 4778 scope.go:117] "RemoveContainer" containerID="78da788cc1d96e866afcf18edff24c064972e022b1b4c3f5bec3175da5e989fa" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.155456 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78da788cc1d96e866afcf18edff24c064972e022b1b4c3f5bec3175da5e989fa"} err="failed to get container status \"78da788cc1d96e866afcf18edff24c064972e022b1b4c3f5bec3175da5e989fa\": rpc error: code = NotFound desc = could not find container \"78da788cc1d96e866afcf18edff24c064972e022b1b4c3f5bec3175da5e989fa\": container with ID starting with 
78da788cc1d96e866afcf18edff24c064972e022b1b4c3f5bec3175da5e989fa not found: ID does not exist" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.155480 4778 scope.go:117] "RemoveContainer" containerID="2818aadc56c24df41309aa63fddf44dac870f041f206d59db4d8b8f88f728483" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.155787 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2818aadc56c24df41309aa63fddf44dac870f041f206d59db4d8b8f88f728483"} err="failed to get container status \"2818aadc56c24df41309aa63fddf44dac870f041f206d59db4d8b8f88f728483\": rpc error: code = NotFound desc = could not find container \"2818aadc56c24df41309aa63fddf44dac870f041f206d59db4d8b8f88f728483\": container with ID starting with 2818aadc56c24df41309aa63fddf44dac870f041f206d59db4d8b8f88f728483 not found: ID does not exist" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.155807 4778 scope.go:117] "RemoveContainer" containerID="1325a4d5784843f5ef2dc629d3410d154ea07de5cd70cfae54d7111fe3e1ea3e" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.156046 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1325a4d5784843f5ef2dc629d3410d154ea07de5cd70cfae54d7111fe3e1ea3e"} err="failed to get container status \"1325a4d5784843f5ef2dc629d3410d154ea07de5cd70cfae54d7111fe3e1ea3e\": rpc error: code = NotFound desc = could not find container \"1325a4d5784843f5ef2dc629d3410d154ea07de5cd70cfae54d7111fe3e1ea3e\": container with ID starting with 1325a4d5784843f5ef2dc629d3410d154ea07de5cd70cfae54d7111fe3e1ea3e not found: ID does not exist" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.156061 4778 scope.go:117] "RemoveContainer" containerID="8f7250fa81a99a607a87f5f2fe4a85fcc2afbf7bfc12845bec4b5cbbed7784c4" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.156306 4778 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8f7250fa81a99a607a87f5f2fe4a85fcc2afbf7bfc12845bec4b5cbbed7784c4"} err="failed to get container status \"8f7250fa81a99a607a87f5f2fe4a85fcc2afbf7bfc12845bec4b5cbbed7784c4\": rpc error: code = NotFound desc = could not find container \"8f7250fa81a99a607a87f5f2fe4a85fcc2afbf7bfc12845bec4b5cbbed7784c4\": container with ID starting with 8f7250fa81a99a607a87f5f2fe4a85fcc2afbf7bfc12845bec4b5cbbed7784c4 not found: ID does not exist" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.156327 4778 scope.go:117] "RemoveContainer" containerID="1d8f2c481e6f3f8845b5e195a7fde8ae6415d25fb1701c25d93be7af1f4ef8f6" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.156625 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d8f2c481e6f3f8845b5e195a7fde8ae6415d25fb1701c25d93be7af1f4ef8f6"} err="failed to get container status \"1d8f2c481e6f3f8845b5e195a7fde8ae6415d25fb1701c25d93be7af1f4ef8f6\": rpc error: code = NotFound desc = could not find container \"1d8f2c481e6f3f8845b5e195a7fde8ae6415d25fb1701c25d93be7af1f4ef8f6\": container with ID starting with 1d8f2c481e6f3f8845b5e195a7fde8ae6415d25fb1701c25d93be7af1f4ef8f6 not found: ID does not exist" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.156647 4778 scope.go:117] "RemoveContainer" containerID="b22f94192e4eee991a699c32f338a8d452d8fc0ce5dfa1197694a237697c4500" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.157320 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b22f94192e4eee991a699c32f338a8d452d8fc0ce5dfa1197694a237697c4500"} err="failed to get container status \"b22f94192e4eee991a699c32f338a8d452d8fc0ce5dfa1197694a237697c4500\": rpc error: code = NotFound desc = could not find container \"b22f94192e4eee991a699c32f338a8d452d8fc0ce5dfa1197694a237697c4500\": container with ID starting with b22f94192e4eee991a699c32f338a8d452d8fc0ce5dfa1197694a237697c4500 not found: ID does not 
exist" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.157343 4778 scope.go:117] "RemoveContainer" containerID="ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.157680 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e"} err="failed to get container status \"ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\": rpc error: code = NotFound desc = could not find container \"ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\": container with ID starting with ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e not found: ID does not exist" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.157708 4778 scope.go:117] "RemoveContainer" containerID="9afb5c8d21c64a6b41dbded768a82ec790fb6f2f6a21efa119251504eb0c3a8d" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.158045 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9afb5c8d21c64a6b41dbded768a82ec790fb6f2f6a21efa119251504eb0c3a8d"} err="failed to get container status \"9afb5c8d21c64a6b41dbded768a82ec790fb6f2f6a21efa119251504eb0c3a8d\": rpc error: code = NotFound desc = could not find container \"9afb5c8d21c64a6b41dbded768a82ec790fb6f2f6a21efa119251504eb0c3a8d\": container with ID starting with 9afb5c8d21c64a6b41dbded768a82ec790fb6f2f6a21efa119251504eb0c3a8d not found: ID does not exist" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.158066 4778 scope.go:117] "RemoveContainer" containerID="5d6da6dba0e8cadf9b1073620c4856adeb6b776ae3757d420c016d25b4f98001" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.158300 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d6da6dba0e8cadf9b1073620c4856adeb6b776ae3757d420c016d25b4f98001"} err="failed to get container status 
\"5d6da6dba0e8cadf9b1073620c4856adeb6b776ae3757d420c016d25b4f98001\": rpc error: code = NotFound desc = could not find container \"5d6da6dba0e8cadf9b1073620c4856adeb6b776ae3757d420c016d25b4f98001\": container with ID starting with 5d6da6dba0e8cadf9b1073620c4856adeb6b776ae3757d420c016d25b4f98001 not found: ID does not exist" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.158334 4778 scope.go:117] "RemoveContainer" containerID="6bc4107e3fb5708a2acf2664ce876af4c682377a9b4ee393230b78d5b021552d" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.158663 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bc4107e3fb5708a2acf2664ce876af4c682377a9b4ee393230b78d5b021552d"} err="failed to get container status \"6bc4107e3fb5708a2acf2664ce876af4c682377a9b4ee393230b78d5b021552d\": rpc error: code = NotFound desc = could not find container \"6bc4107e3fb5708a2acf2664ce876af4c682377a9b4ee393230b78d5b021552d\": container with ID starting with 6bc4107e3fb5708a2acf2664ce876af4c682377a9b4ee393230b78d5b021552d not found: ID does not exist" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.158695 4778 scope.go:117] "RemoveContainer" containerID="78da788cc1d96e866afcf18edff24c064972e022b1b4c3f5bec3175da5e989fa" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.158997 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78da788cc1d96e866afcf18edff24c064972e022b1b4c3f5bec3175da5e989fa"} err="failed to get container status \"78da788cc1d96e866afcf18edff24c064972e022b1b4c3f5bec3175da5e989fa\": rpc error: code = NotFound desc = could not find container \"78da788cc1d96e866afcf18edff24c064972e022b1b4c3f5bec3175da5e989fa\": container with ID starting with 78da788cc1d96e866afcf18edff24c064972e022b1b4c3f5bec3175da5e989fa not found: ID does not exist" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.159025 4778 scope.go:117] "RemoveContainer" 
containerID="2818aadc56c24df41309aa63fddf44dac870f041f206d59db4d8b8f88f728483" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.159272 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2818aadc56c24df41309aa63fddf44dac870f041f206d59db4d8b8f88f728483"} err="failed to get container status \"2818aadc56c24df41309aa63fddf44dac870f041f206d59db4d8b8f88f728483\": rpc error: code = NotFound desc = could not find container \"2818aadc56c24df41309aa63fddf44dac870f041f206d59db4d8b8f88f728483\": container with ID starting with 2818aadc56c24df41309aa63fddf44dac870f041f206d59db4d8b8f88f728483 not found: ID does not exist" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.159296 4778 scope.go:117] "RemoveContainer" containerID="1325a4d5784843f5ef2dc629d3410d154ea07de5cd70cfae54d7111fe3e1ea3e" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.159501 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1325a4d5784843f5ef2dc629d3410d154ea07de5cd70cfae54d7111fe3e1ea3e"} err="failed to get container status \"1325a4d5784843f5ef2dc629d3410d154ea07de5cd70cfae54d7111fe3e1ea3e\": rpc error: code = NotFound desc = could not find container \"1325a4d5784843f5ef2dc629d3410d154ea07de5cd70cfae54d7111fe3e1ea3e\": container with ID starting with 1325a4d5784843f5ef2dc629d3410d154ea07de5cd70cfae54d7111fe3e1ea3e not found: ID does not exist" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.159556 4778 scope.go:117] "RemoveContainer" containerID="8f7250fa81a99a607a87f5f2fe4a85fcc2afbf7bfc12845bec4b5cbbed7784c4" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.159767 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f7250fa81a99a607a87f5f2fe4a85fcc2afbf7bfc12845bec4b5cbbed7784c4"} err="failed to get container status \"8f7250fa81a99a607a87f5f2fe4a85fcc2afbf7bfc12845bec4b5cbbed7784c4\": rpc error: code = NotFound desc = could 
not find container \"8f7250fa81a99a607a87f5f2fe4a85fcc2afbf7bfc12845bec4b5cbbed7784c4\": container with ID starting with 8f7250fa81a99a607a87f5f2fe4a85fcc2afbf7bfc12845bec4b5cbbed7784c4 not found: ID does not exist" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.159787 4778 scope.go:117] "RemoveContainer" containerID="1d8f2c481e6f3f8845b5e195a7fde8ae6415d25fb1701c25d93be7af1f4ef8f6" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.159994 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d8f2c481e6f3f8845b5e195a7fde8ae6415d25fb1701c25d93be7af1f4ef8f6"} err="failed to get container status \"1d8f2c481e6f3f8845b5e195a7fde8ae6415d25fb1701c25d93be7af1f4ef8f6\": rpc error: code = NotFound desc = could not find container \"1d8f2c481e6f3f8845b5e195a7fde8ae6415d25fb1701c25d93be7af1f4ef8f6\": container with ID starting with 1d8f2c481e6f3f8845b5e195a7fde8ae6415d25fb1701c25d93be7af1f4ef8f6 not found: ID does not exist" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.160016 4778 scope.go:117] "RemoveContainer" containerID="b22f94192e4eee991a699c32f338a8d452d8fc0ce5dfa1197694a237697c4500" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.160428 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b22f94192e4eee991a699c32f338a8d452d8fc0ce5dfa1197694a237697c4500"} err="failed to get container status \"b22f94192e4eee991a699c32f338a8d452d8fc0ce5dfa1197694a237697c4500\": rpc error: code = NotFound desc = could not find container \"b22f94192e4eee991a699c32f338a8d452d8fc0ce5dfa1197694a237697c4500\": container with ID starting with b22f94192e4eee991a699c32f338a8d452d8fc0ce5dfa1197694a237697c4500 not found: ID does not exist" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.160467 4778 scope.go:117] "RemoveContainer" containerID="ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 
13:23:16.160782 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e"} err="failed to get container status \"ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\": rpc error: code = NotFound desc = could not find container \"ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\": container with ID starting with ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e not found: ID does not exist" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.160804 4778 scope.go:117] "RemoveContainer" containerID="9afb5c8d21c64a6b41dbded768a82ec790fb6f2f6a21efa119251504eb0c3a8d" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.161120 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9afb5c8d21c64a6b41dbded768a82ec790fb6f2f6a21efa119251504eb0c3a8d"} err="failed to get container status \"9afb5c8d21c64a6b41dbded768a82ec790fb6f2f6a21efa119251504eb0c3a8d\": rpc error: code = NotFound desc = could not find container \"9afb5c8d21c64a6b41dbded768a82ec790fb6f2f6a21efa119251504eb0c3a8d\": container with ID starting with 9afb5c8d21c64a6b41dbded768a82ec790fb6f2f6a21efa119251504eb0c3a8d not found: ID does not exist" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.161147 4778 scope.go:117] "RemoveContainer" containerID="5d6da6dba0e8cadf9b1073620c4856adeb6b776ae3757d420c016d25b4f98001" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.161470 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d6da6dba0e8cadf9b1073620c4856adeb6b776ae3757d420c016d25b4f98001"} err="failed to get container status \"5d6da6dba0e8cadf9b1073620c4856adeb6b776ae3757d420c016d25b4f98001\": rpc error: code = NotFound desc = could not find container \"5d6da6dba0e8cadf9b1073620c4856adeb6b776ae3757d420c016d25b4f98001\": container with ID starting with 
5d6da6dba0e8cadf9b1073620c4856adeb6b776ae3757d420c016d25b4f98001 not found: ID does not exist" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.161493 4778 scope.go:117] "RemoveContainer" containerID="6bc4107e3fb5708a2acf2664ce876af4c682377a9b4ee393230b78d5b021552d" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.161713 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bc4107e3fb5708a2acf2664ce876af4c682377a9b4ee393230b78d5b021552d"} err="failed to get container status \"6bc4107e3fb5708a2acf2664ce876af4c682377a9b4ee393230b78d5b021552d\": rpc error: code = NotFound desc = could not find container \"6bc4107e3fb5708a2acf2664ce876af4c682377a9b4ee393230b78d5b021552d\": container with ID starting with 6bc4107e3fb5708a2acf2664ce876af4c682377a9b4ee393230b78d5b021552d not found: ID does not exist" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.161733 4778 scope.go:117] "RemoveContainer" containerID="78da788cc1d96e866afcf18edff24c064972e022b1b4c3f5bec3175da5e989fa" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.162001 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78da788cc1d96e866afcf18edff24c064972e022b1b4c3f5bec3175da5e989fa"} err="failed to get container status \"78da788cc1d96e866afcf18edff24c064972e022b1b4c3f5bec3175da5e989fa\": rpc error: code = NotFound desc = could not find container \"78da788cc1d96e866afcf18edff24c064972e022b1b4c3f5bec3175da5e989fa\": container with ID starting with 78da788cc1d96e866afcf18edff24c064972e022b1b4c3f5bec3175da5e989fa not found: ID does not exist" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.162024 4778 scope.go:117] "RemoveContainer" containerID="2818aadc56c24df41309aa63fddf44dac870f041f206d59db4d8b8f88f728483" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.162323 4778 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2818aadc56c24df41309aa63fddf44dac870f041f206d59db4d8b8f88f728483"} err="failed to get container status \"2818aadc56c24df41309aa63fddf44dac870f041f206d59db4d8b8f88f728483\": rpc error: code = NotFound desc = could not find container \"2818aadc56c24df41309aa63fddf44dac870f041f206d59db4d8b8f88f728483\": container with ID starting with 2818aadc56c24df41309aa63fddf44dac870f041f206d59db4d8b8f88f728483 not found: ID does not exist" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.162340 4778 scope.go:117] "RemoveContainer" containerID="1325a4d5784843f5ef2dc629d3410d154ea07de5cd70cfae54d7111fe3e1ea3e" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.162529 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1325a4d5784843f5ef2dc629d3410d154ea07de5cd70cfae54d7111fe3e1ea3e"} err="failed to get container status \"1325a4d5784843f5ef2dc629d3410d154ea07de5cd70cfae54d7111fe3e1ea3e\": rpc error: code = NotFound desc = could not find container \"1325a4d5784843f5ef2dc629d3410d154ea07de5cd70cfae54d7111fe3e1ea3e\": container with ID starting with 1325a4d5784843f5ef2dc629d3410d154ea07de5cd70cfae54d7111fe3e1ea3e not found: ID does not exist" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.162551 4778 scope.go:117] "RemoveContainer" containerID="8f7250fa81a99a607a87f5f2fe4a85fcc2afbf7bfc12845bec4b5cbbed7784c4" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.162842 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f7250fa81a99a607a87f5f2fe4a85fcc2afbf7bfc12845bec4b5cbbed7784c4"} err="failed to get container status \"8f7250fa81a99a607a87f5f2fe4a85fcc2afbf7bfc12845bec4b5cbbed7784c4\": rpc error: code = NotFound desc = could not find container \"8f7250fa81a99a607a87f5f2fe4a85fcc2afbf7bfc12845bec4b5cbbed7784c4\": container with ID starting with 8f7250fa81a99a607a87f5f2fe4a85fcc2afbf7bfc12845bec4b5cbbed7784c4 not found: ID does not 
exist" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.162861 4778 scope.go:117] "RemoveContainer" containerID="1d8f2c481e6f3f8845b5e195a7fde8ae6415d25fb1701c25d93be7af1f4ef8f6" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.163240 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d8f2c481e6f3f8845b5e195a7fde8ae6415d25fb1701c25d93be7af1f4ef8f6"} err="failed to get container status \"1d8f2c481e6f3f8845b5e195a7fde8ae6415d25fb1701c25d93be7af1f4ef8f6\": rpc error: code = NotFound desc = could not find container \"1d8f2c481e6f3f8845b5e195a7fde8ae6415d25fb1701c25d93be7af1f4ef8f6\": container with ID starting with 1d8f2c481e6f3f8845b5e195a7fde8ae6415d25fb1701c25d93be7af1f4ef8f6 not found: ID does not exist" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.163311 4778 scope.go:117] "RemoveContainer" containerID="b22f94192e4eee991a699c32f338a8d452d8fc0ce5dfa1197694a237697c4500" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.163656 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b22f94192e4eee991a699c32f338a8d452d8fc0ce5dfa1197694a237697c4500"} err="failed to get container status \"b22f94192e4eee991a699c32f338a8d452d8fc0ce5dfa1197694a237697c4500\": rpc error: code = NotFound desc = could not find container \"b22f94192e4eee991a699c32f338a8d452d8fc0ce5dfa1197694a237697c4500\": container with ID starting with b22f94192e4eee991a699c32f338a8d452d8fc0ce5dfa1197694a237697c4500 not found: ID does not exist" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.163676 4778 scope.go:117] "RemoveContainer" containerID="ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.163943 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e"} err="failed to get container status 
\"ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\": rpc error: code = NotFound desc = could not find container \"ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e\": container with ID starting with ce861cdc0bd23c8ce1b2989859dc05ceab2b6af01ac92029c419edc7f58c2b1e not found: ID does not exist" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.284465 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65cd795e-eb6e-4995-a4c1-9dea6f425ac5" path="/var/lib/kubelet/pods/65cd795e-eb6e-4995-a4c1-9dea6f425ac5/volumes" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.824754 4778 generic.go:334] "Generic (PLEG): container finished" podID="1664bb45-1e97-4371-9dbc-5e27e778ee0b" containerID="a1d264705e7133e0eb93e53519cdbe2e459c8f6e5a407d4657ed576e12d2b397" exitCode=0 Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.824857 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm" event={"ID":"1664bb45-1e97-4371-9dbc-5e27e778ee0b","Type":"ContainerDied","Data":"a1d264705e7133e0eb93e53519cdbe2e459c8f6e5a407d4657ed576e12d2b397"} Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.824917 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm" event={"ID":"1664bb45-1e97-4371-9dbc-5e27e778ee0b","Type":"ContainerStarted","Data":"01d8ef80533ea4fb98aabf7de84e5f17da4cb625b7e47d4badaaa85038ae83f5"} Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.829506 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fhcz6_1e7037a8-a966-4df0-9f94-fe2dd3e2de6e/kube-multus/1.log" Mar 12 13:23:16 crc kubenswrapper[4778]: I0312 13:23:16.829644 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fhcz6" event={"ID":"1e7037a8-a966-4df0-9f94-fe2dd3e2de6e","Type":"ContainerStarted","Data":"dbb67014f504889121924b5a7d01a6364d0717cc22842ad77b3827450b0bcf2c"} Mar 12 
13:23:17 crc kubenswrapper[4778]: I0312 13:23:17.841157 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm" event={"ID":"1664bb45-1e97-4371-9dbc-5e27e778ee0b","Type":"ContainerStarted","Data":"9f6c54c39750a31fad04289db347d8fd7e01a78681b55a1b3b20c443d64600fc"} Mar 12 13:23:17 crc kubenswrapper[4778]: I0312 13:23:17.841457 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm" event={"ID":"1664bb45-1e97-4371-9dbc-5e27e778ee0b","Type":"ContainerStarted","Data":"8106201d2dce337561821161936bc1e2c5d1cb6f072033e304e2788a17f59091"} Mar 12 13:23:17 crc kubenswrapper[4778]: I0312 13:23:17.841469 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm" event={"ID":"1664bb45-1e97-4371-9dbc-5e27e778ee0b","Type":"ContainerStarted","Data":"02dbf475c36f92dbffe0ed598faf01abcbb066bee25f8b9b7158434cfc9cdcf9"} Mar 12 13:23:17 crc kubenswrapper[4778]: I0312 13:23:17.841478 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm" event={"ID":"1664bb45-1e97-4371-9dbc-5e27e778ee0b","Type":"ContainerStarted","Data":"d91aabc97edc08ee6b0fedb422db1f5f30b6a7c730554fa4fd2e70c02079cb28"} Mar 12 13:23:17 crc kubenswrapper[4778]: I0312 13:23:17.841486 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm" event={"ID":"1664bb45-1e97-4371-9dbc-5e27e778ee0b","Type":"ContainerStarted","Data":"3d68915fde1342954bdeedae820f9aca1d9e38fc407c53c550c6f5c1c2436f5a"} Mar 12 13:23:17 crc kubenswrapper[4778]: I0312 13:23:17.841494 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm" event={"ID":"1664bb45-1e97-4371-9dbc-5e27e778ee0b","Type":"ContainerStarted","Data":"c48ba3990b90a79802f5ed2b8c4baf9bdf992c1604fd9cb1292028cbb43d85ad"} Mar 12 13:23:20 crc kubenswrapper[4778]: I0312 13:23:20.260711 4778 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-ffh2x" Mar 12 13:23:20 crc kubenswrapper[4778]: I0312 13:23:20.881037 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm" event={"ID":"1664bb45-1e97-4371-9dbc-5e27e778ee0b","Type":"ContainerStarted","Data":"4c7a2afa17744618a18e16336cd670680d0f5dc381f870cec07b0ad5d301079f"} Mar 12 13:23:22 crc kubenswrapper[4778]: I0312 13:23:22.899728 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm" event={"ID":"1664bb45-1e97-4371-9dbc-5e27e778ee0b","Type":"ContainerStarted","Data":"41c943533a50230bf7606d72941dd077da20bbd6ef0f0343292050596ef7a8b5"} Mar 12 13:23:22 crc kubenswrapper[4778]: I0312 13:23:22.900334 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm" Mar 12 13:23:22 crc kubenswrapper[4778]: I0312 13:23:22.900449 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm" Mar 12 13:23:22 crc kubenswrapper[4778]: I0312 13:23:22.900519 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm" Mar 12 13:23:22 crc kubenswrapper[4778]: I0312 13:23:22.936534 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm" podStartSLOduration=7.9365080930000005 podStartE2EDuration="7.936508093s" podCreationTimestamp="2026-03-12 13:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:23:22.932870268 +0000 UTC m=+821.381565674" watchObservedRunningTime="2026-03-12 13:23:22.936508093 +0000 UTC m=+821.385203489" Mar 12 13:23:22 crc kubenswrapper[4778]: I0312 13:23:22.938281 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm" Mar 12 13:23:22 crc kubenswrapper[4778]: I0312 13:23:22.946128 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm" Mar 12 13:23:45 crc kubenswrapper[4778]: I0312 13:23:45.961801 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-b8gxm" Mar 12 13:24:00 crc kubenswrapper[4778]: I0312 13:24:00.143359 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555364-hrrdv"] Mar 12 13:24:00 crc kubenswrapper[4778]: I0312 13:24:00.145290 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555364-hrrdv" Mar 12 13:24:00 crc kubenswrapper[4778]: I0312 13:24:00.148959 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555364-hrrdv"] Mar 12 13:24:00 crc kubenswrapper[4778]: I0312 13:24:00.149797 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 13:24:00 crc kubenswrapper[4778]: I0312 13:24:00.150827 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 13:24:00 crc kubenswrapper[4778]: I0312 13:24:00.151800 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 13:24:00 crc kubenswrapper[4778]: I0312 13:24:00.223525 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmmcq\" (UniqueName: \"kubernetes.io/projected/c862c78c-5987-48cc-8b41-531755f319e9-kube-api-access-rmmcq\") pod \"auto-csr-approver-29555364-hrrdv\" (UID: \"c862c78c-5987-48cc-8b41-531755f319e9\") " pod="openshift-infra/auto-csr-approver-29555364-hrrdv" Mar 12 13:24:00 crc kubenswrapper[4778]: I0312 
13:24:00.325070 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmmcq\" (UniqueName: \"kubernetes.io/projected/c862c78c-5987-48cc-8b41-531755f319e9-kube-api-access-rmmcq\") pod \"auto-csr-approver-29555364-hrrdv\" (UID: \"c862c78c-5987-48cc-8b41-531755f319e9\") " pod="openshift-infra/auto-csr-approver-29555364-hrrdv" Mar 12 13:24:00 crc kubenswrapper[4778]: I0312 13:24:00.352117 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmmcq\" (UniqueName: \"kubernetes.io/projected/c862c78c-5987-48cc-8b41-531755f319e9-kube-api-access-rmmcq\") pod \"auto-csr-approver-29555364-hrrdv\" (UID: \"c862c78c-5987-48cc-8b41-531755f319e9\") " pod="openshift-infra/auto-csr-approver-29555364-hrrdv" Mar 12 13:24:00 crc kubenswrapper[4778]: I0312 13:24:00.472510 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555364-hrrdv" Mar 12 13:24:00 crc kubenswrapper[4778]: I0312 13:24:00.657560 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555364-hrrdv"] Mar 12 13:24:00 crc kubenswrapper[4778]: I0312 13:24:00.662650 4778 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 12 13:24:01 crc kubenswrapper[4778]: I0312 13:24:01.121763 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555364-hrrdv" event={"ID":"c862c78c-5987-48cc-8b41-531755f319e9","Type":"ContainerStarted","Data":"45621410278cad64ed5807476fd5a76b1c6c29754bfb3cc45082b87d5cf1bb51"} Mar 12 13:24:03 crc kubenswrapper[4778]: I0312 13:24:03.140469 4778 generic.go:334] "Generic (PLEG): container finished" podID="c862c78c-5987-48cc-8b41-531755f319e9" containerID="a8f045f157371374b81f9a3098c61d715d2ce620fdfc3121b5f225672622998f" exitCode=0 Mar 12 13:24:03 crc kubenswrapper[4778]: I0312 13:24:03.140589 4778 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555364-hrrdv" event={"ID":"c862c78c-5987-48cc-8b41-531755f319e9","Type":"ContainerDied","Data":"a8f045f157371374b81f9a3098c61d715d2ce620fdfc3121b5f225672622998f"} Mar 12 13:24:04 crc kubenswrapper[4778]: I0312 13:24:04.369883 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555364-hrrdv" Mar 12 13:24:04 crc kubenswrapper[4778]: I0312 13:24:04.472161 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmmcq\" (UniqueName: \"kubernetes.io/projected/c862c78c-5987-48cc-8b41-531755f319e9-kube-api-access-rmmcq\") pod \"c862c78c-5987-48cc-8b41-531755f319e9\" (UID: \"c862c78c-5987-48cc-8b41-531755f319e9\") " Mar 12 13:24:04 crc kubenswrapper[4778]: I0312 13:24:04.477144 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c862c78c-5987-48cc-8b41-531755f319e9-kube-api-access-rmmcq" (OuterVolumeSpecName: "kube-api-access-rmmcq") pod "c862c78c-5987-48cc-8b41-531755f319e9" (UID: "c862c78c-5987-48cc-8b41-531755f319e9"). InnerVolumeSpecName "kube-api-access-rmmcq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:24:04 crc kubenswrapper[4778]: I0312 13:24:04.573647 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmmcq\" (UniqueName: \"kubernetes.io/projected/c862c78c-5987-48cc-8b41-531755f319e9-kube-api-access-rmmcq\") on node \"crc\" DevicePath \"\"" Mar 12 13:24:05 crc kubenswrapper[4778]: I0312 13:24:05.161242 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555364-hrrdv" event={"ID":"c862c78c-5987-48cc-8b41-531755f319e9","Type":"ContainerDied","Data":"45621410278cad64ed5807476fd5a76b1c6c29754bfb3cc45082b87d5cf1bb51"} Mar 12 13:24:05 crc kubenswrapper[4778]: I0312 13:24:05.161581 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45621410278cad64ed5807476fd5a76b1c6c29754bfb3cc45082b87d5cf1bb51" Mar 12 13:24:05 crc kubenswrapper[4778]: I0312 13:24:05.161711 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555364-hrrdv" Mar 12 13:24:05 crc kubenswrapper[4778]: I0312 13:24:05.440830 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555358-txmvp"] Mar 12 13:24:05 crc kubenswrapper[4778]: I0312 13:24:05.444214 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555358-txmvp"] Mar 12 13:24:06 crc kubenswrapper[4778]: I0312 13:24:06.261227 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61c6485d-2d53-47d9-866a-31eb90ac254e" path="/var/lib/kubelet/pods/61c6485d-2d53-47d9-866a-31eb90ac254e/volumes" Mar 12 13:24:14 crc kubenswrapper[4778]: I0312 13:24:14.521627 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wxjhd"] Mar 12 13:24:14 crc kubenswrapper[4778]: E0312 13:24:14.522131 4778 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="c862c78c-5987-48cc-8b41-531755f319e9" containerName="oc" Mar 12 13:24:14 crc kubenswrapper[4778]: I0312 13:24:14.522146 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="c862c78c-5987-48cc-8b41-531755f319e9" containerName="oc" Mar 12 13:24:14 crc kubenswrapper[4778]: I0312 13:24:14.522280 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="c862c78c-5987-48cc-8b41-531755f319e9" containerName="oc" Mar 12 13:24:14 crc kubenswrapper[4778]: I0312 13:24:14.522976 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wxjhd" Mar 12 13:24:14 crc kubenswrapper[4778]: I0312 13:24:14.526260 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 12 13:24:14 crc kubenswrapper[4778]: I0312 13:24:14.532283 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wxjhd"] Mar 12 13:24:14 crc kubenswrapper[4778]: I0312 13:24:14.612720 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cb93062b-8387-4eb4-8662-ecaf93146d85-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wxjhd\" (UID: \"cb93062b-8387-4eb4-8662-ecaf93146d85\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wxjhd" Mar 12 13:24:14 crc kubenswrapper[4778]: I0312 13:24:14.612802 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cb93062b-8387-4eb4-8662-ecaf93146d85-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wxjhd\" (UID: \"cb93062b-8387-4eb4-8662-ecaf93146d85\") " 
pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wxjhd" Mar 12 13:24:14 crc kubenswrapper[4778]: I0312 13:24:14.612885 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm5dc\" (UniqueName: \"kubernetes.io/projected/cb93062b-8387-4eb4-8662-ecaf93146d85-kube-api-access-gm5dc\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wxjhd\" (UID: \"cb93062b-8387-4eb4-8662-ecaf93146d85\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wxjhd" Mar 12 13:24:14 crc kubenswrapper[4778]: I0312 13:24:14.714697 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cb93062b-8387-4eb4-8662-ecaf93146d85-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wxjhd\" (UID: \"cb93062b-8387-4eb4-8662-ecaf93146d85\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wxjhd" Mar 12 13:24:14 crc kubenswrapper[4778]: I0312 13:24:14.714773 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cb93062b-8387-4eb4-8662-ecaf93146d85-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wxjhd\" (UID: \"cb93062b-8387-4eb4-8662-ecaf93146d85\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wxjhd" Mar 12 13:24:14 crc kubenswrapper[4778]: I0312 13:24:14.714860 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gm5dc\" (UniqueName: \"kubernetes.io/projected/cb93062b-8387-4eb4-8662-ecaf93146d85-kube-api-access-gm5dc\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wxjhd\" (UID: \"cb93062b-8387-4eb4-8662-ecaf93146d85\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wxjhd" Mar 12 
13:24:14 crc kubenswrapper[4778]: I0312 13:24:14.715799 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cb93062b-8387-4eb4-8662-ecaf93146d85-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wxjhd\" (UID: \"cb93062b-8387-4eb4-8662-ecaf93146d85\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wxjhd" Mar 12 13:24:14 crc kubenswrapper[4778]: I0312 13:24:14.715911 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cb93062b-8387-4eb4-8662-ecaf93146d85-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wxjhd\" (UID: \"cb93062b-8387-4eb4-8662-ecaf93146d85\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wxjhd" Mar 12 13:24:14 crc kubenswrapper[4778]: I0312 13:24:14.750950 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gm5dc\" (UniqueName: \"kubernetes.io/projected/cb93062b-8387-4eb4-8662-ecaf93146d85-kube-api-access-gm5dc\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wxjhd\" (UID: \"cb93062b-8387-4eb4-8662-ecaf93146d85\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wxjhd" Mar 12 13:24:14 crc kubenswrapper[4778]: I0312 13:24:14.840305 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wxjhd" Mar 12 13:24:15 crc kubenswrapper[4778]: I0312 13:24:15.340616 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wxjhd"] Mar 12 13:24:16 crc kubenswrapper[4778]: I0312 13:24:16.222641 4778 generic.go:334] "Generic (PLEG): container finished" podID="cb93062b-8387-4eb4-8662-ecaf93146d85" containerID="aeb9887345bc844496df7d0c5b6bf02eb4f105d1e813edce48f3ff227e3b96e2" exitCode=0 Mar 12 13:24:16 crc kubenswrapper[4778]: I0312 13:24:16.222701 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wxjhd" event={"ID":"cb93062b-8387-4eb4-8662-ecaf93146d85","Type":"ContainerDied","Data":"aeb9887345bc844496df7d0c5b6bf02eb4f105d1e813edce48f3ff227e3b96e2"} Mar 12 13:24:16 crc kubenswrapper[4778]: I0312 13:24:16.222912 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wxjhd" event={"ID":"cb93062b-8387-4eb4-8662-ecaf93146d85","Type":"ContainerStarted","Data":"3f052fe9978b090dcb12eafc251984a2e5ac0055088087a2227d82364c78a56d"} Mar 12 13:24:16 crc kubenswrapper[4778]: I0312 13:24:16.749424 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tpgqw"] Mar 12 13:24:16 crc kubenswrapper[4778]: I0312 13:24:16.750727 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tpgqw" Mar 12 13:24:16 crc kubenswrapper[4778]: I0312 13:24:16.769271 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tpgqw"] Mar 12 13:24:16 crc kubenswrapper[4778]: I0312 13:24:16.845791 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4c13351-b8fa-4224-a09b-942200d398b1-catalog-content\") pod \"redhat-operators-tpgqw\" (UID: \"b4c13351-b8fa-4224-a09b-942200d398b1\") " pod="openshift-marketplace/redhat-operators-tpgqw" Mar 12 13:24:16 crc kubenswrapper[4778]: I0312 13:24:16.845849 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99g7g\" (UniqueName: \"kubernetes.io/projected/b4c13351-b8fa-4224-a09b-942200d398b1-kube-api-access-99g7g\") pod \"redhat-operators-tpgqw\" (UID: \"b4c13351-b8fa-4224-a09b-942200d398b1\") " pod="openshift-marketplace/redhat-operators-tpgqw" Mar 12 13:24:16 crc kubenswrapper[4778]: I0312 13:24:16.845890 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4c13351-b8fa-4224-a09b-942200d398b1-utilities\") pod \"redhat-operators-tpgqw\" (UID: \"b4c13351-b8fa-4224-a09b-942200d398b1\") " pod="openshift-marketplace/redhat-operators-tpgqw" Mar 12 13:24:16 crc kubenswrapper[4778]: I0312 13:24:16.946981 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4c13351-b8fa-4224-a09b-942200d398b1-catalog-content\") pod \"redhat-operators-tpgqw\" (UID: \"b4c13351-b8fa-4224-a09b-942200d398b1\") " pod="openshift-marketplace/redhat-operators-tpgqw" Mar 12 13:24:16 crc kubenswrapper[4778]: I0312 13:24:16.947026 4778 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-99g7g\" (UniqueName: \"kubernetes.io/projected/b4c13351-b8fa-4224-a09b-942200d398b1-kube-api-access-99g7g\") pod \"redhat-operators-tpgqw\" (UID: \"b4c13351-b8fa-4224-a09b-942200d398b1\") " pod="openshift-marketplace/redhat-operators-tpgqw" Mar 12 13:24:16 crc kubenswrapper[4778]: I0312 13:24:16.947052 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4c13351-b8fa-4224-a09b-942200d398b1-utilities\") pod \"redhat-operators-tpgqw\" (UID: \"b4c13351-b8fa-4224-a09b-942200d398b1\") " pod="openshift-marketplace/redhat-operators-tpgqw" Mar 12 13:24:16 crc kubenswrapper[4778]: I0312 13:24:16.947531 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4c13351-b8fa-4224-a09b-942200d398b1-utilities\") pod \"redhat-operators-tpgqw\" (UID: \"b4c13351-b8fa-4224-a09b-942200d398b1\") " pod="openshift-marketplace/redhat-operators-tpgqw" Mar 12 13:24:16 crc kubenswrapper[4778]: I0312 13:24:16.947607 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4c13351-b8fa-4224-a09b-942200d398b1-catalog-content\") pod \"redhat-operators-tpgqw\" (UID: \"b4c13351-b8fa-4224-a09b-942200d398b1\") " pod="openshift-marketplace/redhat-operators-tpgqw" Mar 12 13:24:16 crc kubenswrapper[4778]: I0312 13:24:16.965068 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99g7g\" (UniqueName: \"kubernetes.io/projected/b4c13351-b8fa-4224-a09b-942200d398b1-kube-api-access-99g7g\") pod \"redhat-operators-tpgqw\" (UID: \"b4c13351-b8fa-4224-a09b-942200d398b1\") " pod="openshift-marketplace/redhat-operators-tpgqw" Mar 12 13:24:17 crc kubenswrapper[4778]: I0312 13:24:17.072150 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tpgqw" Mar 12 13:24:17 crc kubenswrapper[4778]: I0312 13:24:17.473882 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tpgqw"] Mar 12 13:24:17 crc kubenswrapper[4778]: W0312 13:24:17.481321 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4c13351_b8fa_4224_a09b_942200d398b1.slice/crio-18bee62b19e956e2c3d75695d30248e10ca9db095c911e5857cfc9283eb0b145 WatchSource:0}: Error finding container 18bee62b19e956e2c3d75695d30248e10ca9db095c911e5857cfc9283eb0b145: Status 404 returned error can't find the container with id 18bee62b19e956e2c3d75695d30248e10ca9db095c911e5857cfc9283eb0b145 Mar 12 13:24:18 crc kubenswrapper[4778]: I0312 13:24:18.235134 4778 generic.go:334] "Generic (PLEG): container finished" podID="b4c13351-b8fa-4224-a09b-942200d398b1" containerID="24fd7a17a52a8b61e1c1382d3f50546a92de510fde86d6489d611167d57ee7a2" exitCode=0 Mar 12 13:24:18 crc kubenswrapper[4778]: I0312 13:24:18.235218 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tpgqw" event={"ID":"b4c13351-b8fa-4224-a09b-942200d398b1","Type":"ContainerDied","Data":"24fd7a17a52a8b61e1c1382d3f50546a92de510fde86d6489d611167d57ee7a2"} Mar 12 13:24:18 crc kubenswrapper[4778]: I0312 13:24:18.235630 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tpgqw" event={"ID":"b4c13351-b8fa-4224-a09b-942200d398b1","Type":"ContainerStarted","Data":"18bee62b19e956e2c3d75695d30248e10ca9db095c911e5857cfc9283eb0b145"} Mar 12 13:24:18 crc kubenswrapper[4778]: I0312 13:24:18.237774 4778 generic.go:334] "Generic (PLEG): container finished" podID="cb93062b-8387-4eb4-8662-ecaf93146d85" containerID="bcf5a20a1141fa74f4f4862777ae90c4f7c67228670a44fa61079af3b9275916" exitCode=0 Mar 12 13:24:18 crc kubenswrapper[4778]: I0312 13:24:18.237851 
4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wxjhd" event={"ID":"cb93062b-8387-4eb4-8662-ecaf93146d85","Type":"ContainerDied","Data":"bcf5a20a1141fa74f4f4862777ae90c4f7c67228670a44fa61079af3b9275916"} Mar 12 13:24:19 crc kubenswrapper[4778]: I0312 13:24:19.244569 4778 generic.go:334] "Generic (PLEG): container finished" podID="cb93062b-8387-4eb4-8662-ecaf93146d85" containerID="fa7bdea962d6a0545c6439de4df98f392b5c38e2f8c4021c7527609f3773d2fc" exitCode=0 Mar 12 13:24:19 crc kubenswrapper[4778]: I0312 13:24:19.244655 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wxjhd" event={"ID":"cb93062b-8387-4eb4-8662-ecaf93146d85","Type":"ContainerDied","Data":"fa7bdea962d6a0545c6439de4df98f392b5c38e2f8c4021c7527609f3773d2fc"} Mar 12 13:24:19 crc kubenswrapper[4778]: I0312 13:24:19.246429 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tpgqw" event={"ID":"b4c13351-b8fa-4224-a09b-942200d398b1","Type":"ContainerStarted","Data":"43f23e51ea1a4786666d9356631a20c154ceff599adf74c4b7fdae769a58165f"} Mar 12 13:24:21 crc kubenswrapper[4778]: I0312 13:24:21.018757 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wxjhd" Mar 12 13:24:21 crc kubenswrapper[4778]: I0312 13:24:21.092866 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cb93062b-8387-4eb4-8662-ecaf93146d85-bundle\") pod \"cb93062b-8387-4eb4-8662-ecaf93146d85\" (UID: \"cb93062b-8387-4eb4-8662-ecaf93146d85\") " Mar 12 13:24:21 crc kubenswrapper[4778]: I0312 13:24:21.093011 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gm5dc\" (UniqueName: \"kubernetes.io/projected/cb93062b-8387-4eb4-8662-ecaf93146d85-kube-api-access-gm5dc\") pod \"cb93062b-8387-4eb4-8662-ecaf93146d85\" (UID: \"cb93062b-8387-4eb4-8662-ecaf93146d85\") " Mar 12 13:24:21 crc kubenswrapper[4778]: I0312 13:24:21.093058 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cb93062b-8387-4eb4-8662-ecaf93146d85-util\") pod \"cb93062b-8387-4eb4-8662-ecaf93146d85\" (UID: \"cb93062b-8387-4eb4-8662-ecaf93146d85\") " Mar 12 13:24:21 crc kubenswrapper[4778]: I0312 13:24:21.097303 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb93062b-8387-4eb4-8662-ecaf93146d85-bundle" (OuterVolumeSpecName: "bundle") pod "cb93062b-8387-4eb4-8662-ecaf93146d85" (UID: "cb93062b-8387-4eb4-8662-ecaf93146d85"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:24:21 crc kubenswrapper[4778]: I0312 13:24:21.104810 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb93062b-8387-4eb4-8662-ecaf93146d85-util" (OuterVolumeSpecName: "util") pod "cb93062b-8387-4eb4-8662-ecaf93146d85" (UID: "cb93062b-8387-4eb4-8662-ecaf93146d85"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:24:21 crc kubenswrapper[4778]: I0312 13:24:21.108201 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb93062b-8387-4eb4-8662-ecaf93146d85-kube-api-access-gm5dc" (OuterVolumeSpecName: "kube-api-access-gm5dc") pod "cb93062b-8387-4eb4-8662-ecaf93146d85" (UID: "cb93062b-8387-4eb4-8662-ecaf93146d85"). InnerVolumeSpecName "kube-api-access-gm5dc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:24:21 crc kubenswrapper[4778]: I0312 13:24:21.195146 4778 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cb93062b-8387-4eb4-8662-ecaf93146d85-util\") on node \"crc\" DevicePath \"\"" Mar 12 13:24:21 crc kubenswrapper[4778]: I0312 13:24:21.195289 4778 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cb93062b-8387-4eb4-8662-ecaf93146d85-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:24:21 crc kubenswrapper[4778]: I0312 13:24:21.195313 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gm5dc\" (UniqueName: \"kubernetes.io/projected/cb93062b-8387-4eb4-8662-ecaf93146d85-kube-api-access-gm5dc\") on node \"crc\" DevicePath \"\"" Mar 12 13:24:21 crc kubenswrapper[4778]: I0312 13:24:21.395970 4778 generic.go:334] "Generic (PLEG): container finished" podID="b4c13351-b8fa-4224-a09b-942200d398b1" containerID="43f23e51ea1a4786666d9356631a20c154ceff599adf74c4b7fdae769a58165f" exitCode=0 Mar 12 13:24:21 crc kubenswrapper[4778]: I0312 13:24:21.396031 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tpgqw" event={"ID":"b4c13351-b8fa-4224-a09b-942200d398b1","Type":"ContainerDied","Data":"43f23e51ea1a4786666d9356631a20c154ceff599adf74c4b7fdae769a58165f"} Mar 12 13:24:21 crc kubenswrapper[4778]: I0312 13:24:21.401476 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wxjhd" event={"ID":"cb93062b-8387-4eb4-8662-ecaf93146d85","Type":"ContainerDied","Data":"3f052fe9978b090dcb12eafc251984a2e5ac0055088087a2227d82364c78a56d"} Mar 12 13:24:21 crc kubenswrapper[4778]: I0312 13:24:21.401819 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f052fe9978b090dcb12eafc251984a2e5ac0055088087a2227d82364c78a56d" Mar 12 13:24:21 crc kubenswrapper[4778]: I0312 13:24:21.401619 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wxjhd" Mar 12 13:24:23 crc kubenswrapper[4778]: I0312 13:24:23.415017 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tpgqw" event={"ID":"b4c13351-b8fa-4224-a09b-942200d398b1","Type":"ContainerStarted","Data":"4afc7793b8453c98e2325cef91c55591417e0eb2c857e8d9e7956e6b80421763"} Mar 12 13:24:23 crc kubenswrapper[4778]: I0312 13:24:23.433670 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tpgqw" podStartSLOduration=3.618079144 podStartE2EDuration="7.433644474s" podCreationTimestamp="2026-03-12 13:24:16 +0000 UTC" firstStartedPulling="2026-03-12 13:24:18.237156929 +0000 UTC m=+876.685852325" lastFinishedPulling="2026-03-12 13:24:22.052722259 +0000 UTC m=+880.501417655" observedRunningTime="2026-03-12 13:24:23.43070257 +0000 UTC m=+881.879397966" watchObservedRunningTime="2026-03-12 13:24:23.433644474 +0000 UTC m=+881.882339870" Mar 12 13:24:25 crc kubenswrapper[4778]: I0312 13:24:25.012503 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-hxzd6"] Mar 12 13:24:25 crc kubenswrapper[4778]: E0312 13:24:25.012998 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb93062b-8387-4eb4-8662-ecaf93146d85" 
containerName="pull" Mar 12 13:24:25 crc kubenswrapper[4778]: I0312 13:24:25.013014 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb93062b-8387-4eb4-8662-ecaf93146d85" containerName="pull" Mar 12 13:24:25 crc kubenswrapper[4778]: E0312 13:24:25.013028 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb93062b-8387-4eb4-8662-ecaf93146d85" containerName="extract" Mar 12 13:24:25 crc kubenswrapper[4778]: I0312 13:24:25.013034 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb93062b-8387-4eb4-8662-ecaf93146d85" containerName="extract" Mar 12 13:24:25 crc kubenswrapper[4778]: E0312 13:24:25.013044 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb93062b-8387-4eb4-8662-ecaf93146d85" containerName="util" Mar 12 13:24:25 crc kubenswrapper[4778]: I0312 13:24:25.013051 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb93062b-8387-4eb4-8662-ecaf93146d85" containerName="util" Mar 12 13:24:25 crc kubenswrapper[4778]: I0312 13:24:25.013186 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb93062b-8387-4eb4-8662-ecaf93146d85" containerName="extract" Mar 12 13:24:25 crc kubenswrapper[4778]: I0312 13:24:25.013654 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-hxzd6" Mar 12 13:24:25 crc kubenswrapper[4778]: I0312 13:24:25.016418 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-mzbl4" Mar 12 13:24:25 crc kubenswrapper[4778]: I0312 13:24:25.017611 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Mar 12 13:24:25 crc kubenswrapper[4778]: I0312 13:24:25.017872 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Mar 12 13:24:25 crc kubenswrapper[4778]: I0312 13:24:25.026068 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-hxzd6"] Mar 12 13:24:25 crc kubenswrapper[4778]: I0312 13:24:25.123731 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4rwl\" (UniqueName: \"kubernetes.io/projected/fb85eef5-01f9-4fa6-b9d8-9606d04b8cd3-kube-api-access-q4rwl\") pod \"nmstate-operator-796d4cfff4-hxzd6\" (UID: \"fb85eef5-01f9-4fa6-b9d8-9606d04b8cd3\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-hxzd6" Mar 12 13:24:25 crc kubenswrapper[4778]: I0312 13:24:25.224770 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4rwl\" (UniqueName: \"kubernetes.io/projected/fb85eef5-01f9-4fa6-b9d8-9606d04b8cd3-kube-api-access-q4rwl\") pod \"nmstate-operator-796d4cfff4-hxzd6\" (UID: \"fb85eef5-01f9-4fa6-b9d8-9606d04b8cd3\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-hxzd6" Mar 12 13:24:25 crc kubenswrapper[4778]: I0312 13:24:25.245299 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4rwl\" (UniqueName: \"kubernetes.io/projected/fb85eef5-01f9-4fa6-b9d8-9606d04b8cd3-kube-api-access-q4rwl\") pod \"nmstate-operator-796d4cfff4-hxzd6\" (UID: 
\"fb85eef5-01f9-4fa6-b9d8-9606d04b8cd3\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-hxzd6" Mar 12 13:24:25 crc kubenswrapper[4778]: I0312 13:24:25.333699 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-hxzd6" Mar 12 13:24:25 crc kubenswrapper[4778]: I0312 13:24:25.935964 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-hxzd6"] Mar 12 13:24:26 crc kubenswrapper[4778]: I0312 13:24:26.431520 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-hxzd6" event={"ID":"fb85eef5-01f9-4fa6-b9d8-9606d04b8cd3","Type":"ContainerStarted","Data":"de7b463aeceaf4376c9af28c526743bec9c020efd46470d4686f4a4e81971bd2"} Mar 12 13:24:27 crc kubenswrapper[4778]: I0312 13:24:27.072686 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tpgqw" Mar 12 13:24:27 crc kubenswrapper[4778]: I0312 13:24:27.072738 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tpgqw" Mar 12 13:24:28 crc kubenswrapper[4778]: I0312 13:24:28.108847 4778 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tpgqw" podUID="b4c13351-b8fa-4224-a09b-942200d398b1" containerName="registry-server" probeResult="failure" output=< Mar 12 13:24:28 crc kubenswrapper[4778]: timeout: failed to connect service ":50051" within 1s Mar 12 13:24:28 crc kubenswrapper[4778]: > Mar 12 13:24:32 crc kubenswrapper[4778]: I0312 13:24:32.466191 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-hxzd6" event={"ID":"fb85eef5-01f9-4fa6-b9d8-9606d04b8cd3","Type":"ContainerStarted","Data":"ebf1c0e7ad87a1fe1a79167522049e1d248c464828ae36b67d71839bfdf46854"} Mar 12 13:24:32 crc kubenswrapper[4778]: I0312 13:24:32.484249 4778 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-796d4cfff4-hxzd6" podStartSLOduration=3.105399124 podStartE2EDuration="8.484216447s" podCreationTimestamp="2026-03-12 13:24:24 +0000 UTC" firstStartedPulling="2026-03-12 13:24:25.954441143 +0000 UTC m=+884.403136539" lastFinishedPulling="2026-03-12 13:24:31.333258466 +0000 UTC m=+889.781953862" observedRunningTime="2026-03-12 13:24:32.480773129 +0000 UTC m=+890.929468535" watchObservedRunningTime="2026-03-12 13:24:32.484216447 +0000 UTC m=+890.932911843" Mar 12 13:24:34 crc kubenswrapper[4778]: I0312 13:24:34.676123 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-b2s5h"] Mar 12 13:24:34 crc kubenswrapper[4778]: I0312 13:24:34.677339 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-b2s5h" Mar 12 13:24:34 crc kubenswrapper[4778]: I0312 13:24:34.684099 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-hj2n7" Mar 12 13:24:34 crc kubenswrapper[4778]: I0312 13:24:34.690572 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-b2s5h"] Mar 12 13:24:34 crc kubenswrapper[4778]: I0312 13:24:34.709705 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-94rbc"] Mar 12 13:24:34 crc kubenswrapper[4778]: I0312 13:24:34.710655 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-94rbc" Mar 12 13:24:34 crc kubenswrapper[4778]: I0312 13:24:34.712800 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Mar 12 13:24:34 crc kubenswrapper[4778]: I0312 13:24:34.732391 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ef796a94-b10d-4d18-ae88-f64bc3a6b87d-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-94rbc\" (UID: \"ef796a94-b10d-4d18-ae88-f64bc3a6b87d\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-94rbc" Mar 12 13:24:34 crc kubenswrapper[4778]: I0312 13:24:34.732664 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92frx\" (UniqueName: \"kubernetes.io/projected/ef796a94-b10d-4d18-ae88-f64bc3a6b87d-kube-api-access-92frx\") pod \"nmstate-webhook-5f558f5558-94rbc\" (UID: \"ef796a94-b10d-4d18-ae88-f64bc3a6b87d\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-94rbc" Mar 12 13:24:34 crc kubenswrapper[4778]: I0312 13:24:34.732821 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6mcn\" (UniqueName: \"kubernetes.io/projected/7855d7b1-c7cf-4b63-9313-051a391fcf43-kube-api-access-c6mcn\") pod \"nmstate-metrics-9b8c8685d-b2s5h\" (UID: \"7855d7b1-c7cf-4b63-9313-051a391fcf43\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-b2s5h" Mar 12 13:24:34 crc kubenswrapper[4778]: I0312 13:24:34.771484 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-94rbc"] Mar 12 13:24:34 crc kubenswrapper[4778]: I0312 13:24:34.782555 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-rbsjl"] Mar 12 13:24:34 crc kubenswrapper[4778]: I0312 13:24:34.783757 4778 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-rbsjl" Mar 12 13:24:34 crc kubenswrapper[4778]: I0312 13:24:34.833544 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klgt6\" (UniqueName: \"kubernetes.io/projected/d8309ffe-a26c-44a8-84e2-7b7ec10982a8-kube-api-access-klgt6\") pod \"nmstate-handler-rbsjl\" (UID: \"d8309ffe-a26c-44a8-84e2-7b7ec10982a8\") " pod="openshift-nmstate/nmstate-handler-rbsjl" Mar 12 13:24:34 crc kubenswrapper[4778]: I0312 13:24:34.833920 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ef796a94-b10d-4d18-ae88-f64bc3a6b87d-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-94rbc\" (UID: \"ef796a94-b10d-4d18-ae88-f64bc3a6b87d\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-94rbc" Mar 12 13:24:34 crc kubenswrapper[4778]: I0312 13:24:34.833946 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/d8309ffe-a26c-44a8-84e2-7b7ec10982a8-dbus-socket\") pod \"nmstate-handler-rbsjl\" (UID: \"d8309ffe-a26c-44a8-84e2-7b7ec10982a8\") " pod="openshift-nmstate/nmstate-handler-rbsjl" Mar 12 13:24:34 crc kubenswrapper[4778]: I0312 13:24:34.833975 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92frx\" (UniqueName: \"kubernetes.io/projected/ef796a94-b10d-4d18-ae88-f64bc3a6b87d-kube-api-access-92frx\") pod \"nmstate-webhook-5f558f5558-94rbc\" (UID: \"ef796a94-b10d-4d18-ae88-f64bc3a6b87d\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-94rbc" Mar 12 13:24:34 crc kubenswrapper[4778]: I0312 13:24:34.834005 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/d8309ffe-a26c-44a8-84e2-7b7ec10982a8-nmstate-lock\") pod 
\"nmstate-handler-rbsjl\" (UID: \"d8309ffe-a26c-44a8-84e2-7b7ec10982a8\") " pod="openshift-nmstate/nmstate-handler-rbsjl" Mar 12 13:24:34 crc kubenswrapper[4778]: I0312 13:24:34.835079 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6mcn\" (UniqueName: \"kubernetes.io/projected/7855d7b1-c7cf-4b63-9313-051a391fcf43-kube-api-access-c6mcn\") pod \"nmstate-metrics-9b8c8685d-b2s5h\" (UID: \"7855d7b1-c7cf-4b63-9313-051a391fcf43\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-b2s5h" Mar 12 13:24:34 crc kubenswrapper[4778]: I0312 13:24:34.835343 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/d8309ffe-a26c-44a8-84e2-7b7ec10982a8-ovs-socket\") pod \"nmstate-handler-rbsjl\" (UID: \"d8309ffe-a26c-44a8-84e2-7b7ec10982a8\") " pod="openshift-nmstate/nmstate-handler-rbsjl" Mar 12 13:24:34 crc kubenswrapper[4778]: I0312 13:24:34.843948 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ef796a94-b10d-4d18-ae88-f64bc3a6b87d-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-94rbc\" (UID: \"ef796a94-b10d-4d18-ae88-f64bc3a6b87d\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-94rbc" Mar 12 13:24:34 crc kubenswrapper[4778]: I0312 13:24:34.851332 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-jbxx4"] Mar 12 13:24:34 crc kubenswrapper[4778]: I0312 13:24:34.852159 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-jbxx4" Mar 12 13:24:34 crc kubenswrapper[4778]: I0312 13:24:34.859986 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Mar 12 13:24:34 crc kubenswrapper[4778]: I0312 13:24:34.860935 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-s925z" Mar 12 13:24:34 crc kubenswrapper[4778]: I0312 13:24:34.861160 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Mar 12 13:24:34 crc kubenswrapper[4778]: I0312 13:24:34.867338 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6mcn\" (UniqueName: \"kubernetes.io/projected/7855d7b1-c7cf-4b63-9313-051a391fcf43-kube-api-access-c6mcn\") pod \"nmstate-metrics-9b8c8685d-b2s5h\" (UID: \"7855d7b1-c7cf-4b63-9313-051a391fcf43\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-b2s5h" Mar 12 13:24:34 crc kubenswrapper[4778]: I0312 13:24:34.867594 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-jbxx4"] Mar 12 13:24:34 crc kubenswrapper[4778]: I0312 13:24:34.883009 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92frx\" (UniqueName: \"kubernetes.io/projected/ef796a94-b10d-4d18-ae88-f64bc3a6b87d-kube-api-access-92frx\") pod \"nmstate-webhook-5f558f5558-94rbc\" (UID: \"ef796a94-b10d-4d18-ae88-f64bc3a6b87d\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-94rbc" Mar 12 13:24:34 crc kubenswrapper[4778]: I0312 13:24:34.936088 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8whd5\" (UniqueName: \"kubernetes.io/projected/af2d568b-9719-4da9-b0e8-e28d314ed860-kube-api-access-8whd5\") pod \"nmstate-console-plugin-86f58fcf4-jbxx4\" (UID: \"af2d568b-9719-4da9-b0e8-e28d314ed860\") " 
pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-jbxx4" Mar 12 13:24:34 crc kubenswrapper[4778]: I0312 13:24:34.936454 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klgt6\" (UniqueName: \"kubernetes.io/projected/d8309ffe-a26c-44a8-84e2-7b7ec10982a8-kube-api-access-klgt6\") pod \"nmstate-handler-rbsjl\" (UID: \"d8309ffe-a26c-44a8-84e2-7b7ec10982a8\") " pod="openshift-nmstate/nmstate-handler-rbsjl" Mar 12 13:24:34 crc kubenswrapper[4778]: I0312 13:24:34.936944 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/d8309ffe-a26c-44a8-84e2-7b7ec10982a8-dbus-socket\") pod \"nmstate-handler-rbsjl\" (UID: \"d8309ffe-a26c-44a8-84e2-7b7ec10982a8\") " pod="openshift-nmstate/nmstate-handler-rbsjl" Mar 12 13:24:34 crc kubenswrapper[4778]: I0312 13:24:34.937497 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/af2d568b-9719-4da9-b0e8-e28d314ed860-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-jbxx4\" (UID: \"af2d568b-9719-4da9-b0e8-e28d314ed860\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-jbxx4" Mar 12 13:24:34 crc kubenswrapper[4778]: I0312 13:24:34.937661 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/d8309ffe-a26c-44a8-84e2-7b7ec10982a8-nmstate-lock\") pod \"nmstate-handler-rbsjl\" (UID: \"d8309ffe-a26c-44a8-84e2-7b7ec10982a8\") " pod="openshift-nmstate/nmstate-handler-rbsjl" Mar 12 13:24:34 crc kubenswrapper[4778]: I0312 13:24:34.937831 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/af2d568b-9719-4da9-b0e8-e28d314ed860-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-jbxx4\" (UID: 
\"af2d568b-9719-4da9-b0e8-e28d314ed860\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-jbxx4" Mar 12 13:24:34 crc kubenswrapper[4778]: I0312 13:24:34.937967 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/d8309ffe-a26c-44a8-84e2-7b7ec10982a8-ovs-socket\") pod \"nmstate-handler-rbsjl\" (UID: \"d8309ffe-a26c-44a8-84e2-7b7ec10982a8\") " pod="openshift-nmstate/nmstate-handler-rbsjl" Mar 12 13:24:34 crc kubenswrapper[4778]: I0312 13:24:34.937382 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/d8309ffe-a26c-44a8-84e2-7b7ec10982a8-dbus-socket\") pod \"nmstate-handler-rbsjl\" (UID: \"d8309ffe-a26c-44a8-84e2-7b7ec10982a8\") " pod="openshift-nmstate/nmstate-handler-rbsjl" Mar 12 13:24:34 crc kubenswrapper[4778]: I0312 13:24:34.937772 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/d8309ffe-a26c-44a8-84e2-7b7ec10982a8-nmstate-lock\") pod \"nmstate-handler-rbsjl\" (UID: \"d8309ffe-a26c-44a8-84e2-7b7ec10982a8\") " pod="openshift-nmstate/nmstate-handler-rbsjl" Mar 12 13:24:34 crc kubenswrapper[4778]: I0312 13:24:34.938089 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/d8309ffe-a26c-44a8-84e2-7b7ec10982a8-ovs-socket\") pod \"nmstate-handler-rbsjl\" (UID: \"d8309ffe-a26c-44a8-84e2-7b7ec10982a8\") " pod="openshift-nmstate/nmstate-handler-rbsjl" Mar 12 13:24:34 crc kubenswrapper[4778]: I0312 13:24:34.954581 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klgt6\" (UniqueName: \"kubernetes.io/projected/d8309ffe-a26c-44a8-84e2-7b7ec10982a8-kube-api-access-klgt6\") pod \"nmstate-handler-rbsjl\" (UID: \"d8309ffe-a26c-44a8-84e2-7b7ec10982a8\") " pod="openshift-nmstate/nmstate-handler-rbsjl" Mar 12 13:24:34 crc 
kubenswrapper[4778]: I0312 13:24:34.997168 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-b2s5h" Mar 12 13:24:35 crc kubenswrapper[4778]: I0312 13:24:35.023174 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-94rbc" Mar 12 13:24:35 crc kubenswrapper[4778]: I0312 13:24:35.038776 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/af2d568b-9719-4da9-b0e8-e28d314ed860-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-jbxx4\" (UID: \"af2d568b-9719-4da9-b0e8-e28d314ed860\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-jbxx4" Mar 12 13:24:35 crc kubenswrapper[4778]: I0312 13:24:35.038858 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8whd5\" (UniqueName: \"kubernetes.io/projected/af2d568b-9719-4da9-b0e8-e28d314ed860-kube-api-access-8whd5\") pod \"nmstate-console-plugin-86f58fcf4-jbxx4\" (UID: \"af2d568b-9719-4da9-b0e8-e28d314ed860\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-jbxx4" Mar 12 13:24:35 crc kubenswrapper[4778]: I0312 13:24:35.038913 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/af2d568b-9719-4da9-b0e8-e28d314ed860-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-jbxx4\" (UID: \"af2d568b-9719-4da9-b0e8-e28d314ed860\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-jbxx4" Mar 12 13:24:35 crc kubenswrapper[4778]: E0312 13:24:35.039058 4778 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Mar 12 13:24:35 crc kubenswrapper[4778]: E0312 13:24:35.039130 4778 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/af2d568b-9719-4da9-b0e8-e28d314ed860-plugin-serving-cert podName:af2d568b-9719-4da9-b0e8-e28d314ed860 nodeName:}" failed. No retries permitted until 2026-03-12 13:24:35.539098969 +0000 UTC m=+893.987794365 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/af2d568b-9719-4da9-b0e8-e28d314ed860-plugin-serving-cert") pod "nmstate-console-plugin-86f58fcf4-jbxx4" (UID: "af2d568b-9719-4da9-b0e8-e28d314ed860") : secret "plugin-serving-cert" not found Mar 12 13:24:35 crc kubenswrapper[4778]: I0312 13:24:35.040160 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/af2d568b-9719-4da9-b0e8-e28d314ed860-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-jbxx4\" (UID: \"af2d568b-9719-4da9-b0e8-e28d314ed860\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-jbxx4" Mar 12 13:24:35 crc kubenswrapper[4778]: I0312 13:24:35.060763 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8whd5\" (UniqueName: \"kubernetes.io/projected/af2d568b-9719-4da9-b0e8-e28d314ed860-kube-api-access-8whd5\") pod \"nmstate-console-plugin-86f58fcf4-jbxx4\" (UID: \"af2d568b-9719-4da9-b0e8-e28d314ed860\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-jbxx4" Mar 12 13:24:35 crc kubenswrapper[4778]: I0312 13:24:35.107985 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-rbsjl" Mar 12 13:24:35 crc kubenswrapper[4778]: I0312 13:24:35.129279 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-958f4d6df-8h8cb"] Mar 12 13:24:35 crc kubenswrapper[4778]: I0312 13:24:35.130456 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-958f4d6df-8h8cb" Mar 12 13:24:35 crc kubenswrapper[4778]: I0312 13:24:35.139789 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-958f4d6df-8h8cb"] Mar 12 13:24:35 crc kubenswrapper[4778]: I0312 13:24:35.140807 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/21091ec0-7369-4f02-949c-ca61ed2efad3-oauth-serving-cert\") pod \"console-958f4d6df-8h8cb\" (UID: \"21091ec0-7369-4f02-949c-ca61ed2efad3\") " pod="openshift-console/console-958f4d6df-8h8cb" Mar 12 13:24:35 crc kubenswrapper[4778]: W0312 13:24:35.140930 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8309ffe_a26c_44a8_84e2_7b7ec10982a8.slice/crio-81b278cc5e7a181ca087fed20e2d8d360596a3f92009d968b804af7d71137571 WatchSource:0}: Error finding container 81b278cc5e7a181ca087fed20e2d8d360596a3f92009d968b804af7d71137571: Status 404 returned error can't find the container with id 81b278cc5e7a181ca087fed20e2d8d360596a3f92009d968b804af7d71137571 Mar 12 13:24:35 crc kubenswrapper[4778]: I0312 13:24:35.141574 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwbvd\" (UniqueName: \"kubernetes.io/projected/21091ec0-7369-4f02-949c-ca61ed2efad3-kube-api-access-hwbvd\") pod \"console-958f4d6df-8h8cb\" (UID: \"21091ec0-7369-4f02-949c-ca61ed2efad3\") " pod="openshift-console/console-958f4d6df-8h8cb" Mar 12 13:24:35 crc kubenswrapper[4778]: I0312 13:24:35.141875 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21091ec0-7369-4f02-949c-ca61ed2efad3-trusted-ca-bundle\") pod \"console-958f4d6df-8h8cb\" (UID: \"21091ec0-7369-4f02-949c-ca61ed2efad3\") " 
pod="openshift-console/console-958f4d6df-8h8cb" Mar 12 13:24:35 crc kubenswrapper[4778]: I0312 13:24:35.142043 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/21091ec0-7369-4f02-949c-ca61ed2efad3-console-oauth-config\") pod \"console-958f4d6df-8h8cb\" (UID: \"21091ec0-7369-4f02-949c-ca61ed2efad3\") " pod="openshift-console/console-958f4d6df-8h8cb" Mar 12 13:24:35 crc kubenswrapper[4778]: I0312 13:24:35.142211 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/21091ec0-7369-4f02-949c-ca61ed2efad3-console-serving-cert\") pod \"console-958f4d6df-8h8cb\" (UID: \"21091ec0-7369-4f02-949c-ca61ed2efad3\") " pod="openshift-console/console-958f4d6df-8h8cb" Mar 12 13:24:35 crc kubenswrapper[4778]: I0312 13:24:35.142328 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/21091ec0-7369-4f02-949c-ca61ed2efad3-console-config\") pod \"console-958f4d6df-8h8cb\" (UID: \"21091ec0-7369-4f02-949c-ca61ed2efad3\") " pod="openshift-console/console-958f4d6df-8h8cb" Mar 12 13:24:35 crc kubenswrapper[4778]: I0312 13:24:35.142557 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/21091ec0-7369-4f02-949c-ca61ed2efad3-service-ca\") pod \"console-958f4d6df-8h8cb\" (UID: \"21091ec0-7369-4f02-949c-ca61ed2efad3\") " pod="openshift-console/console-958f4d6df-8h8cb" Mar 12 13:24:35 crc kubenswrapper[4778]: I0312 13:24:35.244091 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/21091ec0-7369-4f02-949c-ca61ed2efad3-oauth-serving-cert\") pod \"console-958f4d6df-8h8cb\" (UID: 
\"21091ec0-7369-4f02-949c-ca61ed2efad3\") " pod="openshift-console/console-958f4d6df-8h8cb" Mar 12 13:24:35 crc kubenswrapper[4778]: I0312 13:24:35.244477 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwbvd\" (UniqueName: \"kubernetes.io/projected/21091ec0-7369-4f02-949c-ca61ed2efad3-kube-api-access-hwbvd\") pod \"console-958f4d6df-8h8cb\" (UID: \"21091ec0-7369-4f02-949c-ca61ed2efad3\") " pod="openshift-console/console-958f4d6df-8h8cb" Mar 12 13:24:35 crc kubenswrapper[4778]: I0312 13:24:35.244508 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21091ec0-7369-4f02-949c-ca61ed2efad3-trusted-ca-bundle\") pod \"console-958f4d6df-8h8cb\" (UID: \"21091ec0-7369-4f02-949c-ca61ed2efad3\") " pod="openshift-console/console-958f4d6df-8h8cb" Mar 12 13:24:35 crc kubenswrapper[4778]: I0312 13:24:35.244865 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/21091ec0-7369-4f02-949c-ca61ed2efad3-console-oauth-config\") pod \"console-958f4d6df-8h8cb\" (UID: \"21091ec0-7369-4f02-949c-ca61ed2efad3\") " pod="openshift-console/console-958f4d6df-8h8cb" Mar 12 13:24:35 crc kubenswrapper[4778]: I0312 13:24:35.245516 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/21091ec0-7369-4f02-949c-ca61ed2efad3-oauth-serving-cert\") pod \"console-958f4d6df-8h8cb\" (UID: \"21091ec0-7369-4f02-949c-ca61ed2efad3\") " pod="openshift-console/console-958f4d6df-8h8cb" Mar 12 13:24:35 crc kubenswrapper[4778]: I0312 13:24:35.245743 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/21091ec0-7369-4f02-949c-ca61ed2efad3-console-serving-cert\") pod \"console-958f4d6df-8h8cb\" (UID: 
\"21091ec0-7369-4f02-949c-ca61ed2efad3\") " pod="openshift-console/console-958f4d6df-8h8cb" Mar 12 13:24:35 crc kubenswrapper[4778]: I0312 13:24:35.245787 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/21091ec0-7369-4f02-949c-ca61ed2efad3-console-config\") pod \"console-958f4d6df-8h8cb\" (UID: \"21091ec0-7369-4f02-949c-ca61ed2efad3\") " pod="openshift-console/console-958f4d6df-8h8cb" Mar 12 13:24:35 crc kubenswrapper[4778]: I0312 13:24:35.245827 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/21091ec0-7369-4f02-949c-ca61ed2efad3-service-ca\") pod \"console-958f4d6df-8h8cb\" (UID: \"21091ec0-7369-4f02-949c-ca61ed2efad3\") " pod="openshift-console/console-958f4d6df-8h8cb" Mar 12 13:24:35 crc kubenswrapper[4778]: I0312 13:24:35.247815 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21091ec0-7369-4f02-949c-ca61ed2efad3-trusted-ca-bundle\") pod \"console-958f4d6df-8h8cb\" (UID: \"21091ec0-7369-4f02-949c-ca61ed2efad3\") " pod="openshift-console/console-958f4d6df-8h8cb" Mar 12 13:24:35 crc kubenswrapper[4778]: I0312 13:24:35.247888 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/21091ec0-7369-4f02-949c-ca61ed2efad3-console-config\") pod \"console-958f4d6df-8h8cb\" (UID: \"21091ec0-7369-4f02-949c-ca61ed2efad3\") " pod="openshift-console/console-958f4d6df-8h8cb" Mar 12 13:24:35 crc kubenswrapper[4778]: I0312 13:24:35.248737 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/21091ec0-7369-4f02-949c-ca61ed2efad3-service-ca\") pod \"console-958f4d6df-8h8cb\" (UID: \"21091ec0-7369-4f02-949c-ca61ed2efad3\") " pod="openshift-console/console-958f4d6df-8h8cb" Mar 12 
13:24:35 crc kubenswrapper[4778]: I0312 13:24:35.253102 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/21091ec0-7369-4f02-949c-ca61ed2efad3-console-oauth-config\") pod \"console-958f4d6df-8h8cb\" (UID: \"21091ec0-7369-4f02-949c-ca61ed2efad3\") " pod="openshift-console/console-958f4d6df-8h8cb" Mar 12 13:24:35 crc kubenswrapper[4778]: I0312 13:24:35.255213 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/21091ec0-7369-4f02-949c-ca61ed2efad3-console-serving-cert\") pod \"console-958f4d6df-8h8cb\" (UID: \"21091ec0-7369-4f02-949c-ca61ed2efad3\") " pod="openshift-console/console-958f4d6df-8h8cb" Mar 12 13:24:35 crc kubenswrapper[4778]: I0312 13:24:35.266045 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwbvd\" (UniqueName: \"kubernetes.io/projected/21091ec0-7369-4f02-949c-ca61ed2efad3-kube-api-access-hwbvd\") pod \"console-958f4d6df-8h8cb\" (UID: \"21091ec0-7369-4f02-949c-ca61ed2efad3\") " pod="openshift-console/console-958f4d6df-8h8cb" Mar 12 13:24:35 crc kubenswrapper[4778]: I0312 13:24:35.577339 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-958f4d6df-8h8cb" Mar 12 13:24:35 crc kubenswrapper[4778]: I0312 13:24:35.579158 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/af2d568b-9719-4da9-b0e8-e28d314ed860-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-jbxx4\" (UID: \"af2d568b-9719-4da9-b0e8-e28d314ed860\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-jbxx4" Mar 12 13:24:35 crc kubenswrapper[4778]: I0312 13:24:35.582021 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/af2d568b-9719-4da9-b0e8-e28d314ed860-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-jbxx4\" (UID: \"af2d568b-9719-4da9-b0e8-e28d314ed860\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-jbxx4" Mar 12 13:24:35 crc kubenswrapper[4778]: I0312 13:24:35.587714 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-rbsjl" event={"ID":"d8309ffe-a26c-44a8-84e2-7b7ec10982a8","Type":"ContainerStarted","Data":"81b278cc5e7a181ca087fed20e2d8d360596a3f92009d968b804af7d71137571"} Mar 12 13:24:35 crc kubenswrapper[4778]: I0312 13:24:35.801148 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-b2s5h"] Mar 12 13:24:35 crc kubenswrapper[4778]: W0312 13:24:35.814354 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7855d7b1_c7cf_4b63_9313_051a391fcf43.slice/crio-d895f3bf40dc0ae5f48efaff396a677d277046ea34568bfb3a907c76dcd211da WatchSource:0}: Error finding container d895f3bf40dc0ae5f48efaff396a677d277046ea34568bfb3a907c76dcd211da: Status 404 returned error can't find the container with id d895f3bf40dc0ae5f48efaff396a677d277046ea34568bfb3a907c76dcd211da Mar 12 13:24:35 crc kubenswrapper[4778]: I0312 
13:24:35.816675 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-jbxx4" Mar 12 13:24:35 crc kubenswrapper[4778]: I0312 13:24:35.899053 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-94rbc"] Mar 12 13:24:35 crc kubenswrapper[4778]: I0312 13:24:35.969435 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-958f4d6df-8h8cb"] Mar 12 13:24:35 crc kubenswrapper[4778]: W0312 13:24:35.988200 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21091ec0_7369_4f02_949c_ca61ed2efad3.slice/crio-3e63c6e680e69252e84320ddf99ca63c93d07c8fd23ecc062ca7cc30ce720b4a WatchSource:0}: Error finding container 3e63c6e680e69252e84320ddf99ca63c93d07c8fd23ecc062ca7cc30ce720b4a: Status 404 returned error can't find the container with id 3e63c6e680e69252e84320ddf99ca63c93d07c8fd23ecc062ca7cc30ce720b4a Mar 12 13:24:36 crc kubenswrapper[4778]: I0312 13:24:36.076229 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-jbxx4"] Mar 12 13:24:36 crc kubenswrapper[4778]: I0312 13:24:36.595434 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-jbxx4" event={"ID":"af2d568b-9719-4da9-b0e8-e28d314ed860","Type":"ContainerStarted","Data":"3930912cf9cf2867c2bf1870b08d2e416418de2845bd14af9b08e8f4e4e8ffda"} Mar 12 13:24:36 crc kubenswrapper[4778]: I0312 13:24:36.597051 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-958f4d6df-8h8cb" event={"ID":"21091ec0-7369-4f02-949c-ca61ed2efad3","Type":"ContainerStarted","Data":"64d95a47cc1d737b6ce99e7266eadeebc2fd758b4619d4cd8bf19acb253bf593"} Mar 12 13:24:36 crc kubenswrapper[4778]: I0312 13:24:36.597080 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console/console-958f4d6df-8h8cb" event={"ID":"21091ec0-7369-4f02-949c-ca61ed2efad3","Type":"ContainerStarted","Data":"3e63c6e680e69252e84320ddf99ca63c93d07c8fd23ecc062ca7cc30ce720b4a"} Mar 12 13:24:36 crc kubenswrapper[4778]: I0312 13:24:36.598193 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-94rbc" event={"ID":"ef796a94-b10d-4d18-ae88-f64bc3a6b87d","Type":"ContainerStarted","Data":"6699f1ed9504b8bd4081021164004700a3296a23da446a20b0f679599f43ed90"} Mar 12 13:24:36 crc kubenswrapper[4778]: I0312 13:24:36.600023 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-b2s5h" event={"ID":"7855d7b1-c7cf-4b63-9313-051a391fcf43","Type":"ContainerStarted","Data":"d895f3bf40dc0ae5f48efaff396a677d277046ea34568bfb3a907c76dcd211da"} Mar 12 13:24:36 crc kubenswrapper[4778]: I0312 13:24:36.620286 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-958f4d6df-8h8cb" podStartSLOduration=1.620259022 podStartE2EDuration="1.620259022s" podCreationTimestamp="2026-03-12 13:24:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:24:36.616202516 +0000 UTC m=+895.064897912" watchObservedRunningTime="2026-03-12 13:24:36.620259022 +0000 UTC m=+895.068954418" Mar 12 13:24:37 crc kubenswrapper[4778]: I0312 13:24:37.113985 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tpgqw" Mar 12 13:24:37 crc kubenswrapper[4778]: I0312 13:24:37.162095 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tpgqw" Mar 12 13:24:37 crc kubenswrapper[4778]: I0312 13:24:37.350992 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tpgqw"] Mar 12 13:24:38 crc 
kubenswrapper[4778]: I0312 13:24:38.653734 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tpgqw" podUID="b4c13351-b8fa-4224-a09b-942200d398b1" containerName="registry-server" containerID="cri-o://4afc7793b8453c98e2325cef91c55591417e0eb2c857e8d9e7956e6b80421763" gracePeriod=2 Mar 12 13:24:39 crc kubenswrapper[4778]: I0312 13:24:39.708177 4778 generic.go:334] "Generic (PLEG): container finished" podID="b4c13351-b8fa-4224-a09b-942200d398b1" containerID="4afc7793b8453c98e2325cef91c55591417e0eb2c857e8d9e7956e6b80421763" exitCode=0 Mar 12 13:24:39 crc kubenswrapper[4778]: I0312 13:24:39.708273 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tpgqw" event={"ID":"b4c13351-b8fa-4224-a09b-942200d398b1","Type":"ContainerDied","Data":"4afc7793b8453c98e2325cef91c55591417e0eb2c857e8d9e7956e6b80421763"} Mar 12 13:24:41 crc kubenswrapper[4778]: I0312 13:24:41.295611 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tpgqw" Mar 12 13:24:41 crc kubenswrapper[4778]: I0312 13:24:41.466209 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4c13351-b8fa-4224-a09b-942200d398b1-catalog-content\") pod \"b4c13351-b8fa-4224-a09b-942200d398b1\" (UID: \"b4c13351-b8fa-4224-a09b-942200d398b1\") " Mar 12 13:24:41 crc kubenswrapper[4778]: I0312 13:24:41.466842 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99g7g\" (UniqueName: \"kubernetes.io/projected/b4c13351-b8fa-4224-a09b-942200d398b1-kube-api-access-99g7g\") pod \"b4c13351-b8fa-4224-a09b-942200d398b1\" (UID: \"b4c13351-b8fa-4224-a09b-942200d398b1\") " Mar 12 13:24:41 crc kubenswrapper[4778]: I0312 13:24:41.466891 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4c13351-b8fa-4224-a09b-942200d398b1-utilities\") pod \"b4c13351-b8fa-4224-a09b-942200d398b1\" (UID: \"b4c13351-b8fa-4224-a09b-942200d398b1\") " Mar 12 13:24:41 crc kubenswrapper[4778]: I0312 13:24:41.467705 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4c13351-b8fa-4224-a09b-942200d398b1-utilities" (OuterVolumeSpecName: "utilities") pod "b4c13351-b8fa-4224-a09b-942200d398b1" (UID: "b4c13351-b8fa-4224-a09b-942200d398b1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:24:41 crc kubenswrapper[4778]: I0312 13:24:41.472792 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4c13351-b8fa-4224-a09b-942200d398b1-kube-api-access-99g7g" (OuterVolumeSpecName: "kube-api-access-99g7g") pod "b4c13351-b8fa-4224-a09b-942200d398b1" (UID: "b4c13351-b8fa-4224-a09b-942200d398b1"). InnerVolumeSpecName "kube-api-access-99g7g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:24:41 crc kubenswrapper[4778]: I0312 13:24:41.568589 4778 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4c13351-b8fa-4224-a09b-942200d398b1-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 13:24:41 crc kubenswrapper[4778]: I0312 13:24:41.568618 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99g7g\" (UniqueName: \"kubernetes.io/projected/b4c13351-b8fa-4224-a09b-942200d398b1-kube-api-access-99g7g\") on node \"crc\" DevicePath \"\"" Mar 12 13:24:41 crc kubenswrapper[4778]: I0312 13:24:41.596482 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4c13351-b8fa-4224-a09b-942200d398b1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b4c13351-b8fa-4224-a09b-942200d398b1" (UID: "b4c13351-b8fa-4224-a09b-942200d398b1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:24:41 crc kubenswrapper[4778]: I0312 13:24:41.669600 4778 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4c13351-b8fa-4224-a09b-942200d398b1-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 13:24:41 crc kubenswrapper[4778]: I0312 13:24:41.724253 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-b2s5h" event={"ID":"7855d7b1-c7cf-4b63-9313-051a391fcf43","Type":"ContainerStarted","Data":"3d8aa893a82e4d0fc1feb25640d8f5da687769e944fb0bef634f4a574a727877"} Mar 12 13:24:41 crc kubenswrapper[4778]: I0312 13:24:41.726768 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-94rbc" event={"ID":"ef796a94-b10d-4d18-ae88-f64bc3a6b87d","Type":"ContainerStarted","Data":"dd669951dbca11310165ad2a5acc960c5b8032a1f80ebce35c75a79f6d31256d"} Mar 12 13:24:41 crc 
kubenswrapper[4778]: I0312 13:24:41.726921 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f558f5558-94rbc" Mar 12 13:24:41 crc kubenswrapper[4778]: I0312 13:24:41.730163 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tpgqw" Mar 12 13:24:41 crc kubenswrapper[4778]: I0312 13:24:41.730169 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tpgqw" event={"ID":"b4c13351-b8fa-4224-a09b-942200d398b1","Type":"ContainerDied","Data":"18bee62b19e956e2c3d75695d30248e10ca9db095c911e5857cfc9283eb0b145"} Mar 12 13:24:41 crc kubenswrapper[4778]: I0312 13:24:41.730249 4778 scope.go:117] "RemoveContainer" containerID="4afc7793b8453c98e2325cef91c55591417e0eb2c857e8d9e7956e6b80421763" Mar 12 13:24:41 crc kubenswrapper[4778]: I0312 13:24:41.731914 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-rbsjl" event={"ID":"d8309ffe-a26c-44a8-84e2-7b7ec10982a8","Type":"ContainerStarted","Data":"29d434fae31803af5a7f922b85304b26cfa18b09b618b2f8ebeda48d234d1a6f"} Mar 12 13:24:41 crc kubenswrapper[4778]: I0312 13:24:41.732085 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-rbsjl" Mar 12 13:24:41 crc kubenswrapper[4778]: I0312 13:24:41.734243 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-jbxx4" event={"ID":"af2d568b-9719-4da9-b0e8-e28d314ed860","Type":"ContainerStarted","Data":"be918cffe388f198c074ff0e1b9ef80800945389cd3920982e304a43c77394ec"} Mar 12 13:24:41 crc kubenswrapper[4778]: I0312 13:24:41.759603 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f558f5558-94rbc" podStartSLOduration=2.325521461 podStartE2EDuration="7.759572917s" podCreationTimestamp="2026-03-12 13:24:34 +0000 
UTC" firstStartedPulling="2026-03-12 13:24:35.920357791 +0000 UTC m=+894.369053187" lastFinishedPulling="2026-03-12 13:24:41.354409247 +0000 UTC m=+899.803104643" observedRunningTime="2026-03-12 13:24:41.753972928 +0000 UTC m=+900.202668334" watchObservedRunningTime="2026-03-12 13:24:41.759572917 +0000 UTC m=+900.208268313" Mar 12 13:24:41 crc kubenswrapper[4778]: I0312 13:24:41.763397 4778 scope.go:117] "RemoveContainer" containerID="43f23e51ea1a4786666d9356631a20c154ceff599adf74c4b7fdae769a58165f" Mar 12 13:24:41 crc kubenswrapper[4778]: I0312 13:24:41.780269 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-rbsjl" podStartSLOduration=1.574613844 podStartE2EDuration="7.780239276s" podCreationTimestamp="2026-03-12 13:24:34 +0000 UTC" firstStartedPulling="2026-03-12 13:24:35.150128174 +0000 UTC m=+893.598823580" lastFinishedPulling="2026-03-12 13:24:41.355753616 +0000 UTC m=+899.804449012" observedRunningTime="2026-03-12 13:24:41.777523889 +0000 UTC m=+900.226219295" watchObservedRunningTime="2026-03-12 13:24:41.780239276 +0000 UTC m=+900.228934672" Mar 12 13:24:41 crc kubenswrapper[4778]: I0312 13:24:41.801200 4778 scope.go:117] "RemoveContainer" containerID="24fd7a17a52a8b61e1c1382d3f50546a92de510fde86d6489d611167d57ee7a2" Mar 12 13:24:41 crc kubenswrapper[4778]: I0312 13:24:41.806974 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tpgqw"] Mar 12 13:24:41 crc kubenswrapper[4778]: I0312 13:24:41.838739 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tpgqw"] Mar 12 13:24:41 crc kubenswrapper[4778]: I0312 13:24:41.846284 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-jbxx4" podStartSLOduration=2.588493697 podStartE2EDuration="7.846247038s" podCreationTimestamp="2026-03-12 13:24:34 +0000 UTC" firstStartedPulling="2026-03-12 
13:24:36.097894682 +0000 UTC m=+894.546590078" lastFinishedPulling="2026-03-12 13:24:41.355648023 +0000 UTC m=+899.804343419" observedRunningTime="2026-03-12 13:24:41.820633448 +0000 UTC m=+900.269328874" watchObservedRunningTime="2026-03-12 13:24:41.846247038 +0000 UTC m=+900.294942434" Mar 12 13:24:42 crc kubenswrapper[4778]: I0312 13:24:42.269761 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4c13351-b8fa-4224-a09b-942200d398b1" path="/var/lib/kubelet/pods/b4c13351-b8fa-4224-a09b-942200d398b1/volumes" Mar 12 13:24:45 crc kubenswrapper[4778]: I0312 13:24:45.578056 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-958f4d6df-8h8cb" Mar 12 13:24:45 crc kubenswrapper[4778]: I0312 13:24:45.578460 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-958f4d6df-8h8cb" Mar 12 13:24:45 crc kubenswrapper[4778]: I0312 13:24:45.583703 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-958f4d6df-8h8cb" Mar 12 13:24:45 crc kubenswrapper[4778]: I0312 13:24:45.767417 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-958f4d6df-8h8cb" Mar 12 13:24:45 crc kubenswrapper[4778]: I0312 13:24:45.821922 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-xwwxp"] Mar 12 13:24:46 crc kubenswrapper[4778]: I0312 13:24:46.838678 4778 scope.go:117] "RemoveContainer" containerID="42a7fef965fea72fd4ae8fcc7e99e6b821d3626af8cb88a527c7193c956003a6" Mar 12 13:24:47 crc kubenswrapper[4778]: I0312 13:24:47.778068 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-b2s5h" event={"ID":"7855d7b1-c7cf-4b63-9313-051a391fcf43","Type":"ContainerStarted","Data":"2bad13e2195c34f7af1b387fa86c2c44b3f3c0d160ed233dff08d76f5e7daf6c"} Mar 12 13:24:50 crc kubenswrapper[4778]: I0312 
13:24:50.135942 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-rbsjl" Mar 12 13:24:50 crc kubenswrapper[4778]: I0312 13:24:50.157028 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-b2s5h" podStartSLOduration=4.752099645 podStartE2EDuration="16.157000711s" podCreationTimestamp="2026-03-12 13:24:34 +0000 UTC" firstStartedPulling="2026-03-12 13:24:35.817503649 +0000 UTC m=+894.266199045" lastFinishedPulling="2026-03-12 13:24:47.222404715 +0000 UTC m=+905.671100111" observedRunningTime="2026-03-12 13:24:47.802081259 +0000 UTC m=+906.250776675" watchObservedRunningTime="2026-03-12 13:24:50.157000711 +0000 UTC m=+908.605696107" Mar 12 13:24:55 crc kubenswrapper[4778]: I0312 13:24:55.032741 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f558f5558-94rbc" Mar 12 13:25:07 crc kubenswrapper[4778]: I0312 13:25:07.744692 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rdvw6"] Mar 12 13:25:07 crc kubenswrapper[4778]: E0312 13:25:07.745457 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4c13351-b8fa-4224-a09b-942200d398b1" containerName="extract-content" Mar 12 13:25:07 crc kubenswrapper[4778]: I0312 13:25:07.745472 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4c13351-b8fa-4224-a09b-942200d398b1" containerName="extract-content" Mar 12 13:25:07 crc kubenswrapper[4778]: E0312 13:25:07.745486 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4c13351-b8fa-4224-a09b-942200d398b1" containerName="extract-utilities" Mar 12 13:25:07 crc kubenswrapper[4778]: I0312 13:25:07.745493 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4c13351-b8fa-4224-a09b-942200d398b1" containerName="extract-utilities" Mar 12 13:25:07 crc 
kubenswrapper[4778]: E0312 13:25:07.745512 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4c13351-b8fa-4224-a09b-942200d398b1" containerName="registry-server" Mar 12 13:25:07 crc kubenswrapper[4778]: I0312 13:25:07.745521 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4c13351-b8fa-4224-a09b-942200d398b1" containerName="registry-server" Mar 12 13:25:07 crc kubenswrapper[4778]: I0312 13:25:07.745635 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4c13351-b8fa-4224-a09b-942200d398b1" containerName="registry-server" Mar 12 13:25:07 crc kubenswrapper[4778]: I0312 13:25:07.746367 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rdvw6" Mar 12 13:25:07 crc kubenswrapper[4778]: I0312 13:25:07.748584 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 12 13:25:07 crc kubenswrapper[4778]: I0312 13:25:07.765423 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rdvw6"] Mar 12 13:25:07 crc kubenswrapper[4778]: I0312 13:25:07.851960 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9090029d-2f37-457b-8425-3690da177434-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rdvw6\" (UID: \"9090029d-2f37-457b-8425-3690da177434\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rdvw6" Mar 12 13:25:07 crc kubenswrapper[4778]: I0312 13:25:07.852021 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47j9d\" (UniqueName: \"kubernetes.io/projected/9090029d-2f37-457b-8425-3690da177434-kube-api-access-47j9d\") pod 
\"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rdvw6\" (UID: \"9090029d-2f37-457b-8425-3690da177434\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rdvw6" Mar 12 13:25:07 crc kubenswrapper[4778]: I0312 13:25:07.852061 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9090029d-2f37-457b-8425-3690da177434-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rdvw6\" (UID: \"9090029d-2f37-457b-8425-3690da177434\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rdvw6" Mar 12 13:25:07 crc kubenswrapper[4778]: I0312 13:25:07.953647 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9090029d-2f37-457b-8425-3690da177434-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rdvw6\" (UID: \"9090029d-2f37-457b-8425-3690da177434\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rdvw6" Mar 12 13:25:07 crc kubenswrapper[4778]: I0312 13:25:07.953992 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9090029d-2f37-457b-8425-3690da177434-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rdvw6\" (UID: \"9090029d-2f37-457b-8425-3690da177434\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rdvw6" Mar 12 13:25:07 crc kubenswrapper[4778]: I0312 13:25:07.954091 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47j9d\" (UniqueName: \"kubernetes.io/projected/9090029d-2f37-457b-8425-3690da177434-kube-api-access-47j9d\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rdvw6\" (UID: \"9090029d-2f37-457b-8425-3690da177434\") " 
pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rdvw6" Mar 12 13:25:07 crc kubenswrapper[4778]: I0312 13:25:07.954130 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9090029d-2f37-457b-8425-3690da177434-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rdvw6\" (UID: \"9090029d-2f37-457b-8425-3690da177434\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rdvw6" Mar 12 13:25:07 crc kubenswrapper[4778]: I0312 13:25:07.954546 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9090029d-2f37-457b-8425-3690da177434-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rdvw6\" (UID: \"9090029d-2f37-457b-8425-3690da177434\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rdvw6" Mar 12 13:25:07 crc kubenswrapper[4778]: I0312 13:25:07.976098 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47j9d\" (UniqueName: \"kubernetes.io/projected/9090029d-2f37-457b-8425-3690da177434-kube-api-access-47j9d\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rdvw6\" (UID: \"9090029d-2f37-457b-8425-3690da177434\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rdvw6" Mar 12 13:25:08 crc kubenswrapper[4778]: I0312 13:25:08.068458 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rdvw6" Mar 12 13:25:08 crc kubenswrapper[4778]: I0312 13:25:08.457094 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rdvw6"] Mar 12 13:25:08 crc kubenswrapper[4778]: I0312 13:25:08.900404 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rdvw6" event={"ID":"9090029d-2f37-457b-8425-3690da177434","Type":"ContainerStarted","Data":"a21914d65f0222503b56911bb4a1dc512ff6b6f36daa27eac18a63f864ad2b02"} Mar 12 13:25:08 crc kubenswrapper[4778]: I0312 13:25:08.900455 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rdvw6" event={"ID":"9090029d-2f37-457b-8425-3690da177434","Type":"ContainerStarted","Data":"b9aa60e4a4d8c2467aa722d6a95be03da066e99dbe5fc5eb08f4ee10964bfad1"} Mar 12 13:25:09 crc kubenswrapper[4778]: I0312 13:25:09.910232 4778 generic.go:334] "Generic (PLEG): container finished" podID="9090029d-2f37-457b-8425-3690da177434" containerID="a21914d65f0222503b56911bb4a1dc512ff6b6f36daa27eac18a63f864ad2b02" exitCode=0 Mar 12 13:25:09 crc kubenswrapper[4778]: I0312 13:25:09.910294 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rdvw6" event={"ID":"9090029d-2f37-457b-8425-3690da177434","Type":"ContainerDied","Data":"a21914d65f0222503b56911bb4a1dc512ff6b6f36daa27eac18a63f864ad2b02"} Mar 12 13:25:10 crc kubenswrapper[4778]: I0312 13:25:10.863279 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-xwwxp" podUID="c825022c-79bc-44ae-bc64-ee9614aafe25" containerName="console" 
containerID="cri-o://4f4a64269de7f325ca6cad0c8f6bdffa97bc955d4a92c8f27548dcfdbd421f4c" gracePeriod=15 Mar 12 13:25:11 crc kubenswrapper[4778]: I0312 13:25:11.226732 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-xwwxp_c825022c-79bc-44ae-bc64-ee9614aafe25/console/0.log" Mar 12 13:25:11 crc kubenswrapper[4778]: I0312 13:25:11.226805 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-xwwxp" Mar 12 13:25:11 crc kubenswrapper[4778]: I0312 13:25:11.394842 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c825022c-79bc-44ae-bc64-ee9614aafe25-console-serving-cert\") pod \"c825022c-79bc-44ae-bc64-ee9614aafe25\" (UID: \"c825022c-79bc-44ae-bc64-ee9614aafe25\") " Mar 12 13:25:11 crc kubenswrapper[4778]: I0312 13:25:11.395291 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c825022c-79bc-44ae-bc64-ee9614aafe25-console-oauth-config\") pod \"c825022c-79bc-44ae-bc64-ee9614aafe25\" (UID: \"c825022c-79bc-44ae-bc64-ee9614aafe25\") " Mar 12 13:25:11 crc kubenswrapper[4778]: I0312 13:25:11.395326 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c825022c-79bc-44ae-bc64-ee9614aafe25-oauth-serving-cert\") pod \"c825022c-79bc-44ae-bc64-ee9614aafe25\" (UID: \"c825022c-79bc-44ae-bc64-ee9614aafe25\") " Mar 12 13:25:11 crc kubenswrapper[4778]: I0312 13:25:11.395375 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqbjv\" (UniqueName: \"kubernetes.io/projected/c825022c-79bc-44ae-bc64-ee9614aafe25-kube-api-access-rqbjv\") pod \"c825022c-79bc-44ae-bc64-ee9614aafe25\" (UID: \"c825022c-79bc-44ae-bc64-ee9614aafe25\") " Mar 12 13:25:11 crc 
kubenswrapper[4778]: I0312 13:25:11.395433 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c825022c-79bc-44ae-bc64-ee9614aafe25-service-ca\") pod \"c825022c-79bc-44ae-bc64-ee9614aafe25\" (UID: \"c825022c-79bc-44ae-bc64-ee9614aafe25\") " Mar 12 13:25:11 crc kubenswrapper[4778]: I0312 13:25:11.395478 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c825022c-79bc-44ae-bc64-ee9614aafe25-trusted-ca-bundle\") pod \"c825022c-79bc-44ae-bc64-ee9614aafe25\" (UID: \"c825022c-79bc-44ae-bc64-ee9614aafe25\") " Mar 12 13:25:11 crc kubenswrapper[4778]: I0312 13:25:11.395514 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c825022c-79bc-44ae-bc64-ee9614aafe25-console-config\") pod \"c825022c-79bc-44ae-bc64-ee9614aafe25\" (UID: \"c825022c-79bc-44ae-bc64-ee9614aafe25\") " Mar 12 13:25:11 crc kubenswrapper[4778]: I0312 13:25:11.396529 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c825022c-79bc-44ae-bc64-ee9614aafe25-console-config" (OuterVolumeSpecName: "console-config") pod "c825022c-79bc-44ae-bc64-ee9614aafe25" (UID: "c825022c-79bc-44ae-bc64-ee9614aafe25"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:25:11 crc kubenswrapper[4778]: I0312 13:25:11.396685 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c825022c-79bc-44ae-bc64-ee9614aafe25-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "c825022c-79bc-44ae-bc64-ee9614aafe25" (UID: "c825022c-79bc-44ae-bc64-ee9614aafe25"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:25:11 crc kubenswrapper[4778]: I0312 13:25:11.396715 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c825022c-79bc-44ae-bc64-ee9614aafe25-service-ca" (OuterVolumeSpecName: "service-ca") pod "c825022c-79bc-44ae-bc64-ee9614aafe25" (UID: "c825022c-79bc-44ae-bc64-ee9614aafe25"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:25:11 crc kubenswrapper[4778]: I0312 13:25:11.397056 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c825022c-79bc-44ae-bc64-ee9614aafe25-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "c825022c-79bc-44ae-bc64-ee9614aafe25" (UID: "c825022c-79bc-44ae-bc64-ee9614aafe25"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:25:11 crc kubenswrapper[4778]: I0312 13:25:11.401085 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c825022c-79bc-44ae-bc64-ee9614aafe25-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "c825022c-79bc-44ae-bc64-ee9614aafe25" (UID: "c825022c-79bc-44ae-bc64-ee9614aafe25"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:25:11 crc kubenswrapper[4778]: I0312 13:25:11.401743 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c825022c-79bc-44ae-bc64-ee9614aafe25-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "c825022c-79bc-44ae-bc64-ee9614aafe25" (UID: "c825022c-79bc-44ae-bc64-ee9614aafe25"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:25:11 crc kubenswrapper[4778]: I0312 13:25:11.401805 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c825022c-79bc-44ae-bc64-ee9614aafe25-kube-api-access-rqbjv" (OuterVolumeSpecName: "kube-api-access-rqbjv") pod "c825022c-79bc-44ae-bc64-ee9614aafe25" (UID: "c825022c-79bc-44ae-bc64-ee9614aafe25"). InnerVolumeSpecName "kube-api-access-rqbjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:25:11 crc kubenswrapper[4778]: I0312 13:25:11.497365 4778 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c825022c-79bc-44ae-bc64-ee9614aafe25-console-config\") on node \"crc\" DevicePath \"\"" Mar 12 13:25:11 crc kubenswrapper[4778]: I0312 13:25:11.497441 4778 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c825022c-79bc-44ae-bc64-ee9614aafe25-console-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 13:25:11 crc kubenswrapper[4778]: I0312 13:25:11.497467 4778 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c825022c-79bc-44ae-bc64-ee9614aafe25-console-oauth-config\") on node \"crc\" DevicePath \"\"" Mar 12 13:25:11 crc kubenswrapper[4778]: I0312 13:25:11.497493 4778 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c825022c-79bc-44ae-bc64-ee9614aafe25-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 12 13:25:11 crc kubenswrapper[4778]: I0312 13:25:11.497516 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqbjv\" (UniqueName: \"kubernetes.io/projected/c825022c-79bc-44ae-bc64-ee9614aafe25-kube-api-access-rqbjv\") on node \"crc\" DevicePath \"\"" Mar 12 13:25:11 crc kubenswrapper[4778]: I0312 13:25:11.497542 4778 reconciler_common.go:293] "Volume 
detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c825022c-79bc-44ae-bc64-ee9614aafe25-service-ca\") on node \"crc\" DevicePath \"\"" Mar 12 13:25:11 crc kubenswrapper[4778]: I0312 13:25:11.497564 4778 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c825022c-79bc-44ae-bc64-ee9614aafe25-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:25:11 crc kubenswrapper[4778]: I0312 13:25:11.920683 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-xwwxp_c825022c-79bc-44ae-bc64-ee9614aafe25/console/0.log" Mar 12 13:25:11 crc kubenswrapper[4778]: I0312 13:25:11.920724 4778 generic.go:334] "Generic (PLEG): container finished" podID="c825022c-79bc-44ae-bc64-ee9614aafe25" containerID="4f4a64269de7f325ca6cad0c8f6bdffa97bc955d4a92c8f27548dcfdbd421f4c" exitCode=2 Mar 12 13:25:11 crc kubenswrapper[4778]: I0312 13:25:11.920748 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-xwwxp" event={"ID":"c825022c-79bc-44ae-bc64-ee9614aafe25","Type":"ContainerDied","Data":"4f4a64269de7f325ca6cad0c8f6bdffa97bc955d4a92c8f27548dcfdbd421f4c"} Mar 12 13:25:11 crc kubenswrapper[4778]: I0312 13:25:11.920774 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-xwwxp" event={"ID":"c825022c-79bc-44ae-bc64-ee9614aafe25","Type":"ContainerDied","Data":"6f20116905733a7dbe8802503613a6b31a51c117f53f02f55e4cace656d26f20"} Mar 12 13:25:11 crc kubenswrapper[4778]: I0312 13:25:11.920791 4778 scope.go:117] "RemoveContainer" containerID="4f4a64269de7f325ca6cad0c8f6bdffa97bc955d4a92c8f27548dcfdbd421f4c" Mar 12 13:25:11 crc kubenswrapper[4778]: I0312 13:25:11.920792 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-xwwxp" Mar 12 13:25:11 crc kubenswrapper[4778]: I0312 13:25:11.937327 4778 scope.go:117] "RemoveContainer" containerID="4f4a64269de7f325ca6cad0c8f6bdffa97bc955d4a92c8f27548dcfdbd421f4c" Mar 12 13:25:11 crc kubenswrapper[4778]: E0312 13:25:11.937793 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f4a64269de7f325ca6cad0c8f6bdffa97bc955d4a92c8f27548dcfdbd421f4c\": container with ID starting with 4f4a64269de7f325ca6cad0c8f6bdffa97bc955d4a92c8f27548dcfdbd421f4c not found: ID does not exist" containerID="4f4a64269de7f325ca6cad0c8f6bdffa97bc955d4a92c8f27548dcfdbd421f4c" Mar 12 13:25:11 crc kubenswrapper[4778]: I0312 13:25:11.937828 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f4a64269de7f325ca6cad0c8f6bdffa97bc955d4a92c8f27548dcfdbd421f4c"} err="failed to get container status \"4f4a64269de7f325ca6cad0c8f6bdffa97bc955d4a92c8f27548dcfdbd421f4c\": rpc error: code = NotFound desc = could not find container \"4f4a64269de7f325ca6cad0c8f6bdffa97bc955d4a92c8f27548dcfdbd421f4c\": container with ID starting with 4f4a64269de7f325ca6cad0c8f6bdffa97bc955d4a92c8f27548dcfdbd421f4c not found: ID does not exist" Mar 12 13:25:11 crc kubenswrapper[4778]: I0312 13:25:11.953715 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-xwwxp"] Mar 12 13:25:11 crc kubenswrapper[4778]: I0312 13:25:11.958118 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-xwwxp"] Mar 12 13:25:12 crc kubenswrapper[4778]: I0312 13:25:12.262680 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c825022c-79bc-44ae-bc64-ee9614aafe25" path="/var/lib/kubelet/pods/c825022c-79bc-44ae-bc64-ee9614aafe25/volumes" Mar 12 13:25:12 crc kubenswrapper[4778]: I0312 13:25:12.931213 4778 generic.go:334] "Generic (PLEG): 
container finished" podID="9090029d-2f37-457b-8425-3690da177434" containerID="815dcddc546761ce579fc83410d6b0089762cd2da7d4030454ea7174a833b745" exitCode=0 Mar 12 13:25:12 crc kubenswrapper[4778]: I0312 13:25:12.931292 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rdvw6" event={"ID":"9090029d-2f37-457b-8425-3690da177434","Type":"ContainerDied","Data":"815dcddc546761ce579fc83410d6b0089762cd2da7d4030454ea7174a833b745"} Mar 12 13:25:13 crc kubenswrapper[4778]: I0312 13:25:13.943365 4778 generic.go:334] "Generic (PLEG): container finished" podID="9090029d-2f37-457b-8425-3690da177434" containerID="e9a2263250dc38f09156c0b8d5a3489b7c876f2a032e5671b5417dd730d61037" exitCode=0 Mar 12 13:25:13 crc kubenswrapper[4778]: I0312 13:25:13.943441 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rdvw6" event={"ID":"9090029d-2f37-457b-8425-3690da177434","Type":"ContainerDied","Data":"e9a2263250dc38f09156c0b8d5a3489b7c876f2a032e5671b5417dd730d61037"} Mar 12 13:25:15 crc kubenswrapper[4778]: I0312 13:25:15.140197 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rdvw6" Mar 12 13:25:15 crc kubenswrapper[4778]: I0312 13:25:15.239450 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47j9d\" (UniqueName: \"kubernetes.io/projected/9090029d-2f37-457b-8425-3690da177434-kube-api-access-47j9d\") pod \"9090029d-2f37-457b-8425-3690da177434\" (UID: \"9090029d-2f37-457b-8425-3690da177434\") " Mar 12 13:25:15 crc kubenswrapper[4778]: I0312 13:25:15.239559 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9090029d-2f37-457b-8425-3690da177434-bundle\") pod \"9090029d-2f37-457b-8425-3690da177434\" (UID: \"9090029d-2f37-457b-8425-3690da177434\") " Mar 12 13:25:15 crc kubenswrapper[4778]: I0312 13:25:15.239607 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9090029d-2f37-457b-8425-3690da177434-util\") pod \"9090029d-2f37-457b-8425-3690da177434\" (UID: \"9090029d-2f37-457b-8425-3690da177434\") " Mar 12 13:25:15 crc kubenswrapper[4778]: I0312 13:25:15.240809 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9090029d-2f37-457b-8425-3690da177434-bundle" (OuterVolumeSpecName: "bundle") pod "9090029d-2f37-457b-8425-3690da177434" (UID: "9090029d-2f37-457b-8425-3690da177434"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:25:15 crc kubenswrapper[4778]: I0312 13:25:15.244441 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9090029d-2f37-457b-8425-3690da177434-kube-api-access-47j9d" (OuterVolumeSpecName: "kube-api-access-47j9d") pod "9090029d-2f37-457b-8425-3690da177434" (UID: "9090029d-2f37-457b-8425-3690da177434"). InnerVolumeSpecName "kube-api-access-47j9d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:25:15 crc kubenswrapper[4778]: I0312 13:25:15.250550 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9090029d-2f37-457b-8425-3690da177434-util" (OuterVolumeSpecName: "util") pod "9090029d-2f37-457b-8425-3690da177434" (UID: "9090029d-2f37-457b-8425-3690da177434"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:25:15 crc kubenswrapper[4778]: I0312 13:25:15.341427 4778 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9090029d-2f37-457b-8425-3690da177434-util\") on node \"crc\" DevicePath \"\"" Mar 12 13:25:15 crc kubenswrapper[4778]: I0312 13:25:15.341464 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47j9d\" (UniqueName: \"kubernetes.io/projected/9090029d-2f37-457b-8425-3690da177434-kube-api-access-47j9d\") on node \"crc\" DevicePath \"\"" Mar 12 13:25:15 crc kubenswrapper[4778]: I0312 13:25:15.341478 4778 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9090029d-2f37-457b-8425-3690da177434-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:25:15 crc kubenswrapper[4778]: I0312 13:25:15.955578 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rdvw6" event={"ID":"9090029d-2f37-457b-8425-3690da177434","Type":"ContainerDied","Data":"b9aa60e4a4d8c2467aa722d6a95be03da066e99dbe5fc5eb08f4ee10964bfad1"} Mar 12 13:25:15 crc kubenswrapper[4778]: I0312 13:25:15.955911 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9aa60e4a4d8c2467aa722d6a95be03da066e99dbe5fc5eb08f4ee10964bfad1" Mar 12 13:25:15 crc kubenswrapper[4778]: I0312 13:25:15.955978 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rdvw6" Mar 12 13:25:26 crc kubenswrapper[4778]: I0312 13:25:26.715170 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-54d5c4b6c7-gh4lx"] Mar 12 13:25:26 crc kubenswrapper[4778]: E0312 13:25:26.716394 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c825022c-79bc-44ae-bc64-ee9614aafe25" containerName="console" Mar 12 13:25:26 crc kubenswrapper[4778]: I0312 13:25:26.716413 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="c825022c-79bc-44ae-bc64-ee9614aafe25" containerName="console" Mar 12 13:25:26 crc kubenswrapper[4778]: E0312 13:25:26.716439 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9090029d-2f37-457b-8425-3690da177434" containerName="pull" Mar 12 13:25:26 crc kubenswrapper[4778]: I0312 13:25:26.716447 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="9090029d-2f37-457b-8425-3690da177434" containerName="pull" Mar 12 13:25:26 crc kubenswrapper[4778]: E0312 13:25:26.716474 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9090029d-2f37-457b-8425-3690da177434" containerName="util" Mar 12 13:25:26 crc kubenswrapper[4778]: I0312 13:25:26.716482 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="9090029d-2f37-457b-8425-3690da177434" containerName="util" Mar 12 13:25:26 crc kubenswrapper[4778]: E0312 13:25:26.716500 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9090029d-2f37-457b-8425-3690da177434" containerName="extract" Mar 12 13:25:26 crc kubenswrapper[4778]: I0312 13:25:26.716507 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="9090029d-2f37-457b-8425-3690da177434" containerName="extract" Mar 12 13:25:26 crc kubenswrapper[4778]: I0312 13:25:26.716854 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="c825022c-79bc-44ae-bc64-ee9614aafe25" 
containerName="console" Mar 12 13:25:26 crc kubenswrapper[4778]: I0312 13:25:26.716880 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="9090029d-2f37-457b-8425-3690da177434" containerName="extract" Mar 12 13:25:26 crc kubenswrapper[4778]: I0312 13:25:26.717650 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-54d5c4b6c7-gh4lx" Mar 12 13:25:26 crc kubenswrapper[4778]: I0312 13:25:26.728840 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Mar 12 13:25:26 crc kubenswrapper[4778]: I0312 13:25:26.732025 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Mar 12 13:25:26 crc kubenswrapper[4778]: I0312 13:25:26.738634 4778 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Mar 12 13:25:26 crc kubenswrapper[4778]: I0312 13:25:26.738918 4778 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Mar 12 13:25:26 crc kubenswrapper[4778]: I0312 13:25:26.739246 4778 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-rk9jh" Mar 12 13:25:26 crc kubenswrapper[4778]: I0312 13:25:26.749816 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-54d5c4b6c7-gh4lx"] Mar 12 13:25:26 crc kubenswrapper[4778]: I0312 13:25:26.785873 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt2z4\" (UniqueName: \"kubernetes.io/projected/a5a6d344-0a75-422d-acd9-fe8887b03110-kube-api-access-kt2z4\") pod \"metallb-operator-controller-manager-54d5c4b6c7-gh4lx\" (UID: \"a5a6d344-0a75-422d-acd9-fe8887b03110\") " 
pod="metallb-system/metallb-operator-controller-manager-54d5c4b6c7-gh4lx" Mar 12 13:25:26 crc kubenswrapper[4778]: I0312 13:25:26.786064 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a5a6d344-0a75-422d-acd9-fe8887b03110-webhook-cert\") pod \"metallb-operator-controller-manager-54d5c4b6c7-gh4lx\" (UID: \"a5a6d344-0a75-422d-acd9-fe8887b03110\") " pod="metallb-system/metallb-operator-controller-manager-54d5c4b6c7-gh4lx" Mar 12 13:25:26 crc kubenswrapper[4778]: I0312 13:25:26.786169 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a5a6d344-0a75-422d-acd9-fe8887b03110-apiservice-cert\") pod \"metallb-operator-controller-manager-54d5c4b6c7-gh4lx\" (UID: \"a5a6d344-0a75-422d-acd9-fe8887b03110\") " pod="metallb-system/metallb-operator-controller-manager-54d5c4b6c7-gh4lx" Mar 12 13:25:26 crc kubenswrapper[4778]: I0312 13:25:26.888213 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a5a6d344-0a75-422d-acd9-fe8887b03110-apiservice-cert\") pod \"metallb-operator-controller-manager-54d5c4b6c7-gh4lx\" (UID: \"a5a6d344-0a75-422d-acd9-fe8887b03110\") " pod="metallb-system/metallb-operator-controller-manager-54d5c4b6c7-gh4lx" Mar 12 13:25:26 crc kubenswrapper[4778]: I0312 13:25:26.889789 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt2z4\" (UniqueName: \"kubernetes.io/projected/a5a6d344-0a75-422d-acd9-fe8887b03110-kube-api-access-kt2z4\") pod \"metallb-operator-controller-manager-54d5c4b6c7-gh4lx\" (UID: \"a5a6d344-0a75-422d-acd9-fe8887b03110\") " pod="metallb-system/metallb-operator-controller-manager-54d5c4b6c7-gh4lx" Mar 12 13:25:26 crc kubenswrapper[4778]: I0312 13:25:26.889915 4778 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a5a6d344-0a75-422d-acd9-fe8887b03110-webhook-cert\") pod \"metallb-operator-controller-manager-54d5c4b6c7-gh4lx\" (UID: \"a5a6d344-0a75-422d-acd9-fe8887b03110\") " pod="metallb-system/metallb-operator-controller-manager-54d5c4b6c7-gh4lx" Mar 12 13:25:26 crc kubenswrapper[4778]: I0312 13:25:26.896114 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a5a6d344-0a75-422d-acd9-fe8887b03110-webhook-cert\") pod \"metallb-operator-controller-manager-54d5c4b6c7-gh4lx\" (UID: \"a5a6d344-0a75-422d-acd9-fe8887b03110\") " pod="metallb-system/metallb-operator-controller-manager-54d5c4b6c7-gh4lx" Mar 12 13:25:26 crc kubenswrapper[4778]: I0312 13:25:26.896151 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a5a6d344-0a75-422d-acd9-fe8887b03110-apiservice-cert\") pod \"metallb-operator-controller-manager-54d5c4b6c7-gh4lx\" (UID: \"a5a6d344-0a75-422d-acd9-fe8887b03110\") " pod="metallb-system/metallb-operator-controller-manager-54d5c4b6c7-gh4lx" Mar 12 13:25:26 crc kubenswrapper[4778]: I0312 13:25:26.907475 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt2z4\" (UniqueName: \"kubernetes.io/projected/a5a6d344-0a75-422d-acd9-fe8887b03110-kube-api-access-kt2z4\") pod \"metallb-operator-controller-manager-54d5c4b6c7-gh4lx\" (UID: \"a5a6d344-0a75-422d-acd9-fe8887b03110\") " pod="metallb-system/metallb-operator-controller-manager-54d5c4b6c7-gh4lx" Mar 12 13:25:26 crc kubenswrapper[4778]: I0312 13:25:26.973511 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-68f5db54d6-zstmq"] Mar 12 13:25:26 crc kubenswrapper[4778]: I0312 13:25:26.974811 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-68f5db54d6-zstmq" Mar 12 13:25:26 crc kubenswrapper[4778]: I0312 13:25:26.977278 4778 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 12 13:25:26 crc kubenswrapper[4778]: I0312 13:25:26.978056 4778 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-k2kwx" Mar 12 13:25:26 crc kubenswrapper[4778]: I0312 13:25:26.978086 4778 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Mar 12 13:25:26 crc kubenswrapper[4778]: I0312 13:25:26.990672 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6ac207b6-1710-47af-8fe9-b0c3adbce0ab-apiservice-cert\") pod \"metallb-operator-webhook-server-68f5db54d6-zstmq\" (UID: \"6ac207b6-1710-47af-8fe9-b0c3adbce0ab\") " pod="metallb-system/metallb-operator-webhook-server-68f5db54d6-zstmq" Mar 12 13:25:26 crc kubenswrapper[4778]: I0312 13:25:26.990735 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6ac207b6-1710-47af-8fe9-b0c3adbce0ab-webhook-cert\") pod \"metallb-operator-webhook-server-68f5db54d6-zstmq\" (UID: \"6ac207b6-1710-47af-8fe9-b0c3adbce0ab\") " pod="metallb-system/metallb-operator-webhook-server-68f5db54d6-zstmq" Mar 12 13:25:26 crc kubenswrapper[4778]: I0312 13:25:26.990793 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwnxj\" (UniqueName: \"kubernetes.io/projected/6ac207b6-1710-47af-8fe9-b0c3adbce0ab-kube-api-access-rwnxj\") pod \"metallb-operator-webhook-server-68f5db54d6-zstmq\" (UID: \"6ac207b6-1710-47af-8fe9-b0c3adbce0ab\") " pod="metallb-system/metallb-operator-webhook-server-68f5db54d6-zstmq" 
Mar 12 13:25:26 crc kubenswrapper[4778]: I0312 13:25:26.996825 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-68f5db54d6-zstmq"] Mar 12 13:25:27 crc kubenswrapper[4778]: I0312 13:25:27.049632 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-54d5c4b6c7-gh4lx" Mar 12 13:25:27 crc kubenswrapper[4778]: I0312 13:25:27.091509 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6ac207b6-1710-47af-8fe9-b0c3adbce0ab-apiservice-cert\") pod \"metallb-operator-webhook-server-68f5db54d6-zstmq\" (UID: \"6ac207b6-1710-47af-8fe9-b0c3adbce0ab\") " pod="metallb-system/metallb-operator-webhook-server-68f5db54d6-zstmq" Mar 12 13:25:27 crc kubenswrapper[4778]: I0312 13:25:27.091590 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6ac207b6-1710-47af-8fe9-b0c3adbce0ab-webhook-cert\") pod \"metallb-operator-webhook-server-68f5db54d6-zstmq\" (UID: \"6ac207b6-1710-47af-8fe9-b0c3adbce0ab\") " pod="metallb-system/metallb-operator-webhook-server-68f5db54d6-zstmq" Mar 12 13:25:27 crc kubenswrapper[4778]: I0312 13:25:27.091637 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwnxj\" (UniqueName: \"kubernetes.io/projected/6ac207b6-1710-47af-8fe9-b0c3adbce0ab-kube-api-access-rwnxj\") pod \"metallb-operator-webhook-server-68f5db54d6-zstmq\" (UID: \"6ac207b6-1710-47af-8fe9-b0c3adbce0ab\") " pod="metallb-system/metallb-operator-webhook-server-68f5db54d6-zstmq" Mar 12 13:25:27 crc kubenswrapper[4778]: I0312 13:25:27.096856 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6ac207b6-1710-47af-8fe9-b0c3adbce0ab-webhook-cert\") pod 
\"metallb-operator-webhook-server-68f5db54d6-zstmq\" (UID: \"6ac207b6-1710-47af-8fe9-b0c3adbce0ab\") " pod="metallb-system/metallb-operator-webhook-server-68f5db54d6-zstmq" Mar 12 13:25:27 crc kubenswrapper[4778]: I0312 13:25:27.097436 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6ac207b6-1710-47af-8fe9-b0c3adbce0ab-apiservice-cert\") pod \"metallb-operator-webhook-server-68f5db54d6-zstmq\" (UID: \"6ac207b6-1710-47af-8fe9-b0c3adbce0ab\") " pod="metallb-system/metallb-operator-webhook-server-68f5db54d6-zstmq" Mar 12 13:25:27 crc kubenswrapper[4778]: I0312 13:25:27.111089 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwnxj\" (UniqueName: \"kubernetes.io/projected/6ac207b6-1710-47af-8fe9-b0c3adbce0ab-kube-api-access-rwnxj\") pod \"metallb-operator-webhook-server-68f5db54d6-zstmq\" (UID: \"6ac207b6-1710-47af-8fe9-b0c3adbce0ab\") " pod="metallb-system/metallb-operator-webhook-server-68f5db54d6-zstmq" Mar 12 13:25:27 crc kubenswrapper[4778]: I0312 13:25:27.291460 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-68f5db54d6-zstmq" Mar 12 13:25:27 crc kubenswrapper[4778]: I0312 13:25:27.686992 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-54d5c4b6c7-gh4lx"] Mar 12 13:25:27 crc kubenswrapper[4778]: I0312 13:25:27.913484 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-68f5db54d6-zstmq"] Mar 12 13:25:27 crc kubenswrapper[4778]: W0312 13:25:27.924756 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ac207b6_1710_47af_8fe9_b0c3adbce0ab.slice/crio-6b6435c5a162a19fbf3fd787d2bd4b066257f5690be74c8c365417850ccb0c59 WatchSource:0}: Error finding container 6b6435c5a162a19fbf3fd787d2bd4b066257f5690be74c8c365417850ccb0c59: Status 404 returned error can't find the container with id 6b6435c5a162a19fbf3fd787d2bd4b066257f5690be74c8c365417850ccb0c59 Mar 12 13:25:28 crc kubenswrapper[4778]: I0312 13:25:28.054978 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-54d5c4b6c7-gh4lx" event={"ID":"a5a6d344-0a75-422d-acd9-fe8887b03110","Type":"ContainerStarted","Data":"736b3dbbffc16edb5fb0ca90cdffd191d6903c523af75123ba414476c8c2973b"} Mar 12 13:25:28 crc kubenswrapper[4778]: I0312 13:25:28.056415 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-68f5db54d6-zstmq" event={"ID":"6ac207b6-1710-47af-8fe9-b0c3adbce0ab","Type":"ContainerStarted","Data":"6b6435c5a162a19fbf3fd787d2bd4b066257f5690be74c8c365417850ccb0c59"} Mar 12 13:25:28 crc kubenswrapper[4778]: I0312 13:25:28.558250 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 13:25:28 crc kubenswrapper[4778]: I0312 13:25:28.558649 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 13:25:38 crc kubenswrapper[4778]: I0312 13:25:38.237231 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-68f5db54d6-zstmq" event={"ID":"6ac207b6-1710-47af-8fe9-b0c3adbce0ab","Type":"ContainerStarted","Data":"e3ec90eb649c9b7215dd5d53c7c610b2c43b328997544f79890d00ff5263b8fd"} Mar 12 13:25:38 crc kubenswrapper[4778]: I0312 13:25:38.237755 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-68f5db54d6-zstmq" Mar 12 13:25:38 crc kubenswrapper[4778]: I0312 13:25:38.238800 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-54d5c4b6c7-gh4lx" event={"ID":"a5a6d344-0a75-422d-acd9-fe8887b03110","Type":"ContainerStarted","Data":"f8433cd677712c90f8cdb14e6420e1a6dea126ed1733e27ae61ed8dd45da56b4"} Mar 12 13:25:38 crc kubenswrapper[4778]: I0312 13:25:38.238971 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-54d5c4b6c7-gh4lx" Mar 12 13:25:38 crc kubenswrapper[4778]: I0312 13:25:38.264110 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-68f5db54d6-zstmq" podStartSLOduration=2.9697980189999997 podStartE2EDuration="12.26408576s" podCreationTimestamp="2026-03-12 13:25:26 +0000 UTC" firstStartedPulling="2026-03-12 13:25:27.927514978 +0000 UTC m=+946.376210374" lastFinishedPulling="2026-03-12 
13:25:37.221802719 +0000 UTC m=+955.670498115" observedRunningTime="2026-03-12 13:25:38.259649424 +0000 UTC m=+956.708344820" watchObservedRunningTime="2026-03-12 13:25:38.26408576 +0000 UTC m=+956.712781156" Mar 12 13:25:38 crc kubenswrapper[4778]: I0312 13:25:38.285705 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-54d5c4b6c7-gh4lx" podStartSLOduration=2.782420308 podStartE2EDuration="12.285691006s" podCreationTimestamp="2026-03-12 13:25:26 +0000 UTC" firstStartedPulling="2026-03-12 13:25:27.698334325 +0000 UTC m=+946.147029721" lastFinishedPulling="2026-03-12 13:25:37.201605023 +0000 UTC m=+955.650300419" observedRunningTime="2026-03-12 13:25:38.283636278 +0000 UTC m=+956.732331674" watchObservedRunningTime="2026-03-12 13:25:38.285691006 +0000 UTC m=+956.734386392" Mar 12 13:25:41 crc kubenswrapper[4778]: I0312 13:25:41.278935 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pltxt"] Mar 12 13:25:41 crc kubenswrapper[4778]: I0312 13:25:41.280578 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pltxt" Mar 12 13:25:41 crc kubenswrapper[4778]: I0312 13:25:41.288540 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pltxt"] Mar 12 13:25:41 crc kubenswrapper[4778]: I0312 13:25:41.414124 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3418515c-3077-4237-9aa9-596ed9d3c137-utilities\") pod \"redhat-marketplace-pltxt\" (UID: \"3418515c-3077-4237-9aa9-596ed9d3c137\") " pod="openshift-marketplace/redhat-marketplace-pltxt" Mar 12 13:25:41 crc kubenswrapper[4778]: I0312 13:25:41.414315 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3418515c-3077-4237-9aa9-596ed9d3c137-catalog-content\") pod \"redhat-marketplace-pltxt\" (UID: \"3418515c-3077-4237-9aa9-596ed9d3c137\") " pod="openshift-marketplace/redhat-marketplace-pltxt" Mar 12 13:25:41 crc kubenswrapper[4778]: I0312 13:25:41.414484 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b8rt\" (UniqueName: \"kubernetes.io/projected/3418515c-3077-4237-9aa9-596ed9d3c137-kube-api-access-2b8rt\") pod \"redhat-marketplace-pltxt\" (UID: \"3418515c-3077-4237-9aa9-596ed9d3c137\") " pod="openshift-marketplace/redhat-marketplace-pltxt" Mar 12 13:25:41 crc kubenswrapper[4778]: I0312 13:25:41.515331 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3418515c-3077-4237-9aa9-596ed9d3c137-catalog-content\") pod \"redhat-marketplace-pltxt\" (UID: \"3418515c-3077-4237-9aa9-596ed9d3c137\") " pod="openshift-marketplace/redhat-marketplace-pltxt" Mar 12 13:25:41 crc kubenswrapper[4778]: I0312 13:25:41.515423 4778 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-2b8rt\" (UniqueName: \"kubernetes.io/projected/3418515c-3077-4237-9aa9-596ed9d3c137-kube-api-access-2b8rt\") pod \"redhat-marketplace-pltxt\" (UID: \"3418515c-3077-4237-9aa9-596ed9d3c137\") " pod="openshift-marketplace/redhat-marketplace-pltxt" Mar 12 13:25:41 crc kubenswrapper[4778]: I0312 13:25:41.515473 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3418515c-3077-4237-9aa9-596ed9d3c137-utilities\") pod \"redhat-marketplace-pltxt\" (UID: \"3418515c-3077-4237-9aa9-596ed9d3c137\") " pod="openshift-marketplace/redhat-marketplace-pltxt" Mar 12 13:25:41 crc kubenswrapper[4778]: I0312 13:25:41.515875 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3418515c-3077-4237-9aa9-596ed9d3c137-catalog-content\") pod \"redhat-marketplace-pltxt\" (UID: \"3418515c-3077-4237-9aa9-596ed9d3c137\") " pod="openshift-marketplace/redhat-marketplace-pltxt" Mar 12 13:25:41 crc kubenswrapper[4778]: I0312 13:25:41.515901 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3418515c-3077-4237-9aa9-596ed9d3c137-utilities\") pod \"redhat-marketplace-pltxt\" (UID: \"3418515c-3077-4237-9aa9-596ed9d3c137\") " pod="openshift-marketplace/redhat-marketplace-pltxt" Mar 12 13:25:41 crc kubenswrapper[4778]: I0312 13:25:41.537618 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2b8rt\" (UniqueName: \"kubernetes.io/projected/3418515c-3077-4237-9aa9-596ed9d3c137-kube-api-access-2b8rt\") pod \"redhat-marketplace-pltxt\" (UID: \"3418515c-3077-4237-9aa9-596ed9d3c137\") " pod="openshift-marketplace/redhat-marketplace-pltxt" Mar 12 13:25:41 crc kubenswrapper[4778]: I0312 13:25:41.595022 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pltxt" Mar 12 13:25:41 crc kubenswrapper[4778]: I0312 13:25:41.919330 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pltxt"] Mar 12 13:25:41 crc kubenswrapper[4778]: W0312 13:25:41.925760 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3418515c_3077_4237_9aa9_596ed9d3c137.slice/crio-b37770ad61325c6792ceeea5e8eb5ca55edd1557d07b2fc176c76ded639b7940 WatchSource:0}: Error finding container b37770ad61325c6792ceeea5e8eb5ca55edd1557d07b2fc176c76ded639b7940: Status 404 returned error can't find the container with id b37770ad61325c6792ceeea5e8eb5ca55edd1557d07b2fc176c76ded639b7940 Mar 12 13:25:42 crc kubenswrapper[4778]: I0312 13:25:42.260632 4778 generic.go:334] "Generic (PLEG): container finished" podID="3418515c-3077-4237-9aa9-596ed9d3c137" containerID="6b4907767b6af049385fd7daea9a886797c4e1193a43d026677035133dec2864" exitCode=0 Mar 12 13:25:42 crc kubenswrapper[4778]: I0312 13:25:42.269626 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pltxt" event={"ID":"3418515c-3077-4237-9aa9-596ed9d3c137","Type":"ContainerDied","Data":"6b4907767b6af049385fd7daea9a886797c4e1193a43d026677035133dec2864"} Mar 12 13:25:42 crc kubenswrapper[4778]: I0312 13:25:42.269700 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pltxt" event={"ID":"3418515c-3077-4237-9aa9-596ed9d3c137","Type":"ContainerStarted","Data":"b37770ad61325c6792ceeea5e8eb5ca55edd1557d07b2fc176c76ded639b7940"} Mar 12 13:25:43 crc kubenswrapper[4778]: I0312 13:25:43.267697 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pltxt" 
event={"ID":"3418515c-3077-4237-9aa9-596ed9d3c137","Type":"ContainerStarted","Data":"33ac90bd1039472d943260eb6731eac265a3d0a77ac9f0a701175b768502761a"} Mar 12 13:25:44 crc kubenswrapper[4778]: I0312 13:25:44.274432 4778 generic.go:334] "Generic (PLEG): container finished" podID="3418515c-3077-4237-9aa9-596ed9d3c137" containerID="33ac90bd1039472d943260eb6731eac265a3d0a77ac9f0a701175b768502761a" exitCode=0 Mar 12 13:25:44 crc kubenswrapper[4778]: I0312 13:25:44.274480 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pltxt" event={"ID":"3418515c-3077-4237-9aa9-596ed9d3c137","Type":"ContainerDied","Data":"33ac90bd1039472d943260eb6731eac265a3d0a77ac9f0a701175b768502761a"} Mar 12 13:25:45 crc kubenswrapper[4778]: I0312 13:25:45.279955 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pltxt" event={"ID":"3418515c-3077-4237-9aa9-596ed9d3c137","Type":"ContainerStarted","Data":"1d6d9105a852faf10ecfbf80e59f638b15c2f9a5383dbc2a9c8fb463b364dc47"} Mar 12 13:25:45 crc kubenswrapper[4778]: I0312 13:25:45.295800 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pltxt" podStartSLOduration=1.707159239 podStartE2EDuration="4.295783543s" podCreationTimestamp="2026-03-12 13:25:41 +0000 UTC" firstStartedPulling="2026-03-12 13:25:42.2627421 +0000 UTC m=+960.711437496" lastFinishedPulling="2026-03-12 13:25:44.851366404 +0000 UTC m=+963.300061800" observedRunningTime="2026-03-12 13:25:45.294301731 +0000 UTC m=+963.742997137" watchObservedRunningTime="2026-03-12 13:25:45.295783543 +0000 UTC m=+963.744478939" Mar 12 13:25:47 crc kubenswrapper[4778]: I0312 13:25:47.363656 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-68f5db54d6-zstmq" Mar 12 13:25:51 crc kubenswrapper[4778]: I0312 13:25:51.596064 4778 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pltxt" Mar 12 13:25:51 crc kubenswrapper[4778]: I0312 13:25:51.596415 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pltxt" Mar 12 13:25:51 crc kubenswrapper[4778]: I0312 13:25:51.642558 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pltxt" Mar 12 13:25:52 crc kubenswrapper[4778]: I0312 13:25:52.657442 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pltxt" Mar 12 13:25:52 crc kubenswrapper[4778]: I0312 13:25:52.854373 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pltxt"] Mar 12 13:25:54 crc kubenswrapper[4778]: I0312 13:25:54.396794 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pltxt" podUID="3418515c-3077-4237-9aa9-596ed9d3c137" containerName="registry-server" containerID="cri-o://1d6d9105a852faf10ecfbf80e59f638b15c2f9a5383dbc2a9c8fb463b364dc47" gracePeriod=2 Mar 12 13:25:55 crc kubenswrapper[4778]: I0312 13:25:55.282480 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pltxt" Mar 12 13:25:55 crc kubenswrapper[4778]: I0312 13:25:55.404791 4778 generic.go:334] "Generic (PLEG): container finished" podID="3418515c-3077-4237-9aa9-596ed9d3c137" containerID="1d6d9105a852faf10ecfbf80e59f638b15c2f9a5383dbc2a9c8fb463b364dc47" exitCode=0 Mar 12 13:25:55 crc kubenswrapper[4778]: I0312 13:25:55.404839 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pltxt" event={"ID":"3418515c-3077-4237-9aa9-596ed9d3c137","Type":"ContainerDied","Data":"1d6d9105a852faf10ecfbf80e59f638b15c2f9a5383dbc2a9c8fb463b364dc47"} Mar 12 13:25:55 crc kubenswrapper[4778]: I0312 13:25:55.404869 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pltxt" event={"ID":"3418515c-3077-4237-9aa9-596ed9d3c137","Type":"ContainerDied","Data":"b37770ad61325c6792ceeea5e8eb5ca55edd1557d07b2fc176c76ded639b7940"} Mar 12 13:25:55 crc kubenswrapper[4778]: I0312 13:25:55.404889 4778 scope.go:117] "RemoveContainer" containerID="1d6d9105a852faf10ecfbf80e59f638b15c2f9a5383dbc2a9c8fb463b364dc47" Mar 12 13:25:55 crc kubenswrapper[4778]: I0312 13:25:55.405003 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pltxt" Mar 12 13:25:55 crc kubenswrapper[4778]: I0312 13:25:55.426583 4778 scope.go:117] "RemoveContainer" containerID="33ac90bd1039472d943260eb6731eac265a3d0a77ac9f0a701175b768502761a" Mar 12 13:25:55 crc kubenswrapper[4778]: I0312 13:25:55.472088 4778 scope.go:117] "RemoveContainer" containerID="6b4907767b6af049385fd7daea9a886797c4e1193a43d026677035133dec2864" Mar 12 13:25:55 crc kubenswrapper[4778]: I0312 13:25:55.486625 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3418515c-3077-4237-9aa9-596ed9d3c137-utilities\") pod \"3418515c-3077-4237-9aa9-596ed9d3c137\" (UID: \"3418515c-3077-4237-9aa9-596ed9d3c137\") " Mar 12 13:25:55 crc kubenswrapper[4778]: I0312 13:25:55.486699 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2b8rt\" (UniqueName: \"kubernetes.io/projected/3418515c-3077-4237-9aa9-596ed9d3c137-kube-api-access-2b8rt\") pod \"3418515c-3077-4237-9aa9-596ed9d3c137\" (UID: \"3418515c-3077-4237-9aa9-596ed9d3c137\") " Mar 12 13:25:55 crc kubenswrapper[4778]: I0312 13:25:55.486751 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3418515c-3077-4237-9aa9-596ed9d3c137-catalog-content\") pod \"3418515c-3077-4237-9aa9-596ed9d3c137\" (UID: \"3418515c-3077-4237-9aa9-596ed9d3c137\") " Mar 12 13:25:55 crc kubenswrapper[4778]: I0312 13:25:55.495623 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3418515c-3077-4237-9aa9-596ed9d3c137-kube-api-access-2b8rt" (OuterVolumeSpecName: "kube-api-access-2b8rt") pod "3418515c-3077-4237-9aa9-596ed9d3c137" (UID: "3418515c-3077-4237-9aa9-596ed9d3c137"). InnerVolumeSpecName "kube-api-access-2b8rt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:25:55 crc kubenswrapper[4778]: I0312 13:25:55.499026 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3418515c-3077-4237-9aa9-596ed9d3c137-utilities" (OuterVolumeSpecName: "utilities") pod "3418515c-3077-4237-9aa9-596ed9d3c137" (UID: "3418515c-3077-4237-9aa9-596ed9d3c137"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:25:55 crc kubenswrapper[4778]: I0312 13:25:55.508426 4778 scope.go:117] "RemoveContainer" containerID="1d6d9105a852faf10ecfbf80e59f638b15c2f9a5383dbc2a9c8fb463b364dc47" Mar 12 13:25:55 crc kubenswrapper[4778]: E0312 13:25:55.509050 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d6d9105a852faf10ecfbf80e59f638b15c2f9a5383dbc2a9c8fb463b364dc47\": container with ID starting with 1d6d9105a852faf10ecfbf80e59f638b15c2f9a5383dbc2a9c8fb463b364dc47 not found: ID does not exist" containerID="1d6d9105a852faf10ecfbf80e59f638b15c2f9a5383dbc2a9c8fb463b364dc47" Mar 12 13:25:55 crc kubenswrapper[4778]: I0312 13:25:55.509089 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d6d9105a852faf10ecfbf80e59f638b15c2f9a5383dbc2a9c8fb463b364dc47"} err="failed to get container status \"1d6d9105a852faf10ecfbf80e59f638b15c2f9a5383dbc2a9c8fb463b364dc47\": rpc error: code = NotFound desc = could not find container \"1d6d9105a852faf10ecfbf80e59f638b15c2f9a5383dbc2a9c8fb463b364dc47\": container with ID starting with 1d6d9105a852faf10ecfbf80e59f638b15c2f9a5383dbc2a9c8fb463b364dc47 not found: ID does not exist" Mar 12 13:25:55 crc kubenswrapper[4778]: I0312 13:25:55.509115 4778 scope.go:117] "RemoveContainer" containerID="33ac90bd1039472d943260eb6731eac265a3d0a77ac9f0a701175b768502761a" Mar 12 13:25:55 crc kubenswrapper[4778]: E0312 13:25:55.509575 4778 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33ac90bd1039472d943260eb6731eac265a3d0a77ac9f0a701175b768502761a\": container with ID starting with 33ac90bd1039472d943260eb6731eac265a3d0a77ac9f0a701175b768502761a not found: ID does not exist" containerID="33ac90bd1039472d943260eb6731eac265a3d0a77ac9f0a701175b768502761a" Mar 12 13:25:55 crc kubenswrapper[4778]: I0312 13:25:55.509613 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33ac90bd1039472d943260eb6731eac265a3d0a77ac9f0a701175b768502761a"} err="failed to get container status \"33ac90bd1039472d943260eb6731eac265a3d0a77ac9f0a701175b768502761a\": rpc error: code = NotFound desc = could not find container \"33ac90bd1039472d943260eb6731eac265a3d0a77ac9f0a701175b768502761a\": container with ID starting with 33ac90bd1039472d943260eb6731eac265a3d0a77ac9f0a701175b768502761a not found: ID does not exist" Mar 12 13:25:55 crc kubenswrapper[4778]: I0312 13:25:55.509639 4778 scope.go:117] "RemoveContainer" containerID="6b4907767b6af049385fd7daea9a886797c4e1193a43d026677035133dec2864" Mar 12 13:25:55 crc kubenswrapper[4778]: E0312 13:25:55.509974 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b4907767b6af049385fd7daea9a886797c4e1193a43d026677035133dec2864\": container with ID starting with 6b4907767b6af049385fd7daea9a886797c4e1193a43d026677035133dec2864 not found: ID does not exist" containerID="6b4907767b6af049385fd7daea9a886797c4e1193a43d026677035133dec2864" Mar 12 13:25:55 crc kubenswrapper[4778]: I0312 13:25:55.510006 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b4907767b6af049385fd7daea9a886797c4e1193a43d026677035133dec2864"} err="failed to get container status \"6b4907767b6af049385fd7daea9a886797c4e1193a43d026677035133dec2864\": rpc error: code = NotFound desc = could not find container 
\"6b4907767b6af049385fd7daea9a886797c4e1193a43d026677035133dec2864\": container with ID starting with 6b4907767b6af049385fd7daea9a886797c4e1193a43d026677035133dec2864 not found: ID does not exist" Mar 12 13:25:55 crc kubenswrapper[4778]: I0312 13:25:55.527253 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3418515c-3077-4237-9aa9-596ed9d3c137-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3418515c-3077-4237-9aa9-596ed9d3c137" (UID: "3418515c-3077-4237-9aa9-596ed9d3c137"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:25:55 crc kubenswrapper[4778]: I0312 13:25:55.587733 4778 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3418515c-3077-4237-9aa9-596ed9d3c137-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 13:25:55 crc kubenswrapper[4778]: I0312 13:25:55.587764 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2b8rt\" (UniqueName: \"kubernetes.io/projected/3418515c-3077-4237-9aa9-596ed9d3c137-kube-api-access-2b8rt\") on node \"crc\" DevicePath \"\"" Mar 12 13:25:55 crc kubenswrapper[4778]: I0312 13:25:55.587774 4778 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3418515c-3077-4237-9aa9-596ed9d3c137-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 13:25:55 crc kubenswrapper[4778]: I0312 13:25:55.731008 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pltxt"] Mar 12 13:25:55 crc kubenswrapper[4778]: I0312 13:25:55.736980 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pltxt"] Mar 12 13:25:56 crc kubenswrapper[4778]: I0312 13:25:56.260957 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3418515c-3077-4237-9aa9-596ed9d3c137" 
path="/var/lib/kubelet/pods/3418515c-3077-4237-9aa9-596ed9d3c137/volumes" Mar 12 13:25:58 crc kubenswrapper[4778]: I0312 13:25:58.264873 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-swx4c"] Mar 12 13:25:58 crc kubenswrapper[4778]: E0312 13:25:58.269271 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3418515c-3077-4237-9aa9-596ed9d3c137" containerName="registry-server" Mar 12 13:25:58 crc kubenswrapper[4778]: I0312 13:25:58.269299 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="3418515c-3077-4237-9aa9-596ed9d3c137" containerName="registry-server" Mar 12 13:25:58 crc kubenswrapper[4778]: E0312 13:25:58.269318 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3418515c-3077-4237-9aa9-596ed9d3c137" containerName="extract-content" Mar 12 13:25:58 crc kubenswrapper[4778]: I0312 13:25:58.269330 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="3418515c-3077-4237-9aa9-596ed9d3c137" containerName="extract-content" Mar 12 13:25:58 crc kubenswrapper[4778]: E0312 13:25:58.269347 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3418515c-3077-4237-9aa9-596ed9d3c137" containerName="extract-utilities" Mar 12 13:25:58 crc kubenswrapper[4778]: I0312 13:25:58.269355 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="3418515c-3077-4237-9aa9-596ed9d3c137" containerName="extract-utilities" Mar 12 13:25:58 crc kubenswrapper[4778]: I0312 13:25:58.269475 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="3418515c-3077-4237-9aa9-596ed9d3c137" containerName="registry-server" Mar 12 13:25:58 crc kubenswrapper[4778]: I0312 13:25:58.270487 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-swx4c" Mar 12 13:25:58 crc kubenswrapper[4778]: I0312 13:25:58.290446 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-swx4c"] Mar 12 13:25:58 crc kubenswrapper[4778]: I0312 13:25:58.319061 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19812411-eae6-4792-9f00-64a6604924fb-utilities\") pod \"community-operators-swx4c\" (UID: \"19812411-eae6-4792-9f00-64a6604924fb\") " pod="openshift-marketplace/community-operators-swx4c" Mar 12 13:25:58 crc kubenswrapper[4778]: I0312 13:25:58.319388 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n57qk\" (UniqueName: \"kubernetes.io/projected/19812411-eae6-4792-9f00-64a6604924fb-kube-api-access-n57qk\") pod \"community-operators-swx4c\" (UID: \"19812411-eae6-4792-9f00-64a6604924fb\") " pod="openshift-marketplace/community-operators-swx4c" Mar 12 13:25:58 crc kubenswrapper[4778]: I0312 13:25:58.319523 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19812411-eae6-4792-9f00-64a6604924fb-catalog-content\") pod \"community-operators-swx4c\" (UID: \"19812411-eae6-4792-9f00-64a6604924fb\") " pod="openshift-marketplace/community-operators-swx4c" Mar 12 13:25:58 crc kubenswrapper[4778]: I0312 13:25:58.420452 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19812411-eae6-4792-9f00-64a6604924fb-utilities\") pod \"community-operators-swx4c\" (UID: \"19812411-eae6-4792-9f00-64a6604924fb\") " pod="openshift-marketplace/community-operators-swx4c" Mar 12 13:25:58 crc kubenswrapper[4778]: I0312 13:25:58.420515 4778 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-n57qk\" (UniqueName: \"kubernetes.io/projected/19812411-eae6-4792-9f00-64a6604924fb-kube-api-access-n57qk\") pod \"community-operators-swx4c\" (UID: \"19812411-eae6-4792-9f00-64a6604924fb\") " pod="openshift-marketplace/community-operators-swx4c" Mar 12 13:25:58 crc kubenswrapper[4778]: I0312 13:25:58.420554 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19812411-eae6-4792-9f00-64a6604924fb-catalog-content\") pod \"community-operators-swx4c\" (UID: \"19812411-eae6-4792-9f00-64a6604924fb\") " pod="openshift-marketplace/community-operators-swx4c" Mar 12 13:25:58 crc kubenswrapper[4778]: I0312 13:25:58.421054 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19812411-eae6-4792-9f00-64a6604924fb-catalog-content\") pod \"community-operators-swx4c\" (UID: \"19812411-eae6-4792-9f00-64a6604924fb\") " pod="openshift-marketplace/community-operators-swx4c" Mar 12 13:25:58 crc kubenswrapper[4778]: I0312 13:25:58.421243 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19812411-eae6-4792-9f00-64a6604924fb-utilities\") pod \"community-operators-swx4c\" (UID: \"19812411-eae6-4792-9f00-64a6604924fb\") " pod="openshift-marketplace/community-operators-swx4c" Mar 12 13:25:58 crc kubenswrapper[4778]: I0312 13:25:58.449583 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n57qk\" (UniqueName: \"kubernetes.io/projected/19812411-eae6-4792-9f00-64a6604924fb-kube-api-access-n57qk\") pod \"community-operators-swx4c\" (UID: \"19812411-eae6-4792-9f00-64a6604924fb\") " pod="openshift-marketplace/community-operators-swx4c" Mar 12 13:25:58 crc kubenswrapper[4778]: I0312 13:25:58.558233 4778 patch_prober.go:28] interesting 
pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 13:25:58 crc kubenswrapper[4778]: I0312 13:25:58.558289 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 13:25:58 crc kubenswrapper[4778]: I0312 13:25:58.593041 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-swx4c" Mar 12 13:25:59 crc kubenswrapper[4778]: I0312 13:25:59.095757 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-swx4c"] Mar 12 13:25:59 crc kubenswrapper[4778]: I0312 13:25:59.432772 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-swx4c" event={"ID":"19812411-eae6-4792-9f00-64a6604924fb","Type":"ContainerStarted","Data":"4b8134073d8cbea56729606da6c6cdc39daa6d2a79458fae1697529ee25bbeab"} Mar 12 13:25:59 crc kubenswrapper[4778]: I0312 13:25:59.433106 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-swx4c" event={"ID":"19812411-eae6-4792-9f00-64a6604924fb","Type":"ContainerStarted","Data":"044996b50b943738de55c5f921e61dd6fdf3f095e6a1a8170a40c8422279d3e8"} Mar 12 13:26:00 crc kubenswrapper[4778]: I0312 13:26:00.130684 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555366-zt5bk"] Mar 12 13:26:00 crc kubenswrapper[4778]: I0312 13:26:00.131401 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555366-zt5bk" Mar 12 13:26:00 crc kubenswrapper[4778]: I0312 13:26:00.133046 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 13:26:00 crc kubenswrapper[4778]: I0312 13:26:00.133307 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 13:26:00 crc kubenswrapper[4778]: I0312 13:26:00.134443 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 13:26:00 crc kubenswrapper[4778]: I0312 13:26:00.138987 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp4bm\" (UniqueName: \"kubernetes.io/projected/d48c598c-314b-4dc6-af90-7772a2ca7f2d-kube-api-access-jp4bm\") pod \"auto-csr-approver-29555366-zt5bk\" (UID: \"d48c598c-314b-4dc6-af90-7772a2ca7f2d\") " pod="openshift-infra/auto-csr-approver-29555366-zt5bk" Mar 12 13:26:00 crc kubenswrapper[4778]: I0312 13:26:00.140617 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555366-zt5bk"] Mar 12 13:26:00 crc kubenswrapper[4778]: I0312 13:26:00.240112 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jp4bm\" (UniqueName: \"kubernetes.io/projected/d48c598c-314b-4dc6-af90-7772a2ca7f2d-kube-api-access-jp4bm\") pod \"auto-csr-approver-29555366-zt5bk\" (UID: \"d48c598c-314b-4dc6-af90-7772a2ca7f2d\") " pod="openshift-infra/auto-csr-approver-29555366-zt5bk" Mar 12 13:26:00 crc kubenswrapper[4778]: I0312 13:26:00.258461 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jp4bm\" (UniqueName: \"kubernetes.io/projected/d48c598c-314b-4dc6-af90-7772a2ca7f2d-kube-api-access-jp4bm\") pod \"auto-csr-approver-29555366-zt5bk\" (UID: \"d48c598c-314b-4dc6-af90-7772a2ca7f2d\") " 
pod="openshift-infra/auto-csr-approver-29555366-zt5bk" Mar 12 13:26:00 crc kubenswrapper[4778]: I0312 13:26:00.441040 4778 generic.go:334] "Generic (PLEG): container finished" podID="19812411-eae6-4792-9f00-64a6604924fb" containerID="4b8134073d8cbea56729606da6c6cdc39daa6d2a79458fae1697529ee25bbeab" exitCode=0 Mar 12 13:26:00 crc kubenswrapper[4778]: I0312 13:26:00.441096 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-swx4c" event={"ID":"19812411-eae6-4792-9f00-64a6604924fb","Type":"ContainerDied","Data":"4b8134073d8cbea56729606da6c6cdc39daa6d2a79458fae1697529ee25bbeab"} Mar 12 13:26:00 crc kubenswrapper[4778]: I0312 13:26:00.448298 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555366-zt5bk" Mar 12 13:26:00 crc kubenswrapper[4778]: I0312 13:26:00.683958 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555366-zt5bk"] Mar 12 13:26:01 crc kubenswrapper[4778]: I0312 13:26:01.447996 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555366-zt5bk" event={"ID":"d48c598c-314b-4dc6-af90-7772a2ca7f2d","Type":"ContainerStarted","Data":"58b147806021f77dda3b41c28fbcd5bfb2f19e4aba276ad811e3cc8c42657ee6"} Mar 12 13:26:03 crc kubenswrapper[4778]: I0312 13:26:03.464287 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-swx4c" event={"ID":"19812411-eae6-4792-9f00-64a6604924fb","Type":"ContainerStarted","Data":"c1bbf657f8b684a593ddef32aa60fde008a777e8a061554e91e9a2293d3a0832"} Mar 12 13:26:03 crc kubenswrapper[4778]: I0312 13:26:03.466816 4778 generic.go:334] "Generic (PLEG): container finished" podID="d48c598c-314b-4dc6-af90-7772a2ca7f2d" containerID="59816c72d24ee82ad1e212a580fdeb3c8cd671c1f79b421c31d995678ebec873" exitCode=0 Mar 12 13:26:03 crc kubenswrapper[4778]: I0312 13:26:03.466858 4778 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555366-zt5bk" event={"ID":"d48c598c-314b-4dc6-af90-7772a2ca7f2d","Type":"ContainerDied","Data":"59816c72d24ee82ad1e212a580fdeb3c8cd671c1f79b421c31d995678ebec873"} Mar 12 13:26:04 crc kubenswrapper[4778]: I0312 13:26:04.474671 4778 generic.go:334] "Generic (PLEG): container finished" podID="19812411-eae6-4792-9f00-64a6604924fb" containerID="c1bbf657f8b684a593ddef32aa60fde008a777e8a061554e91e9a2293d3a0832" exitCode=0 Mar 12 13:26:04 crc kubenswrapper[4778]: I0312 13:26:04.474964 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-swx4c" event={"ID":"19812411-eae6-4792-9f00-64a6604924fb","Type":"ContainerDied","Data":"c1bbf657f8b684a593ddef32aa60fde008a777e8a061554e91e9a2293d3a0832"} Mar 12 13:26:04 crc kubenswrapper[4778]: I0312 13:26:04.766034 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555366-zt5bk" Mar 12 13:26:04 crc kubenswrapper[4778]: I0312 13:26:04.814090 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jp4bm\" (UniqueName: \"kubernetes.io/projected/d48c598c-314b-4dc6-af90-7772a2ca7f2d-kube-api-access-jp4bm\") pod \"d48c598c-314b-4dc6-af90-7772a2ca7f2d\" (UID: \"d48c598c-314b-4dc6-af90-7772a2ca7f2d\") " Mar 12 13:26:04 crc kubenswrapper[4778]: I0312 13:26:04.824740 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d48c598c-314b-4dc6-af90-7772a2ca7f2d-kube-api-access-jp4bm" (OuterVolumeSpecName: "kube-api-access-jp4bm") pod "d48c598c-314b-4dc6-af90-7772a2ca7f2d" (UID: "d48c598c-314b-4dc6-af90-7772a2ca7f2d"). InnerVolumeSpecName "kube-api-access-jp4bm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:26:04 crc kubenswrapper[4778]: I0312 13:26:04.915645 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jp4bm\" (UniqueName: \"kubernetes.io/projected/d48c598c-314b-4dc6-af90-7772a2ca7f2d-kube-api-access-jp4bm\") on node \"crc\" DevicePath \"\"" Mar 12 13:26:05 crc kubenswrapper[4778]: I0312 13:26:05.483717 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555366-zt5bk" event={"ID":"d48c598c-314b-4dc6-af90-7772a2ca7f2d","Type":"ContainerDied","Data":"58b147806021f77dda3b41c28fbcd5bfb2f19e4aba276ad811e3cc8c42657ee6"} Mar 12 13:26:05 crc kubenswrapper[4778]: I0312 13:26:05.483761 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58b147806021f77dda3b41c28fbcd5bfb2f19e4aba276ad811e3cc8c42657ee6" Mar 12 13:26:05 crc kubenswrapper[4778]: I0312 13:26:05.483836 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555366-zt5bk" Mar 12 13:26:05 crc kubenswrapper[4778]: I0312 13:26:05.834204 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555360-vwflx"] Mar 12 13:26:05 crc kubenswrapper[4778]: I0312 13:26:05.838848 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555360-vwflx"] Mar 12 13:26:06 crc kubenswrapper[4778]: I0312 13:26:06.261448 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c617404-7840-495c-80da-593af33f77d6" path="/var/lib/kubelet/pods/4c617404-7840-495c-80da-593af33f77d6/volumes" Mar 12 13:26:06 crc kubenswrapper[4778]: I0312 13:26:06.492499 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-swx4c" event={"ID":"19812411-eae6-4792-9f00-64a6604924fb","Type":"ContainerStarted","Data":"6bbf817c355785c136024fac46b4f2a46cd308ddb02301de6e0f2fb81b7ff9b1"} 
Mar 12 13:26:06 crc kubenswrapper[4778]: I0312 13:26:06.517160 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-swx4c" podStartSLOduration=2.65187224 podStartE2EDuration="8.517137619s" podCreationTimestamp="2026-03-12 13:25:58 +0000 UTC" firstStartedPulling="2026-03-12 13:26:00.443721841 +0000 UTC m=+978.892417237" lastFinishedPulling="2026-03-12 13:26:06.3089871 +0000 UTC m=+984.757682616" observedRunningTime="2026-03-12 13:26:06.513611578 +0000 UTC m=+984.962306984" watchObservedRunningTime="2026-03-12 13:26:06.517137619 +0000 UTC m=+984.965833015" Mar 12 13:26:07 crc kubenswrapper[4778]: I0312 13:26:07.052905 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-54d5c4b6c7-gh4lx" Mar 12 13:26:07 crc kubenswrapper[4778]: I0312 13:26:07.495942 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5jsf9"] Mar 12 13:26:07 crc kubenswrapper[4778]: E0312 13:26:07.496703 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d48c598c-314b-4dc6-af90-7772a2ca7f2d" containerName="oc" Mar 12 13:26:07 crc kubenswrapper[4778]: I0312 13:26:07.496726 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="d48c598c-314b-4dc6-af90-7772a2ca7f2d" containerName="oc" Mar 12 13:26:07 crc kubenswrapper[4778]: I0312 13:26:07.496899 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="d48c598c-314b-4dc6-af90-7772a2ca7f2d" containerName="oc" Mar 12 13:26:07 crc kubenswrapper[4778]: I0312 13:26:07.497905 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5jsf9" Mar 12 13:26:07 crc kubenswrapper[4778]: I0312 13:26:07.506241 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5jsf9"] Mar 12 13:26:07 crc kubenswrapper[4778]: I0312 13:26:07.574371 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/631910e5-eefd-4ccf-adde-4609f7825e27-utilities\") pod \"certified-operators-5jsf9\" (UID: \"631910e5-eefd-4ccf-adde-4609f7825e27\") " pod="openshift-marketplace/certified-operators-5jsf9" Mar 12 13:26:07 crc kubenswrapper[4778]: I0312 13:26:07.574423 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/631910e5-eefd-4ccf-adde-4609f7825e27-catalog-content\") pod \"certified-operators-5jsf9\" (UID: \"631910e5-eefd-4ccf-adde-4609f7825e27\") " pod="openshift-marketplace/certified-operators-5jsf9" Mar 12 13:26:07 crc kubenswrapper[4778]: I0312 13:26:07.574459 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfw95\" (UniqueName: \"kubernetes.io/projected/631910e5-eefd-4ccf-adde-4609f7825e27-kube-api-access-qfw95\") pod \"certified-operators-5jsf9\" (UID: \"631910e5-eefd-4ccf-adde-4609f7825e27\") " pod="openshift-marketplace/certified-operators-5jsf9" Mar 12 13:26:07 crc kubenswrapper[4778]: I0312 13:26:07.675779 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfw95\" (UniqueName: \"kubernetes.io/projected/631910e5-eefd-4ccf-adde-4609f7825e27-kube-api-access-qfw95\") pod \"certified-operators-5jsf9\" (UID: \"631910e5-eefd-4ccf-adde-4609f7825e27\") " pod="openshift-marketplace/certified-operators-5jsf9" Mar 12 13:26:07 crc kubenswrapper[4778]: I0312 13:26:07.675884 4778 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/631910e5-eefd-4ccf-adde-4609f7825e27-utilities\") pod \"certified-operators-5jsf9\" (UID: \"631910e5-eefd-4ccf-adde-4609f7825e27\") " pod="openshift-marketplace/certified-operators-5jsf9" Mar 12 13:26:07 crc kubenswrapper[4778]: I0312 13:26:07.675904 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/631910e5-eefd-4ccf-adde-4609f7825e27-catalog-content\") pod \"certified-operators-5jsf9\" (UID: \"631910e5-eefd-4ccf-adde-4609f7825e27\") " pod="openshift-marketplace/certified-operators-5jsf9" Mar 12 13:26:07 crc kubenswrapper[4778]: I0312 13:26:07.676375 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/631910e5-eefd-4ccf-adde-4609f7825e27-catalog-content\") pod \"certified-operators-5jsf9\" (UID: \"631910e5-eefd-4ccf-adde-4609f7825e27\") " pod="openshift-marketplace/certified-operators-5jsf9" Mar 12 13:26:07 crc kubenswrapper[4778]: I0312 13:26:07.676797 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/631910e5-eefd-4ccf-adde-4609f7825e27-utilities\") pod \"certified-operators-5jsf9\" (UID: \"631910e5-eefd-4ccf-adde-4609f7825e27\") " pod="openshift-marketplace/certified-operators-5jsf9" Mar 12 13:26:07 crc kubenswrapper[4778]: I0312 13:26:07.699786 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfw95\" (UniqueName: \"kubernetes.io/projected/631910e5-eefd-4ccf-adde-4609f7825e27-kube-api-access-qfw95\") pod \"certified-operators-5jsf9\" (UID: \"631910e5-eefd-4ccf-adde-4609f7825e27\") " pod="openshift-marketplace/certified-operators-5jsf9" Mar 12 13:26:07 crc kubenswrapper[4778]: I0312 13:26:07.854654 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5jsf9" Mar 12 13:26:07 crc kubenswrapper[4778]: I0312 13:26:07.918745 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-x2n7f"] Mar 12 13:26:07 crc kubenswrapper[4778]: I0312 13:26:07.919513 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-x2n7f" Mar 12 13:26:07 crc kubenswrapper[4778]: I0312 13:26:07.923981 4778 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-r8pt2" Mar 12 13:26:07 crc kubenswrapper[4778]: I0312 13:26:07.924162 4778 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Mar 12 13:26:07 crc kubenswrapper[4778]: I0312 13:26:07.953957 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-zxv5p"] Mar 12 13:26:07 crc kubenswrapper[4778]: I0312 13:26:07.956788 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-zxv5p" Mar 12 13:26:07 crc kubenswrapper[4778]: I0312 13:26:07.959425 4778 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Mar 12 13:26:07 crc kubenswrapper[4778]: I0312 13:26:07.964008 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Mar 12 13:26:07 crc kubenswrapper[4778]: I0312 13:26:07.967624 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-x2n7f"] Mar 12 13:26:07 crc kubenswrapper[4778]: I0312 13:26:07.997740 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-k7nvk"] Mar 12 13:26:07 crc kubenswrapper[4778]: I0312 13:26:07.998810 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-k7nvk" Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.007099 4778 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.007284 4778 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.007462 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.007538 4778 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-d5gxv" Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.009394 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-7bb4cc7c98-mnjql"] Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.010730 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-mnjql" Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.012128 4778 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.055834 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-mnjql"] Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.056399 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/b5f035ed-2e64-4000-908f-6d0ecab1fe8d-frr-conf\") pod \"frr-k8s-zxv5p\" (UID: \"b5f035ed-2e64-4000-908f-6d0ecab1fe8d\") " pod="metallb-system/frr-k8s-zxv5p" Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.056426 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/b5f035ed-2e64-4000-908f-6d0ecab1fe8d-frr-sockets\") pod \"frr-k8s-zxv5p\" (UID: \"b5f035ed-2e64-4000-908f-6d0ecab1fe8d\") " pod="metallb-system/frr-k8s-zxv5p" Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.056455 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/b5f035ed-2e64-4000-908f-6d0ecab1fe8d-frr-startup\") pod \"frr-k8s-zxv5p\" (UID: \"b5f035ed-2e64-4000-908f-6d0ecab1fe8d\") " pod="metallb-system/frr-k8s-zxv5p" Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.056500 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/b5f035ed-2e64-4000-908f-6d0ecab1fe8d-metrics\") pod \"frr-k8s-zxv5p\" (UID: \"b5f035ed-2e64-4000-908f-6d0ecab1fe8d\") " pod="metallb-system/frr-k8s-zxv5p" Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.056520 4778 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/b5f035ed-2e64-4000-908f-6d0ecab1fe8d-reloader\") pod \"frr-k8s-zxv5p\" (UID: \"b5f035ed-2e64-4000-908f-6d0ecab1fe8d\") " pod="metallb-system/frr-k8s-zxv5p" Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.056544 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9ns2\" (UniqueName: \"kubernetes.io/projected/2f214887-d638-42fa-aa86-1518cfae600d-kube-api-access-q9ns2\") pod \"frr-k8s-webhook-server-bcc4b6f68-x2n7f\" (UID: \"2f214887-d638-42fa-aa86-1518cfae600d\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-x2n7f" Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.056565 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w8tl\" (UniqueName: \"kubernetes.io/projected/b5f035ed-2e64-4000-908f-6d0ecab1fe8d-kube-api-access-2w8tl\") pod \"frr-k8s-zxv5p\" (UID: \"b5f035ed-2e64-4000-908f-6d0ecab1fe8d\") " pod="metallb-system/frr-k8s-zxv5p" Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.056608 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b5f035ed-2e64-4000-908f-6d0ecab1fe8d-metrics-certs\") pod \"frr-k8s-zxv5p\" (UID: \"b5f035ed-2e64-4000-908f-6d0ecab1fe8d\") " pod="metallb-system/frr-k8s-zxv5p" Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.056634 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2f214887-d638-42fa-aa86-1518cfae600d-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-x2n7f\" (UID: \"2f214887-d638-42fa-aa86-1518cfae600d\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-x2n7f" Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.157425 
4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9ns2\" (UniqueName: \"kubernetes.io/projected/2f214887-d638-42fa-aa86-1518cfae600d-kube-api-access-q9ns2\") pod \"frr-k8s-webhook-server-bcc4b6f68-x2n7f\" (UID: \"2f214887-d638-42fa-aa86-1518cfae600d\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-x2n7f" Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.157488 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2w8tl\" (UniqueName: \"kubernetes.io/projected/b5f035ed-2e64-4000-908f-6d0ecab1fe8d-kube-api-access-2w8tl\") pod \"frr-k8s-zxv5p\" (UID: \"b5f035ed-2e64-4000-908f-6d0ecab1fe8d\") " pod="metallb-system/frr-k8s-zxv5p" Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.157521 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b5f035ed-2e64-4000-908f-6d0ecab1fe8d-metrics-certs\") pod \"frr-k8s-zxv5p\" (UID: \"b5f035ed-2e64-4000-908f-6d0ecab1fe8d\") " pod="metallb-system/frr-k8s-zxv5p" Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.157547 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f2e1d11e-8f27-498d-8d45-ac0e14a796fe-metrics-certs\") pod \"speaker-k7nvk\" (UID: \"f2e1d11e-8f27-498d-8d45-ac0e14a796fe\") " pod="metallb-system/speaker-k7nvk" Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.157582 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2f214887-d638-42fa-aa86-1518cfae600d-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-x2n7f\" (UID: \"2f214887-d638-42fa-aa86-1518cfae600d\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-x2n7f" Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.157620 4778 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f2e1d11e-8f27-498d-8d45-ac0e14a796fe-memberlist\") pod \"speaker-k7nvk\" (UID: \"f2e1d11e-8f27-498d-8d45-ac0e14a796fe\") " pod="metallb-system/speaker-k7nvk" Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.157647 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/b5f035ed-2e64-4000-908f-6d0ecab1fe8d-frr-conf\") pod \"frr-k8s-zxv5p\" (UID: \"b5f035ed-2e64-4000-908f-6d0ecab1fe8d\") " pod="metallb-system/frr-k8s-zxv5p" Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.157669 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/14351deb-3286-4464-8eac-6bb116a9ebce-metrics-certs\") pod \"controller-7bb4cc7c98-mnjql\" (UID: \"14351deb-3286-4464-8eac-6bb116a9ebce\") " pod="metallb-system/controller-7bb4cc7c98-mnjql" Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.157690 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/b5f035ed-2e64-4000-908f-6d0ecab1fe8d-frr-sockets\") pod \"frr-k8s-zxv5p\" (UID: \"b5f035ed-2e64-4000-908f-6d0ecab1fe8d\") " pod="metallb-system/frr-k8s-zxv5p" Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.157726 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/b5f035ed-2e64-4000-908f-6d0ecab1fe8d-frr-startup\") pod \"frr-k8s-zxv5p\" (UID: \"b5f035ed-2e64-4000-908f-6d0ecab1fe8d\") " pod="metallb-system/frr-k8s-zxv5p" Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.157760 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9tqj\" (UniqueName: 
\"kubernetes.io/projected/14351deb-3286-4464-8eac-6bb116a9ebce-kube-api-access-c9tqj\") pod \"controller-7bb4cc7c98-mnjql\" (UID: \"14351deb-3286-4464-8eac-6bb116a9ebce\") " pod="metallb-system/controller-7bb4cc7c98-mnjql" Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.157809 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzjv2\" (UniqueName: \"kubernetes.io/projected/f2e1d11e-8f27-498d-8d45-ac0e14a796fe-kube-api-access-kzjv2\") pod \"speaker-k7nvk\" (UID: \"f2e1d11e-8f27-498d-8d45-ac0e14a796fe\") " pod="metallb-system/speaker-k7nvk" Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.157837 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/b5f035ed-2e64-4000-908f-6d0ecab1fe8d-metrics\") pod \"frr-k8s-zxv5p\" (UID: \"b5f035ed-2e64-4000-908f-6d0ecab1fe8d\") " pod="metallb-system/frr-k8s-zxv5p" Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.157858 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/b5f035ed-2e64-4000-908f-6d0ecab1fe8d-reloader\") pod \"frr-k8s-zxv5p\" (UID: \"b5f035ed-2e64-4000-908f-6d0ecab1fe8d\") " pod="metallb-system/frr-k8s-zxv5p" Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.157882 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/f2e1d11e-8f27-498d-8d45-ac0e14a796fe-metallb-excludel2\") pod \"speaker-k7nvk\" (UID: \"f2e1d11e-8f27-498d-8d45-ac0e14a796fe\") " pod="metallb-system/speaker-k7nvk" Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.157903 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/14351deb-3286-4464-8eac-6bb116a9ebce-cert\") pod \"controller-7bb4cc7c98-mnjql\" 
(UID: \"14351deb-3286-4464-8eac-6bb116a9ebce\") " pod="metallb-system/controller-7bb4cc7c98-mnjql" Mar 12 13:26:08 crc kubenswrapper[4778]: E0312 13:26:08.158399 4778 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Mar 12 13:26:08 crc kubenswrapper[4778]: E0312 13:26:08.158445 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b5f035ed-2e64-4000-908f-6d0ecab1fe8d-metrics-certs podName:b5f035ed-2e64-4000-908f-6d0ecab1fe8d nodeName:}" failed. No retries permitted until 2026-03-12 13:26:08.658427656 +0000 UTC m=+987.107123052 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b5f035ed-2e64-4000-908f-6d0ecab1fe8d-metrics-certs") pod "frr-k8s-zxv5p" (UID: "b5f035ed-2e64-4000-908f-6d0ecab1fe8d") : secret "frr-k8s-certs-secret" not found Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.158942 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/b5f035ed-2e64-4000-908f-6d0ecab1fe8d-frr-sockets\") pod \"frr-k8s-zxv5p\" (UID: \"b5f035ed-2e64-4000-908f-6d0ecab1fe8d\") " pod="metallb-system/frr-k8s-zxv5p" Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.159735 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/b5f035ed-2e64-4000-908f-6d0ecab1fe8d-frr-startup\") pod \"frr-k8s-zxv5p\" (UID: \"b5f035ed-2e64-4000-908f-6d0ecab1fe8d\") " pod="metallb-system/frr-k8s-zxv5p" Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.160069 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/b5f035ed-2e64-4000-908f-6d0ecab1fe8d-metrics\") pod \"frr-k8s-zxv5p\" (UID: \"b5f035ed-2e64-4000-908f-6d0ecab1fe8d\") " pod="metallb-system/frr-k8s-zxv5p" Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 
13:26:08.160345 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/b5f035ed-2e64-4000-908f-6d0ecab1fe8d-reloader\") pod \"frr-k8s-zxv5p\" (UID: \"b5f035ed-2e64-4000-908f-6d0ecab1fe8d\") " pod="metallb-system/frr-k8s-zxv5p" Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.160530 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/b5f035ed-2e64-4000-908f-6d0ecab1fe8d-frr-conf\") pod \"frr-k8s-zxv5p\" (UID: \"b5f035ed-2e64-4000-908f-6d0ecab1fe8d\") " pod="metallb-system/frr-k8s-zxv5p" Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.166884 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2f214887-d638-42fa-aa86-1518cfae600d-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-x2n7f\" (UID: \"2f214887-d638-42fa-aa86-1518cfae600d\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-x2n7f" Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.177773 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9ns2\" (UniqueName: \"kubernetes.io/projected/2f214887-d638-42fa-aa86-1518cfae600d-kube-api-access-q9ns2\") pod \"frr-k8s-webhook-server-bcc4b6f68-x2n7f\" (UID: \"2f214887-d638-42fa-aa86-1518cfae600d\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-x2n7f" Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.189264 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w8tl\" (UniqueName: \"kubernetes.io/projected/b5f035ed-2e64-4000-908f-6d0ecab1fe8d-kube-api-access-2w8tl\") pod \"frr-k8s-zxv5p\" (UID: \"b5f035ed-2e64-4000-908f-6d0ecab1fe8d\") " pod="metallb-system/frr-k8s-zxv5p" Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.246633 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-x2n7f" Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.260984 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzjv2\" (UniqueName: \"kubernetes.io/projected/f2e1d11e-8f27-498d-8d45-ac0e14a796fe-kube-api-access-kzjv2\") pod \"speaker-k7nvk\" (UID: \"f2e1d11e-8f27-498d-8d45-ac0e14a796fe\") " pod="metallb-system/speaker-k7nvk" Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.261040 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/f2e1d11e-8f27-498d-8d45-ac0e14a796fe-metallb-excludel2\") pod \"speaker-k7nvk\" (UID: \"f2e1d11e-8f27-498d-8d45-ac0e14a796fe\") " pod="metallb-system/speaker-k7nvk" Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.261063 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/14351deb-3286-4464-8eac-6bb116a9ebce-cert\") pod \"controller-7bb4cc7c98-mnjql\" (UID: \"14351deb-3286-4464-8eac-6bb116a9ebce\") " pod="metallb-system/controller-7bb4cc7c98-mnjql" Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.261121 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f2e1d11e-8f27-498d-8d45-ac0e14a796fe-metrics-certs\") pod \"speaker-k7nvk\" (UID: \"f2e1d11e-8f27-498d-8d45-ac0e14a796fe\") " pod="metallb-system/speaker-k7nvk" Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.261150 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f2e1d11e-8f27-498d-8d45-ac0e14a796fe-memberlist\") pod \"speaker-k7nvk\" (UID: \"f2e1d11e-8f27-498d-8d45-ac0e14a796fe\") " pod="metallb-system/speaker-k7nvk" Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.261192 4778 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/14351deb-3286-4464-8eac-6bb116a9ebce-metrics-certs\") pod \"controller-7bb4cc7c98-mnjql\" (UID: \"14351deb-3286-4464-8eac-6bb116a9ebce\") " pod="metallb-system/controller-7bb4cc7c98-mnjql" Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.261248 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9tqj\" (UniqueName: \"kubernetes.io/projected/14351deb-3286-4464-8eac-6bb116a9ebce-kube-api-access-c9tqj\") pod \"controller-7bb4cc7c98-mnjql\" (UID: \"14351deb-3286-4464-8eac-6bb116a9ebce\") " pod="metallb-system/controller-7bb4cc7c98-mnjql" Mar 12 13:26:08 crc kubenswrapper[4778]: E0312 13:26:08.262824 4778 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 12 13:26:08 crc kubenswrapper[4778]: E0312 13:26:08.262834 4778 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Mar 12 13:26:08 crc kubenswrapper[4778]: E0312 13:26:08.262867 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2e1d11e-8f27-498d-8d45-ac0e14a796fe-memberlist podName:f2e1d11e-8f27-498d-8d45-ac0e14a796fe nodeName:}" failed. No retries permitted until 2026-03-12 13:26:08.762853785 +0000 UTC m=+987.211549181 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/f2e1d11e-8f27-498d-8d45-ac0e14a796fe-memberlist") pod "speaker-k7nvk" (UID: "f2e1d11e-8f27-498d-8d45-ac0e14a796fe") : secret "metallb-memberlist" not found Mar 12 13:26:08 crc kubenswrapper[4778]: E0312 13:26:08.262928 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2e1d11e-8f27-498d-8d45-ac0e14a796fe-metrics-certs podName:f2e1d11e-8f27-498d-8d45-ac0e14a796fe nodeName:}" failed. 
No retries permitted until 2026-03-12 13:26:08.762904937 +0000 UTC m=+987.211600403 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f2e1d11e-8f27-498d-8d45-ac0e14a796fe-metrics-certs") pod "speaker-k7nvk" (UID: "f2e1d11e-8f27-498d-8d45-ac0e14a796fe") : secret "speaker-certs-secret" not found Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.263995 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/f2e1d11e-8f27-498d-8d45-ac0e14a796fe-metallb-excludel2\") pod \"speaker-k7nvk\" (UID: \"f2e1d11e-8f27-498d-8d45-ac0e14a796fe\") " pod="metallb-system/speaker-k7nvk" Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.276711 4778 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.277649 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/14351deb-3286-4464-8eac-6bb116a9ebce-metrics-certs\") pod \"controller-7bb4cc7c98-mnjql\" (UID: \"14351deb-3286-4464-8eac-6bb116a9ebce\") " pod="metallb-system/controller-7bb4cc7c98-mnjql" Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.304708 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzjv2\" (UniqueName: \"kubernetes.io/projected/f2e1d11e-8f27-498d-8d45-ac0e14a796fe-kube-api-access-kzjv2\") pod \"speaker-k7nvk\" (UID: \"f2e1d11e-8f27-498d-8d45-ac0e14a796fe\") " pod="metallb-system/speaker-k7nvk" Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.306555 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/14351deb-3286-4464-8eac-6bb116a9ebce-cert\") pod \"controller-7bb4cc7c98-mnjql\" (UID: \"14351deb-3286-4464-8eac-6bb116a9ebce\") " 
pod="metallb-system/controller-7bb4cc7c98-mnjql" Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.311916 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9tqj\" (UniqueName: \"kubernetes.io/projected/14351deb-3286-4464-8eac-6bb116a9ebce-kube-api-access-c9tqj\") pod \"controller-7bb4cc7c98-mnjql\" (UID: \"14351deb-3286-4464-8eac-6bb116a9ebce\") " pod="metallb-system/controller-7bb4cc7c98-mnjql" Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.337512 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-mnjql" Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.380420 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5jsf9"] Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.594017 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-swx4c" Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.594064 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-swx4c" Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.668486 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b5f035ed-2e64-4000-908f-6d0ecab1fe8d-metrics-certs\") pod \"frr-k8s-zxv5p\" (UID: \"b5f035ed-2e64-4000-908f-6d0ecab1fe8d\") " pod="metallb-system/frr-k8s-zxv5p" Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.673423 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b5f035ed-2e64-4000-908f-6d0ecab1fe8d-metrics-certs\") pod \"frr-k8s-zxv5p\" (UID: \"b5f035ed-2e64-4000-908f-6d0ecab1fe8d\") " pod="metallb-system/frr-k8s-zxv5p" Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.679607 4778 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-5jsf9" event={"ID":"631910e5-eefd-4ccf-adde-4609f7825e27","Type":"ContainerStarted","Data":"62dfee05353fcfd0dfc997669714bab327d0be23b5ebdac66ccce3caa2514ce0"} Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.783732 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f2e1d11e-8f27-498d-8d45-ac0e14a796fe-metrics-certs\") pod \"speaker-k7nvk\" (UID: \"f2e1d11e-8f27-498d-8d45-ac0e14a796fe\") " pod="metallb-system/speaker-k7nvk" Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.783807 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f2e1d11e-8f27-498d-8d45-ac0e14a796fe-memberlist\") pod \"speaker-k7nvk\" (UID: \"f2e1d11e-8f27-498d-8d45-ac0e14a796fe\") " pod="metallb-system/speaker-k7nvk" Mar 12 13:26:08 crc kubenswrapper[4778]: E0312 13:26:08.783948 4778 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 12 13:26:08 crc kubenswrapper[4778]: E0312 13:26:08.784001 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2e1d11e-8f27-498d-8d45-ac0e14a796fe-memberlist podName:f2e1d11e-8f27-498d-8d45-ac0e14a796fe nodeName:}" failed. No retries permitted until 2026-03-12 13:26:09.783984914 +0000 UTC m=+988.232680310 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/f2e1d11e-8f27-498d-8d45-ac0e14a796fe-memberlist") pod "speaker-k7nvk" (UID: "f2e1d11e-8f27-498d-8d45-ac0e14a796fe") : secret "metallb-memberlist" not found Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.805915 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f2e1d11e-8f27-498d-8d45-ac0e14a796fe-metrics-certs\") pod \"speaker-k7nvk\" (UID: \"f2e1d11e-8f27-498d-8d45-ac0e14a796fe\") " pod="metallb-system/speaker-k7nvk" Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.835500 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-x2n7f"] Mar 12 13:26:08 crc kubenswrapper[4778]: I0312 13:26:08.883313 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-zxv5p" Mar 12 13:26:09 crc kubenswrapper[4778]: I0312 13:26:09.085658 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-mnjql"] Mar 12 13:26:09 crc kubenswrapper[4778]: I0312 13:26:09.690266 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-mnjql" event={"ID":"14351deb-3286-4464-8eac-6bb116a9ebce","Type":"ContainerStarted","Data":"208f514cd919cc1b600edb4d8ee8454e19627f040e41b48e238f7a46fdcddc04"} Mar 12 13:26:09 crc kubenswrapper[4778]: I0312 13:26:09.690331 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-mnjql" event={"ID":"14351deb-3286-4464-8eac-6bb116a9ebce","Type":"ContainerStarted","Data":"65381005f96f5204bdfdae7f78e7a3fa233be0b67d02a80bc373154ecb874d3d"} Mar 12 13:26:09 crc kubenswrapper[4778]: I0312 13:26:09.690344 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-mnjql" 
event={"ID":"14351deb-3286-4464-8eac-6bb116a9ebce","Type":"ContainerStarted","Data":"5032f44852cb1aa63127bd5fc1273012f6f299fadd0f65320f81ac5a2288de17"} Mar 12 13:26:09 crc kubenswrapper[4778]: I0312 13:26:09.690637 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-7bb4cc7c98-mnjql" Mar 12 13:26:09 crc kubenswrapper[4778]: I0312 13:26:09.694239 4778 generic.go:334] "Generic (PLEG): container finished" podID="631910e5-eefd-4ccf-adde-4609f7825e27" containerID="2379ded140d6beb0318ea43c0d651b20477999f6fc28a1039221cf9aa630c668" exitCode=0 Mar 12 13:26:09 crc kubenswrapper[4778]: I0312 13:26:09.694326 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5jsf9" event={"ID":"631910e5-eefd-4ccf-adde-4609f7825e27","Type":"ContainerDied","Data":"2379ded140d6beb0318ea43c0d651b20477999f6fc28a1039221cf9aa630c668"} Mar 12 13:26:09 crc kubenswrapper[4778]: I0312 13:26:09.696889 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-x2n7f" event={"ID":"2f214887-d638-42fa-aa86-1518cfae600d","Type":"ContainerStarted","Data":"fa24e6fcabd1931dbd69e0147a2ed76a5df3ab1f219dd4e2b4b1a4991f832bdc"} Mar 12 13:26:09 crc kubenswrapper[4778]: I0312 13:26:09.711757 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zxv5p" event={"ID":"b5f035ed-2e64-4000-908f-6d0ecab1fe8d","Type":"ContainerStarted","Data":"0e9d2b811f145ddb3e597854983500d583dee7155a4d2d7675423529527972d6"} Mar 12 13:26:09 crc kubenswrapper[4778]: I0312 13:26:09.719514 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-7bb4cc7c98-mnjql" podStartSLOduration=2.719491313 podStartE2EDuration="2.719491313s" podCreationTimestamp="2026-03-12 13:26:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 
13:26:09.717697192 +0000 UTC m=+988.166392598" watchObservedRunningTime="2026-03-12 13:26:09.719491313 +0000 UTC m=+988.168186709" Mar 12 13:26:09 crc kubenswrapper[4778]: I0312 13:26:09.748013 4778 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-swx4c" podUID="19812411-eae6-4792-9f00-64a6604924fb" containerName="registry-server" probeResult="failure" output=< Mar 12 13:26:09 crc kubenswrapper[4778]: timeout: failed to connect service ":50051" within 1s Mar 12 13:26:09 crc kubenswrapper[4778]: > Mar 12 13:26:09 crc kubenswrapper[4778]: I0312 13:26:09.799774 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f2e1d11e-8f27-498d-8d45-ac0e14a796fe-memberlist\") pod \"speaker-k7nvk\" (UID: \"f2e1d11e-8f27-498d-8d45-ac0e14a796fe\") " pod="metallb-system/speaker-k7nvk" Mar 12 13:26:09 crc kubenswrapper[4778]: I0312 13:26:09.813802 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f2e1d11e-8f27-498d-8d45-ac0e14a796fe-memberlist\") pod \"speaker-k7nvk\" (UID: \"f2e1d11e-8f27-498d-8d45-ac0e14a796fe\") " pod="metallb-system/speaker-k7nvk" Mar 12 13:26:09 crc kubenswrapper[4778]: I0312 13:26:09.824751 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-k7nvk" Mar 12 13:26:09 crc kubenswrapper[4778]: W0312 13:26:09.848306 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2e1d11e_8f27_498d_8d45_ac0e14a796fe.slice/crio-12888157619e57ff85ec07c59ba39825a38220466903e20336f50d93c86e2f8c WatchSource:0}: Error finding container 12888157619e57ff85ec07c59ba39825a38220466903e20336f50d93c86e2f8c: Status 404 returned error can't find the container with id 12888157619e57ff85ec07c59ba39825a38220466903e20336f50d93c86e2f8c Mar 12 13:26:10 crc kubenswrapper[4778]: I0312 13:26:10.724795 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-k7nvk" event={"ID":"f2e1d11e-8f27-498d-8d45-ac0e14a796fe","Type":"ContainerStarted","Data":"7ca9ae77fb3b5fe15347f655f81545506968ffe60f16f3351a6eae47c6693bd7"} Mar 12 13:26:10 crc kubenswrapper[4778]: I0312 13:26:10.725260 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-k7nvk" event={"ID":"f2e1d11e-8f27-498d-8d45-ac0e14a796fe","Type":"ContainerStarted","Data":"3826db80e29ade0a6f881d928d89599e164f8e6265d83b75f8c9c37cdf911faa"} Mar 12 13:26:10 crc kubenswrapper[4778]: I0312 13:26:10.725274 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-k7nvk" event={"ID":"f2e1d11e-8f27-498d-8d45-ac0e14a796fe","Type":"ContainerStarted","Data":"12888157619e57ff85ec07c59ba39825a38220466903e20336f50d93c86e2f8c"} Mar 12 13:26:10 crc kubenswrapper[4778]: I0312 13:26:10.725918 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-k7nvk" Mar 12 13:26:10 crc kubenswrapper[4778]: I0312 13:26:10.754911 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-k7nvk" podStartSLOduration=3.754896134 podStartE2EDuration="3.754896134s" podCreationTimestamp="2026-03-12 13:26:07 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:26:10.753993699 +0000 UTC m=+989.202689095" watchObservedRunningTime="2026-03-12 13:26:10.754896134 +0000 UTC m=+989.203591530" Mar 12 13:26:11 crc kubenswrapper[4778]: I0312 13:26:11.905824 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5jsf9" event={"ID":"631910e5-eefd-4ccf-adde-4609f7825e27","Type":"ContainerStarted","Data":"21b20ff1eb5f0382d62c5bf4557691c60f975b3c3036f939f288f19834d29b96"} Mar 12 13:26:13 crc kubenswrapper[4778]: E0312 13:26:13.478533 4778 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod631910e5_eefd_4ccf_adde_4609f7825e27.slice/crio-conmon-21b20ff1eb5f0382d62c5bf4557691c60f975b3c3036f939f288f19834d29b96.scope\": RecentStats: unable to find data in memory cache]" Mar 12 13:26:13 crc kubenswrapper[4778]: I0312 13:26:13.925740 4778 generic.go:334] "Generic (PLEG): container finished" podID="631910e5-eefd-4ccf-adde-4609f7825e27" containerID="21b20ff1eb5f0382d62c5bf4557691c60f975b3c3036f939f288f19834d29b96" exitCode=0 Mar 12 13:26:13 crc kubenswrapper[4778]: I0312 13:26:13.925787 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5jsf9" event={"ID":"631910e5-eefd-4ccf-adde-4609f7825e27","Type":"ContainerDied","Data":"21b20ff1eb5f0382d62c5bf4557691c60f975b3c3036f939f288f19834d29b96"} Mar 12 13:26:15 crc kubenswrapper[4778]: I0312 13:26:15.225796 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5jsf9" event={"ID":"631910e5-eefd-4ccf-adde-4609f7825e27","Type":"ContainerStarted","Data":"cbb6037b8cc080048e6578c32ace5294078d9b4d705e389bb202aad65ec4b135"} Mar 12 13:26:15 crc kubenswrapper[4778]: I0312 13:26:15.249440 4778 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5jsf9" podStartSLOduration=3.213368395 podStartE2EDuration="8.249406626s" podCreationTimestamp="2026-03-12 13:26:07 +0000 UTC" firstStartedPulling="2026-03-12 13:26:09.69588754 +0000 UTC m=+988.144582956" lastFinishedPulling="2026-03-12 13:26:14.731925801 +0000 UTC m=+993.180621187" observedRunningTime="2026-03-12 13:26:15.248567232 +0000 UTC m=+993.697262658" watchObservedRunningTime="2026-03-12 13:26:15.249406626 +0000 UTC m=+993.698102022" Mar 12 13:26:17 crc kubenswrapper[4778]: I0312 13:26:17.855781 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5jsf9" Mar 12 13:26:17 crc kubenswrapper[4778]: I0312 13:26:17.856691 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5jsf9" Mar 12 13:26:18 crc kubenswrapper[4778]: I0312 13:26:18.683719 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-swx4c" Mar 12 13:26:18 crc kubenswrapper[4778]: I0312 13:26:18.742038 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-swx4c" Mar 12 13:26:18 crc kubenswrapper[4778]: I0312 13:26:18.930865 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-swx4c"] Mar 12 13:26:19 crc kubenswrapper[4778]: I0312 13:26:19.012604 4778 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-5jsf9" podUID="631910e5-eefd-4ccf-adde-4609f7825e27" containerName="registry-server" probeResult="failure" output=< Mar 12 13:26:19 crc kubenswrapper[4778]: timeout: failed to connect service ":50051" within 1s Mar 12 13:26:19 crc kubenswrapper[4778]: > Mar 12 13:26:20 crc kubenswrapper[4778]: I0312 13:26:20.384237 4778 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-swx4c" podUID="19812411-eae6-4792-9f00-64a6604924fb" containerName="registry-server" containerID="cri-o://6bbf817c355785c136024fac46b4f2a46cd308ddb02301de6e0f2fb81b7ff9b1" gracePeriod=2 Mar 12 13:26:21 crc kubenswrapper[4778]: I0312 13:26:21.393094 4778 generic.go:334] "Generic (PLEG): container finished" podID="19812411-eae6-4792-9f00-64a6604924fb" containerID="6bbf817c355785c136024fac46b4f2a46cd308ddb02301de6e0f2fb81b7ff9b1" exitCode=0 Mar 12 13:26:21 crc kubenswrapper[4778]: I0312 13:26:21.393131 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-swx4c" event={"ID":"19812411-eae6-4792-9f00-64a6604924fb","Type":"ContainerDied","Data":"6bbf817c355785c136024fac46b4f2a46cd308ddb02301de6e0f2fb81b7ff9b1"} Mar 12 13:26:22 crc kubenswrapper[4778]: I0312 13:26:22.799296 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-swx4c" Mar 12 13:26:22 crc kubenswrapper[4778]: I0312 13:26:22.816866 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19812411-eae6-4792-9f00-64a6604924fb-catalog-content\") pod \"19812411-eae6-4792-9f00-64a6604924fb\" (UID: \"19812411-eae6-4792-9f00-64a6604924fb\") " Mar 12 13:26:22 crc kubenswrapper[4778]: I0312 13:26:22.816973 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19812411-eae6-4792-9f00-64a6604924fb-utilities\") pod \"19812411-eae6-4792-9f00-64a6604924fb\" (UID: \"19812411-eae6-4792-9f00-64a6604924fb\") " Mar 12 13:26:22 crc kubenswrapper[4778]: I0312 13:26:22.817007 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n57qk\" (UniqueName: 
\"kubernetes.io/projected/19812411-eae6-4792-9f00-64a6604924fb-kube-api-access-n57qk\") pod \"19812411-eae6-4792-9f00-64a6604924fb\" (UID: \"19812411-eae6-4792-9f00-64a6604924fb\") " Mar 12 13:26:22 crc kubenswrapper[4778]: I0312 13:26:22.821334 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19812411-eae6-4792-9f00-64a6604924fb-utilities" (OuterVolumeSpecName: "utilities") pod "19812411-eae6-4792-9f00-64a6604924fb" (UID: "19812411-eae6-4792-9f00-64a6604924fb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:26:22 crc kubenswrapper[4778]: I0312 13:26:22.831296 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19812411-eae6-4792-9f00-64a6604924fb-kube-api-access-n57qk" (OuterVolumeSpecName: "kube-api-access-n57qk") pod "19812411-eae6-4792-9f00-64a6604924fb" (UID: "19812411-eae6-4792-9f00-64a6604924fb"). InnerVolumeSpecName "kube-api-access-n57qk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:26:22 crc kubenswrapper[4778]: I0312 13:26:22.890744 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19812411-eae6-4792-9f00-64a6604924fb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "19812411-eae6-4792-9f00-64a6604924fb" (UID: "19812411-eae6-4792-9f00-64a6604924fb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:26:22 crc kubenswrapper[4778]: I0312 13:26:22.920159 4778 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19812411-eae6-4792-9f00-64a6604924fb-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 13:26:22 crc kubenswrapper[4778]: I0312 13:26:22.920822 4778 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19812411-eae6-4792-9f00-64a6604924fb-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 13:26:22 crc kubenswrapper[4778]: I0312 13:26:22.920834 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n57qk\" (UniqueName: \"kubernetes.io/projected/19812411-eae6-4792-9f00-64a6604924fb-kube-api-access-n57qk\") on node \"crc\" DevicePath \"\"" Mar 12 13:26:23 crc kubenswrapper[4778]: I0312 13:26:23.407762 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-x2n7f" event={"ID":"2f214887-d638-42fa-aa86-1518cfae600d","Type":"ContainerStarted","Data":"2c4c6771d80567c3899e80a38ef6f909606c5c92cc88ac51b59fc3e170d8c825"} Mar 12 13:26:23 crc kubenswrapper[4778]: I0312 13:26:23.408172 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-x2n7f" Mar 12 13:26:23 crc kubenswrapper[4778]: I0312 13:26:23.409242 4778 generic.go:334] "Generic (PLEG): container finished" podID="b5f035ed-2e64-4000-908f-6d0ecab1fe8d" containerID="4c6923df2cdecc82edcf2736d86f11607a489acc252c64dfbe59ae1943318eb0" exitCode=0 Mar 12 13:26:23 crc kubenswrapper[4778]: I0312 13:26:23.409317 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zxv5p" event={"ID":"b5f035ed-2e64-4000-908f-6d0ecab1fe8d","Type":"ContainerDied","Data":"4c6923df2cdecc82edcf2736d86f11607a489acc252c64dfbe59ae1943318eb0"} Mar 12 13:26:23 crc kubenswrapper[4778]: I0312 
13:26:23.411592 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-swx4c" event={"ID":"19812411-eae6-4792-9f00-64a6604924fb","Type":"ContainerDied","Data":"044996b50b943738de55c5f921e61dd6fdf3f095e6a1a8170a40c8422279d3e8"} Mar 12 13:26:23 crc kubenswrapper[4778]: I0312 13:26:23.411649 4778 scope.go:117] "RemoveContainer" containerID="6bbf817c355785c136024fac46b4f2a46cd308ddb02301de6e0f2fb81b7ff9b1" Mar 12 13:26:23 crc kubenswrapper[4778]: I0312 13:26:23.411661 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-swx4c" Mar 12 13:26:23 crc kubenswrapper[4778]: I0312 13:26:23.431919 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-x2n7f" podStartSLOduration=2.4764062989999998 podStartE2EDuration="16.431898598s" podCreationTimestamp="2026-03-12 13:26:07 +0000 UTC" firstStartedPulling="2026-03-12 13:26:08.889473283 +0000 UTC m=+987.338168679" lastFinishedPulling="2026-03-12 13:26:22.844965582 +0000 UTC m=+1001.293660978" observedRunningTime="2026-03-12 13:26:23.430307742 +0000 UTC m=+1001.879003158" watchObservedRunningTime="2026-03-12 13:26:23.431898598 +0000 UTC m=+1001.880594004" Mar 12 13:26:23 crc kubenswrapper[4778]: I0312 13:26:23.488301 4778 scope.go:117] "RemoveContainer" containerID="c1bbf657f8b684a593ddef32aa60fde008a777e8a061554e91e9a2293d3a0832" Mar 12 13:26:23 crc kubenswrapper[4778]: I0312 13:26:23.527338 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-swx4c"] Mar 12 13:26:23 crc kubenswrapper[4778]: I0312 13:26:23.528067 4778 scope.go:117] "RemoveContainer" containerID="4b8134073d8cbea56729606da6c6cdc39daa6d2a79458fae1697529ee25bbeab" Mar 12 13:26:23 crc kubenswrapper[4778]: I0312 13:26:23.533277 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/community-operators-swx4c"] Mar 12 13:26:23 crc kubenswrapper[4778]: E0312 13:26:23.650651 4778 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19812411_eae6_4792_9f00_64a6604924fb.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19812411_eae6_4792_9f00_64a6604924fb.slice/crio-044996b50b943738de55c5f921e61dd6fdf3f095e6a1a8170a40c8422279d3e8\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5f035ed_2e64_4000_908f_6d0ecab1fe8d.slice/crio-conmon-5f944b02e5c4fd2938ea1f2eed17a7cf148a1ac57c7cf02afb4110bb86aa032a.scope\": RecentStats: unable to find data in memory cache]" Mar 12 13:26:24 crc kubenswrapper[4778]: I0312 13:26:24.268102 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19812411-eae6-4792-9f00-64a6604924fb" path="/var/lib/kubelet/pods/19812411-eae6-4792-9f00-64a6604924fb/volumes" Mar 12 13:26:24 crc kubenswrapper[4778]: I0312 13:26:24.423508 4778 generic.go:334] "Generic (PLEG): container finished" podID="b5f035ed-2e64-4000-908f-6d0ecab1fe8d" containerID="5f944b02e5c4fd2938ea1f2eed17a7cf148a1ac57c7cf02afb4110bb86aa032a" exitCode=0 Mar 12 13:26:24 crc kubenswrapper[4778]: I0312 13:26:24.423560 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zxv5p" event={"ID":"b5f035ed-2e64-4000-908f-6d0ecab1fe8d","Type":"ContainerDied","Data":"5f944b02e5c4fd2938ea1f2eed17a7cf148a1ac57c7cf02afb4110bb86aa032a"} Mar 12 13:26:25 crc kubenswrapper[4778]: I0312 13:26:25.433769 4778 generic.go:334] "Generic (PLEG): container finished" podID="b5f035ed-2e64-4000-908f-6d0ecab1fe8d" containerID="d4b23df880c826685e70ef1ea6f346e4940a8db232ba57d0a989bdee1e47c504" exitCode=0 Mar 12 13:26:25 crc kubenswrapper[4778]: I0312 13:26:25.433871 
4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zxv5p" event={"ID":"b5f035ed-2e64-4000-908f-6d0ecab1fe8d","Type":"ContainerDied","Data":"d4b23df880c826685e70ef1ea6f346e4940a8db232ba57d0a989bdee1e47c504"} Mar 12 13:26:26 crc kubenswrapper[4778]: I0312 13:26:26.443903 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zxv5p" event={"ID":"b5f035ed-2e64-4000-908f-6d0ecab1fe8d","Type":"ContainerStarted","Data":"69d841216da693033b93967bd9f96f5622fe3a6eaaca22f4423b12fc72860396"} Mar 12 13:26:26 crc kubenswrapper[4778]: I0312 13:26:26.443944 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zxv5p" event={"ID":"b5f035ed-2e64-4000-908f-6d0ecab1fe8d","Type":"ContainerStarted","Data":"b8c78d004bac3227097fce8d27ac831d2de1d86df94ffb03d839bc1cf5623403"} Mar 12 13:26:26 crc kubenswrapper[4778]: I0312 13:26:26.443954 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zxv5p" event={"ID":"b5f035ed-2e64-4000-908f-6d0ecab1fe8d","Type":"ContainerStarted","Data":"7cb1446a7230327ccf23a0d317d5a79825f8d6e0075315bff24964579f60572e"} Mar 12 13:26:26 crc kubenswrapper[4778]: I0312 13:26:26.443964 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zxv5p" event={"ID":"b5f035ed-2e64-4000-908f-6d0ecab1fe8d","Type":"ContainerStarted","Data":"36f2dfd7ce0a51b4cd99ee0a1580c67be8228e9de0375a5e2096190edb364201"} Mar 12 13:26:26 crc kubenswrapper[4778]: I0312 13:26:26.443972 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zxv5p" event={"ID":"b5f035ed-2e64-4000-908f-6d0ecab1fe8d","Type":"ContainerStarted","Data":"61a03e1c5b6356ee74a455f3f4a569a67017490c8228f33d9eecbb641c5896fe"} Mar 12 13:26:27 crc kubenswrapper[4778]: I0312 13:26:27.454482 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zxv5p" 
event={"ID":"b5f035ed-2e64-4000-908f-6d0ecab1fe8d","Type":"ContainerStarted","Data":"8625916bc5f9b7f34622f5f38a1d5e954161f0390eea62087f4ed0fea0b61d44"} Mar 12 13:26:27 crc kubenswrapper[4778]: I0312 13:26:27.455116 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-zxv5p" Mar 12 13:26:27 crc kubenswrapper[4778]: I0312 13:26:27.480247 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-zxv5p" podStartSLOduration=6.738395297 podStartE2EDuration="20.480228699s" podCreationTimestamp="2026-03-12 13:26:07 +0000 UTC" firstStartedPulling="2026-03-12 13:26:09.069637534 +0000 UTC m=+987.518332930" lastFinishedPulling="2026-03-12 13:26:22.811470936 +0000 UTC m=+1001.260166332" observedRunningTime="2026-03-12 13:26:27.476559345 +0000 UTC m=+1005.925254751" watchObservedRunningTime="2026-03-12 13:26:27.480228699 +0000 UTC m=+1005.928924105" Mar 12 13:26:27 crc kubenswrapper[4778]: I0312 13:26:27.919098 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5jsf9" Mar 12 13:26:27 crc kubenswrapper[4778]: I0312 13:26:27.968166 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5jsf9" Mar 12 13:26:28 crc kubenswrapper[4778]: I0312 13:26:28.154205 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5jsf9"] Mar 12 13:26:28 crc kubenswrapper[4778]: I0312 13:26:28.342123 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-7bb4cc7c98-mnjql" Mar 12 13:26:28 crc kubenswrapper[4778]: I0312 13:26:28.558313 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Mar 12 13:26:28 crc kubenswrapper[4778]: I0312 13:26:28.558406 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 13:26:28 crc kubenswrapper[4778]: I0312 13:26:28.558467 4778 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" Mar 12 13:26:28 crc kubenswrapper[4778]: I0312 13:26:28.559363 4778 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b65e287d42eea6146877a35b0789c26ac0ef9f5d251a760b59f08b3fef055d65"} pod="openshift-machine-config-operator/machine-config-daemon-2qx88" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 13:26:28 crc kubenswrapper[4778]: I0312 13:26:28.559455 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" containerID="cri-o://b65e287d42eea6146877a35b0789c26ac0ef9f5d251a760b59f08b3fef055d65" gracePeriod=600 Mar 12 13:26:28 crc kubenswrapper[4778]: I0312 13:26:28.884223 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-zxv5p" Mar 12 13:26:28 crc kubenswrapper[4778]: I0312 13:26:28.928404 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-zxv5p" Mar 12 13:26:29 crc kubenswrapper[4778]: I0312 13:26:29.467679 4778 generic.go:334] "Generic (PLEG): container finished" podID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" 
containerID="b65e287d42eea6146877a35b0789c26ac0ef9f5d251a760b59f08b3fef055d65" exitCode=0 Mar 12 13:26:29 crc kubenswrapper[4778]: I0312 13:26:29.467765 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" event={"ID":"24438fc6-dab0-4a9e-8b97-2532da76d9cd","Type":"ContainerDied","Data":"b65e287d42eea6146877a35b0789c26ac0ef9f5d251a760b59f08b3fef055d65"} Mar 12 13:26:29 crc kubenswrapper[4778]: I0312 13:26:29.467810 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" event={"ID":"24438fc6-dab0-4a9e-8b97-2532da76d9cd","Type":"ContainerStarted","Data":"3b4b372cac8f288fc2585670d5ab7c00c41331f173130d39b164aa74e4e3e398"} Mar 12 13:26:29 crc kubenswrapper[4778]: I0312 13:26:29.467826 4778 scope.go:117] "RemoveContainer" containerID="dfcc37339849724c4aacca3262255dd43897a2284c2172380a90cc97f52e3a46" Mar 12 13:26:29 crc kubenswrapper[4778]: I0312 13:26:29.468059 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5jsf9" podUID="631910e5-eefd-4ccf-adde-4609f7825e27" containerName="registry-server" containerID="cri-o://cbb6037b8cc080048e6578c32ace5294078d9b4d705e389bb202aad65ec4b135" gracePeriod=2 Mar 12 13:26:29 crc kubenswrapper[4778]: I0312 13:26:29.828626 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-k7nvk" Mar 12 13:26:30 crc kubenswrapper[4778]: I0312 13:26:30.356378 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5jsf9" Mar 12 13:26:30 crc kubenswrapper[4778]: I0312 13:26:30.436620 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/631910e5-eefd-4ccf-adde-4609f7825e27-utilities\") pod \"631910e5-eefd-4ccf-adde-4609f7825e27\" (UID: \"631910e5-eefd-4ccf-adde-4609f7825e27\") " Mar 12 13:26:30 crc kubenswrapper[4778]: I0312 13:26:30.437142 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfw95\" (UniqueName: \"kubernetes.io/projected/631910e5-eefd-4ccf-adde-4609f7825e27-kube-api-access-qfw95\") pod \"631910e5-eefd-4ccf-adde-4609f7825e27\" (UID: \"631910e5-eefd-4ccf-adde-4609f7825e27\") " Mar 12 13:26:30 crc kubenswrapper[4778]: I0312 13:26:30.437191 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/631910e5-eefd-4ccf-adde-4609f7825e27-catalog-content\") pod \"631910e5-eefd-4ccf-adde-4609f7825e27\" (UID: \"631910e5-eefd-4ccf-adde-4609f7825e27\") " Mar 12 13:26:30 crc kubenswrapper[4778]: I0312 13:26:30.437953 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/631910e5-eefd-4ccf-adde-4609f7825e27-utilities" (OuterVolumeSpecName: "utilities") pod "631910e5-eefd-4ccf-adde-4609f7825e27" (UID: "631910e5-eefd-4ccf-adde-4609f7825e27"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:26:30 crc kubenswrapper[4778]: I0312 13:26:30.444480 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/631910e5-eefd-4ccf-adde-4609f7825e27-kube-api-access-qfw95" (OuterVolumeSpecName: "kube-api-access-qfw95") pod "631910e5-eefd-4ccf-adde-4609f7825e27" (UID: "631910e5-eefd-4ccf-adde-4609f7825e27"). InnerVolumeSpecName "kube-api-access-qfw95". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:26:30 crc kubenswrapper[4778]: I0312 13:26:30.476260 4778 generic.go:334] "Generic (PLEG): container finished" podID="631910e5-eefd-4ccf-adde-4609f7825e27" containerID="cbb6037b8cc080048e6578c32ace5294078d9b4d705e389bb202aad65ec4b135" exitCode=0 Mar 12 13:26:30 crc kubenswrapper[4778]: I0312 13:26:30.476325 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5jsf9" Mar 12 13:26:30 crc kubenswrapper[4778]: I0312 13:26:30.476342 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5jsf9" event={"ID":"631910e5-eefd-4ccf-adde-4609f7825e27","Type":"ContainerDied","Data":"cbb6037b8cc080048e6578c32ace5294078d9b4d705e389bb202aad65ec4b135"} Mar 12 13:26:30 crc kubenswrapper[4778]: I0312 13:26:30.476380 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5jsf9" event={"ID":"631910e5-eefd-4ccf-adde-4609f7825e27","Type":"ContainerDied","Data":"62dfee05353fcfd0dfc997669714bab327d0be23b5ebdac66ccce3caa2514ce0"} Mar 12 13:26:30 crc kubenswrapper[4778]: I0312 13:26:30.476402 4778 scope.go:117] "RemoveContainer" containerID="cbb6037b8cc080048e6578c32ace5294078d9b4d705e389bb202aad65ec4b135" Mar 12 13:26:30 crc kubenswrapper[4778]: I0312 13:26:30.497952 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/631910e5-eefd-4ccf-adde-4609f7825e27-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "631910e5-eefd-4ccf-adde-4609f7825e27" (UID: "631910e5-eefd-4ccf-adde-4609f7825e27"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:26:30 crc kubenswrapper[4778]: I0312 13:26:30.502181 4778 scope.go:117] "RemoveContainer" containerID="21b20ff1eb5f0382d62c5bf4557691c60f975b3c3036f939f288f19834d29b96" Mar 12 13:26:30 crc kubenswrapper[4778]: I0312 13:26:30.517033 4778 scope.go:117] "RemoveContainer" containerID="2379ded140d6beb0318ea43c0d651b20477999f6fc28a1039221cf9aa630c668" Mar 12 13:26:30 crc kubenswrapper[4778]: I0312 13:26:30.538261 4778 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/631910e5-eefd-4ccf-adde-4609f7825e27-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 13:26:30 crc kubenswrapper[4778]: I0312 13:26:30.538287 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfw95\" (UniqueName: \"kubernetes.io/projected/631910e5-eefd-4ccf-adde-4609f7825e27-kube-api-access-qfw95\") on node \"crc\" DevicePath \"\"" Mar 12 13:26:30 crc kubenswrapper[4778]: I0312 13:26:30.538298 4778 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/631910e5-eefd-4ccf-adde-4609f7825e27-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 13:26:30 crc kubenswrapper[4778]: I0312 13:26:30.538694 4778 scope.go:117] "RemoveContainer" containerID="cbb6037b8cc080048e6578c32ace5294078d9b4d705e389bb202aad65ec4b135" Mar 12 13:26:30 crc kubenswrapper[4778]: E0312 13:26:30.539723 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbb6037b8cc080048e6578c32ace5294078d9b4d705e389bb202aad65ec4b135\": container with ID starting with cbb6037b8cc080048e6578c32ace5294078d9b4d705e389bb202aad65ec4b135 not found: ID does not exist" containerID="cbb6037b8cc080048e6578c32ace5294078d9b4d705e389bb202aad65ec4b135" Mar 12 13:26:30 crc kubenswrapper[4778]: I0312 13:26:30.539755 4778 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"cbb6037b8cc080048e6578c32ace5294078d9b4d705e389bb202aad65ec4b135"} err="failed to get container status \"cbb6037b8cc080048e6578c32ace5294078d9b4d705e389bb202aad65ec4b135\": rpc error: code = NotFound desc = could not find container \"cbb6037b8cc080048e6578c32ace5294078d9b4d705e389bb202aad65ec4b135\": container with ID starting with cbb6037b8cc080048e6578c32ace5294078d9b4d705e389bb202aad65ec4b135 not found: ID does not exist" Mar 12 13:26:30 crc kubenswrapper[4778]: I0312 13:26:30.539778 4778 scope.go:117] "RemoveContainer" containerID="21b20ff1eb5f0382d62c5bf4557691c60f975b3c3036f939f288f19834d29b96" Mar 12 13:26:30 crc kubenswrapper[4778]: E0312 13:26:30.540360 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21b20ff1eb5f0382d62c5bf4557691c60f975b3c3036f939f288f19834d29b96\": container with ID starting with 21b20ff1eb5f0382d62c5bf4557691c60f975b3c3036f939f288f19834d29b96 not found: ID does not exist" containerID="21b20ff1eb5f0382d62c5bf4557691c60f975b3c3036f939f288f19834d29b96" Mar 12 13:26:30 crc kubenswrapper[4778]: I0312 13:26:30.540433 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21b20ff1eb5f0382d62c5bf4557691c60f975b3c3036f939f288f19834d29b96"} err="failed to get container status \"21b20ff1eb5f0382d62c5bf4557691c60f975b3c3036f939f288f19834d29b96\": rpc error: code = NotFound desc = could not find container \"21b20ff1eb5f0382d62c5bf4557691c60f975b3c3036f939f288f19834d29b96\": container with ID starting with 21b20ff1eb5f0382d62c5bf4557691c60f975b3c3036f939f288f19834d29b96 not found: ID does not exist" Mar 12 13:26:30 crc kubenswrapper[4778]: I0312 13:26:30.540536 4778 scope.go:117] "RemoveContainer" containerID="2379ded140d6beb0318ea43c0d651b20477999f6fc28a1039221cf9aa630c668" Mar 12 13:26:30 crc kubenswrapper[4778]: E0312 13:26:30.542713 4778 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"2379ded140d6beb0318ea43c0d651b20477999f6fc28a1039221cf9aa630c668\": container with ID starting with 2379ded140d6beb0318ea43c0d651b20477999f6fc28a1039221cf9aa630c668 not found: ID does not exist" containerID="2379ded140d6beb0318ea43c0d651b20477999f6fc28a1039221cf9aa630c668" Mar 12 13:26:30 crc kubenswrapper[4778]: I0312 13:26:30.542747 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2379ded140d6beb0318ea43c0d651b20477999f6fc28a1039221cf9aa630c668"} err="failed to get container status \"2379ded140d6beb0318ea43c0d651b20477999f6fc28a1039221cf9aa630c668\": rpc error: code = NotFound desc = could not find container \"2379ded140d6beb0318ea43c0d651b20477999f6fc28a1039221cf9aa630c668\": container with ID starting with 2379ded140d6beb0318ea43c0d651b20477999f6fc28a1039221cf9aa630c668 not found: ID does not exist" Mar 12 13:26:30 crc kubenswrapper[4778]: I0312 13:26:30.809363 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5jsf9"] Mar 12 13:26:30 crc kubenswrapper[4778]: I0312 13:26:30.816229 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5jsf9"] Mar 12 13:26:32 crc kubenswrapper[4778]: I0312 13:26:32.260535 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="631910e5-eefd-4ccf-adde-4609f7825e27" path="/var/lib/kubelet/pods/631910e5-eefd-4ccf-adde-4609f7825e27/volumes" Mar 12 13:26:34 crc kubenswrapper[4778]: I0312 13:26:34.561088 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-tbh2n"] Mar 12 13:26:34 crc kubenswrapper[4778]: E0312 13:26:34.561559 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="631910e5-eefd-4ccf-adde-4609f7825e27" containerName="extract-utilities" Mar 12 13:26:34 crc kubenswrapper[4778]: I0312 13:26:34.561570 4778 
state_mem.go:107] "Deleted CPUSet assignment" podUID="631910e5-eefd-4ccf-adde-4609f7825e27" containerName="extract-utilities" Mar 12 13:26:34 crc kubenswrapper[4778]: E0312 13:26:34.561579 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19812411-eae6-4792-9f00-64a6604924fb" containerName="registry-server" Mar 12 13:26:34 crc kubenswrapper[4778]: I0312 13:26:34.561586 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="19812411-eae6-4792-9f00-64a6604924fb" containerName="registry-server" Mar 12 13:26:34 crc kubenswrapper[4778]: E0312 13:26:34.561597 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="631910e5-eefd-4ccf-adde-4609f7825e27" containerName="extract-content" Mar 12 13:26:34 crc kubenswrapper[4778]: I0312 13:26:34.561603 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="631910e5-eefd-4ccf-adde-4609f7825e27" containerName="extract-content" Mar 12 13:26:34 crc kubenswrapper[4778]: E0312 13:26:34.561614 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19812411-eae6-4792-9f00-64a6604924fb" containerName="extract-content" Mar 12 13:26:34 crc kubenswrapper[4778]: I0312 13:26:34.561619 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="19812411-eae6-4792-9f00-64a6604924fb" containerName="extract-content" Mar 12 13:26:34 crc kubenswrapper[4778]: E0312 13:26:34.561630 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19812411-eae6-4792-9f00-64a6604924fb" containerName="extract-utilities" Mar 12 13:26:34 crc kubenswrapper[4778]: I0312 13:26:34.561635 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="19812411-eae6-4792-9f00-64a6604924fb" containerName="extract-utilities" Mar 12 13:26:34 crc kubenswrapper[4778]: E0312 13:26:34.561642 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="631910e5-eefd-4ccf-adde-4609f7825e27" containerName="registry-server" Mar 12 13:26:34 crc kubenswrapper[4778]: I0312 13:26:34.561647 4778 
state_mem.go:107] "Deleted CPUSet assignment" podUID="631910e5-eefd-4ccf-adde-4609f7825e27" containerName="registry-server" Mar 12 13:26:34 crc kubenswrapper[4778]: I0312 13:26:34.561756 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="19812411-eae6-4792-9f00-64a6604924fb" containerName="registry-server" Mar 12 13:26:34 crc kubenswrapper[4778]: I0312 13:26:34.561774 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="631910e5-eefd-4ccf-adde-4609f7825e27" containerName="registry-server" Mar 12 13:26:34 crc kubenswrapper[4778]: I0312 13:26:34.562118 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-tbh2n" Mar 12 13:26:34 crc kubenswrapper[4778]: I0312 13:26:34.563963 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-dl4ht" Mar 12 13:26:34 crc kubenswrapper[4778]: I0312 13:26:34.564618 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Mar 12 13:26:34 crc kubenswrapper[4778]: I0312 13:26:34.565242 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Mar 12 13:26:34 crc kubenswrapper[4778]: I0312 13:26:34.575963 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-tbh2n"] Mar 12 13:26:34 crc kubenswrapper[4778]: I0312 13:26:34.583164 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvdhc\" (UniqueName: \"kubernetes.io/projected/60754672-3d3c-4763-8078-356d0a0167ac-kube-api-access-gvdhc\") pod \"openstack-operator-index-tbh2n\" (UID: \"60754672-3d3c-4763-8078-356d0a0167ac\") " pod="openstack-operators/openstack-operator-index-tbh2n" Mar 12 13:26:34 crc kubenswrapper[4778]: I0312 13:26:34.684493 4778 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-gvdhc\" (UniqueName: \"kubernetes.io/projected/60754672-3d3c-4763-8078-356d0a0167ac-kube-api-access-gvdhc\") pod \"openstack-operator-index-tbh2n\" (UID: \"60754672-3d3c-4763-8078-356d0a0167ac\") " pod="openstack-operators/openstack-operator-index-tbh2n" Mar 12 13:26:34 crc kubenswrapper[4778]: I0312 13:26:34.708739 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvdhc\" (UniqueName: \"kubernetes.io/projected/60754672-3d3c-4763-8078-356d0a0167ac-kube-api-access-gvdhc\") pod \"openstack-operator-index-tbh2n\" (UID: \"60754672-3d3c-4763-8078-356d0a0167ac\") " pod="openstack-operators/openstack-operator-index-tbh2n" Mar 12 13:26:34 crc kubenswrapper[4778]: I0312 13:26:34.880894 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-tbh2n" Mar 12 13:26:35 crc kubenswrapper[4778]: I0312 13:26:35.312636 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-tbh2n"] Mar 12 13:26:35 crc kubenswrapper[4778]: I0312 13:26:35.513762 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-tbh2n" event={"ID":"60754672-3d3c-4763-8078-356d0a0167ac","Type":"ContainerStarted","Data":"82b0ae343a0a10b9762a77419cf9048e88f45587940a8f410652a383e6167520"} Mar 12 13:26:38 crc kubenswrapper[4778]: I0312 13:26:38.251969 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-x2n7f" Mar 12 13:26:38 crc kubenswrapper[4778]: I0312 13:26:38.537431 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-tbh2n" event={"ID":"60754672-3d3c-4763-8078-356d0a0167ac","Type":"ContainerStarted","Data":"b26be7e9c9aca4fa6af7739dd64d7c833c232756970ba8b12fb55335ec5e133a"} Mar 12 13:26:38 crc kubenswrapper[4778]: 
I0312 13:26:38.889004 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-zxv5p" Mar 12 13:26:38 crc kubenswrapper[4778]: I0312 13:26:38.925523 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-tbh2n" podStartSLOduration=2.789810305 podStartE2EDuration="4.925496408s" podCreationTimestamp="2026-03-12 13:26:34 +0000 UTC" firstStartedPulling="2026-03-12 13:26:35.321596917 +0000 UTC m=+1013.770292313" lastFinishedPulling="2026-03-12 13:26:37.45728301 +0000 UTC m=+1015.905978416" observedRunningTime="2026-03-12 13:26:38.555472041 +0000 UTC m=+1017.004167437" watchObservedRunningTime="2026-03-12 13:26:38.925496408 +0000 UTC m=+1017.374191834" Mar 12 13:26:38 crc kubenswrapper[4778]: I0312 13:26:38.953546 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-tbh2n"] Mar 12 13:26:39 crc kubenswrapper[4778]: I0312 13:26:39.566428 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-b2fsv"] Mar 12 13:26:39 crc kubenswrapper[4778]: I0312 13:26:39.567561 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-b2fsv" Mar 12 13:26:39 crc kubenswrapper[4778]: I0312 13:26:39.576582 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-b2fsv"] Mar 12 13:26:39 crc kubenswrapper[4778]: I0312 13:26:39.652566 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk5sj\" (UniqueName: \"kubernetes.io/projected/748546a6-1355-470f-b8d0-de395cf3f681-kube-api-access-tk5sj\") pod \"openstack-operator-index-b2fsv\" (UID: \"748546a6-1355-470f-b8d0-de395cf3f681\") " pod="openstack-operators/openstack-operator-index-b2fsv" Mar 12 13:26:39 crc kubenswrapper[4778]: I0312 13:26:39.753564 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tk5sj\" (UniqueName: \"kubernetes.io/projected/748546a6-1355-470f-b8d0-de395cf3f681-kube-api-access-tk5sj\") pod \"openstack-operator-index-b2fsv\" (UID: \"748546a6-1355-470f-b8d0-de395cf3f681\") " pod="openstack-operators/openstack-operator-index-b2fsv" Mar 12 13:26:39 crc kubenswrapper[4778]: I0312 13:26:39.771412 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tk5sj\" (UniqueName: \"kubernetes.io/projected/748546a6-1355-470f-b8d0-de395cf3f681-kube-api-access-tk5sj\") pod \"openstack-operator-index-b2fsv\" (UID: \"748546a6-1355-470f-b8d0-de395cf3f681\") " pod="openstack-operators/openstack-operator-index-b2fsv" Mar 12 13:26:39 crc kubenswrapper[4778]: I0312 13:26:39.886149 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-b2fsv" Mar 12 13:26:40 crc kubenswrapper[4778]: I0312 13:26:40.323525 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-b2fsv"] Mar 12 13:26:40 crc kubenswrapper[4778]: W0312 13:26:40.334422 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod748546a6_1355_470f_b8d0_de395cf3f681.slice/crio-eecc88b48278b2739e6f24ef7a7358771a2419e4c79e671aa9590dbed5f25c75 WatchSource:0}: Error finding container eecc88b48278b2739e6f24ef7a7358771a2419e4c79e671aa9590dbed5f25c75: Status 404 returned error can't find the container with id eecc88b48278b2739e6f24ef7a7358771a2419e4c79e671aa9590dbed5f25c75 Mar 12 13:26:40 crc kubenswrapper[4778]: I0312 13:26:40.551011 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-b2fsv" event={"ID":"748546a6-1355-470f-b8d0-de395cf3f681","Type":"ContainerStarted","Data":"eecc88b48278b2739e6f24ef7a7358771a2419e4c79e671aa9590dbed5f25c75"} Mar 12 13:26:40 crc kubenswrapper[4778]: I0312 13:26:40.551149 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-tbh2n" podUID="60754672-3d3c-4763-8078-356d0a0167ac" containerName="registry-server" containerID="cri-o://b26be7e9c9aca4fa6af7739dd64d7c833c232756970ba8b12fb55335ec5e133a" gracePeriod=2 Mar 12 13:26:41 crc kubenswrapper[4778]: I0312 13:26:41.003544 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-tbh2n" Mar 12 13:26:41 crc kubenswrapper[4778]: I0312 13:26:41.171943 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvdhc\" (UniqueName: \"kubernetes.io/projected/60754672-3d3c-4763-8078-356d0a0167ac-kube-api-access-gvdhc\") pod \"60754672-3d3c-4763-8078-356d0a0167ac\" (UID: \"60754672-3d3c-4763-8078-356d0a0167ac\") " Mar 12 13:26:41 crc kubenswrapper[4778]: I0312 13:26:41.179175 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60754672-3d3c-4763-8078-356d0a0167ac-kube-api-access-gvdhc" (OuterVolumeSpecName: "kube-api-access-gvdhc") pod "60754672-3d3c-4763-8078-356d0a0167ac" (UID: "60754672-3d3c-4763-8078-356d0a0167ac"). InnerVolumeSpecName "kube-api-access-gvdhc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:26:41 crc kubenswrapper[4778]: I0312 13:26:41.273830 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvdhc\" (UniqueName: \"kubernetes.io/projected/60754672-3d3c-4763-8078-356d0a0167ac-kube-api-access-gvdhc\") on node \"crc\" DevicePath \"\"" Mar 12 13:26:41 crc kubenswrapper[4778]: I0312 13:26:41.560936 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-b2fsv" event={"ID":"748546a6-1355-470f-b8d0-de395cf3f681","Type":"ContainerStarted","Data":"ee999697231a549c59c0cbcfc71d06fb72ac37908ba01d31423bef70468e1a7b"} Mar 12 13:26:41 crc kubenswrapper[4778]: I0312 13:26:41.566269 4778 generic.go:334] "Generic (PLEG): container finished" podID="60754672-3d3c-4763-8078-356d0a0167ac" containerID="b26be7e9c9aca4fa6af7739dd64d7c833c232756970ba8b12fb55335ec5e133a" exitCode=0 Mar 12 13:26:41 crc kubenswrapper[4778]: I0312 13:26:41.566389 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-tbh2n" 
event={"ID":"60754672-3d3c-4763-8078-356d0a0167ac","Type":"ContainerDied","Data":"b26be7e9c9aca4fa6af7739dd64d7c833c232756970ba8b12fb55335ec5e133a"} Mar 12 13:26:41 crc kubenswrapper[4778]: I0312 13:26:41.566428 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-tbh2n" Mar 12 13:26:41 crc kubenswrapper[4778]: I0312 13:26:41.566498 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-tbh2n" event={"ID":"60754672-3d3c-4763-8078-356d0a0167ac","Type":"ContainerDied","Data":"82b0ae343a0a10b9762a77419cf9048e88f45587940a8f410652a383e6167520"} Mar 12 13:26:41 crc kubenswrapper[4778]: I0312 13:26:41.566520 4778 scope.go:117] "RemoveContainer" containerID="b26be7e9c9aca4fa6af7739dd64d7c833c232756970ba8b12fb55335ec5e133a" Mar 12 13:26:41 crc kubenswrapper[4778]: I0312 13:26:41.590876 4778 scope.go:117] "RemoveContainer" containerID="b26be7e9c9aca4fa6af7739dd64d7c833c232756970ba8b12fb55335ec5e133a" Mar 12 13:26:41 crc kubenswrapper[4778]: I0312 13:26:41.591009 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-b2fsv" podStartSLOduration=2.051891466 podStartE2EDuration="2.590974766s" podCreationTimestamp="2026-03-12 13:26:39 +0000 UTC" firstStartedPulling="2026-03-12 13:26:40.339046688 +0000 UTC m=+1018.787742084" lastFinishedPulling="2026-03-12 13:26:40.878129988 +0000 UTC m=+1019.326825384" observedRunningTime="2026-03-12 13:26:41.582094193 +0000 UTC m=+1020.030789599" watchObservedRunningTime="2026-03-12 13:26:41.590974766 +0000 UTC m=+1020.039670192" Mar 12 13:26:41 crc kubenswrapper[4778]: E0312 13:26:41.592048 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b26be7e9c9aca4fa6af7739dd64d7c833c232756970ba8b12fb55335ec5e133a\": container with ID starting with 
b26be7e9c9aca4fa6af7739dd64d7c833c232756970ba8b12fb55335ec5e133a not found: ID does not exist" containerID="b26be7e9c9aca4fa6af7739dd64d7c833c232756970ba8b12fb55335ec5e133a" Mar 12 13:26:41 crc kubenswrapper[4778]: I0312 13:26:41.592118 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b26be7e9c9aca4fa6af7739dd64d7c833c232756970ba8b12fb55335ec5e133a"} err="failed to get container status \"b26be7e9c9aca4fa6af7739dd64d7c833c232756970ba8b12fb55335ec5e133a\": rpc error: code = NotFound desc = could not find container \"b26be7e9c9aca4fa6af7739dd64d7c833c232756970ba8b12fb55335ec5e133a\": container with ID starting with b26be7e9c9aca4fa6af7739dd64d7c833c232756970ba8b12fb55335ec5e133a not found: ID does not exist" Mar 12 13:26:41 crc kubenswrapper[4778]: I0312 13:26:41.615048 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-tbh2n"] Mar 12 13:26:41 crc kubenswrapper[4778]: I0312 13:26:41.620746 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-tbh2n"] Mar 12 13:26:42 crc kubenswrapper[4778]: I0312 13:26:42.261711 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60754672-3d3c-4763-8078-356d0a0167ac" path="/var/lib/kubelet/pods/60754672-3d3c-4763-8078-356d0a0167ac/volumes" Mar 12 13:26:47 crc kubenswrapper[4778]: I0312 13:26:47.318505 4778 scope.go:117] "RemoveContainer" containerID="97b3a747ac158c0518500113b5af025bff04e06faaee081df03d1a06860f190f" Mar 12 13:26:49 crc kubenswrapper[4778]: I0312 13:26:49.886558 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-b2fsv" Mar 12 13:26:49 crc kubenswrapper[4778]: I0312 13:26:49.887038 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-b2fsv" Mar 12 13:26:49 crc kubenswrapper[4778]: I0312 13:26:49.937493 
4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-b2fsv" Mar 12 13:26:50 crc kubenswrapper[4778]: I0312 13:26:50.661301 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-b2fsv" Mar 12 13:26:51 crc kubenswrapper[4778]: I0312 13:26:51.793700 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/4d52f25c614d14ea9d555eaa2e62114c0c7d01977d30b495569716fcde5fmfr"] Mar 12 13:26:51 crc kubenswrapper[4778]: E0312 13:26:51.793968 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60754672-3d3c-4763-8078-356d0a0167ac" containerName="registry-server" Mar 12 13:26:51 crc kubenswrapper[4778]: I0312 13:26:51.793979 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="60754672-3d3c-4763-8078-356d0a0167ac" containerName="registry-server" Mar 12 13:26:51 crc kubenswrapper[4778]: I0312 13:26:51.794080 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="60754672-3d3c-4763-8078-356d0a0167ac" containerName="registry-server" Mar 12 13:26:51 crc kubenswrapper[4778]: I0312 13:26:51.795107 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/4d52f25c614d14ea9d555eaa2e62114c0c7d01977d30b495569716fcde5fmfr" Mar 12 13:26:51 crc kubenswrapper[4778]: I0312 13:26:51.797143 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-7k2vk" Mar 12 13:26:51 crc kubenswrapper[4778]: I0312 13:26:51.817751 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/4d52f25c614d14ea9d555eaa2e62114c0c7d01977d30b495569716fcde5fmfr"] Mar 12 13:26:51 crc kubenswrapper[4778]: I0312 13:26:51.916964 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw2rg\" (UniqueName: \"kubernetes.io/projected/e1d0ffee-229e-4da3-ac89-02bf6f6a439f-kube-api-access-bw2rg\") pod \"4d52f25c614d14ea9d555eaa2e62114c0c7d01977d30b495569716fcde5fmfr\" (UID: \"e1d0ffee-229e-4da3-ac89-02bf6f6a439f\") " pod="openstack-operators/4d52f25c614d14ea9d555eaa2e62114c0c7d01977d30b495569716fcde5fmfr" Mar 12 13:26:51 crc kubenswrapper[4778]: I0312 13:26:51.917030 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e1d0ffee-229e-4da3-ac89-02bf6f6a439f-bundle\") pod \"4d52f25c614d14ea9d555eaa2e62114c0c7d01977d30b495569716fcde5fmfr\" (UID: \"e1d0ffee-229e-4da3-ac89-02bf6f6a439f\") " pod="openstack-operators/4d52f25c614d14ea9d555eaa2e62114c0c7d01977d30b495569716fcde5fmfr" Mar 12 13:26:51 crc kubenswrapper[4778]: I0312 13:26:51.917110 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e1d0ffee-229e-4da3-ac89-02bf6f6a439f-util\") pod \"4d52f25c614d14ea9d555eaa2e62114c0c7d01977d30b495569716fcde5fmfr\" (UID: \"e1d0ffee-229e-4da3-ac89-02bf6f6a439f\") " pod="openstack-operators/4d52f25c614d14ea9d555eaa2e62114c0c7d01977d30b495569716fcde5fmfr" Mar 12 13:26:52 crc kubenswrapper[4778]: I0312 
13:26:52.018478 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bw2rg\" (UniqueName: \"kubernetes.io/projected/e1d0ffee-229e-4da3-ac89-02bf6f6a439f-kube-api-access-bw2rg\") pod \"4d52f25c614d14ea9d555eaa2e62114c0c7d01977d30b495569716fcde5fmfr\" (UID: \"e1d0ffee-229e-4da3-ac89-02bf6f6a439f\") " pod="openstack-operators/4d52f25c614d14ea9d555eaa2e62114c0c7d01977d30b495569716fcde5fmfr" Mar 12 13:26:52 crc kubenswrapper[4778]: I0312 13:26:52.018593 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e1d0ffee-229e-4da3-ac89-02bf6f6a439f-bundle\") pod \"4d52f25c614d14ea9d555eaa2e62114c0c7d01977d30b495569716fcde5fmfr\" (UID: \"e1d0ffee-229e-4da3-ac89-02bf6f6a439f\") " pod="openstack-operators/4d52f25c614d14ea9d555eaa2e62114c0c7d01977d30b495569716fcde5fmfr" Mar 12 13:26:52 crc kubenswrapper[4778]: I0312 13:26:52.018697 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e1d0ffee-229e-4da3-ac89-02bf6f6a439f-util\") pod \"4d52f25c614d14ea9d555eaa2e62114c0c7d01977d30b495569716fcde5fmfr\" (UID: \"e1d0ffee-229e-4da3-ac89-02bf6f6a439f\") " pod="openstack-operators/4d52f25c614d14ea9d555eaa2e62114c0c7d01977d30b495569716fcde5fmfr" Mar 12 13:26:52 crc kubenswrapper[4778]: I0312 13:26:52.019632 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e1d0ffee-229e-4da3-ac89-02bf6f6a439f-bundle\") pod \"4d52f25c614d14ea9d555eaa2e62114c0c7d01977d30b495569716fcde5fmfr\" (UID: \"e1d0ffee-229e-4da3-ac89-02bf6f6a439f\") " pod="openstack-operators/4d52f25c614d14ea9d555eaa2e62114c0c7d01977d30b495569716fcde5fmfr" Mar 12 13:26:52 crc kubenswrapper[4778]: I0312 13:26:52.019749 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/e1d0ffee-229e-4da3-ac89-02bf6f6a439f-util\") pod \"4d52f25c614d14ea9d555eaa2e62114c0c7d01977d30b495569716fcde5fmfr\" (UID: \"e1d0ffee-229e-4da3-ac89-02bf6f6a439f\") " pod="openstack-operators/4d52f25c614d14ea9d555eaa2e62114c0c7d01977d30b495569716fcde5fmfr" Mar 12 13:26:52 crc kubenswrapper[4778]: I0312 13:26:52.042215 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bw2rg\" (UniqueName: \"kubernetes.io/projected/e1d0ffee-229e-4da3-ac89-02bf6f6a439f-kube-api-access-bw2rg\") pod \"4d52f25c614d14ea9d555eaa2e62114c0c7d01977d30b495569716fcde5fmfr\" (UID: \"e1d0ffee-229e-4da3-ac89-02bf6f6a439f\") " pod="openstack-operators/4d52f25c614d14ea9d555eaa2e62114c0c7d01977d30b495569716fcde5fmfr" Mar 12 13:26:52 crc kubenswrapper[4778]: I0312 13:26:52.114452 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/4d52f25c614d14ea9d555eaa2e62114c0c7d01977d30b495569716fcde5fmfr" Mar 12 13:26:52 crc kubenswrapper[4778]: I0312 13:26:52.331277 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/4d52f25c614d14ea9d555eaa2e62114c0c7d01977d30b495569716fcde5fmfr"] Mar 12 13:26:52 crc kubenswrapper[4778]: I0312 13:26:52.641572 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/4d52f25c614d14ea9d555eaa2e62114c0c7d01977d30b495569716fcde5fmfr" event={"ID":"e1d0ffee-229e-4da3-ac89-02bf6f6a439f","Type":"ContainerStarted","Data":"6c7b975bfe62559f47ed672e09903d56d09217ee6bc21f25b381ac2ab6bac8f1"} Mar 12 13:26:53 crc kubenswrapper[4778]: I0312 13:26:53.651704 4778 generic.go:334] "Generic (PLEG): container finished" podID="e1d0ffee-229e-4da3-ac89-02bf6f6a439f" containerID="f725f63e28c02b6398490e5b45fc89031bba591d75ca6d07df4522b25aa997b6" exitCode=0 Mar 12 13:26:53 crc kubenswrapper[4778]: I0312 13:26:53.651766 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/4d52f25c614d14ea9d555eaa2e62114c0c7d01977d30b495569716fcde5fmfr" event={"ID":"e1d0ffee-229e-4da3-ac89-02bf6f6a439f","Type":"ContainerDied","Data":"f725f63e28c02b6398490e5b45fc89031bba591d75ca6d07df4522b25aa997b6"} Mar 12 13:26:55 crc kubenswrapper[4778]: I0312 13:26:55.668013 4778 generic.go:334] "Generic (PLEG): container finished" podID="e1d0ffee-229e-4da3-ac89-02bf6f6a439f" containerID="0fbe728cb5ced90554b79488aab511eb56de40f0946cf53e7cb505f322daef57" exitCode=0 Mar 12 13:26:55 crc kubenswrapper[4778]: I0312 13:26:55.668118 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/4d52f25c614d14ea9d555eaa2e62114c0c7d01977d30b495569716fcde5fmfr" event={"ID":"e1d0ffee-229e-4da3-ac89-02bf6f6a439f","Type":"ContainerDied","Data":"0fbe728cb5ced90554b79488aab511eb56de40f0946cf53e7cb505f322daef57"} Mar 12 13:26:56 crc kubenswrapper[4778]: I0312 13:26:56.681090 4778 generic.go:334] "Generic (PLEG): container finished" podID="e1d0ffee-229e-4da3-ac89-02bf6f6a439f" containerID="55a5f5156e8222030ee1d0cca1ef9eee2f268a5a3accc79a2f3efdd69b8eb4d2" exitCode=0 Mar 12 13:26:56 crc kubenswrapper[4778]: I0312 13:26:56.681454 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/4d52f25c614d14ea9d555eaa2e62114c0c7d01977d30b495569716fcde5fmfr" event={"ID":"e1d0ffee-229e-4da3-ac89-02bf6f6a439f","Type":"ContainerDied","Data":"55a5f5156e8222030ee1d0cca1ef9eee2f268a5a3accc79a2f3efdd69b8eb4d2"} Mar 12 13:26:57 crc kubenswrapper[4778]: I0312 13:26:57.924701 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/4d52f25c614d14ea9d555eaa2e62114c0c7d01977d30b495569716fcde5fmfr" Mar 12 13:26:58 crc kubenswrapper[4778]: I0312 13:26:58.099226 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e1d0ffee-229e-4da3-ac89-02bf6f6a439f-bundle\") pod \"e1d0ffee-229e-4da3-ac89-02bf6f6a439f\" (UID: \"e1d0ffee-229e-4da3-ac89-02bf6f6a439f\") " Mar 12 13:26:58 crc kubenswrapper[4778]: I0312 13:26:58.099415 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bw2rg\" (UniqueName: \"kubernetes.io/projected/e1d0ffee-229e-4da3-ac89-02bf6f6a439f-kube-api-access-bw2rg\") pod \"e1d0ffee-229e-4da3-ac89-02bf6f6a439f\" (UID: \"e1d0ffee-229e-4da3-ac89-02bf6f6a439f\") " Mar 12 13:26:58 crc kubenswrapper[4778]: I0312 13:26:58.099482 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e1d0ffee-229e-4da3-ac89-02bf6f6a439f-util\") pod \"e1d0ffee-229e-4da3-ac89-02bf6f6a439f\" (UID: \"e1d0ffee-229e-4da3-ac89-02bf6f6a439f\") " Mar 12 13:26:58 crc kubenswrapper[4778]: I0312 13:26:58.100280 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1d0ffee-229e-4da3-ac89-02bf6f6a439f-bundle" (OuterVolumeSpecName: "bundle") pod "e1d0ffee-229e-4da3-ac89-02bf6f6a439f" (UID: "e1d0ffee-229e-4da3-ac89-02bf6f6a439f"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:26:58 crc kubenswrapper[4778]: I0312 13:26:58.103502 4778 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e1d0ffee-229e-4da3-ac89-02bf6f6a439f-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:26:58 crc kubenswrapper[4778]: I0312 13:26:58.117667 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d0ffee-229e-4da3-ac89-02bf6f6a439f-kube-api-access-bw2rg" (OuterVolumeSpecName: "kube-api-access-bw2rg") pod "e1d0ffee-229e-4da3-ac89-02bf6f6a439f" (UID: "e1d0ffee-229e-4da3-ac89-02bf6f6a439f"). InnerVolumeSpecName "kube-api-access-bw2rg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:26:58 crc kubenswrapper[4778]: I0312 13:26:58.209335 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bw2rg\" (UniqueName: \"kubernetes.io/projected/e1d0ffee-229e-4da3-ac89-02bf6f6a439f-kube-api-access-bw2rg\") on node \"crc\" DevicePath \"\"" Mar 12 13:26:58 crc kubenswrapper[4778]: I0312 13:26:58.243871 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1d0ffee-229e-4da3-ac89-02bf6f6a439f-util" (OuterVolumeSpecName: "util") pod "e1d0ffee-229e-4da3-ac89-02bf6f6a439f" (UID: "e1d0ffee-229e-4da3-ac89-02bf6f6a439f"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:26:58 crc kubenswrapper[4778]: I0312 13:26:58.311167 4778 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e1d0ffee-229e-4da3-ac89-02bf6f6a439f-util\") on node \"crc\" DevicePath \"\"" Mar 12 13:26:58 crc kubenswrapper[4778]: I0312 13:26:58.696835 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/4d52f25c614d14ea9d555eaa2e62114c0c7d01977d30b495569716fcde5fmfr" event={"ID":"e1d0ffee-229e-4da3-ac89-02bf6f6a439f","Type":"ContainerDied","Data":"6c7b975bfe62559f47ed672e09903d56d09217ee6bc21f25b381ac2ab6bac8f1"} Mar 12 13:26:58 crc kubenswrapper[4778]: I0312 13:26:58.697350 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c7b975bfe62559f47ed672e09903d56d09217ee6bc21f25b381ac2ab6bac8f1" Mar 12 13:26:58 crc kubenswrapper[4778]: I0312 13:26:58.696923 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/4d52f25c614d14ea9d555eaa2e62114c0c7d01977d30b495569716fcde5fmfr" Mar 12 13:27:04 crc kubenswrapper[4778]: I0312 13:27:04.002836 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-5bc4df7446-x9bsl"] Mar 12 13:27:04 crc kubenswrapper[4778]: E0312 13:27:04.003263 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1d0ffee-229e-4da3-ac89-02bf6f6a439f" containerName="extract" Mar 12 13:27:04 crc kubenswrapper[4778]: I0312 13:27:04.003274 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1d0ffee-229e-4da3-ac89-02bf6f6a439f" containerName="extract" Mar 12 13:27:04 crc kubenswrapper[4778]: E0312 13:27:04.003285 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1d0ffee-229e-4da3-ac89-02bf6f6a439f" containerName="util" Mar 12 13:27:04 crc kubenswrapper[4778]: I0312 13:27:04.003291 4778 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e1d0ffee-229e-4da3-ac89-02bf6f6a439f" containerName="util" Mar 12 13:27:04 crc kubenswrapper[4778]: E0312 13:27:04.003304 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1d0ffee-229e-4da3-ac89-02bf6f6a439f" containerName="pull" Mar 12 13:27:04 crc kubenswrapper[4778]: I0312 13:27:04.003310 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1d0ffee-229e-4da3-ac89-02bf6f6a439f" containerName="pull" Mar 12 13:27:04 crc kubenswrapper[4778]: I0312 13:27:04.003407 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1d0ffee-229e-4da3-ac89-02bf6f6a439f" containerName="extract" Mar 12 13:27:04 crc kubenswrapper[4778]: I0312 13:27:04.003785 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5bc4df7446-x9bsl" Mar 12 13:27:04 crc kubenswrapper[4778]: I0312 13:27:04.006851 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-w82v7" Mar 12 13:27:04 crc kubenswrapper[4778]: I0312 13:27:04.039711 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5bc4df7446-x9bsl"] Mar 12 13:27:04 crc kubenswrapper[4778]: I0312 13:27:04.196164 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klpjj\" (UniqueName: \"kubernetes.io/projected/34bbdc16-4518-4ee5-9a70-3cedcc5f0159-kube-api-access-klpjj\") pod \"openstack-operator-controller-init-5bc4df7446-x9bsl\" (UID: \"34bbdc16-4518-4ee5-9a70-3cedcc5f0159\") " pod="openstack-operators/openstack-operator-controller-init-5bc4df7446-x9bsl" Mar 12 13:27:04 crc kubenswrapper[4778]: I0312 13:27:04.297648 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klpjj\" (UniqueName: 
\"kubernetes.io/projected/34bbdc16-4518-4ee5-9a70-3cedcc5f0159-kube-api-access-klpjj\") pod \"openstack-operator-controller-init-5bc4df7446-x9bsl\" (UID: \"34bbdc16-4518-4ee5-9a70-3cedcc5f0159\") " pod="openstack-operators/openstack-operator-controller-init-5bc4df7446-x9bsl" Mar 12 13:27:04 crc kubenswrapper[4778]: I0312 13:27:04.329122 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klpjj\" (UniqueName: \"kubernetes.io/projected/34bbdc16-4518-4ee5-9a70-3cedcc5f0159-kube-api-access-klpjj\") pod \"openstack-operator-controller-init-5bc4df7446-x9bsl\" (UID: \"34bbdc16-4518-4ee5-9a70-3cedcc5f0159\") " pod="openstack-operators/openstack-operator-controller-init-5bc4df7446-x9bsl" Mar 12 13:27:04 crc kubenswrapper[4778]: I0312 13:27:04.621510 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5bc4df7446-x9bsl" Mar 12 13:27:05 crc kubenswrapper[4778]: I0312 13:27:05.037857 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5bc4df7446-x9bsl"] Mar 12 13:27:05 crc kubenswrapper[4778]: I0312 13:27:05.737923 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5bc4df7446-x9bsl" event={"ID":"34bbdc16-4518-4ee5-9a70-3cedcc5f0159","Type":"ContainerStarted","Data":"b0cb81f3a436a1fa4c7d4d187d2d378e6115e3c52c453018d4a774d9e5554128"} Mar 12 13:27:12 crc kubenswrapper[4778]: I0312 13:27:12.873287 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5bc4df7446-x9bsl" event={"ID":"34bbdc16-4518-4ee5-9a70-3cedcc5f0159","Type":"ContainerStarted","Data":"718457d5ecf5be484d18fcb2d14c3369f49cc01d76d7c6800297d67b914af438"} Mar 12 13:27:12 crc kubenswrapper[4778]: I0312 13:27:12.873826 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/openstack-operator-controller-init-5bc4df7446-x9bsl" Mar 12 13:27:24 crc kubenswrapper[4778]: I0312 13:27:24.624300 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-5bc4df7446-x9bsl" Mar 12 13:27:24 crc kubenswrapper[4778]: I0312 13:27:24.655996 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-5bc4df7446-x9bsl" podStartSLOduration=14.080995289 podStartE2EDuration="21.655977517s" podCreationTimestamp="2026-03-12 13:27:03 +0000 UTC" firstStartedPulling="2026-03-12 13:27:05.048986409 +0000 UTC m=+1043.497681795" lastFinishedPulling="2026-03-12 13:27:12.623968627 +0000 UTC m=+1051.072664023" observedRunningTime="2026-03-12 13:27:12.91531487 +0000 UTC m=+1051.364010276" watchObservedRunningTime="2026-03-12 13:27:24.655977517 +0000 UTC m=+1063.104672913" Mar 12 13:27:47 crc kubenswrapper[4778]: I0312 13:27:47.836767 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-677bd678f7-6h2c2"] Mar 12 13:27:47 crc kubenswrapper[4778]: I0312 13:27:47.838848 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-6h2c2" Mar 12 13:27:47 crc kubenswrapper[4778]: I0312 13:27:47.846419 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-677bd678f7-6h2c2"] Mar 12 13:27:47 crc kubenswrapper[4778]: I0312 13:27:47.865520 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-wq5gn" Mar 12 13:27:47 crc kubenswrapper[4778]: I0312 13:27:47.886206 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-984cd4dcf-xm4cc"] Mar 12 13:27:47 crc kubenswrapper[4778]: I0312 13:27:47.887094 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-xm4cc" Mar 12 13:27:47 crc kubenswrapper[4778]: I0312 13:27:47.887838 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvw2p\" (UniqueName: \"kubernetes.io/projected/ffb8a1f4-4533-4368-a900-95d37fe1d3ad-kube-api-access-dvw2p\") pod \"barbican-operator-controller-manager-677bd678f7-6h2c2\" (UID: \"ffb8a1f4-4533-4368-a900-95d37fe1d3ad\") " pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-6h2c2" Mar 12 13:27:47 crc kubenswrapper[4778]: I0312 13:27:47.887966 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljqr8\" (UniqueName: \"kubernetes.io/projected/c8818ac0-af8b-42c9-a923-425fe79ed203-kube-api-access-ljqr8\") pod \"cinder-operator-controller-manager-984cd4dcf-xm4cc\" (UID: \"c8818ac0-af8b-42c9-a923-425fe79ed203\") " pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-xm4cc" Mar 12 13:27:47 crc kubenswrapper[4778]: I0312 13:27:47.895919 4778 reflector.go:368] Caches populated for *v1.Secret 
from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-c2m28" Mar 12 13:27:47 crc kubenswrapper[4778]: I0312 13:27:47.992356 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvw2p\" (UniqueName: \"kubernetes.io/projected/ffb8a1f4-4533-4368-a900-95d37fe1d3ad-kube-api-access-dvw2p\") pod \"barbican-operator-controller-manager-677bd678f7-6h2c2\" (UID: \"ffb8a1f4-4533-4368-a900-95d37fe1d3ad\") " pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-6h2c2" Mar 12 13:27:47 crc kubenswrapper[4778]: I0312 13:27:47.992471 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljqr8\" (UniqueName: \"kubernetes.io/projected/c8818ac0-af8b-42c9-a923-425fe79ed203-kube-api-access-ljqr8\") pod \"cinder-operator-controller-manager-984cd4dcf-xm4cc\" (UID: \"c8818ac0-af8b-42c9-a923-425fe79ed203\") " pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-xm4cc" Mar 12 13:27:48 crc kubenswrapper[4778]: I0312 13:27:48.218482 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-66d56f6ff4-9n6jv"] Mar 12 13:27:48 crc kubenswrapper[4778]: I0312 13:27:48.219575 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-9n6jv" Mar 12 13:27:48 crc kubenswrapper[4778]: I0312 13:27:48.238566 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-984cd4dcf-xm4cc"] Mar 12 13:27:48 crc kubenswrapper[4778]: I0312 13:27:48.257503 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-qz45z" Mar 12 13:27:48 crc kubenswrapper[4778]: I0312 13:27:48.281352 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66d56f6ff4-9n6jv"] Mar 12 13:27:48 crc kubenswrapper[4778]: I0312 13:27:48.291815 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-5964f64c48-gknp2"] Mar 12 13:27:48 crc kubenswrapper[4778]: I0312 13:27:48.292957 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-gknp2" Mar 12 13:27:48 crc kubenswrapper[4778]: I0312 13:27:48.317019 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvw2p\" (UniqueName: \"kubernetes.io/projected/ffb8a1f4-4533-4368-a900-95d37fe1d3ad-kube-api-access-dvw2p\") pod \"barbican-operator-controller-manager-677bd678f7-6h2c2\" (UID: \"ffb8a1f4-4533-4368-a900-95d37fe1d3ad\") " pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-6h2c2" Mar 12 13:27:48 crc kubenswrapper[4778]: I0312 13:27:48.317048 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljqr8\" (UniqueName: \"kubernetes.io/projected/c8818ac0-af8b-42c9-a923-425fe79ed203-kube-api-access-ljqr8\") pod \"cinder-operator-controller-manager-984cd4dcf-xm4cc\" (UID: \"c8818ac0-af8b-42c9-a923-425fe79ed203\") " pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-xm4cc" Mar 12 13:27:48 crc kubenswrapper[4778]: I0312 13:27:48.362587 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-7zn8q" Mar 12 13:27:48 crc kubenswrapper[4778]: I0312 13:27:48.405071 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5964f64c48-gknp2"] Mar 12 13:27:48 crc kubenswrapper[4778]: I0312 13:27:48.419035 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7lr5\" (UniqueName: \"kubernetes.io/projected/ad531191-d7c5-4ef6-9929-3a5869751d98-kube-api-access-j7lr5\") pod \"designate-operator-controller-manager-66d56f6ff4-9n6jv\" (UID: \"ad531191-d7c5-4ef6-9929-3a5869751d98\") " pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-9n6jv" Mar 12 13:27:48 crc kubenswrapper[4778]: I0312 13:27:48.419114 4778 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hllx\" (UniqueName: \"kubernetes.io/projected/db7f6b97-2903-44bf-803f-c00c337400b9-kube-api-access-4hllx\") pod \"glance-operator-controller-manager-5964f64c48-gknp2\" (UID: \"db7f6b97-2903-44bf-803f-c00c337400b9\") " pod="openstack-operators/glance-operator-controller-manager-5964f64c48-gknp2" Mar 12 13:27:48 crc kubenswrapper[4778]: I0312 13:27:48.443533 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-77b6666d85-b7tkm"] Mar 12 13:27:48 crc kubenswrapper[4778]: I0312 13:27:48.444738 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-b7tkm" Mar 12 13:27:48 crc kubenswrapper[4778]: I0312 13:27:48.448621 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-dmq5h" Mar 12 13:27:48 crc kubenswrapper[4778]: I0312 13:27:48.469424 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-6h2c2" Mar 12 13:27:48 crc kubenswrapper[4778]: I0312 13:27:48.479253 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d9d6b584d-4jgt8"] Mar 12 13:27:48 crc kubenswrapper[4778]: I0312 13:27:48.480308 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-4jgt8" Mar 12 13:27:48 crc kubenswrapper[4778]: I0312 13:27:48.500470 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-54tdf" Mar 12 13:27:48 crc kubenswrapper[4778]: I0312 13:27:48.512733 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-77b6666d85-b7tkm"] Mar 12 13:27:48 crc kubenswrapper[4778]: I0312 13:27:48.523921 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-xm4cc" Mar 12 13:27:48 crc kubenswrapper[4778]: I0312 13:27:48.525793 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8jj6\" (UniqueName: \"kubernetes.io/projected/e290c1ea-a39d-451e-a24b-17a2b61ff6f0-kube-api-access-g8jj6\") pod \"heat-operator-controller-manager-77b6666d85-b7tkm\" (UID: \"e290c1ea-a39d-451e-a24b-17a2b61ff6f0\") " pod="openstack-operators/heat-operator-controller-manager-77b6666d85-b7tkm" Mar 12 13:27:48 crc kubenswrapper[4778]: I0312 13:27:48.525841 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7lr5\" (UniqueName: \"kubernetes.io/projected/ad531191-d7c5-4ef6-9929-3a5869751d98-kube-api-access-j7lr5\") pod \"designate-operator-controller-manager-66d56f6ff4-9n6jv\" (UID: \"ad531191-d7c5-4ef6-9929-3a5869751d98\") " pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-9n6jv" Mar 12 13:27:48 crc kubenswrapper[4778]: I0312 13:27:48.525873 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r2pb\" (UniqueName: \"kubernetes.io/projected/4c2bf703-ecc1-4bb1-aa03-a64e55dfdb71-kube-api-access-4r2pb\") pod 
\"horizon-operator-controller-manager-6d9d6b584d-4jgt8\" (UID: \"4c2bf703-ecc1-4bb1-aa03-a64e55dfdb71\") " pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-4jgt8" Mar 12 13:27:48 crc kubenswrapper[4778]: I0312 13:27:48.525916 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hllx\" (UniqueName: \"kubernetes.io/projected/db7f6b97-2903-44bf-803f-c00c337400b9-kube-api-access-4hllx\") pod \"glance-operator-controller-manager-5964f64c48-gknp2\" (UID: \"db7f6b97-2903-44bf-803f-c00c337400b9\") " pod="openstack-operators/glance-operator-controller-manager-5964f64c48-gknp2" Mar 12 13:27:48 crc kubenswrapper[4778]: I0312 13:27:48.558958 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d9d6b584d-4jgt8"] Mar 12 13:27:48 crc kubenswrapper[4778]: I0312 13:27:48.618747 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hllx\" (UniqueName: \"kubernetes.io/projected/db7f6b97-2903-44bf-803f-c00c337400b9-kube-api-access-4hllx\") pod \"glance-operator-controller-manager-5964f64c48-gknp2\" (UID: \"db7f6b97-2903-44bf-803f-c00c337400b9\") " pod="openstack-operators/glance-operator-controller-manager-5964f64c48-gknp2" Mar 12 13:27:48 crc kubenswrapper[4778]: I0312 13:27:48.621856 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7lr5\" (UniqueName: \"kubernetes.io/projected/ad531191-d7c5-4ef6-9929-3a5869751d98-kube-api-access-j7lr5\") pod \"designate-operator-controller-manager-66d56f6ff4-9n6jv\" (UID: \"ad531191-d7c5-4ef6-9929-3a5869751d98\") " pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-9n6jv" Mar 12 13:27:48 crc kubenswrapper[4778]: I0312 13:27:48.627893 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-5995f4446f-5d6qz"] Mar 12 13:27:48 crc kubenswrapper[4778]: I0312 
13:27:48.629914 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-5d6qz" Mar 12 13:27:48 crc kubenswrapper[4778]: I0312 13:27:48.630303 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8jj6\" (UniqueName: \"kubernetes.io/projected/e290c1ea-a39d-451e-a24b-17a2b61ff6f0-kube-api-access-g8jj6\") pod \"heat-operator-controller-manager-77b6666d85-b7tkm\" (UID: \"e290c1ea-a39d-451e-a24b-17a2b61ff6f0\") " pod="openstack-operators/heat-operator-controller-manager-77b6666d85-b7tkm" Mar 12 13:27:48 crc kubenswrapper[4778]: I0312 13:27:48.630497 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4r2pb\" (UniqueName: \"kubernetes.io/projected/4c2bf703-ecc1-4bb1-aa03-a64e55dfdb71-kube-api-access-4r2pb\") pod \"horizon-operator-controller-manager-6d9d6b584d-4jgt8\" (UID: \"4c2bf703-ecc1-4bb1-aa03-a64e55dfdb71\") " pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-4jgt8" Mar 12 13:27:48 crc kubenswrapper[4778]: I0312 13:27:48.645510 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-hb28j" Mar 12 13:27:48 crc kubenswrapper[4778]: I0312 13:27:48.650495 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.249043 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-gknp2" Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.258622 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-5995f4446f-5d6qz"] Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.263472 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-9n6jv" Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.265359 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpx49\" (UniqueName: \"kubernetes.io/projected/02bc06ca-f4e6-4fde-bd5d-882714d9652c-kube-api-access-vpx49\") pod \"infra-operator-controller-manager-5995f4446f-5d6qz\" (UID: \"02bc06ca-f4e6-4fde-bd5d-882714d9652c\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-5d6qz" Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.265437 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/02bc06ca-f4e6-4fde-bd5d-882714d9652c-cert\") pod \"infra-operator-controller-manager-5995f4446f-5d6qz\" (UID: \"02bc06ca-f4e6-4fde-bd5d-882714d9652c\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-5d6qz" Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.279099 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8jj6\" (UniqueName: \"kubernetes.io/projected/e290c1ea-a39d-451e-a24b-17a2b61ff6f0-kube-api-access-g8jj6\") pod \"heat-operator-controller-manager-77b6666d85-b7tkm\" (UID: \"e290c1ea-a39d-451e-a24b-17a2b61ff6f0\") " pod="openstack-operators/heat-operator-controller-manager-77b6666d85-b7tkm" Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.372942 4778 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-vpx49\" (UniqueName: \"kubernetes.io/projected/02bc06ca-f4e6-4fde-bd5d-882714d9652c-kube-api-access-vpx49\") pod \"infra-operator-controller-manager-5995f4446f-5d6qz\" (UID: \"02bc06ca-f4e6-4fde-bd5d-882714d9652c\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-5d6qz" Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.373373 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/02bc06ca-f4e6-4fde-bd5d-882714d9652c-cert\") pod \"infra-operator-controller-manager-5995f4446f-5d6qz\" (UID: \"02bc06ca-f4e6-4fde-bd5d-882714d9652c\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-5d6qz" Mar 12 13:27:49 crc kubenswrapper[4778]: E0312 13:27:49.373542 4778 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 12 13:27:49 crc kubenswrapper[4778]: E0312 13:27:49.373601 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/02bc06ca-f4e6-4fde-bd5d-882714d9652c-cert podName:02bc06ca-f4e6-4fde-bd5d-882714d9652c nodeName:}" failed. No retries permitted until 2026-03-12 13:27:49.873578983 +0000 UTC m=+1088.322274369 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/02bc06ca-f4e6-4fde-bd5d-882714d9652c-cert") pod "infra-operator-controller-manager-5995f4446f-5d6qz" (UID: "02bc06ca-f4e6-4fde-bd5d-882714d9652c") : secret "infra-operator-webhook-server-cert" not found Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.373957 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-b7tkm" Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.387972 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6bbb499bbc-qb8s8"] Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.399690 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpx49\" (UniqueName: \"kubernetes.io/projected/02bc06ca-f4e6-4fde-bd5d-882714d9652c-kube-api-access-vpx49\") pod \"infra-operator-controller-manager-5995f4446f-5d6qz\" (UID: \"02bc06ca-f4e6-4fde-bd5d-882714d9652c\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-5d6qz" Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.404328 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-qb8s8" Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.410308 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-684f77d66d-7dxdh"] Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.410623 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-gzpz9" Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.418109 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-7dxdh" Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.451120 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4r2pb\" (UniqueName: \"kubernetes.io/projected/4c2bf703-ecc1-4bb1-aa03-a64e55dfdb71-kube-api-access-4r2pb\") pod \"horizon-operator-controller-manager-6d9d6b584d-4jgt8\" (UID: \"4c2bf703-ecc1-4bb1-aa03-a64e55dfdb71\") " pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-4jgt8" Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.451448 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-6hpjb" Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.550595 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6bbb499bbc-qb8s8"] Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.717596 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-684f77d66d-7dxdh"] Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.724351 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-68f45f9d9f-pn8tk"] Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.725392 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-658d4cdd5-jlbft"] Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.726329 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-jlbft" Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.726924 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-pn8tk" Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.728434 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj8wf\" (UniqueName: \"kubernetes.io/projected/98a4cfbd-3037-48b5-9047-5d574dcc0aca-kube-api-access-mj8wf\") pod \"ironic-operator-controller-manager-6bbb499bbc-qb8s8\" (UID: \"98a4cfbd-3037-48b5-9047-5d574dcc0aca\") " pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-qb8s8" Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.728496 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bjmh\" (UniqueName: \"kubernetes.io/projected/7e02c37f-b9af-46c9-a743-03ead9b060db-kube-api-access-8bjmh\") pod \"keystone-operator-controller-manager-684f77d66d-7dxdh\" (UID: \"7e02c37f-b9af-46c9-a743-03ead9b060db\") " pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-7dxdh" Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.742647 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-4jgt8" Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.743700 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-658d4cdd5-jlbft"] Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.757505 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-68f45f9d9f-pn8tk"] Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.757581 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-776c5696bf-dd2ft"] Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.758766 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-dd2ft" Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.767673 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-vbpwc" Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.768127 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-cgbz8" Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.768265 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-92tx4" Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.796221 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-776c5696bf-dd2ft"] Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.816160 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-686d5f9fbd-vv9rc"] Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.818269 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-686d5f9fbd-vv9rc" Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.823118 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-9nj6x" Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.831384 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bjmh\" (UniqueName: \"kubernetes.io/projected/7e02c37f-b9af-46c9-a743-03ead9b060db-kube-api-access-8bjmh\") pod \"keystone-operator-controller-manager-684f77d66d-7dxdh\" (UID: \"7e02c37f-b9af-46c9-a743-03ead9b060db\") " pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-7dxdh" Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.831468 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb75x\" (UniqueName: \"kubernetes.io/projected/5e38a4fd-95f8-437b-923b-eca33b1387e6-kube-api-access-wb75x\") pod \"manila-operator-controller-manager-68f45f9d9f-pn8tk\" (UID: \"5e38a4fd-95f8-437b-923b-eca33b1387e6\") " pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-pn8tk" Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.831527 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks66w\" (UniqueName: \"kubernetes.io/projected/2d577800-0ee1-4fe5-a7fb-8794fb8c4c6f-kube-api-access-ks66w\") pod \"mariadb-operator-controller-manager-658d4cdd5-jlbft\" (UID: \"2d577800-0ee1-4fe5-a7fb-8794fb8c4c6f\") " pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-jlbft" Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.831572 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qw6mz\" (UniqueName: 
\"kubernetes.io/projected/d7288cc6-4247-4d03-bd37-9862243bf613-kube-api-access-qw6mz\") pod \"nova-operator-controller-manager-686d5f9fbd-vv9rc\" (UID: \"d7288cc6-4247-4d03-bd37-9862243bf613\") " pod="openstack-operators/nova-operator-controller-manager-686d5f9fbd-vv9rc" Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.831598 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txndm\" (UniqueName: \"kubernetes.io/projected/076835c9-352b-4e40-80c4-3bce3bb80594-kube-api-access-txndm\") pod \"neutron-operator-controller-manager-776c5696bf-dd2ft\" (UID: \"076835c9-352b-4e40-80c4-3bce3bb80594\") " pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-dd2ft" Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.831644 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mj8wf\" (UniqueName: \"kubernetes.io/projected/98a4cfbd-3037-48b5-9047-5d574dcc0aca-kube-api-access-mj8wf\") pod \"ironic-operator-controller-manager-6bbb499bbc-qb8s8\" (UID: \"98a4cfbd-3037-48b5-9047-5d574dcc0aca\") " pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-qb8s8" Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.854366 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-686d5f9fbd-vv9rc"] Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.856683 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mj8wf\" (UniqueName: \"kubernetes.io/projected/98a4cfbd-3037-48b5-9047-5d574dcc0aca-kube-api-access-mj8wf\") pod \"ironic-operator-controller-manager-6bbb499bbc-qb8s8\" (UID: \"98a4cfbd-3037-48b5-9047-5d574dcc0aca\") " pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-qb8s8" Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.872478 4778 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-cdgg9"] Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.873297 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-cdgg9" Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.881374 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-j5jx9" Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.885598 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bjmh\" (UniqueName: \"kubernetes.io/projected/7e02c37f-b9af-46c9-a743-03ead9b060db-kube-api-access-8bjmh\") pod \"keystone-operator-controller-manager-684f77d66d-7dxdh\" (UID: \"7e02c37f-b9af-46c9-a743-03ead9b060db\") " pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-7dxdh" Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.907689 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-cdgg9"] Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.933569 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ks66w\" (UniqueName: \"kubernetes.io/projected/2d577800-0ee1-4fe5-a7fb-8794fb8c4c6f-kube-api-access-ks66w\") pod \"mariadb-operator-controller-manager-658d4cdd5-jlbft\" (UID: \"2d577800-0ee1-4fe5-a7fb-8794fb8c4c6f\") " pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-jlbft" Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.933626 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qw6mz\" (UniqueName: \"kubernetes.io/projected/d7288cc6-4247-4d03-bd37-9862243bf613-kube-api-access-qw6mz\") pod \"nova-operator-controller-manager-686d5f9fbd-vv9rc\" (UID: \"d7288cc6-4247-4d03-bd37-9862243bf613\") " 
pod="openstack-operators/nova-operator-controller-manager-686d5f9fbd-vv9rc" Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.933647 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txndm\" (UniqueName: \"kubernetes.io/projected/076835c9-352b-4e40-80c4-3bce3bb80594-kube-api-access-txndm\") pod \"neutron-operator-controller-manager-776c5696bf-dd2ft\" (UID: \"076835c9-352b-4e40-80c4-3bce3bb80594\") " pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-dd2ft" Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.933709 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/02bc06ca-f4e6-4fde-bd5d-882714d9652c-cert\") pod \"infra-operator-controller-manager-5995f4446f-5d6qz\" (UID: \"02bc06ca-f4e6-4fde-bd5d-882714d9652c\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-5d6qz" Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.933741 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb75x\" (UniqueName: \"kubernetes.io/projected/5e38a4fd-95f8-437b-923b-eca33b1387e6-kube-api-access-wb75x\") pod \"manila-operator-controller-manager-68f45f9d9f-pn8tk\" (UID: \"5e38a4fd-95f8-437b-923b-eca33b1387e6\") " pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-pn8tk" Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.933768 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sl2m\" (UniqueName: \"kubernetes.io/projected/1a01d06c-be6f-45de-a22d-c8f1058a3a84-kube-api-access-5sl2m\") pod \"octavia-operator-controller-manager-5f4f55cb5c-cdgg9\" (UID: \"1a01d06c-be6f-45de-a22d-c8f1058a3a84\") " pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-cdgg9" Mar 12 13:27:49 crc kubenswrapper[4778]: E0312 13:27:49.934237 4778 secret.go:188] Couldn't get secret 
openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 12 13:27:49 crc kubenswrapper[4778]: E0312 13:27:49.934278 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/02bc06ca-f4e6-4fde-bd5d-882714d9652c-cert podName:02bc06ca-f4e6-4fde-bd5d-882714d9652c nodeName:}" failed. No retries permitted until 2026-03-12 13:27:50.934262808 +0000 UTC m=+1089.382958204 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/02bc06ca-f4e6-4fde-bd5d-882714d9652c-cert") pod "infra-operator-controller-manager-5995f4446f-5d6qz" (UID: "02bc06ca-f4e6-4fde-bd5d-882714d9652c") : secret "infra-operator-webhook-server-cert" not found Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.955746 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qw6mz\" (UniqueName: \"kubernetes.io/projected/d7288cc6-4247-4d03-bd37-9862243bf613-kube-api-access-qw6mz\") pod \"nova-operator-controller-manager-686d5f9fbd-vv9rc\" (UID: \"d7288cc6-4247-4d03-bd37-9862243bf613\") " pod="openstack-operators/nova-operator-controller-manager-686d5f9fbd-vv9rc" Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.971429 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bbc5b68f9-bbgmb"] Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.972515 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-bbgmb" Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.976561 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ks66w\" (UniqueName: \"kubernetes.io/projected/2d577800-0ee1-4fe5-a7fb-8794fb8c4c6f-kube-api-access-ks66w\") pod \"mariadb-operator-controller-manager-658d4cdd5-jlbft\" (UID: \"2d577800-0ee1-4fe5-a7fb-8794fb8c4c6f\") " pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-jlbft" Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.976868 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-mv9px" Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.980464 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txndm\" (UniqueName: \"kubernetes.io/projected/076835c9-352b-4e40-80c4-3bce3bb80594-kube-api-access-txndm\") pod \"neutron-operator-controller-manager-776c5696bf-dd2ft\" (UID: \"076835c9-352b-4e40-80c4-3bce3bb80594\") " pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-dd2ft" Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.984923 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-dd2ft" Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.987632 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7qq9w6"] Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.991955 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb75x\" (UniqueName: \"kubernetes.io/projected/5e38a4fd-95f8-437b-923b-eca33b1387e6-kube-api-access-wb75x\") pod \"manila-operator-controller-manager-68f45f9d9f-pn8tk\" (UID: \"5e38a4fd-95f8-437b-923b-eca33b1387e6\") " pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-pn8tk" Mar 12 13:27:49 crc kubenswrapper[4778]: I0312 13:27:49.995691 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7qq9w6" Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.006760 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-lmw9q" Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.006933 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.016357 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7qq9w6"] Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.021869 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-574d45c66c-wvpf8"] Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.023100 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-wvpf8" Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.025474 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bbc5b68f9-bbgmb"] Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.025615 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-m7kt4" Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.026148 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-686d5f9fbd-vv9rc" Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.031833 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-677c674df7-84mps"] Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.033614 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-677c674df7-84mps" Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.038221 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sl2m\" (UniqueName: \"kubernetes.io/projected/1a01d06c-be6f-45de-a22d-c8f1058a3a84-kube-api-access-5sl2m\") pod \"octavia-operator-controller-manager-5f4f55cb5c-cdgg9\" (UID: \"1a01d06c-be6f-45de-a22d-c8f1058a3a84\") " pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-cdgg9" Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.041134 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-fdhvv" Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.119853 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-qb8s8" Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.277252 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-764zg\" (UniqueName: \"kubernetes.io/projected/52524252-25bd-49e5-822e-3d4668aff2f9-kube-api-access-764zg\") pod \"placement-operator-controller-manager-574d45c66c-wvpf8\" (UID: \"52524252-25bd-49e5-822e-3d4668aff2f9\") " pod="openstack-operators/placement-operator-controller-manager-574d45c66c-wvpf8" Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.278501 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-7dxdh" Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.278993 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-pn8tk" Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.278358 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sl2m\" (UniqueName: \"kubernetes.io/projected/1a01d06c-be6f-45de-a22d-c8f1058a3a84-kube-api-access-5sl2m\") pod \"octavia-operator-controller-manager-5f4f55cb5c-cdgg9\" (UID: \"1a01d06c-be6f-45de-a22d-c8f1058a3a84\") " pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-cdgg9" Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.279618 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-jlbft" Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.284317 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n48j5\" (UniqueName: \"kubernetes.io/projected/4f7d316e-6896-4f84-8423-6f79778c1c6b-kube-api-access-n48j5\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7qq9w6\" (UID: \"4f7d316e-6896-4f84-8423-6f79778c1c6b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7qq9w6" Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.333029 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktk6s\" (UniqueName: \"kubernetes.io/projected/8d38fd7e-6fa1-4b0c-9c82-9c57290c7837-kube-api-access-ktk6s\") pod \"ovn-operator-controller-manager-bbc5b68f9-bbgmb\" (UID: \"8d38fd7e-6fa1-4b0c-9c82-9c57290c7837\") " pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-bbgmb" Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.333131 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-446sv\" (UniqueName: \"kubernetes.io/projected/64a36384-f2e6-4077-b2ca-de2a6ce6ea06-kube-api-access-446sv\") pod \"swift-operator-controller-manager-677c674df7-84mps\" (UID: \"64a36384-f2e6-4077-b2ca-de2a6ce6ea06\") " pod="openstack-operators/swift-operator-controller-manager-677c674df7-84mps" Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.333169 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4f7d316e-6896-4f84-8423-6f79778c1c6b-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7qq9w6\" (UID: \"4f7d316e-6896-4f84-8423-6f79778c1c6b\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7qq9w6" Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.373501 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-cdgg9" Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.754812 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-574d45c66c-wvpf8"] Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.755127 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-gfv5z"] Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.755833 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-gfv5z" Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.762512 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-677c674df7-84mps"] Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.772920 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktk6s\" (UniqueName: \"kubernetes.io/projected/8d38fd7e-6fa1-4b0c-9c82-9c57290c7837-kube-api-access-ktk6s\") pod \"ovn-operator-controller-manager-bbc5b68f9-bbgmb\" (UID: \"8d38fd7e-6fa1-4b0c-9c82-9c57290c7837\") " pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-bbgmb" Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.772954 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-446sv\" (UniqueName: \"kubernetes.io/projected/64a36384-f2e6-4077-b2ca-de2a6ce6ea06-kube-api-access-446sv\") pod \"swift-operator-controller-manager-677c674df7-84mps\" (UID: \"64a36384-f2e6-4077-b2ca-de2a6ce6ea06\") " 
pod="openstack-operators/swift-operator-controller-manager-677c674df7-84mps" Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.772986 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4f7d316e-6896-4f84-8423-6f79778c1c6b-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7qq9w6\" (UID: \"4f7d316e-6896-4f84-8423-6f79778c1c6b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7qq9w6" Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.773029 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-764zg\" (UniqueName: \"kubernetes.io/projected/52524252-25bd-49e5-822e-3d4668aff2f9-kube-api-access-764zg\") pod \"placement-operator-controller-manager-574d45c66c-wvpf8\" (UID: \"52524252-25bd-49e5-822e-3d4668aff2f9\") " pod="openstack-operators/placement-operator-controller-manager-574d45c66c-wvpf8" Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.773059 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n48j5\" (UniqueName: \"kubernetes.io/projected/4f7d316e-6896-4f84-8423-6f79778c1c6b-kube-api-access-n48j5\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7qq9w6\" (UID: \"4f7d316e-6896-4f84-8423-6f79778c1c6b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7qq9w6" Mar 12 13:27:50 crc kubenswrapper[4778]: E0312 13:27:50.773356 4778 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 13:27:50 crc kubenswrapper[4778]: E0312 13:27:50.773399 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f7d316e-6896-4f84-8423-6f79778c1c6b-cert podName:4f7d316e-6896-4f84-8423-6f79778c1c6b nodeName:}" failed. 
No retries permitted until 2026-03-12 13:27:51.273386589 +0000 UTC m=+1089.722081975 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4f7d316e-6896-4f84-8423-6f79778c1c6b-cert") pod "openstack-baremetal-operator-controller-manager-557ccf57b7qq9w6" (UID: "4f7d316e-6896-4f84-8423-6f79778c1c6b") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.784356 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-j8lzm" Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.784653 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-gfv5z"] Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.793950 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-pcfrz"] Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.795474 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-pcfrz"] Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.795622 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-pcfrz" Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.804532 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-6fvqh" Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.824220 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6dd88c6f67-2tjsk"] Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.825826 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-764zg\" (UniqueName: \"kubernetes.io/projected/52524252-25bd-49e5-822e-3d4668aff2f9-kube-api-access-764zg\") pod \"placement-operator-controller-manager-574d45c66c-wvpf8\" (UID: \"52524252-25bd-49e5-822e-3d4668aff2f9\") " pod="openstack-operators/placement-operator-controller-manager-574d45c66c-wvpf8" Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.826782 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-446sv\" (UniqueName: \"kubernetes.io/projected/64a36384-f2e6-4077-b2ca-de2a6ce6ea06-kube-api-access-446sv\") pod \"swift-operator-controller-manager-677c674df7-84mps\" (UID: \"64a36384-f2e6-4077-b2ca-de2a6ce6ea06\") " pod="openstack-operators/swift-operator-controller-manager-677c674df7-84mps" Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.839200 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n48j5\" (UniqueName: \"kubernetes.io/projected/4f7d316e-6896-4f84-8423-6f79778c1c6b-kube-api-access-n48j5\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7qq9w6\" (UID: \"4f7d316e-6896-4f84-8423-6f79778c1c6b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7qq9w6" Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.844947 4778 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-ktk6s\" (UniqueName: \"kubernetes.io/projected/8d38fd7e-6fa1-4b0c-9c82-9c57290c7837-kube-api-access-ktk6s\") pod \"ovn-operator-controller-manager-bbc5b68f9-bbgmb\" (UID: \"8d38fd7e-6fa1-4b0c-9c82-9c57290c7837\") " pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-bbgmb" Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.854609 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6dd88c6f67-2tjsk"] Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.854696 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-2tjsk" Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.859644 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-pvdlr" Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.873857 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-677bd678f7-6h2c2"] Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.874204 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-bbgmb" Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.875667 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2ql6\" (UniqueName: \"kubernetes.io/projected/6ad9bf9f-7214-44bc-a65d-1dcbf385fc2c-kube-api-access-d2ql6\") pod \"telemetry-operator-controller-manager-6cd66dbd4b-gfv5z\" (UID: \"6ad9bf9f-7214-44bc-a65d-1dcbf385fc2c\") " pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-gfv5z" Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.889130 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5785b7957-7vdgw"] Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.889891 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5785b7957-7vdgw" Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.897559 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5785b7957-7vdgw"] Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.915757 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-984cd4dcf-xm4cc"] Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.938815 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-j8vfd" Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.938885 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.939002 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 
13:27:50.978384 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d0784623-5f08-4109-9c7e-0a329210ce07-webhook-certs\") pod \"openstack-operator-controller-manager-5785b7957-7vdgw\" (UID: \"d0784623-5f08-4109-9c7e-0a329210ce07\") " pod="openstack-operators/openstack-operator-controller-manager-5785b7957-7vdgw" Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.978443 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2ql6\" (UniqueName: \"kubernetes.io/projected/6ad9bf9f-7214-44bc-a65d-1dcbf385fc2c-kube-api-access-d2ql6\") pod \"telemetry-operator-controller-manager-6cd66dbd4b-gfv5z\" (UID: \"6ad9bf9f-7214-44bc-a65d-1dcbf385fc2c\") " pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-gfv5z" Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.978463 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0784623-5f08-4109-9c7e-0a329210ce07-metrics-certs\") pod \"openstack-operator-controller-manager-5785b7957-7vdgw\" (UID: \"d0784623-5f08-4109-9c7e-0a329210ce07\") " pod="openstack-operators/openstack-operator-controller-manager-5785b7957-7vdgw" Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.978493 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7fsj\" (UniqueName: \"kubernetes.io/projected/8c02ecb8-0e15-4672-823a-c4437ca5bf8c-kube-api-access-j7fsj\") pod \"watcher-operator-controller-manager-6dd88c6f67-2tjsk\" (UID: \"8c02ecb8-0e15-4672-823a-c4437ca5bf8c\") " pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-2tjsk" Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.978520 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-9gz2m\" (UniqueName: \"kubernetes.io/projected/ed9b9271-4ae9-440a-9411-15d46267106e-kube-api-access-9gz2m\") pod \"test-operator-controller-manager-5c5cb9c4d7-pcfrz\" (UID: \"ed9b9271-4ae9-440a-9411-15d46267106e\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-pcfrz" Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.978537 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-wvpf8" Mar 12 13:27:50 crc kubenswrapper[4778]: E0312 13:27:50.978655 4778 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 12 13:27:50 crc kubenswrapper[4778]: E0312 13:27:50.978697 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/02bc06ca-f4e6-4fde-bd5d-882714d9652c-cert podName:02bc06ca-f4e6-4fde-bd5d-882714d9652c nodeName:}" failed. No retries permitted until 2026-03-12 13:27:52.978683117 +0000 UTC m=+1091.427378513 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/02bc06ca-f4e6-4fde-bd5d-882714d9652c-cert") pod "infra-operator-controller-manager-5995f4446f-5d6qz" (UID: "02bc06ca-f4e6-4fde-bd5d-882714d9652c") : secret "infra-operator-webhook-server-cert" not found Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.978550 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/02bc06ca-f4e6-4fde-bd5d-882714d9652c-cert\") pod \"infra-operator-controller-manager-5995f4446f-5d6qz\" (UID: \"02bc06ca-f4e6-4fde-bd5d-882714d9652c\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-5d6qz" Mar 12 13:27:50 crc kubenswrapper[4778]: I0312 13:27:50.979243 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbp8p\" (UniqueName: \"kubernetes.io/projected/d0784623-5f08-4109-9c7e-0a329210ce07-kube-api-access-vbp8p\") pod \"openstack-operator-controller-manager-5785b7957-7vdgw\" (UID: \"d0784623-5f08-4109-9c7e-0a329210ce07\") " pod="openstack-operators/openstack-operator-controller-manager-5785b7957-7vdgw" Mar 12 13:27:51 crc kubenswrapper[4778]: I0312 13:27:51.010879 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2ql6\" (UniqueName: \"kubernetes.io/projected/6ad9bf9f-7214-44bc-a65d-1dcbf385fc2c-kube-api-access-d2ql6\") pod \"telemetry-operator-controller-manager-6cd66dbd4b-gfv5z\" (UID: \"6ad9bf9f-7214-44bc-a65d-1dcbf385fc2c\") " pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-gfv5z" Mar 12 13:27:51 crc kubenswrapper[4778]: I0312 13:27:51.024359 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-77b6666d85-b7tkm"] Mar 12 13:27:51 crc kubenswrapper[4778]: I0312 13:27:51.068098 4778 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-shf7b"] Mar 12 13:27:51 crc kubenswrapper[4778]: I0312 13:27:51.069065 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-shf7b" Mar 12 13:27:51 crc kubenswrapper[4778]: I0312 13:27:51.072261 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-625vs" Mar 12 13:27:51 crc kubenswrapper[4778]: I0312 13:27:51.072595 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-677c674df7-84mps" Mar 12 13:27:51 crc kubenswrapper[4778]: I0312 13:27:51.080696 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gz2m\" (UniqueName: \"kubernetes.io/projected/ed9b9271-4ae9-440a-9411-15d46267106e-kube-api-access-9gz2m\") pod \"test-operator-controller-manager-5c5cb9c4d7-pcfrz\" (UID: \"ed9b9271-4ae9-440a-9411-15d46267106e\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-pcfrz" Mar 12 13:27:51 crc kubenswrapper[4778]: I0312 13:27:51.080764 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbp8p\" (UniqueName: \"kubernetes.io/projected/d0784623-5f08-4109-9c7e-0a329210ce07-kube-api-access-vbp8p\") pod \"openstack-operator-controller-manager-5785b7957-7vdgw\" (UID: \"d0784623-5f08-4109-9c7e-0a329210ce07\") " pod="openstack-operators/openstack-operator-controller-manager-5785b7957-7vdgw" Mar 12 13:27:51 crc kubenswrapper[4778]: I0312 13:27:51.080809 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d0784623-5f08-4109-9c7e-0a329210ce07-webhook-certs\") pod \"openstack-operator-controller-manager-5785b7957-7vdgw\" (UID: 
\"d0784623-5f08-4109-9c7e-0a329210ce07\") " pod="openstack-operators/openstack-operator-controller-manager-5785b7957-7vdgw" Mar 12 13:27:51 crc kubenswrapper[4778]: I0312 13:27:51.080846 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0784623-5f08-4109-9c7e-0a329210ce07-metrics-certs\") pod \"openstack-operator-controller-manager-5785b7957-7vdgw\" (UID: \"d0784623-5f08-4109-9c7e-0a329210ce07\") " pod="openstack-operators/openstack-operator-controller-manager-5785b7957-7vdgw" Mar 12 13:27:51 crc kubenswrapper[4778]: I0312 13:27:51.080875 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7fsj\" (UniqueName: \"kubernetes.io/projected/8c02ecb8-0e15-4672-823a-c4437ca5bf8c-kube-api-access-j7fsj\") pod \"watcher-operator-controller-manager-6dd88c6f67-2tjsk\" (UID: \"8c02ecb8-0e15-4672-823a-c4437ca5bf8c\") " pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-2tjsk" Mar 12 13:27:51 crc kubenswrapper[4778]: E0312 13:27:51.081140 4778 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 12 13:27:51 crc kubenswrapper[4778]: E0312 13:27:51.081177 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0784623-5f08-4109-9c7e-0a329210ce07-webhook-certs podName:d0784623-5f08-4109-9c7e-0a329210ce07 nodeName:}" failed. No retries permitted until 2026-03-12 13:27:51.58116298 +0000 UTC m=+1090.029858376 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d0784623-5f08-4109-9c7e-0a329210ce07-webhook-certs") pod "openstack-operator-controller-manager-5785b7957-7vdgw" (UID: "d0784623-5f08-4109-9c7e-0a329210ce07") : secret "webhook-server-cert" not found Mar 12 13:27:51 crc kubenswrapper[4778]: E0312 13:27:51.081279 4778 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 12 13:27:51 crc kubenswrapper[4778]: E0312 13:27:51.081301 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0784623-5f08-4109-9c7e-0a329210ce07-metrics-certs podName:d0784623-5f08-4109-9c7e-0a329210ce07 nodeName:}" failed. No retries permitted until 2026-03-12 13:27:51.581295134 +0000 UTC m=+1090.029990530 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d0784623-5f08-4109-9c7e-0a329210ce07-metrics-certs") pod "openstack-operator-controller-manager-5785b7957-7vdgw" (UID: "d0784623-5f08-4109-9c7e-0a329210ce07") : secret "metrics-server-cert" not found Mar 12 13:27:51 crc kubenswrapper[4778]: I0312 13:27:51.089292 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-gfv5z" Mar 12 13:27:51 crc kubenswrapper[4778]: I0312 13:27:51.094732 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-shf7b"] Mar 12 13:27:51 crc kubenswrapper[4778]: I0312 13:27:51.098710 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7fsj\" (UniqueName: \"kubernetes.io/projected/8c02ecb8-0e15-4672-823a-c4437ca5bf8c-kube-api-access-j7fsj\") pod \"watcher-operator-controller-manager-6dd88c6f67-2tjsk\" (UID: \"8c02ecb8-0e15-4672-823a-c4437ca5bf8c\") " pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-2tjsk" Mar 12 13:27:51 crc kubenswrapper[4778]: I0312 13:27:51.112334 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbp8p\" (UniqueName: \"kubernetes.io/projected/d0784623-5f08-4109-9c7e-0a329210ce07-kube-api-access-vbp8p\") pod \"openstack-operator-controller-manager-5785b7957-7vdgw\" (UID: \"d0784623-5f08-4109-9c7e-0a329210ce07\") " pod="openstack-operators/openstack-operator-controller-manager-5785b7957-7vdgw" Mar 12 13:27:51 crc kubenswrapper[4778]: I0312 13:27:51.115205 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gz2m\" (UniqueName: \"kubernetes.io/projected/ed9b9271-4ae9-440a-9411-15d46267106e-kube-api-access-9gz2m\") pod \"test-operator-controller-manager-5c5cb9c4d7-pcfrz\" (UID: \"ed9b9271-4ae9-440a-9411-15d46267106e\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-pcfrz" Mar 12 13:27:51 crc kubenswrapper[4778]: W0312 13:27:51.153686 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode290c1ea_a39d_451e_a24b_17a2b61ff6f0.slice/crio-29b3719bed2f4fb41361bea063a0c860366f43adee3b02d65b29b84f9aa6079b WatchSource:0}: Error finding 
container 29b3719bed2f4fb41361bea063a0c860366f43adee3b02d65b29b84f9aa6079b: Status 404 returned error can't find the container with id 29b3719bed2f4fb41361bea063a0c860366f43adee3b02d65b29b84f9aa6079b Mar 12 13:27:51 crc kubenswrapper[4778]: I0312 13:27:51.533271 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-2tjsk" Mar 12 13:27:51 crc kubenswrapper[4778]: I0312 13:27:51.533742 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-pcfrz" Mar 12 13:27:51 crc kubenswrapper[4778]: I0312 13:27:51.552339 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4f7d316e-6896-4f84-8423-6f79778c1c6b-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7qq9w6\" (UID: \"4f7d316e-6896-4f84-8423-6f79778c1c6b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7qq9w6" Mar 12 13:27:51 crc kubenswrapper[4778]: I0312 13:27:51.552467 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldr22\" (UniqueName: \"kubernetes.io/projected/034f39d8-a33e-4e37-bcde-51fb22debdd1-kube-api-access-ldr22\") pod \"rabbitmq-cluster-operator-manager-668c99d594-shf7b\" (UID: \"034f39d8-a33e-4e37-bcde-51fb22debdd1\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-shf7b" Mar 12 13:27:51 crc kubenswrapper[4778]: E0312 13:27:51.552677 4778 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 13:27:51 crc kubenswrapper[4778]: E0312 13:27:51.552732 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f7d316e-6896-4f84-8423-6f79778c1c6b-cert 
podName:4f7d316e-6896-4f84-8423-6f79778c1c6b nodeName:}" failed. No retries permitted until 2026-03-12 13:27:52.552714644 +0000 UTC m=+1091.001410030 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4f7d316e-6896-4f84-8423-6f79778c1c6b-cert") pod "openstack-baremetal-operator-controller-manager-557ccf57b7qq9w6" (UID: "4f7d316e-6896-4f84-8423-6f79778c1c6b") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 13:27:51 crc kubenswrapper[4778]: I0312 13:27:51.637850 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d9d6b584d-4jgt8"] Mar 12 13:27:51 crc kubenswrapper[4778]: I0312 13:27:51.659718 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0784623-5f08-4109-9c7e-0a329210ce07-metrics-certs\") pod \"openstack-operator-controller-manager-5785b7957-7vdgw\" (UID: \"d0784623-5f08-4109-9c7e-0a329210ce07\") " pod="openstack-operators/openstack-operator-controller-manager-5785b7957-7vdgw" Mar 12 13:27:51 crc kubenswrapper[4778]: I0312 13:27:51.660152 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldr22\" (UniqueName: \"kubernetes.io/projected/034f39d8-a33e-4e37-bcde-51fb22debdd1-kube-api-access-ldr22\") pod \"rabbitmq-cluster-operator-manager-668c99d594-shf7b\" (UID: \"034f39d8-a33e-4e37-bcde-51fb22debdd1\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-shf7b" Mar 12 13:27:51 crc kubenswrapper[4778]: I0312 13:27:51.660298 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d0784623-5f08-4109-9c7e-0a329210ce07-webhook-certs\") pod \"openstack-operator-controller-manager-5785b7957-7vdgw\" (UID: \"d0784623-5f08-4109-9c7e-0a329210ce07\") " 
pod="openstack-operators/openstack-operator-controller-manager-5785b7957-7vdgw" Mar 12 13:27:51 crc kubenswrapper[4778]: E0312 13:27:51.659860 4778 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 12 13:27:51 crc kubenswrapper[4778]: E0312 13:27:51.661241 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0784623-5f08-4109-9c7e-0a329210ce07-metrics-certs podName:d0784623-5f08-4109-9c7e-0a329210ce07 nodeName:}" failed. No retries permitted until 2026-03-12 13:27:52.660817118 +0000 UTC m=+1091.109512514 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d0784623-5f08-4109-9c7e-0a329210ce07-metrics-certs") pod "openstack-operator-controller-manager-5785b7957-7vdgw" (UID: "d0784623-5f08-4109-9c7e-0a329210ce07") : secret "metrics-server-cert" not found Mar 12 13:27:51 crc kubenswrapper[4778]: E0312 13:27:51.661393 4778 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 12 13:27:51 crc kubenswrapper[4778]: E0312 13:27:51.661423 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0784623-5f08-4109-9c7e-0a329210ce07-webhook-certs podName:d0784623-5f08-4109-9c7e-0a329210ce07 nodeName:}" failed. No retries permitted until 2026-03-12 13:27:52.661412925 +0000 UTC m=+1091.110108311 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d0784623-5f08-4109-9c7e-0a329210ce07-webhook-certs") pod "openstack-operator-controller-manager-5785b7957-7vdgw" (UID: "d0784623-5f08-4109-9c7e-0a329210ce07") : secret "webhook-server-cert" not found Mar 12 13:27:51 crc kubenswrapper[4778]: I0312 13:27:51.834136 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldr22\" (UniqueName: \"kubernetes.io/projected/034f39d8-a33e-4e37-bcde-51fb22debdd1-kube-api-access-ldr22\") pod \"rabbitmq-cluster-operator-manager-668c99d594-shf7b\" (UID: \"034f39d8-a33e-4e37-bcde-51fb22debdd1\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-shf7b" Mar 12 13:27:51 crc kubenswrapper[4778]: I0312 13:27:51.858255 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-b7tkm" event={"ID":"e290c1ea-a39d-451e-a24b-17a2b61ff6f0","Type":"ContainerStarted","Data":"29b3719bed2f4fb41361bea063a0c860366f43adee3b02d65b29b84f9aa6079b"} Mar 12 13:27:51 crc kubenswrapper[4778]: I0312 13:27:51.861197 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-6h2c2" event={"ID":"ffb8a1f4-4533-4368-a900-95d37fe1d3ad","Type":"ContainerStarted","Data":"6add4468a8549d66173c2ee7bf0009205051ad85b83c631387eac0d7eac012be"} Mar 12 13:27:51 crc kubenswrapper[4778]: I0312 13:27:51.863319 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-xm4cc" event={"ID":"c8818ac0-af8b-42c9-a923-425fe79ed203","Type":"ContainerStarted","Data":"dbd1b8e4778d42734e0b1ca1d10ae31267885adbdbe5b26c477fe6eac22e7012"} Mar 12 13:27:51 crc kubenswrapper[4778]: I0312 13:27:51.886879 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5964f64c48-gknp2"] Mar 12 13:27:51 
crc kubenswrapper[4778]: W0312 13:27:51.891936 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c2bf703_ecc1_4bb1_aa03_a64e55dfdb71.slice/crio-afc92afd611cd768f64e952660cb89d76a0eb88173655f02b34c045043ee1d51 WatchSource:0}: Error finding container afc92afd611cd768f64e952660cb89d76a0eb88173655f02b34c045043ee1d51: Status 404 returned error can't find the container with id afc92afd611cd768f64e952660cb89d76a0eb88173655f02b34c045043ee1d51 Mar 12 13:27:52 crc kubenswrapper[4778]: I0312 13:27:52.274870 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-shf7b" Mar 12 13:27:52 crc kubenswrapper[4778]: I0312 13:27:52.554778 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4f7d316e-6896-4f84-8423-6f79778c1c6b-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7qq9w6\" (UID: \"4f7d316e-6896-4f84-8423-6f79778c1c6b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7qq9w6" Mar 12 13:27:52 crc kubenswrapper[4778]: E0312 13:27:52.555013 4778 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 13:27:52 crc kubenswrapper[4778]: E0312 13:27:52.555060 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f7d316e-6896-4f84-8423-6f79778c1c6b-cert podName:4f7d316e-6896-4f84-8423-6f79778c1c6b nodeName:}" failed. No retries permitted until 2026-03-12 13:27:54.555046181 +0000 UTC m=+1093.003741577 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4f7d316e-6896-4f84-8423-6f79778c1c6b-cert") pod "openstack-baremetal-operator-controller-manager-557ccf57b7qq9w6" (UID: "4f7d316e-6896-4f84-8423-6f79778c1c6b") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 13:27:52 crc kubenswrapper[4778]: I0312 13:27:52.669434 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0784623-5f08-4109-9c7e-0a329210ce07-metrics-certs\") pod \"openstack-operator-controller-manager-5785b7957-7vdgw\" (UID: \"d0784623-5f08-4109-9c7e-0a329210ce07\") " pod="openstack-operators/openstack-operator-controller-manager-5785b7957-7vdgw" Mar 12 13:27:52 crc kubenswrapper[4778]: I0312 13:27:52.669783 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d0784623-5f08-4109-9c7e-0a329210ce07-webhook-certs\") pod \"openstack-operator-controller-manager-5785b7957-7vdgw\" (UID: \"d0784623-5f08-4109-9c7e-0a329210ce07\") " pod="openstack-operators/openstack-operator-controller-manager-5785b7957-7vdgw" Mar 12 13:27:52 crc kubenswrapper[4778]: E0312 13:27:52.670124 4778 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 12 13:27:52 crc kubenswrapper[4778]: E0312 13:27:52.670226 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0784623-5f08-4109-9c7e-0a329210ce07-webhook-certs podName:d0784623-5f08-4109-9c7e-0a329210ce07 nodeName:}" failed. No retries permitted until 2026-03-12 13:27:54.670204217 +0000 UTC m=+1093.118899613 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d0784623-5f08-4109-9c7e-0a329210ce07-webhook-certs") pod "openstack-operator-controller-manager-5785b7957-7vdgw" (UID: "d0784623-5f08-4109-9c7e-0a329210ce07") : secret "webhook-server-cert" not found Mar 12 13:27:52 crc kubenswrapper[4778]: E0312 13:27:52.670820 4778 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 12 13:27:52 crc kubenswrapper[4778]: E0312 13:27:52.670865 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0784623-5f08-4109-9c7e-0a329210ce07-metrics-certs podName:d0784623-5f08-4109-9c7e-0a329210ce07 nodeName:}" failed. No retries permitted until 2026-03-12 13:27:54.670850555 +0000 UTC m=+1093.119545951 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d0784623-5f08-4109-9c7e-0a329210ce07-metrics-certs") pod "openstack-operator-controller-manager-5785b7957-7vdgw" (UID: "d0784623-5f08-4109-9c7e-0a329210ce07") : secret "metrics-server-cert" not found Mar 12 13:27:52 crc kubenswrapper[4778]: I0312 13:27:52.689129 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66d56f6ff4-9n6jv"] Mar 12 13:27:52 crc kubenswrapper[4778]: I0312 13:27:52.988321 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-9n6jv" event={"ID":"ad531191-d7c5-4ef6-9929-3a5869751d98","Type":"ContainerStarted","Data":"03fd176a7780ae59f406e8483eca6a0c49fd1aa0b68aa0a58f0ac7631dbdd75a"} Mar 12 13:27:52 crc kubenswrapper[4778]: I0312 13:27:52.989828 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-gknp2" 
event={"ID":"db7f6b97-2903-44bf-803f-c00c337400b9","Type":"ContainerStarted","Data":"462202b2866624ca0ac8bfe0fcd0546313eca0b1abcde399dfb254602bc2653a"} Mar 12 13:27:52 crc kubenswrapper[4778]: I0312 13:27:52.992337 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-4jgt8" event={"ID":"4c2bf703-ecc1-4bb1-aa03-a64e55dfdb71","Type":"ContainerStarted","Data":"afc92afd611cd768f64e952660cb89d76a0eb88173655f02b34c045043ee1d51"} Mar 12 13:27:53 crc kubenswrapper[4778]: I0312 13:27:53.022768 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/02bc06ca-f4e6-4fde-bd5d-882714d9652c-cert\") pod \"infra-operator-controller-manager-5995f4446f-5d6qz\" (UID: \"02bc06ca-f4e6-4fde-bd5d-882714d9652c\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-5d6qz" Mar 12 13:27:53 crc kubenswrapper[4778]: E0312 13:27:53.022941 4778 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 12 13:27:53 crc kubenswrapper[4778]: E0312 13:27:53.022985 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/02bc06ca-f4e6-4fde-bd5d-882714d9652c-cert podName:02bc06ca-f4e6-4fde-bd5d-882714d9652c nodeName:}" failed. No retries permitted until 2026-03-12 13:27:57.022971352 +0000 UTC m=+1095.471666748 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/02bc06ca-f4e6-4fde-bd5d-882714d9652c-cert") pod "infra-operator-controller-manager-5995f4446f-5d6qz" (UID: "02bc06ca-f4e6-4fde-bd5d-882714d9652c") : secret "infra-operator-webhook-server-cert" not found Mar 12 13:27:53 crc kubenswrapper[4778]: I0312 13:27:53.103050 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6bbb499bbc-qb8s8"] Mar 12 13:27:53 crc kubenswrapper[4778]: W0312 13:27:53.334396 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98a4cfbd_3037_48b5_9047_5d574dcc0aca.slice/crio-d86fe0c7300172809b88b6373a2c044ec040f3664871890f725be0ab41c29b5d WatchSource:0}: Error finding container d86fe0c7300172809b88b6373a2c044ec040f3664871890f725be0ab41c29b5d: Status 404 returned error can't find the container with id d86fe0c7300172809b88b6373a2c044ec040f3664871890f725be0ab41c29b5d Mar 12 13:27:53 crc kubenswrapper[4778]: I0312 13:27:53.796030 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-686d5f9fbd-vv9rc"] Mar 12 13:27:53 crc kubenswrapper[4778]: I0312 13:27:53.976292 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-684f77d66d-7dxdh"] Mar 12 13:27:53 crc kubenswrapper[4778]: I0312 13:27:53.990466 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-68f45f9d9f-pn8tk"] Mar 12 13:27:53 crc kubenswrapper[4778]: I0312 13:27:53.996693 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-776c5696bf-dd2ft"] Mar 12 13:27:54 crc kubenswrapper[4778]: I0312 13:27:54.035937 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-7dxdh" event={"ID":"7e02c37f-b9af-46c9-a743-03ead9b060db","Type":"ContainerStarted","Data":"bb467c1fe68b7f46d0ab121ad5687ab6f4dfc931fbf41a29126eda47416b8c3b"} Mar 12 13:27:54 crc kubenswrapper[4778]: I0312 13:27:54.067431 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-qb8s8" event={"ID":"98a4cfbd-3037-48b5-9047-5d574dcc0aca","Type":"ContainerStarted","Data":"d86fe0c7300172809b88b6373a2c044ec040f3664871890f725be0ab41c29b5d"} Mar 12 13:27:54 crc kubenswrapper[4778]: I0312 13:27:54.525660 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-cdgg9"] Mar 12 13:27:54 crc kubenswrapper[4778]: W0312 13:27:54.545480 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a01d06c_be6f_45de_a22d_c8f1058a3a84.slice/crio-893de7cce2036fa6907343b24e58e80f6d7c444f81fe16929220418fbe2057d2 WatchSource:0}: Error finding container 893de7cce2036fa6907343b24e58e80f6d7c444f81fe16929220418fbe2057d2: Status 404 returned error can't find the container with id 893de7cce2036fa6907343b24e58e80f6d7c444f81fe16929220418fbe2057d2 Mar 12 13:27:54 crc kubenswrapper[4778]: I0312 13:27:54.705982 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4f7d316e-6896-4f84-8423-6f79778c1c6b-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7qq9w6\" (UID: \"4f7d316e-6896-4f84-8423-6f79778c1c6b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7qq9w6" Mar 12 13:27:54 crc kubenswrapper[4778]: I0312 13:27:54.706027 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/d0784623-5f08-4109-9c7e-0a329210ce07-webhook-certs\") pod \"openstack-operator-controller-manager-5785b7957-7vdgw\" (UID: \"d0784623-5f08-4109-9c7e-0a329210ce07\") " pod="openstack-operators/openstack-operator-controller-manager-5785b7957-7vdgw" Mar 12 13:27:54 crc kubenswrapper[4778]: I0312 13:27:54.706066 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0784623-5f08-4109-9c7e-0a329210ce07-metrics-certs\") pod \"openstack-operator-controller-manager-5785b7957-7vdgw\" (UID: \"d0784623-5f08-4109-9c7e-0a329210ce07\") " pod="openstack-operators/openstack-operator-controller-manager-5785b7957-7vdgw" Mar 12 13:27:54 crc kubenswrapper[4778]: E0312 13:27:54.706200 4778 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 12 13:27:54 crc kubenswrapper[4778]: E0312 13:27:54.706594 4778 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 13:27:54 crc kubenswrapper[4778]: E0312 13:27:54.706634 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f7d316e-6896-4f84-8423-6f79778c1c6b-cert podName:4f7d316e-6896-4f84-8423-6f79778c1c6b nodeName:}" failed. No retries permitted until 2026-03-12 13:27:58.706620866 +0000 UTC m=+1097.155316262 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4f7d316e-6896-4f84-8423-6f79778c1c6b-cert") pod "openstack-baremetal-operator-controller-manager-557ccf57b7qq9w6" (UID: "4f7d316e-6896-4f84-8423-6f79778c1c6b") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 13:27:54 crc kubenswrapper[4778]: E0312 13:27:54.706905 4778 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 12 13:27:54 crc kubenswrapper[4778]: E0312 13:27:54.706930 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0784623-5f08-4109-9c7e-0a329210ce07-webhook-certs podName:d0784623-5f08-4109-9c7e-0a329210ce07 nodeName:}" failed. No retries permitted until 2026-03-12 13:27:58.706922515 +0000 UTC m=+1097.155617911 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d0784623-5f08-4109-9c7e-0a329210ce07-webhook-certs") pod "openstack-operator-controller-manager-5785b7957-7vdgw" (UID: "d0784623-5f08-4109-9c7e-0a329210ce07") : secret "webhook-server-cert" not found Mar 12 13:27:54 crc kubenswrapper[4778]: E0312 13:27:54.706952 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0784623-5f08-4109-9c7e-0a329210ce07-metrics-certs podName:d0784623-5f08-4109-9c7e-0a329210ce07 nodeName:}" failed. No retries permitted until 2026-03-12 13:27:58.706945356 +0000 UTC m=+1097.155640752 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d0784623-5f08-4109-9c7e-0a329210ce07-metrics-certs") pod "openstack-operator-controller-manager-5785b7957-7vdgw" (UID: "d0784623-5f08-4109-9c7e-0a329210ce07") : secret "metrics-server-cert" not found Mar 12 13:27:54 crc kubenswrapper[4778]: I0312 13:27:54.802900 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-574d45c66c-wvpf8"] Mar 12 13:27:54 crc kubenswrapper[4778]: W0312 13:27:54.822957 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52524252_25bd_49e5_822e_3d4668aff2f9.slice/crio-a3262b7c97c85db32c9ca2463df50a7754cb0cf38f9f4881e25a720c68d6db8e WatchSource:0}: Error finding container a3262b7c97c85db32c9ca2463df50a7754cb0cf38f9f4881e25a720c68d6db8e: Status 404 returned error can't find the container with id a3262b7c97c85db32c9ca2463df50a7754cb0cf38f9f4881e25a720c68d6db8e Mar 12 13:27:54 crc kubenswrapper[4778]: I0312 13:27:54.835451 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bbc5b68f9-bbgmb"] Mar 12 13:27:54 crc kubenswrapper[4778]: I0312 13:27:54.858776 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-658d4cdd5-jlbft"] Mar 12 13:27:54 crc kubenswrapper[4778]: I0312 13:27:54.877085 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-gfv5z"] Mar 12 13:27:54 crc kubenswrapper[4778]: W0312 13:27:54.885309 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d577800_0ee1_4fe5_a7fb_8794fb8c4c6f.slice/crio-1f781822864f6ccaee041de62ccaae7e12f9cfdd25380060e8173a78f59408eb WatchSource:0}: Error finding container 
1f781822864f6ccaee041de62ccaae7e12f9cfdd25380060e8173a78f59408eb: Status 404 returned error can't find the container with id 1f781822864f6ccaee041de62ccaae7e12f9cfdd25380060e8173a78f59408eb Mar 12 13:27:54 crc kubenswrapper[4778]: W0312 13:27:54.893361 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ad9bf9f_7214_44bc_a65d_1dcbf385fc2c.slice/crio-66a35a20b3f89741667ee8b69c74ea7489e9678aad111e22ae5c6b16591bf61c WatchSource:0}: Error finding container 66a35a20b3f89741667ee8b69c74ea7489e9678aad111e22ae5c6b16591bf61c: Status 404 returned error can't find the container with id 66a35a20b3f89741667ee8b69c74ea7489e9678aad111e22ae5c6b16591bf61c Mar 12 13:27:55 crc kubenswrapper[4778]: I0312 13:27:55.189152 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-686d5f9fbd-vv9rc" event={"ID":"d7288cc6-4247-4d03-bd37-9862243bf613","Type":"ContainerStarted","Data":"15b7f4fe8a5cfc1726f6b38dfbf4ec237beccfbdf2b5296e977aa567cf3ea730"} Mar 12 13:27:55 crc kubenswrapper[4778]: I0312 13:27:55.210321 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-wvpf8" event={"ID":"52524252-25bd-49e5-822e-3d4668aff2f9","Type":"ContainerStarted","Data":"a3262b7c97c85db32c9ca2463df50a7754cb0cf38f9f4881e25a720c68d6db8e"} Mar 12 13:27:55 crc kubenswrapper[4778]: I0312 13:27:55.214713 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-pcfrz"] Mar 12 13:27:55 crc kubenswrapper[4778]: I0312 13:27:55.215124 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-cdgg9" event={"ID":"1a01d06c-be6f-45de-a22d-c8f1058a3a84","Type":"ContainerStarted","Data":"893de7cce2036fa6907343b24e58e80f6d7c444f81fe16929220418fbe2057d2"} Mar 12 13:27:55 crc 
kubenswrapper[4778]: I0312 13:27:55.220687 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-dd2ft" event={"ID":"076835c9-352b-4e40-80c4-3bce3bb80594","Type":"ContainerStarted","Data":"4c641f62d7c300472a2c0a75691207f30010390a30bbf20056fede889e396c80"} Mar 12 13:27:55 crc kubenswrapper[4778]: W0312 13:27:55.240755 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded9b9271_4ae9_440a_9411_15d46267106e.slice/crio-34158edfb6803522b4b6384fa1afdafb1dff9bfbaeeaa1e79de5c066d21cc548 WatchSource:0}: Error finding container 34158edfb6803522b4b6384fa1afdafb1dff9bfbaeeaa1e79de5c066d21cc548: Status 404 returned error can't find the container with id 34158edfb6803522b4b6384fa1afdafb1dff9bfbaeeaa1e79de5c066d21cc548 Mar 12 13:27:55 crc kubenswrapper[4778]: I0312 13:27:55.244995 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-pn8tk" event={"ID":"5e38a4fd-95f8-437b-923b-eca33b1387e6","Type":"ContainerStarted","Data":"57c55a7e15d470c6379374ab81f4e9c300b6344a7c41c1839187c215c09c56a1"} Mar 12 13:27:55 crc kubenswrapper[4778]: I0312 13:27:55.251911 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6dd88c6f67-2tjsk"] Mar 12 13:27:55 crc kubenswrapper[4778]: I0312 13:27:55.253715 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-bbgmb" event={"ID":"8d38fd7e-6fa1-4b0c-9c82-9c57290c7837","Type":"ContainerStarted","Data":"f9ab4ff66ea9e47b1e89bf37ec2b3235d3ef42604bc55019cfd4909c095e9464"} Mar 12 13:27:55 crc kubenswrapper[4778]: W0312 13:27:55.257636 4778 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c02ecb8_0e15_4672_823a_c4437ca5bf8c.slice/crio-706f249840e7b1d704766e2b3527abb545d3f6ae1d202df51e694140eef30e9b WatchSource:0}: Error finding container 706f249840e7b1d704766e2b3527abb545d3f6ae1d202df51e694140eef30e9b: Status 404 returned error can't find the container with id 706f249840e7b1d704766e2b3527abb545d3f6ae1d202df51e694140eef30e9b Mar 12 13:27:55 crc kubenswrapper[4778]: I0312 13:27:55.260576 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-shf7b"] Mar 12 13:27:55 crc kubenswrapper[4778]: I0312 13:27:55.268864 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-jlbft" event={"ID":"2d577800-0ee1-4fe5-a7fb-8794fb8c4c6f","Type":"ContainerStarted","Data":"1f781822864f6ccaee041de62ccaae7e12f9cfdd25380060e8173a78f59408eb"} Mar 12 13:27:55 crc kubenswrapper[4778]: I0312 13:27:55.270562 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-677c674df7-84mps"] Mar 12 13:27:55 crc kubenswrapper[4778]: I0312 13:27:55.273962 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-gfv5z" event={"ID":"6ad9bf9f-7214-44bc-a65d-1dcbf385fc2c","Type":"ContainerStarted","Data":"66a35a20b3f89741667ee8b69c74ea7489e9678aad111e22ae5c6b16591bf61c"} Mar 12 13:27:55 crc kubenswrapper[4778]: W0312 13:27:55.316913 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64a36384_f2e6_4077_b2ca_de2a6ce6ea06.slice/crio-93919bb9cce297fa177b1164e22870e7a66bf860567fe714205d8702b80bd9a2 WatchSource:0}: Error finding container 93919bb9cce297fa177b1164e22870e7a66bf860567fe714205d8702b80bd9a2: Status 404 returned error can't find the container with id 
93919bb9cce297fa177b1164e22870e7a66bf860567fe714205d8702b80bd9a2 Mar 12 13:27:55 crc kubenswrapper[4778]: W0312 13:27:55.338795 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod034f39d8_a33e_4e37_bcde_51fb22debdd1.slice/crio-7dba3190ef6cfd18ad725f320f6050312ce06ba17d117d060eed0d71ed1adc55 WatchSource:0}: Error finding container 7dba3190ef6cfd18ad725f320f6050312ce06ba17d117d060eed0d71ed1adc55: Status 404 returned error can't find the container with id 7dba3190ef6cfd18ad725f320f6050312ce06ba17d117d060eed0d71ed1adc55 Mar 12 13:27:56 crc kubenswrapper[4778]: I0312 13:27:56.388666 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-shf7b" event={"ID":"034f39d8-a33e-4e37-bcde-51fb22debdd1","Type":"ContainerStarted","Data":"7dba3190ef6cfd18ad725f320f6050312ce06ba17d117d060eed0d71ed1adc55"} Mar 12 13:27:56 crc kubenswrapper[4778]: I0312 13:27:56.417426 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-2tjsk" event={"ID":"8c02ecb8-0e15-4672-823a-c4437ca5bf8c","Type":"ContainerStarted","Data":"706f249840e7b1d704766e2b3527abb545d3f6ae1d202df51e694140eef30e9b"} Mar 12 13:27:56 crc kubenswrapper[4778]: I0312 13:27:56.439107 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-pcfrz" event={"ID":"ed9b9271-4ae9-440a-9411-15d46267106e","Type":"ContainerStarted","Data":"34158edfb6803522b4b6384fa1afdafb1dff9bfbaeeaa1e79de5c066d21cc548"} Mar 12 13:27:56 crc kubenswrapper[4778]: I0312 13:27:56.440534 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-677c674df7-84mps" event={"ID":"64a36384-f2e6-4077-b2ca-de2a6ce6ea06","Type":"ContainerStarted","Data":"93919bb9cce297fa177b1164e22870e7a66bf860567fe714205d8702b80bd9a2"} Mar 12 
13:27:57 crc kubenswrapper[4778]: I0312 13:27:57.084816 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/02bc06ca-f4e6-4fde-bd5d-882714d9652c-cert\") pod \"infra-operator-controller-manager-5995f4446f-5d6qz\" (UID: \"02bc06ca-f4e6-4fde-bd5d-882714d9652c\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-5d6qz" Mar 12 13:27:57 crc kubenswrapper[4778]: E0312 13:27:57.085054 4778 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 12 13:27:57 crc kubenswrapper[4778]: E0312 13:27:57.085114 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/02bc06ca-f4e6-4fde-bd5d-882714d9652c-cert podName:02bc06ca-f4e6-4fde-bd5d-882714d9652c nodeName:}" failed. No retries permitted until 2026-03-12 13:28:05.085096695 +0000 UTC m=+1103.533792091 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/02bc06ca-f4e6-4fde-bd5d-882714d9652c-cert") pod "infra-operator-controller-manager-5995f4446f-5d6qz" (UID: "02bc06ca-f4e6-4fde-bd5d-882714d9652c") : secret "infra-operator-webhook-server-cert" not found Mar 12 13:27:59 crc kubenswrapper[4778]: I0312 13:27:59.314409 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0784623-5f08-4109-9c7e-0a329210ce07-metrics-certs\") pod \"openstack-operator-controller-manager-5785b7957-7vdgw\" (UID: \"d0784623-5f08-4109-9c7e-0a329210ce07\") " pod="openstack-operators/openstack-operator-controller-manager-5785b7957-7vdgw" Mar 12 13:27:59 crc kubenswrapper[4778]: I0312 13:27:59.315067 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4f7d316e-6896-4f84-8423-6f79778c1c6b-cert\") pod 
\"openstack-baremetal-operator-controller-manager-557ccf57b7qq9w6\" (UID: \"4f7d316e-6896-4f84-8423-6f79778c1c6b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7qq9w6" Mar 12 13:27:59 crc kubenswrapper[4778]: I0312 13:27:59.315121 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d0784623-5f08-4109-9c7e-0a329210ce07-webhook-certs\") pod \"openstack-operator-controller-manager-5785b7957-7vdgw\" (UID: \"d0784623-5f08-4109-9c7e-0a329210ce07\") " pod="openstack-operators/openstack-operator-controller-manager-5785b7957-7vdgw" Mar 12 13:27:59 crc kubenswrapper[4778]: E0312 13:27:59.343103 4778 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 13:27:59 crc kubenswrapper[4778]: E0312 13:27:59.343170 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f7d316e-6896-4f84-8423-6f79778c1c6b-cert podName:4f7d316e-6896-4f84-8423-6f79778c1c6b nodeName:}" failed. No retries permitted until 2026-03-12 13:28:07.343153349 +0000 UTC m=+1105.791848745 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4f7d316e-6896-4f84-8423-6f79778c1c6b-cert") pod "openstack-baremetal-operator-controller-manager-557ccf57b7qq9w6" (UID: "4f7d316e-6896-4f84-8423-6f79778c1c6b") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 12 13:27:59 crc kubenswrapper[4778]: E0312 13:27:59.353177 4778 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 12 13:27:59 crc kubenswrapper[4778]: E0312 13:27:59.353470 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0784623-5f08-4109-9c7e-0a329210ce07-metrics-certs podName:d0784623-5f08-4109-9c7e-0a329210ce07 nodeName:}" failed. No retries permitted until 2026-03-12 13:28:07.353445952 +0000 UTC m=+1105.802141348 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d0784623-5f08-4109-9c7e-0a329210ce07-metrics-certs") pod "openstack-operator-controller-manager-5785b7957-7vdgw" (UID: "d0784623-5f08-4109-9c7e-0a329210ce07") : secret "metrics-server-cert" not found Mar 12 13:27:59 crc kubenswrapper[4778]: E0312 13:27:59.353200 4778 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 12 13:27:59 crc kubenswrapper[4778]: E0312 13:27:59.353700 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0784623-5f08-4109-9c7e-0a329210ce07-webhook-certs podName:d0784623-5f08-4109-9c7e-0a329210ce07 nodeName:}" failed. No retries permitted until 2026-03-12 13:28:07.353686899 +0000 UTC m=+1105.802382295 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d0784623-5f08-4109-9c7e-0a329210ce07-webhook-certs") pod "openstack-operator-controller-manager-5785b7957-7vdgw" (UID: "d0784623-5f08-4109-9c7e-0a329210ce07") : secret "webhook-server-cert" not found Mar 12 13:28:00 crc kubenswrapper[4778]: I0312 13:28:00.126907 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555368-d2cpg"] Mar 12 13:28:00 crc kubenswrapper[4778]: I0312 13:28:00.127858 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555368-d2cpg" Mar 12 13:28:00 crc kubenswrapper[4778]: I0312 13:28:00.129998 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 13:28:00 crc kubenswrapper[4778]: I0312 13:28:00.130222 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 13:28:00 crc kubenswrapper[4778]: I0312 13:28:00.130986 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 13:28:00 crc kubenswrapper[4778]: I0312 13:28:00.143914 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555368-d2cpg"] Mar 12 13:28:00 crc kubenswrapper[4778]: I0312 13:28:00.225595 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m2kn\" (UniqueName: \"kubernetes.io/projected/20d587ee-7b57-4b99-a800-c6d46322d799-kube-api-access-9m2kn\") pod \"auto-csr-approver-29555368-d2cpg\" (UID: \"20d587ee-7b57-4b99-a800-c6d46322d799\") " pod="openshift-infra/auto-csr-approver-29555368-d2cpg" Mar 12 13:28:00 crc kubenswrapper[4778]: I0312 13:28:00.338303 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9m2kn\" (UniqueName: 
\"kubernetes.io/projected/20d587ee-7b57-4b99-a800-c6d46322d799-kube-api-access-9m2kn\") pod \"auto-csr-approver-29555368-d2cpg\" (UID: \"20d587ee-7b57-4b99-a800-c6d46322d799\") " pod="openshift-infra/auto-csr-approver-29555368-d2cpg" Mar 12 13:28:00 crc kubenswrapper[4778]: I0312 13:28:00.371695 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9m2kn\" (UniqueName: \"kubernetes.io/projected/20d587ee-7b57-4b99-a800-c6d46322d799-kube-api-access-9m2kn\") pod \"auto-csr-approver-29555368-d2cpg\" (UID: \"20d587ee-7b57-4b99-a800-c6d46322d799\") " pod="openshift-infra/auto-csr-approver-29555368-d2cpg" Mar 12 13:28:00 crc kubenswrapper[4778]: I0312 13:28:00.530162 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555368-d2cpg" Mar 12 13:28:01 crc kubenswrapper[4778]: I0312 13:28:01.680903 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555368-d2cpg"] Mar 12 13:28:01 crc kubenswrapper[4778]: I0312 13:28:01.855517 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555368-d2cpg" event={"ID":"20d587ee-7b57-4b99-a800-c6d46322d799","Type":"ContainerStarted","Data":"de84ea928c21e7d851428ee60843720d656101db59c03c12935d28595c4b6525"} Mar 12 13:28:05 crc kubenswrapper[4778]: I0312 13:28:05.104723 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/02bc06ca-f4e6-4fde-bd5d-882714d9652c-cert\") pod \"infra-operator-controller-manager-5995f4446f-5d6qz\" (UID: \"02bc06ca-f4e6-4fde-bd5d-882714d9652c\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-5d6qz" Mar 12 13:28:05 crc kubenswrapper[4778]: I0312 13:28:05.121658 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/02bc06ca-f4e6-4fde-bd5d-882714d9652c-cert\") pod 
\"infra-operator-controller-manager-5995f4446f-5d6qz\" (UID: \"02bc06ca-f4e6-4fde-bd5d-882714d9652c\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-5d6qz" Mar 12 13:28:05 crc kubenswrapper[4778]: I0312 13:28:05.156749 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-5d6qz" Mar 12 13:28:07 crc kubenswrapper[4778]: I0312 13:28:07.379310 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4f7d316e-6896-4f84-8423-6f79778c1c6b-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7qq9w6\" (UID: \"4f7d316e-6896-4f84-8423-6f79778c1c6b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7qq9w6" Mar 12 13:28:07 crc kubenswrapper[4778]: I0312 13:28:07.379866 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d0784623-5f08-4109-9c7e-0a329210ce07-webhook-certs\") pod \"openstack-operator-controller-manager-5785b7957-7vdgw\" (UID: \"d0784623-5f08-4109-9c7e-0a329210ce07\") " pod="openstack-operators/openstack-operator-controller-manager-5785b7957-7vdgw" Mar 12 13:28:07 crc kubenswrapper[4778]: I0312 13:28:07.379915 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0784623-5f08-4109-9c7e-0a329210ce07-metrics-certs\") pod \"openstack-operator-controller-manager-5785b7957-7vdgw\" (UID: \"d0784623-5f08-4109-9c7e-0a329210ce07\") " pod="openstack-operators/openstack-operator-controller-manager-5785b7957-7vdgw" Mar 12 13:28:07 crc kubenswrapper[4778]: E0312 13:28:07.380815 4778 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 12 13:28:07 crc kubenswrapper[4778]: E0312 13:28:07.382344 4778 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0784623-5f08-4109-9c7e-0a329210ce07-webhook-certs podName:d0784623-5f08-4109-9c7e-0a329210ce07 nodeName:}" failed. No retries permitted until 2026-03-12 13:28:23.382294431 +0000 UTC m=+1121.830989847 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d0784623-5f08-4109-9c7e-0a329210ce07-webhook-certs") pod "openstack-operator-controller-manager-5785b7957-7vdgw" (UID: "d0784623-5f08-4109-9c7e-0a329210ce07") : secret "webhook-server-cert" not found Mar 12 13:28:07 crc kubenswrapper[4778]: I0312 13:28:07.393761 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0784623-5f08-4109-9c7e-0a329210ce07-metrics-certs\") pod \"openstack-operator-controller-manager-5785b7957-7vdgw\" (UID: \"d0784623-5f08-4109-9c7e-0a329210ce07\") " pod="openstack-operators/openstack-operator-controller-manager-5785b7957-7vdgw" Mar 12 13:28:07 crc kubenswrapper[4778]: I0312 13:28:07.440814 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4f7d316e-6896-4f84-8423-6f79778c1c6b-cert\") pod \"openstack-baremetal-operator-controller-manager-557ccf57b7qq9w6\" (UID: \"4f7d316e-6896-4f84-8423-6f79778c1c6b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7qq9w6" Mar 12 13:28:07 crc kubenswrapper[4778]: I0312 13:28:07.684712 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7qq9w6" Mar 12 13:28:12 crc kubenswrapper[4778]: E0312 13:28:12.716062 4778 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:571f369855b0891a2b14e54a4c1c5ae2fbbd5de4c8fddd48e81033aad4b26423" Mar 12 13:28:12 crc kubenswrapper[4778]: E0312 13:28:12.716714 4778 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:571f369855b0891a2b14e54a4c1c5ae2fbbd5de4c8fddd48e81033aad4b26423,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dvw2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-677bd678f7-6h2c2_openstack-operators(ffb8a1f4-4533-4368-a900-95d37fe1d3ad): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 13:28:12 crc kubenswrapper[4778]: E0312 13:28:12.718227 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-6h2c2" podUID="ffb8a1f4-4533-4368-a900-95d37fe1d3ad" Mar 12 13:28:13 crc kubenswrapper[4778]: E0312 13:28:13.068211 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:571f369855b0891a2b14e54a4c1c5ae2fbbd5de4c8fddd48e81033aad4b26423\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-6h2c2" podUID="ffb8a1f4-4533-4368-a900-95d37fe1d3ad" Mar 12 13:28:17 crc kubenswrapper[4778]: E0312 13:28:17.218867 4778 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:d9bffb59bb7f9f0a6cb103c3986fd2c1bdb13ce6349c39427a690858cbd754d6" Mar 12 13:28:17 crc kubenswrapper[4778]: E0312 13:28:17.219445 4778 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:d9bffb59bb7f9f0a6cb103c3986fd2c1bdb13ce6349c39427a690858cbd754d6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4r2pb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-6d9d6b584d-4jgt8_openstack-operators(4c2bf703-ecc1-4bb1-aa03-a64e55dfdb71): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 13:28:17 crc kubenswrapper[4778]: E0312 13:28:17.220627 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-4jgt8" podUID="4c2bf703-ecc1-4bb1-aa03-a64e55dfdb71" Mar 12 13:28:18 crc kubenswrapper[4778]: E0312 13:28:18.023385 4778 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/heat-operator@sha256:6c9aef12f50be0b974f5e35b0d69303e7f7b95e6db5d41bcdb2d9d1100e921a6" Mar 12 13:28:18 crc kubenswrapper[4778]: E0312 13:28:18.023608 4778 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:6c9aef12f50be0b974f5e35b0d69303e7f7b95e6db5d41bcdb2d9d1100e921a6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g8jj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-77b6666d85-b7tkm_openstack-operators(e290c1ea-a39d-451e-a24b-17a2b61ff6f0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 13:28:18 crc kubenswrapper[4778]: E0312 13:28:18.025032 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-b7tkm" podUID="e290c1ea-a39d-451e-a24b-17a2b61ff6f0" Mar 12 13:28:18 crc kubenswrapper[4778]: E0312 13:28:18.106416 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:6c9aef12f50be0b974f5e35b0d69303e7f7b95e6db5d41bcdb2d9d1100e921a6\\\"\"" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-b7tkm" podUID="e290c1ea-a39d-451e-a24b-17a2b61ff6f0" Mar 12 13:28:18 crc kubenswrapper[4778]: E0312 13:28:18.106909 4778 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:d9bffb59bb7f9f0a6cb103c3986fd2c1bdb13ce6349c39427a690858cbd754d6\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-4jgt8" podUID="4c2bf703-ecc1-4bb1-aa03-a64e55dfdb71" Mar 12 13:28:18 crc kubenswrapper[4778]: E0312 13:28:18.883658 4778 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:65d0c97340f72a8b23f8e11f4b3efcc6ad37daad9b88e24d4564383a08fa85f7" Mar 12 13:28:18 crc kubenswrapper[4778]: E0312 13:28:18.883899 4778 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:65d0c97340f72a8b23f8e11f4b3efcc6ad37daad9b88e24d4564383a08fa85f7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j7lr5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-66d56f6ff4-9n6jv_openstack-operators(ad531191-d7c5-4ef6-9929-3a5869751d98): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 13:28:18 crc kubenswrapper[4778]: E0312 13:28:18.885013 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-9n6jv" podUID="ad531191-d7c5-4ef6-9929-3a5869751d98" Mar 12 13:28:19 crc kubenswrapper[4778]: E0312 13:28:19.109156 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/designate-operator@sha256:65d0c97340f72a8b23f8e11f4b3efcc6ad37daad9b88e24d4564383a08fa85f7\\\"\"" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-9n6jv" podUID="ad531191-d7c5-4ef6-9929-3a5869751d98" Mar 12 13:28:19 crc kubenswrapper[4778]: E0312 13:28:19.625901 4778 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:18fe6f2f0be7e736db86ff2d600af12a753e14b0a03232ce4f03629a89905571" Mar 12 13:28:19 crc kubenswrapper[4778]: E0312 13:28:19.626415 4778 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:18fe6f2f0be7e736db86ff2d600af12a753e14b0a03232ce4f03629a89905571,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5sl2m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-5f4f55cb5c-cdgg9_openstack-operators(1a01d06c-be6f-45de-a22d-c8f1058a3a84): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 13:28:19 crc kubenswrapper[4778]: E0312 13:28:19.627690 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-cdgg9" podUID="1a01d06c-be6f-45de-a22d-c8f1058a3a84" Mar 12 13:28:20 crc kubenswrapper[4778]: E0312 13:28:20.155370 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:18fe6f2f0be7e736db86ff2d600af12a753e14b0a03232ce4f03629a89905571\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-cdgg9" podUID="1a01d06c-be6f-45de-a22d-c8f1058a3a84" Mar 12 13:28:20 crc kubenswrapper[4778]: E0312 13:28:20.333150 4778 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:7c0da25380c91ffd1940d75eaa71b6842a6a4cf4056e62d6b0d237897b74e4d9" Mar 12 13:28:20 crc kubenswrapper[4778]: E0312 13:28:20.333435 4778 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:7c0da25380c91ffd1940d75eaa71b6842a6a4cf4056e62d6b0d237897b74e4d9,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ljqr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-984cd4dcf-xm4cc_openstack-operators(c8818ac0-af8b-42c9-a923-425fe79ed203): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 13:28:20 crc kubenswrapper[4778]: E0312 13:28:20.334642 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-xm4cc" podUID="c8818ac0-af8b-42c9-a923-425fe79ed203" Mar 12 13:28:21 crc kubenswrapper[4778]: E0312 13:28:21.009692 4778 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/watcher-operator@sha256:4af709a2a6a1a1abb9659dbdd6fb3818122bdec7e66009fcced0bf0949f91554" Mar 12 13:28:21 crc kubenswrapper[4778]: E0312 13:28:21.009942 4778 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:4af709a2a6a1a1abb9659dbdd6fb3818122bdec7e66009fcced0bf0949f91554,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j7fsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-6dd88c6f67-2tjsk_openstack-operators(8c02ecb8-0e15-4672-823a-c4437ca5bf8c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 13:28:21 crc kubenswrapper[4778]: E0312 13:28:21.011168 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-2tjsk" podUID="8c02ecb8-0e15-4672-823a-c4437ca5bf8c" Mar 12 13:28:21 crc kubenswrapper[4778]: E0312 13:28:21.162640 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4af709a2a6a1a1abb9659dbdd6fb3818122bdec7e66009fcced0bf0949f91554\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-2tjsk" podUID="8c02ecb8-0e15-4672-823a-c4437ca5bf8c" Mar 12 13:28:21 crc kubenswrapper[4778]: E0312 13:28:21.163956 4778 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:7c0da25380c91ffd1940d75eaa71b6842a6a4cf4056e62d6b0d237897b74e4d9\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-xm4cc" podUID="c8818ac0-af8b-42c9-a923-425fe79ed203" Mar 12 13:28:22 crc kubenswrapper[4778]: E0312 13:28:22.140868 4778 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:e7e865363955c670e41b6c042c4f87abceff78f5495ba5c5c82988baad45c978" Mar 12 13:28:22 crc kubenswrapper[4778]: E0312 13:28:22.141129 4778 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e7e865363955c670e41b6c042c4f87abceff78f5495ba5c5c82988baad45c978,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-764zg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-574d45c66c-wvpf8_openstack-operators(52524252-25bd-49e5-822e-3d4668aff2f9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 13:28:22 crc kubenswrapper[4778]: E0312 13:28:22.142409 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-wvpf8" podUID="52524252-25bd-49e5-822e-3d4668aff2f9" Mar 12 13:28:22 crc kubenswrapper[4778]: E0312 13:28:22.167923 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e7e865363955c670e41b6c042c4f87abceff78f5495ba5c5c82988baad45c978\\\"\"" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-wvpf8" podUID="52524252-25bd-49e5-822e-3d4668aff2f9" Mar 12 13:28:23 crc kubenswrapper[4778]: E0312 13:28:23.339924 4778 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:d89f3ca6e909f34d145a880829f5e63f1b6b2d11c520a9c5bea7ed1c30ce38f4" Mar 12 13:28:23 crc kubenswrapper[4778]: E0312 13:28:23.340664 4778 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:d89f3ca6e909f34d145a880829f5e63f1b6b2d11c520a9c5bea7ed1c30ce38f4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wb75x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-68f45f9d9f-pn8tk_openstack-operators(5e38a4fd-95f8-437b-923b-eca33b1387e6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 13:28:23 crc kubenswrapper[4778]: E0312 13:28:23.341970 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-pn8tk" podUID="5e38a4fd-95f8-437b-923b-eca33b1387e6" Mar 12 13:28:23 crc kubenswrapper[4778]: I0312 13:28:23.400393 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d0784623-5f08-4109-9c7e-0a329210ce07-webhook-certs\") pod 
\"openstack-operator-controller-manager-5785b7957-7vdgw\" (UID: \"d0784623-5f08-4109-9c7e-0a329210ce07\") " pod="openstack-operators/openstack-operator-controller-manager-5785b7957-7vdgw" Mar 12 13:28:23 crc kubenswrapper[4778]: I0312 13:28:23.407658 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d0784623-5f08-4109-9c7e-0a329210ce07-webhook-certs\") pod \"openstack-operator-controller-manager-5785b7957-7vdgw\" (UID: \"d0784623-5f08-4109-9c7e-0a329210ce07\") " pod="openstack-operators/openstack-operator-controller-manager-5785b7957-7vdgw" Mar 12 13:28:23 crc kubenswrapper[4778]: I0312 13:28:23.707768 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5785b7957-7vdgw" Mar 12 13:28:24 crc kubenswrapper[4778]: E0312 13:28:24.244924 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:d89f3ca6e909f34d145a880829f5e63f1b6b2d11c520a9c5bea7ed1c30ce38f4\\\"\"" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-pn8tk" podUID="5e38a4fd-95f8-437b-923b-eca33b1387e6" Mar 12 13:28:24 crc kubenswrapper[4778]: E0312 13:28:24.592168 4778 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:9182d1816c6fdb093d6328f1b0bf39296b9eccfa495f35e2198ec4764fa6288f" Mar 12 13:28:24 crc kubenswrapper[4778]: E0312 13:28:24.592521 4778 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:9182d1816c6fdb093d6328f1b0bf39296b9eccfa495f35e2198ec4764fa6288f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mj8wf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-6bbb499bbc-qb8s8_openstack-operators(98a4cfbd-3037-48b5-9047-5d574dcc0aca): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 13:28:24 crc kubenswrapper[4778]: E0312 13:28:24.593867 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-qb8s8" podUID="98a4cfbd-3037-48b5-9047-5d574dcc0aca" Mar 12 13:28:25 crc kubenswrapper[4778]: E0312 13:28:25.264567 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:9182d1816c6fdb093d6328f1b0bf39296b9eccfa495f35e2198ec4764fa6288f\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-qb8s8" podUID="98a4cfbd-3037-48b5-9047-5d574dcc0aca" Mar 12 13:28:25 crc kubenswrapper[4778]: E0312 13:28:25.563108 4778 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:2f63ddf5c95c6c82f6e04bc9f7f20d56dc003614647726ab00276239eec40b7f" Mar 12 13:28:25 crc kubenswrapper[4778]: E0312 13:28:25.563446 4778 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:2f63ddf5c95c6c82f6e04bc9f7f20d56dc003614647726ab00276239eec40b7f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ktk6s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-bbc5b68f9-bbgmb_openstack-operators(8d38fd7e-6fa1-4b0c-9c82-9c57290c7837): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 13:28:25 crc kubenswrapper[4778]: E0312 13:28:25.564789 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-bbgmb" podUID="8d38fd7e-6fa1-4b0c-9c82-9c57290c7837" Mar 12 13:28:26 crc kubenswrapper[4778]: I0312 13:28:26.255873 4778 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 13:28:26 crc kubenswrapper[4778]: E0312 13:28:26.267980 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:2f63ddf5c95c6c82f6e04bc9f7f20d56dc003614647726ab00276239eec40b7f\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-bbgmb" 
podUID="8d38fd7e-6fa1-4b0c-9c82-9c57290c7837" Mar 12 13:28:28 crc kubenswrapper[4778]: E0312 13:28:28.048144 4778 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:5fe5351a3de5e1267112d52cd81477a01d47f90be713cc5439c76543a4c33721" Mar 12 13:28:28 crc kubenswrapper[4778]: E0312 13:28:28.048671 4778 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:5fe5351a3de5e1267112d52cd81477a01d47f90be713cc5439c76543a4c33721,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-txndm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-776c5696bf-dd2ft_openstack-operators(076835c9-352b-4e40-80c4-3bce3bb80594): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 13:28:28 crc kubenswrapper[4778]: E0312 13:28:28.049898 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-dd2ft" podUID="076835c9-352b-4e40-80c4-3bce3bb80594" Mar 12 13:28:28 crc kubenswrapper[4778]: E0312 13:28:28.303643 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:5fe5351a3de5e1267112d52cd81477a01d47f90be713cc5439c76543a4c33721\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-dd2ft" podUID="076835c9-352b-4e40-80c4-3bce3bb80594" Mar 12 13:28:28 crc kubenswrapper[4778]: I0312 13:28:28.558240 4778 patch_prober.go:28] interesting 
pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 13:28:28 crc kubenswrapper[4778]: I0312 13:28:28.558305 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 13:28:29 crc kubenswrapper[4778]: E0312 13:28:29.079850 4778 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:27c84b712abc2df6108e22636075eec25fea0229800f38594a492fd41b02c49d" Mar 12 13:28:29 crc kubenswrapper[4778]: E0312 13:28:29.080044 4778 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:27c84b712abc2df6108e22636075eec25fea0229800f38594a492fd41b02c49d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d2ql6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-6cd66dbd4b-gfv5z_openstack-operators(6ad9bf9f-7214-44bc-a65d-1dcbf385fc2c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 13:28:29 crc kubenswrapper[4778]: E0312 13:28:29.081295 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-gfv5z" podUID="6ad9bf9f-7214-44bc-a65d-1dcbf385fc2c" Mar 12 13:28:29 crc kubenswrapper[4778]: E0312 13:28:29.309078 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:27c84b712abc2df6108e22636075eec25fea0229800f38594a492fd41b02c49d\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-gfv5z" podUID="6ad9bf9f-7214-44bc-a65d-1dcbf385fc2c" Mar 12 13:28:30 crc kubenswrapper[4778]: E0312 13:28:30.105576 4778 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Mar 12 13:28:30 crc kubenswrapper[4778]: E0312 13:28:30.106860 4778 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: 
{{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ldr22,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-shf7b_openstack-operators(034f39d8-a33e-4e37-bcde-51fb22debdd1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 13:28:30 crc kubenswrapper[4778]: E0312 13:28:30.108106 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-shf7b" podUID="034f39d8-a33e-4e37-bcde-51fb22debdd1" Mar 12 13:28:30 crc kubenswrapper[4778]: E0312 13:28:30.319082 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-shf7b" 
podUID="034f39d8-a33e-4e37-bcde-51fb22debdd1" Mar 12 13:28:30 crc kubenswrapper[4778]: E0312 13:28:30.595050 4778 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:40b84319f2f12a1c7ee478fd86a8b1aa5ac2ea8e24f5ce0f1ca78ad879dea8ca" Mar 12 13:28:30 crc kubenswrapper[4778]: E0312 13:28:30.595256 4778 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:40b84319f2f12a1c7ee478fd86a8b1aa5ac2ea8e24f5ce0f1ca78ad879dea8ca,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8bjmh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-684f77d66d-7dxdh_openstack-operators(7e02c37f-b9af-46c9-a743-03ead9b060db): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 13:28:30 crc kubenswrapper[4778]: E0312 13:28:30.596456 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-7dxdh" podUID="7e02c37f-b9af-46c9-a743-03ead9b060db" Mar 12 13:28:30 crc kubenswrapper[4778]: E0312 13:28:30.684825 4778 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.97:5001/openstack-k8s-operators/nova-operator:8734adf928be66fa1f808466edcc3ea058f7094f" Mar 12 13:28:30 crc kubenswrapper[4778]: E0312 13:28:30.685538 4778 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="38.129.56.97:5001/openstack-k8s-operators/nova-operator:8734adf928be66fa1f808466edcc3ea058f7094f" Mar 12 13:28:30 crc kubenswrapper[4778]: E0312 13:28:30.685711 4778 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.129.56.97:5001/openstack-k8s-operators/nova-operator:8734adf928be66fa1f808466edcc3ea058f7094f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qw6mz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-686d5f9fbd-vv9rc_openstack-operators(d7288cc6-4247-4d03-bd37-9862243bf613): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 13:28:30 crc kubenswrapper[4778]: E0312 13:28:30.687332 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-686d5f9fbd-vv9rc" podUID="d7288cc6-4247-4d03-bd37-9862243bf613" Mar 12 13:28:30 crc kubenswrapper[4778]: I0312 13:28:30.984541 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5785b7957-7vdgw"] Mar 12 13:28:31 crc kubenswrapper[4778]: I0312 13:28:31.124917 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-5995f4446f-5d6qz"] Mar 12 13:28:31 crc kubenswrapper[4778]: W0312 13:28:31.138907 4778 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02bc06ca_f4e6_4fde_bd5d_882714d9652c.slice/crio-f4773d1227aea67668cd4cfcd9007b1984af5d83edd8146ec84d606804c93441 WatchSource:0}: Error finding container f4773d1227aea67668cd4cfcd9007b1984af5d83edd8146ec84d606804c93441: Status 404 returned error can't find the container with id f4773d1227aea67668cd4cfcd9007b1984af5d83edd8146ec84d606804c93441 Mar 12 13:28:31 crc kubenswrapper[4778]: I0312 13:28:31.248638 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7qq9w6"] Mar 12 13:28:31 crc kubenswrapper[4778]: I0312 13:28:31.323953 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555368-d2cpg" event={"ID":"20d587ee-7b57-4b99-a800-c6d46322d799","Type":"ContainerStarted","Data":"c06e4e1b6c58e04407e154a6eb32ce96d2dfbf0e7e2f81409f2e784cc2f29542"} Mar 12 13:28:31 crc kubenswrapper[4778]: I0312 13:28:31.325685 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-gknp2" event={"ID":"db7f6b97-2903-44bf-803f-c00c337400b9","Type":"ContainerStarted","Data":"a5bf5ec21d4065da295f00d28a5287cd362a01c304f74d7e5512faa219b3d7de"} Mar 12 13:28:31 crc kubenswrapper[4778]: I0312 13:28:31.325866 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-gknp2" Mar 12 13:28:31 crc kubenswrapper[4778]: I0312 13:28:31.330324 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-pcfrz" event={"ID":"ed9b9271-4ae9-440a-9411-15d46267106e","Type":"ContainerStarted","Data":"7b17321aaf7cd993b288c637ad0612dc4c574dab465fc6e4b6f72db4a3b80c18"} Mar 12 13:28:31 crc kubenswrapper[4778]: I0312 13:28:31.330389 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-pcfrz" Mar 12 13:28:31 crc kubenswrapper[4778]: I0312 13:28:31.331512 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7qq9w6" event={"ID":"4f7d316e-6896-4f84-8423-6f79778c1c6b","Type":"ContainerStarted","Data":"adb2d40e89caf9534d5c4dabf505feb3801f9b44831b0c55adf126795bfad3bd"} Mar 12 13:28:31 crc kubenswrapper[4778]: I0312 13:28:31.333097 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-5d6qz" event={"ID":"02bc06ca-f4e6-4fde-bd5d-882714d9652c","Type":"ContainerStarted","Data":"f4773d1227aea67668cd4cfcd9007b1984af5d83edd8146ec84d606804c93441"} Mar 12 13:28:31 crc kubenswrapper[4778]: I0312 13:28:31.334418 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5785b7957-7vdgw" event={"ID":"d0784623-5f08-4109-9c7e-0a329210ce07","Type":"ContainerStarted","Data":"d33410a9882018e414468638661548551ebef7d157616bca16c7e7be28aced14"} Mar 12 13:28:31 crc kubenswrapper[4778]: I0312 13:28:31.334441 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5785b7957-7vdgw" event={"ID":"d0784623-5f08-4109-9c7e-0a329210ce07","Type":"ContainerStarted","Data":"5f01fcfb5d6c209379adeb47c56eb146cd7d15d24ce620cacec56510e790e55e"} Mar 12 13:28:31 crc kubenswrapper[4778]: I0312 13:28:31.335067 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5785b7957-7vdgw" Mar 12 13:28:31 crc kubenswrapper[4778]: I0312 13:28:31.336503 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-6h2c2" 
event={"ID":"ffb8a1f4-4533-4368-a900-95d37fe1d3ad","Type":"ContainerStarted","Data":"7bb3ac22f2df17855f0d4f1ce9050828bca873c51e045b0423f43e2e9b38806c"} Mar 12 13:28:31 crc kubenswrapper[4778]: I0312 13:28:31.336999 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-6h2c2" Mar 12 13:28:31 crc kubenswrapper[4778]: I0312 13:28:31.338259 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-jlbft" event={"ID":"2d577800-0ee1-4fe5-a7fb-8794fb8c4c6f","Type":"ContainerStarted","Data":"0502ba772b336b3388415a546413a2b0363d7512fe9d7157ee6e2c17da947864"} Mar 12 13:28:31 crc kubenswrapper[4778]: I0312 13:28:31.338589 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-jlbft" Mar 12 13:28:31 crc kubenswrapper[4778]: I0312 13:28:31.340842 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-677c674df7-84mps" event={"ID":"64a36384-f2e6-4077-b2ca-de2a6ce6ea06","Type":"ContainerStarted","Data":"d44f9cc5c66a90f293068c46be5aaf2887ded95f7ed413336a8e90b5043e9a57"} Mar 12 13:28:31 crc kubenswrapper[4778]: I0312 13:28:31.340866 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-677c674df7-84mps" Mar 12 13:28:31 crc kubenswrapper[4778]: E0312 13:28:31.341993 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:40b84319f2f12a1c7ee478fd86a8b1aa5ac2ea8e24f5ce0f1ca78ad879dea8ca\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-7dxdh" podUID="7e02c37f-b9af-46c9-a743-03ead9b060db" Mar 12 13:28:31 crc kubenswrapper[4778]: 
E0312 13:28:31.342373 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.129.56.97:5001/openstack-k8s-operators/nova-operator:8734adf928be66fa1f808466edcc3ea058f7094f\\\"\"" pod="openstack-operators/nova-operator-controller-manager-686d5f9fbd-vv9rc" podUID="d7288cc6-4247-4d03-bd37-9862243bf613" Mar 12 13:28:31 crc kubenswrapper[4778]: I0312 13:28:31.351092 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555368-d2cpg" podStartSLOduration=2.380480556 podStartE2EDuration="31.351070239s" podCreationTimestamp="2026-03-12 13:28:00 +0000 UTC" firstStartedPulling="2026-03-12 13:28:01.70172612 +0000 UTC m=+1100.150421506" lastFinishedPulling="2026-03-12 13:28:30.672315793 +0000 UTC m=+1129.121011189" observedRunningTime="2026-03-12 13:28:31.344244494 +0000 UTC m=+1129.792939910" watchObservedRunningTime="2026-03-12 13:28:31.351070239 +0000 UTC m=+1129.799765635" Mar 12 13:28:31 crc kubenswrapper[4778]: I0312 13:28:31.363459 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-6h2c2" podStartSLOduration=4.6482561 podStartE2EDuration="44.363439491s" podCreationTimestamp="2026-03-12 13:27:47 +0000 UTC" firstStartedPulling="2026-03-12 13:27:50.957973946 +0000 UTC m=+1089.406669342" lastFinishedPulling="2026-03-12 13:28:30.673157337 +0000 UTC m=+1129.121852733" observedRunningTime="2026-03-12 13:28:31.362469623 +0000 UTC m=+1129.811165029" watchObservedRunningTime="2026-03-12 13:28:31.363439491 +0000 UTC m=+1129.812134877" Mar 12 13:28:31 crc kubenswrapper[4778]: I0312 13:28:31.410713 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-jlbft" podStartSLOduration=7.685833703 podStartE2EDuration="42.410698728s" 
podCreationTimestamp="2026-03-12 13:27:49 +0000 UTC" firstStartedPulling="2026-03-12 13:27:54.894567489 +0000 UTC m=+1093.343262885" lastFinishedPulling="2026-03-12 13:28:29.619432514 +0000 UTC m=+1128.068127910" observedRunningTime="2026-03-12 13:28:31.409099333 +0000 UTC m=+1129.857794729" watchObservedRunningTime="2026-03-12 13:28:31.410698728 +0000 UTC m=+1129.859394124" Mar 12 13:28:31 crc kubenswrapper[4778]: I0312 13:28:31.843075 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-5785b7957-7vdgw" podStartSLOduration=41.843051531 podStartE2EDuration="41.843051531s" podCreationTimestamp="2026-03-12 13:27:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:28:31.495383402 +0000 UTC m=+1129.944078798" watchObservedRunningTime="2026-03-12 13:28:31.843051531 +0000 UTC m=+1130.291746927" Mar 12 13:28:31 crc kubenswrapper[4778]: I0312 13:28:31.843609 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-gknp2" podStartSLOduration=6.6207949379999995 podStartE2EDuration="43.843600507s" podCreationTimestamp="2026-03-12 13:27:48 +0000 UTC" firstStartedPulling="2026-03-12 13:27:52.858774027 +0000 UTC m=+1091.307469423" lastFinishedPulling="2026-03-12 13:28:30.081579596 +0000 UTC m=+1128.530274992" observedRunningTime="2026-03-12 13:28:31.840835018 +0000 UTC m=+1130.289530424" watchObservedRunningTime="2026-03-12 13:28:31.843600507 +0000 UTC m=+1130.292295913" Mar 12 13:28:31 crc kubenswrapper[4778]: I0312 13:28:31.860267 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-pcfrz" podStartSLOduration=7.530400835 podStartE2EDuration="42.860244441s" podCreationTimestamp="2026-03-12 13:27:49 +0000 UTC" 
firstStartedPulling="2026-03-12 13:27:55.243491156 +0000 UTC m=+1093.692186552" lastFinishedPulling="2026-03-12 13:28:30.573334762 +0000 UTC m=+1129.022030158" observedRunningTime="2026-03-12 13:28:31.858392948 +0000 UTC m=+1130.307088334" watchObservedRunningTime="2026-03-12 13:28:31.860244441 +0000 UTC m=+1130.308939837" Mar 12 13:28:31 crc kubenswrapper[4778]: I0312 13:28:31.876867 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-677c674df7-84mps" podStartSLOduration=8.115836251 podStartE2EDuration="42.876846804s" podCreationTimestamp="2026-03-12 13:27:49 +0000 UTC" firstStartedPulling="2026-03-12 13:27:55.319701698 +0000 UTC m=+1093.768397094" lastFinishedPulling="2026-03-12 13:28:30.080712251 +0000 UTC m=+1128.529407647" observedRunningTime="2026-03-12 13:28:31.875323171 +0000 UTC m=+1130.324018587" watchObservedRunningTime="2026-03-12 13:28:31.876846804 +0000 UTC m=+1130.325542200" Mar 12 13:28:32 crc kubenswrapper[4778]: I0312 13:28:32.349278 4778 generic.go:334] "Generic (PLEG): container finished" podID="20d587ee-7b57-4b99-a800-c6d46322d799" containerID="c06e4e1b6c58e04407e154a6eb32ce96d2dfbf0e7e2f81409f2e784cc2f29542" exitCode=0 Mar 12 13:28:32 crc kubenswrapper[4778]: I0312 13:28:32.351374 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555368-d2cpg" event={"ID":"20d587ee-7b57-4b99-a800-c6d46322d799","Type":"ContainerDied","Data":"c06e4e1b6c58e04407e154a6eb32ce96d2dfbf0e7e2f81409f2e784cc2f29542"} Mar 12 13:28:34 crc kubenswrapper[4778]: I0312 13:28:34.302596 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555368-d2cpg" Mar 12 13:28:34 crc kubenswrapper[4778]: I0312 13:28:34.365141 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555368-d2cpg" event={"ID":"20d587ee-7b57-4b99-a800-c6d46322d799","Type":"ContainerDied","Data":"de84ea928c21e7d851428ee60843720d656101db59c03c12935d28595c4b6525"} Mar 12 13:28:34 crc kubenswrapper[4778]: I0312 13:28:34.365204 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de84ea928c21e7d851428ee60843720d656101db59c03c12935d28595c4b6525" Mar 12 13:28:34 crc kubenswrapper[4778]: I0312 13:28:34.365271 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555368-d2cpg" Mar 12 13:28:34 crc kubenswrapper[4778]: I0312 13:28:34.415723 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555362-hlj7f"] Mar 12 13:28:34 crc kubenswrapper[4778]: I0312 13:28:34.421133 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555362-hlj7f"] Mar 12 13:28:34 crc kubenswrapper[4778]: I0312 13:28:34.421147 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9m2kn\" (UniqueName: \"kubernetes.io/projected/20d587ee-7b57-4b99-a800-c6d46322d799-kube-api-access-9m2kn\") pod \"20d587ee-7b57-4b99-a800-c6d46322d799\" (UID: \"20d587ee-7b57-4b99-a800-c6d46322d799\") " Mar 12 13:28:34 crc kubenswrapper[4778]: I0312 13:28:34.939048 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20d587ee-7b57-4b99-a800-c6d46322d799-kube-api-access-9m2kn" (OuterVolumeSpecName: "kube-api-access-9m2kn") pod "20d587ee-7b57-4b99-a800-c6d46322d799" (UID: "20d587ee-7b57-4b99-a800-c6d46322d799"). InnerVolumeSpecName "kube-api-access-9m2kn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:28:35 crc kubenswrapper[4778]: I0312 13:28:35.039659 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9m2kn\" (UniqueName: \"kubernetes.io/projected/20d587ee-7b57-4b99-a800-c6d46322d799-kube-api-access-9m2kn\") on node \"crc\" DevicePath \"\"" Mar 12 13:28:36 crc kubenswrapper[4778]: I0312 13:28:36.278068 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9da11ea2-3173-4f25-8f0e-3ccc5a0ca18b" path="/var/lib/kubelet/pods/9da11ea2-3173-4f25-8f0e-3ccc5a0ca18b/volumes" Mar 12 13:28:38 crc kubenswrapper[4778]: I0312 13:28:38.473039 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-6h2c2" Mar 12 13:28:39 crc kubenswrapper[4778]: I0312 13:28:39.252731 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-gknp2" Mar 12 13:28:40 crc kubenswrapper[4778]: I0312 13:28:40.282684 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-jlbft" Mar 12 13:28:41 crc kubenswrapper[4778]: I0312 13:28:41.075537 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-677c674df7-84mps" Mar 12 13:28:41 crc kubenswrapper[4778]: I0312 13:28:41.538839 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-pcfrz" Mar 12 13:28:43 crc kubenswrapper[4778]: I0312 13:28:43.763864 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-5785b7957-7vdgw" Mar 12 13:28:47 crc kubenswrapper[4778]: I0312 13:28:47.493706 4778 scope.go:117] "RemoveContainer" 
containerID="1c6932f83080c12204b2bc10f63ca97fbee0fb238358dc69be9a27d4fc46a8a5" Mar 12 13:28:58 crc kubenswrapper[4778]: I0312 13:28:58.557718 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 13:28:58 crc kubenswrapper[4778]: I0312 13:28:58.558383 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 13:29:41 crc kubenswrapper[4778]: I0312 13:29:28.558090 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 13:29:41 crc kubenswrapper[4778]: I0312 13:29:28.559154 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 13:29:41 crc kubenswrapper[4778]: I0312 13:29:28.559255 4778 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" Mar 12 13:29:41 crc kubenswrapper[4778]: I0312 13:29:28.560273 4778 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"3b4b372cac8f288fc2585670d5ab7c00c41331f173130d39b164aa74e4e3e398"} pod="openshift-machine-config-operator/machine-config-daemon-2qx88" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 13:29:41 crc kubenswrapper[4778]: I0312 13:29:28.560337 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" containerID="cri-o://3b4b372cac8f288fc2585670d5ab7c00c41331f173130d39b164aa74e4e3e398" gracePeriod=600 Mar 12 13:29:41 crc kubenswrapper[4778]: E0312 13:29:41.546272 4778 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/infra-operator@sha256:417a4ede6dce5d088ce7dc1ac6e9dab30f3b532bd5b506e2df65d6eaecbc7cb9" Mar 12 13:29:41 crc kubenswrapper[4778]: E0312 13:29:41.547491 4778 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/infra-operator@sha256:417a4ede6dce5d088ce7dc1ac6e9dab30f3b532bd5b506e2df65d6eaecbc7cb9,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{600 -3} {} 600m DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{536870912 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vpx49,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infra-operator-controller-manager-5995f4446f-5d6qz_openstack-operators(02bc06ca-f4e6-4fde-bd5d-882714d9652c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 13:29:41 crc kubenswrapper[4778]: E0312 13:29:41.548661 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-5d6qz" podUID="02bc06ca-f4e6-4fde-bd5d-882714d9652c" Mar 12 13:29:52 crc kubenswrapper[4778]: I0312 13:29:42.446103 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" event={"ID":"24438fc6-dab0-4a9e-8b97-2532da76d9cd","Type":"ContainerDied","Data":"3b4b372cac8f288fc2585670d5ab7c00c41331f173130d39b164aa74e4e3e398"} Mar 12 13:29:52 crc kubenswrapper[4778]: I0312 13:29:42.446136 4778 generic.go:334] "Generic (PLEG): container finished" podID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerID="3b4b372cac8f288fc2585670d5ab7c00c41331f173130d39b164aa74e4e3e398" exitCode=0 Mar 12 13:29:52 crc kubenswrapper[4778]: I0312 13:29:42.446212 4778 scope.go:117] "RemoveContainer" containerID="b65e287d42eea6146877a35b0789c26ac0ef9f5d251a760b59f08b3fef055d65" Mar 12 13:30:00 crc kubenswrapper[4778]: I0312 13:30:00.154448 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555370-zcp5f"] Mar 12 13:30:00 crc kubenswrapper[4778]: E0312 13:30:00.155394 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20d587ee-7b57-4b99-a800-c6d46322d799" containerName="oc" Mar 12 13:30:00 crc kubenswrapper[4778]: I0312 13:30:00.155412 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="20d587ee-7b57-4b99-a800-c6d46322d799" containerName="oc" Mar 12 13:30:00 crc kubenswrapper[4778]: I0312 13:30:00.155586 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="20d587ee-7b57-4b99-a800-c6d46322d799" containerName="oc" Mar 12 13:30:00 crc kubenswrapper[4778]: I0312 13:30:00.156209 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555370-zcp5f" Mar 12 13:30:00 crc kubenswrapper[4778]: I0312 13:30:00.159769 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 12 13:30:00 crc kubenswrapper[4778]: I0312 13:30:00.161594 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 12 13:30:00 crc kubenswrapper[4778]: I0312 13:30:00.164790 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555370-6zrgd"] Mar 12 13:30:00 crc kubenswrapper[4778]: I0312 13:30:00.165926 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555370-6zrgd" Mar 12 13:30:00 crc kubenswrapper[4778]: I0312 13:30:00.168539 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 13:30:00 crc kubenswrapper[4778]: I0312 13:30:00.168718 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 13:30:00 crc kubenswrapper[4778]: I0312 13:30:00.168730 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 13:30:00 crc kubenswrapper[4778]: I0312 13:30:00.171928 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555370-6zrgd"] Mar 12 13:30:00 crc kubenswrapper[4778]: I0312 13:30:00.178716 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555370-zcp5f"] Mar 12 13:30:00 crc kubenswrapper[4778]: I0312 13:30:00.281469 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ks2n\" (UniqueName: 
\"kubernetes.io/projected/1c682acb-240b-44d4-a2be-0ea0cd913af1-kube-api-access-9ks2n\") pod \"auto-csr-approver-29555370-6zrgd\" (UID: \"1c682acb-240b-44d4-a2be-0ea0cd913af1\") " pod="openshift-infra/auto-csr-approver-29555370-6zrgd" Mar 12 13:30:00 crc kubenswrapper[4778]: I0312 13:30:00.281725 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xsfh\" (UniqueName: \"kubernetes.io/projected/8bf03685-d980-41f0-bbc5-84b9ae0ce1df-kube-api-access-4xsfh\") pod \"collect-profiles-29555370-zcp5f\" (UID: \"8bf03685-d980-41f0-bbc5-84b9ae0ce1df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555370-zcp5f" Mar 12 13:30:00 crc kubenswrapper[4778]: I0312 13:30:00.281830 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8bf03685-d980-41f0-bbc5-84b9ae0ce1df-secret-volume\") pod \"collect-profiles-29555370-zcp5f\" (UID: \"8bf03685-d980-41f0-bbc5-84b9ae0ce1df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555370-zcp5f" Mar 12 13:30:00 crc kubenswrapper[4778]: I0312 13:30:00.281992 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8bf03685-d980-41f0-bbc5-84b9ae0ce1df-config-volume\") pod \"collect-profiles-29555370-zcp5f\" (UID: \"8bf03685-d980-41f0-bbc5-84b9ae0ce1df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555370-zcp5f" Mar 12 13:30:00 crc kubenswrapper[4778]: I0312 13:30:00.383545 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8bf03685-d980-41f0-bbc5-84b9ae0ce1df-config-volume\") pod \"collect-profiles-29555370-zcp5f\" (UID: \"8bf03685-d980-41f0-bbc5-84b9ae0ce1df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555370-zcp5f" Mar 12 13:30:00 
crc kubenswrapper[4778]: I0312 13:30:00.383930 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ks2n\" (UniqueName: \"kubernetes.io/projected/1c682acb-240b-44d4-a2be-0ea0cd913af1-kube-api-access-9ks2n\") pod \"auto-csr-approver-29555370-6zrgd\" (UID: \"1c682acb-240b-44d4-a2be-0ea0cd913af1\") " pod="openshift-infra/auto-csr-approver-29555370-6zrgd" Mar 12 13:30:00 crc kubenswrapper[4778]: I0312 13:30:00.384049 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xsfh\" (UniqueName: \"kubernetes.io/projected/8bf03685-d980-41f0-bbc5-84b9ae0ce1df-kube-api-access-4xsfh\") pod \"collect-profiles-29555370-zcp5f\" (UID: \"8bf03685-d980-41f0-bbc5-84b9ae0ce1df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555370-zcp5f" Mar 12 13:30:00 crc kubenswrapper[4778]: I0312 13:30:00.384201 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8bf03685-d980-41f0-bbc5-84b9ae0ce1df-secret-volume\") pod \"collect-profiles-29555370-zcp5f\" (UID: \"8bf03685-d980-41f0-bbc5-84b9ae0ce1df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555370-zcp5f" Mar 12 13:30:00 crc kubenswrapper[4778]: I0312 13:30:00.384882 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8bf03685-d980-41f0-bbc5-84b9ae0ce1df-config-volume\") pod \"collect-profiles-29555370-zcp5f\" (UID: \"8bf03685-d980-41f0-bbc5-84b9ae0ce1df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555370-zcp5f" Mar 12 13:30:00 crc kubenswrapper[4778]: I0312 13:30:00.391637 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8bf03685-d980-41f0-bbc5-84b9ae0ce1df-secret-volume\") pod \"collect-profiles-29555370-zcp5f\" (UID: \"8bf03685-d980-41f0-bbc5-84b9ae0ce1df\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29555370-zcp5f" Mar 12 13:30:00 crc kubenswrapper[4778]: I0312 13:30:00.404348 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ks2n\" (UniqueName: \"kubernetes.io/projected/1c682acb-240b-44d4-a2be-0ea0cd913af1-kube-api-access-9ks2n\") pod \"auto-csr-approver-29555370-6zrgd\" (UID: \"1c682acb-240b-44d4-a2be-0ea0cd913af1\") " pod="openshift-infra/auto-csr-approver-29555370-6zrgd" Mar 12 13:30:00 crc kubenswrapper[4778]: I0312 13:30:00.413718 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xsfh\" (UniqueName: \"kubernetes.io/projected/8bf03685-d980-41f0-bbc5-84b9ae0ce1df-kube-api-access-4xsfh\") pod \"collect-profiles-29555370-zcp5f\" (UID: \"8bf03685-d980-41f0-bbc5-84b9ae0ce1df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555370-zcp5f" Mar 12 13:30:00 crc kubenswrapper[4778]: I0312 13:30:00.491919 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555370-zcp5f" Mar 12 13:30:00 crc kubenswrapper[4778]: I0312 13:30:00.504242 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555370-6zrgd" Mar 12 13:30:10 crc kubenswrapper[4778]: I0312 13:30:10.704251 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555370-zcp5f"] Mar 12 13:30:10 crc kubenswrapper[4778]: I0312 13:30:10.789435 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555370-6zrgd"] Mar 12 13:30:10 crc kubenswrapper[4778]: W0312 13:30:10.804559 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8bf03685_d980_41f0_bbc5_84b9ae0ce1df.slice/crio-1a5e91c9d02c7de2c4009f12029cabe072d8d04d8718617498c0566b1d22e0df WatchSource:0}: Error finding container 1a5e91c9d02c7de2c4009f12029cabe072d8d04d8718617498c0566b1d22e0df: Status 404 returned error can't find the container with id 1a5e91c9d02c7de2c4009f12029cabe072d8d04d8718617498c0566b1d22e0df Mar 12 13:30:11 crc kubenswrapper[4778]: I0312 13:30:11.688486 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555370-zcp5f" event={"ID":"8bf03685-d980-41f0-bbc5-84b9ae0ce1df","Type":"ContainerStarted","Data":"fa067a709ad1af5d5b9327929891ffc04839dd2d8aba3cc70c48dbfeabd353b9"} Mar 12 13:30:11 crc kubenswrapper[4778]: I0312 13:30:11.689056 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555370-zcp5f" event={"ID":"8bf03685-d980-41f0-bbc5-84b9ae0ce1df","Type":"ContainerStarted","Data":"1a5e91c9d02c7de2c4009f12029cabe072d8d04d8718617498c0566b1d22e0df"} Mar 12 13:30:11 crc kubenswrapper[4778]: I0312 13:30:11.708428 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-cdgg9" 
event={"ID":"1a01d06c-be6f-45de-a22d-c8f1058a3a84","Type":"ContainerStarted","Data":"682b989352e7cb03a00ffe09b099d2da7b43ddf31a4c0e296a43178b62c7b528"} Mar 12 13:30:11 crc kubenswrapper[4778]: I0312 13:30:11.709056 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-cdgg9" Mar 12 13:30:11 crc kubenswrapper[4778]: I0312 13:30:11.717994 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29555370-zcp5f" podStartSLOduration=11.717976851 podStartE2EDuration="11.717976851s" podCreationTimestamp="2026-03-12 13:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:30:11.716300803 +0000 UTC m=+1230.164996199" watchObservedRunningTime="2026-03-12 13:30:11.717976851 +0000 UTC m=+1230.166672247" Mar 12 13:30:11 crc kubenswrapper[4778]: I0312 13:30:11.738431 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-pn8tk" event={"ID":"5e38a4fd-95f8-437b-923b-eca33b1387e6","Type":"ContainerStarted","Data":"a82c48c3a9e4df5db23c33aa9590f8e780acf018ad62dbbe99c04313660852fc"} Mar 12 13:30:11 crc kubenswrapper[4778]: I0312 13:30:11.739077 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-pn8tk" Mar 12 13:30:11 crc kubenswrapper[4778]: I0312 13:30:11.773803 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-bbgmb" event={"ID":"8d38fd7e-6fa1-4b0c-9c82-9c57290c7837","Type":"ContainerStarted","Data":"d13442b4e35e6c42e7a2d19c65808770c225b101c756ff5c9b92cd147f000999"} Mar 12 13:30:11 crc kubenswrapper[4778]: I0312 13:30:11.774792 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-bbgmb" Mar 12 13:30:11 crc kubenswrapper[4778]: I0312 13:30:11.790433 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" event={"ID":"24438fc6-dab0-4a9e-8b97-2532da76d9cd","Type":"ContainerStarted","Data":"572aad6c3b1a3f7c9ef45b8b4feb0d367e7e7916d0ab8dd064e2b8ac87268c51"} Mar 12 13:30:11 crc kubenswrapper[4778]: I0312 13:30:11.797915 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-cdgg9" podStartSLOduration=7.417947677 podStartE2EDuration="2m22.797891828s" podCreationTimestamp="2026-03-12 13:27:49 +0000 UTC" firstStartedPulling="2026-03-12 13:27:54.552225511 +0000 UTC m=+1093.000920897" lastFinishedPulling="2026-03-12 13:30:09.932169612 +0000 UTC m=+1228.380865048" observedRunningTime="2026-03-12 13:30:11.74356703 +0000 UTC m=+1230.192262426" watchObservedRunningTime="2026-03-12 13:30:11.797891828 +0000 UTC m=+1230.246587224" Mar 12 13:30:11 crc kubenswrapper[4778]: I0312 13:30:11.799610 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-pn8tk" podStartSLOduration=7.095459163 podStartE2EDuration="2m22.799602367s" podCreationTimestamp="2026-03-12 13:27:49 +0000 UTC" firstStartedPulling="2026-03-12 13:27:54.105256569 +0000 UTC m=+1092.553951965" lastFinishedPulling="2026-03-12 13:30:09.809399763 +0000 UTC m=+1228.258095169" observedRunningTime="2026-03-12 13:30:11.785351831 +0000 UTC m=+1230.234047237" watchObservedRunningTime="2026-03-12 13:30:11.799602367 +0000 UTC m=+1230.248297773" Mar 12 13:30:11 crc kubenswrapper[4778]: I0312 13:30:11.809233 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-4jgt8" 
event={"ID":"4c2bf703-ecc1-4bb1-aa03-a64e55dfdb71","Type":"ContainerStarted","Data":"944e9b4f5593188c7d0547feee17bc4fdf087537a72c1068daf9af76b28418a3"} Mar 12 13:30:11 crc kubenswrapper[4778]: I0312 13:30:11.809967 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-4jgt8" Mar 12 13:30:11 crc kubenswrapper[4778]: I0312 13:30:11.819036 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-wvpf8" event={"ID":"52524252-25bd-49e5-822e-3d4668aff2f9","Type":"ContainerStarted","Data":"1dedc528009aad1b174fe4409306d7b650583ab562c684d3e834b3887a83ce9c"} Mar 12 13:30:11 crc kubenswrapper[4778]: I0312 13:30:11.821142 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-dd2ft" event={"ID":"076835c9-352b-4e40-80c4-3bce3bb80594","Type":"ContainerStarted","Data":"9573cdd27cbfe1c6f23115e81f3555e406dcbbe6c3b3fbf03eb74d488e7d2632"} Mar 12 13:30:11 crc kubenswrapper[4778]: I0312 13:30:11.821214 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-bbgmb" podStartSLOduration=7.42015443 podStartE2EDuration="2m22.820858153s" podCreationTimestamp="2026-03-12 13:27:49 +0000 UTC" firstStartedPulling="2026-03-12 13:27:54.851175811 +0000 UTC m=+1093.299871217" lastFinishedPulling="2026-03-12 13:30:10.251879554 +0000 UTC m=+1228.700574940" observedRunningTime="2026-03-12 13:30:11.816336934 +0000 UTC m=+1230.265032330" watchObservedRunningTime="2026-03-12 13:30:11.820858153 +0000 UTC m=+1230.269553549" Mar 12 13:30:11 crc kubenswrapper[4778]: I0312 13:30:11.821587 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-dd2ft" Mar 12 13:30:11 crc kubenswrapper[4778]: I0312 13:30:11.822197 4778 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-qb8s8" event={"ID":"98a4cfbd-3037-48b5-9047-5d574dcc0aca","Type":"ContainerStarted","Data":"fe9764764913797541ce93765c0193b347c66bf42c36d8f4e18119306c3ba418"} Mar 12 13:30:11 crc kubenswrapper[4778]: I0312 13:30:11.825595 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-qb8s8" Mar 12 13:30:11 crc kubenswrapper[4778]: I0312 13:30:11.834990 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-5d6qz" event={"ID":"02bc06ca-f4e6-4fde-bd5d-882714d9652c","Type":"ContainerStarted","Data":"18af05ec7a686ea1e7d1086f5ea04254d6c6ca199fe137a46915b1da7f1fc180"} Mar 12 13:30:11 crc kubenswrapper[4778]: I0312 13:30:11.835204 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-5d6qz" Mar 12 13:30:11 crc kubenswrapper[4778]: I0312 13:30:11.835901 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555370-6zrgd" event={"ID":"1c682acb-240b-44d4-a2be-0ea0cd913af1","Type":"ContainerStarted","Data":"623f846d223ee24a6cba599fe831dbd5bf60da2ddff32e7acc90057e22b71876"} Mar 12 13:30:11 crc kubenswrapper[4778]: I0312 13:30:11.839820 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-4jgt8" podStartSLOduration=7.1148405 podStartE2EDuration="2m23.839806703s" podCreationTimestamp="2026-03-12 13:27:48 +0000 UTC" firstStartedPulling="2026-03-12 13:27:52.271251694 +0000 UTC m=+1090.719947090" lastFinishedPulling="2026-03-12 13:30:08.996217897 +0000 UTC m=+1227.444913293" observedRunningTime="2026-03-12 13:30:11.834753529 +0000 UTC m=+1230.283448925" watchObservedRunningTime="2026-03-12 
13:30:11.839806703 +0000 UTC m=+1230.288502089" Mar 12 13:30:11 crc kubenswrapper[4778]: I0312 13:30:11.845448 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-shf7b" event={"ID":"034f39d8-a33e-4e37-bcde-51fb22debdd1","Type":"ContainerStarted","Data":"07692534a3d4f2d2a10f9b6c0d2df8a72199cae2af3442e8e487917a6698768b"} Mar 12 13:30:11 crc kubenswrapper[4778]: I0312 13:30:11.867890 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-xm4cc" event={"ID":"c8818ac0-af8b-42c9-a923-425fe79ed203","Type":"ContainerStarted","Data":"94e31167ba37ebbdf9a654d9e583c8e77901521d8cacc623858135237eaafcb5"} Mar 12 13:30:11 crc kubenswrapper[4778]: I0312 13:30:11.868851 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-xm4cc" Mar 12 13:30:11 crc kubenswrapper[4778]: I0312 13:30:11.897849 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-2tjsk" event={"ID":"8c02ecb8-0e15-4672-823a-c4437ca5bf8c","Type":"ContainerStarted","Data":"1bcf3d1a36b393c0d7a06b92fe96b249521ebb9c61ddf71fe57200c7ee9a1a3f"} Mar 12 13:30:11 crc kubenswrapper[4778]: I0312 13:30:11.898805 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-2tjsk" Mar 12 13:30:11 crc kubenswrapper[4778]: I0312 13:30:11.901993 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-5d6qz" podStartSLOduration=44.6801485 podStartE2EDuration="2m23.901974915s" podCreationTimestamp="2026-03-12 13:27:48 +0000 UTC" firstStartedPulling="2026-03-12 13:28:31.14586455 +0000 UTC m=+1129.594559966" lastFinishedPulling="2026-03-12 13:30:10.367690965 +0000 UTC 
m=+1228.816386381" observedRunningTime="2026-03-12 13:30:11.899280068 +0000 UTC m=+1230.347975464" watchObservedRunningTime="2026-03-12 13:30:11.901974915 +0000 UTC m=+1230.350670311" Mar 12 13:30:11 crc kubenswrapper[4778]: I0312 13:30:11.943446 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-gfv5z" event={"ID":"6ad9bf9f-7214-44bc-a65d-1dcbf385fc2c","Type":"ContainerStarted","Data":"e9665833f856db66cbdf9ceafdbd4ac9eb165892b060faf205d90b6dd7e9f1c7"} Mar 12 13:30:11 crc kubenswrapper[4778]: I0312 13:30:11.943732 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-gfv5z" Mar 12 13:30:11 crc kubenswrapper[4778]: I0312 13:30:11.967403 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-686d5f9fbd-vv9rc" event={"ID":"d7288cc6-4247-4d03-bd37-9862243bf613","Type":"ContainerStarted","Data":"3fc68162292e71c8fc5f9cbe96d0444169057f2d6a017a0fc30ef68581d2f893"} Mar 12 13:30:11 crc kubenswrapper[4778]: I0312 13:30:11.968023 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-686d5f9fbd-vv9rc" Mar 12 13:30:11 crc kubenswrapper[4778]: I0312 13:30:11.974040 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-dd2ft" podStartSLOduration=6.6399989999999995 podStartE2EDuration="2m22.974021138s" podCreationTimestamp="2026-03-12 13:27:49 +0000 UTC" firstStartedPulling="2026-03-12 13:27:54.037977 +0000 UTC m=+1092.486672396" lastFinishedPulling="2026-03-12 13:30:10.371999128 +0000 UTC m=+1228.820694534" observedRunningTime="2026-03-12 13:30:11.943981502 +0000 UTC m=+1230.392676898" watchObservedRunningTime="2026-03-12 13:30:11.974021138 +0000 UTC m=+1230.422716534" Mar 12 13:30:11 crc 
kubenswrapper[4778]: I0312 13:30:11.980727 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-wvpf8" podStartSLOduration=7.901207821 podStartE2EDuration="2m22.980698559s" podCreationTimestamp="2026-03-12 13:27:49 +0000 UTC" firstStartedPulling="2026-03-12 13:27:54.851607513 +0000 UTC m=+1093.300302909" lastFinishedPulling="2026-03-12 13:30:09.931098251 +0000 UTC m=+1228.379793647" observedRunningTime="2026-03-12 13:30:11.972471884 +0000 UTC m=+1230.421167280" watchObservedRunningTime="2026-03-12 13:30:11.980698559 +0000 UTC m=+1230.429393955" Mar 12 13:30:11 crc kubenswrapper[4778]: I0312 13:30:11.991872 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-9n6jv" event={"ID":"ad531191-d7c5-4ef6-9929-3a5869751d98","Type":"ContainerStarted","Data":"608055fcdf6f010cf5a045ff56b1d3ff4a839075052dae92af32dd512d1a10a8"} Mar 12 13:30:11 crc kubenswrapper[4778]: I0312 13:30:11.993121 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-9n6jv" Mar 12 13:30:12 crc kubenswrapper[4778]: I0312 13:30:12.003509 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-7dxdh" event={"ID":"7e02c37f-b9af-46c9-a743-03ead9b060db","Type":"ContainerStarted","Data":"3070f4c0343eea3d825bef783dce2823c5ca2fb514d1402fcf5f63c0219e7ac7"} Mar 12 13:30:12 crc kubenswrapper[4778]: I0312 13:30:12.004345 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-7dxdh" Mar 12 13:30:12 crc kubenswrapper[4778]: I0312 13:30:12.007608 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-shf7b" 
podStartSLOduration=5.980794323 podStartE2EDuration="2m21.007573745s" podCreationTimestamp="2026-03-12 13:27:51 +0000 UTC" firstStartedPulling="2026-03-12 13:27:55.342519039 +0000 UTC m=+1093.791214435" lastFinishedPulling="2026-03-12 13:30:10.369298461 +0000 UTC m=+1228.817993857" observedRunningTime="2026-03-12 13:30:12.000832523 +0000 UTC m=+1230.449527929" watchObservedRunningTime="2026-03-12 13:30:12.007573745 +0000 UTC m=+1230.456269141" Mar 12 13:30:12 crc kubenswrapper[4778]: I0312 13:30:12.023415 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7qq9w6" event={"ID":"4f7d316e-6896-4f84-8423-6f79778c1c6b","Type":"ContainerStarted","Data":"49e66e3e82cd4589f6d0c32156c3c77fcc2471f8841861dbcbab5a2ef92bd057"} Mar 12 13:30:12 crc kubenswrapper[4778]: I0312 13:30:12.024168 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7qq9w6" Mar 12 13:30:12 crc kubenswrapper[4778]: I0312 13:30:12.064808 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-b7tkm" event={"ID":"e290c1ea-a39d-451e-a24b-17a2b61ff6f0","Type":"ContainerStarted","Data":"ba762064366842b2ae297d67ef6885f977d2d867bb935f92964f677eda3bc8b1"} Mar 12 13:30:12 crc kubenswrapper[4778]: I0312 13:30:12.065845 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-b7tkm" Mar 12 13:30:12 crc kubenswrapper[4778]: I0312 13:30:12.094465 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-686d5f9fbd-vv9rc" podStartSLOduration=6.763406299 podStartE2EDuration="2m23.094423731s" podCreationTimestamp="2026-03-12 13:27:49 +0000 UTC" firstStartedPulling="2026-03-12 13:27:54.038578297 +0000 UTC m=+1092.487273703" 
lastFinishedPulling="2026-03-12 13:30:10.369595729 +0000 UTC m=+1228.818291135" observedRunningTime="2026-03-12 13:30:12.087547455 +0000 UTC m=+1230.536242851" watchObservedRunningTime="2026-03-12 13:30:12.094423731 +0000 UTC m=+1230.543119117" Mar 12 13:30:12 crc kubenswrapper[4778]: I0312 13:30:12.144856 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-qb8s8" podStartSLOduration=7.684919612 podStartE2EDuration="2m24.144831707s" podCreationTimestamp="2026-03-12 13:27:48 +0000 UTC" firstStartedPulling="2026-03-12 13:27:53.349509898 +0000 UTC m=+1091.798205294" lastFinishedPulling="2026-03-12 13:30:09.809421983 +0000 UTC m=+1228.258117389" observedRunningTime="2026-03-12 13:30:12.049661004 +0000 UTC m=+1230.498356400" watchObservedRunningTime="2026-03-12 13:30:12.144831707 +0000 UTC m=+1230.593527103" Mar 12 13:30:12 crc kubenswrapper[4778]: I0312 13:30:12.148967 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-gfv5z" podStartSLOduration=7.618969517 podStartE2EDuration="2m23.148952474s" podCreationTimestamp="2026-03-12 13:27:49 +0000 UTC" firstStartedPulling="2026-03-12 13:27:54.897952645 +0000 UTC m=+1093.346648031" lastFinishedPulling="2026-03-12 13:30:10.427935602 +0000 UTC m=+1228.876630988" observedRunningTime="2026-03-12 13:30:12.118695202 +0000 UTC m=+1230.567390588" watchObservedRunningTime="2026-03-12 13:30:12.148952474 +0000 UTC m=+1230.597647870" Mar 12 13:30:12 crc kubenswrapper[4778]: I0312 13:30:12.169485 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-2tjsk" podStartSLOduration=7.505351563 podStartE2EDuration="2m22.169453929s" podCreationTimestamp="2026-03-12 13:27:50 +0000 UTC" firstStartedPulling="2026-03-12 13:27:55.267968503 +0000 UTC m=+1093.716663899" 
lastFinishedPulling="2026-03-12 13:30:09.932070829 +0000 UTC m=+1228.380766265" observedRunningTime="2026-03-12 13:30:12.14527384 +0000 UTC m=+1230.593969256" watchObservedRunningTime="2026-03-12 13:30:12.169453929 +0000 UTC m=+1230.618149395" Mar 12 13:30:12 crc kubenswrapper[4778]: I0312 13:30:12.207364 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7qq9w6" podStartSLOduration=44.898559297 podStartE2EDuration="2m23.207344289s" podCreationTimestamp="2026-03-12 13:27:49 +0000 UTC" firstStartedPulling="2026-03-12 13:28:31.264531902 +0000 UTC m=+1129.713227298" lastFinishedPulling="2026-03-12 13:30:09.573316884 +0000 UTC m=+1228.022012290" observedRunningTime="2026-03-12 13:30:12.203229771 +0000 UTC m=+1230.651925157" watchObservedRunningTime="2026-03-12 13:30:12.207344289 +0000 UTC m=+1230.656039685" Mar 12 13:30:12 crc kubenswrapper[4778]: I0312 13:30:12.226410 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-xm4cc" podStartSLOduration=6.955253243 podStartE2EDuration="2m25.226389281s" podCreationTimestamp="2026-03-12 13:27:47 +0000 UTC" firstStartedPulling="2026-03-12 13:27:51.660989343 +0000 UTC m=+1090.109684749" lastFinishedPulling="2026-03-12 13:30:09.932125351 +0000 UTC m=+1228.380820787" observedRunningTime="2026-03-12 13:30:12.222271114 +0000 UTC m=+1230.670966510" watchObservedRunningTime="2026-03-12 13:30:12.226389281 +0000 UTC m=+1230.675084677" Mar 12 13:30:12 crc kubenswrapper[4778]: I0312 13:30:12.257772 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-9n6jv" podStartSLOduration=8.54365231 podStartE2EDuration="2m25.257756855s" podCreationTimestamp="2026-03-12 13:27:47 +0000 UTC" firstStartedPulling="2026-03-12 13:27:52.859319182 +0000 UTC m=+1091.308014578" 
lastFinishedPulling="2026-03-12 13:30:09.573423717 +0000 UTC m=+1228.022119123" observedRunningTime="2026-03-12 13:30:12.252787864 +0000 UTC m=+1230.701483260" watchObservedRunningTime="2026-03-12 13:30:12.257756855 +0000 UTC m=+1230.706452251" Mar 12 13:30:12 crc kubenswrapper[4778]: I0312 13:30:12.305632 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-b7tkm" podStartSLOduration=23.456396545 podStartE2EDuration="2m24.305611239s" podCreationTimestamp="2026-03-12 13:27:48 +0000 UTC" firstStartedPulling="2026-03-12 13:27:51.660382316 +0000 UTC m=+1090.109077712" lastFinishedPulling="2026-03-12 13:29:52.50959699 +0000 UTC m=+1210.958292406" observedRunningTime="2026-03-12 13:30:12.29861577 +0000 UTC m=+1230.747311166" watchObservedRunningTime="2026-03-12 13:30:12.305611239 +0000 UTC m=+1230.754306635" Mar 12 13:30:12 crc kubenswrapper[4778]: I0312 13:30:12.307940 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-7dxdh" podStartSLOduration=7.948900162 podStartE2EDuration="2m24.307927365s" podCreationTimestamp="2026-03-12 13:27:48 +0000 UTC" firstStartedPulling="2026-03-12 13:27:53.997457803 +0000 UTC m=+1092.446153199" lastFinishedPulling="2026-03-12 13:30:10.356485006 +0000 UTC m=+1228.805180402" observedRunningTime="2026-03-12 13:30:12.275092039 +0000 UTC m=+1230.723787435" watchObservedRunningTime="2026-03-12 13:30:12.307927365 +0000 UTC m=+1230.756622771" Mar 12 13:30:13 crc kubenswrapper[4778]: I0312 13:30:13.075209 4778 generic.go:334] "Generic (PLEG): container finished" podID="8bf03685-d980-41f0-bbc5-84b9ae0ce1df" containerID="fa067a709ad1af5d5b9327929891ffc04839dd2d8aba3cc70c48dbfeabd353b9" exitCode=0 Mar 12 13:30:13 crc kubenswrapper[4778]: I0312 13:30:13.075295 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29555370-zcp5f" event={"ID":"8bf03685-d980-41f0-bbc5-84b9ae0ce1df","Type":"ContainerDied","Data":"fa067a709ad1af5d5b9327929891ffc04839dd2d8aba3cc70c48dbfeabd353b9"} Mar 12 13:30:14 crc kubenswrapper[4778]: I0312 13:30:14.312711 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555370-zcp5f" Mar 12 13:30:14 crc kubenswrapper[4778]: I0312 13:30:14.411379 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xsfh\" (UniqueName: \"kubernetes.io/projected/8bf03685-d980-41f0-bbc5-84b9ae0ce1df-kube-api-access-4xsfh\") pod \"8bf03685-d980-41f0-bbc5-84b9ae0ce1df\" (UID: \"8bf03685-d980-41f0-bbc5-84b9ae0ce1df\") " Mar 12 13:30:14 crc kubenswrapper[4778]: I0312 13:30:14.412548 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8bf03685-d980-41f0-bbc5-84b9ae0ce1df-secret-volume\") pod \"8bf03685-d980-41f0-bbc5-84b9ae0ce1df\" (UID: \"8bf03685-d980-41f0-bbc5-84b9ae0ce1df\") " Mar 12 13:30:14 crc kubenswrapper[4778]: I0312 13:30:14.413073 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8bf03685-d980-41f0-bbc5-84b9ae0ce1df-config-volume\") pod \"8bf03685-d980-41f0-bbc5-84b9ae0ce1df\" (UID: \"8bf03685-d980-41f0-bbc5-84b9ae0ce1df\") " Mar 12 13:30:14 crc kubenswrapper[4778]: I0312 13:30:14.413671 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bf03685-d980-41f0-bbc5-84b9ae0ce1df-config-volume" (OuterVolumeSpecName: "config-volume") pod "8bf03685-d980-41f0-bbc5-84b9ae0ce1df" (UID: "8bf03685-d980-41f0-bbc5-84b9ae0ce1df"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:30:14 crc kubenswrapper[4778]: I0312 13:30:14.414440 4778 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8bf03685-d980-41f0-bbc5-84b9ae0ce1df-config-volume\") on node \"crc\" DevicePath \"\"" Mar 12 13:30:14 crc kubenswrapper[4778]: I0312 13:30:14.419528 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bf03685-d980-41f0-bbc5-84b9ae0ce1df-kube-api-access-4xsfh" (OuterVolumeSpecName: "kube-api-access-4xsfh") pod "8bf03685-d980-41f0-bbc5-84b9ae0ce1df" (UID: "8bf03685-d980-41f0-bbc5-84b9ae0ce1df"). InnerVolumeSpecName "kube-api-access-4xsfh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:30:14 crc kubenswrapper[4778]: I0312 13:30:14.420163 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bf03685-d980-41f0-bbc5-84b9ae0ce1df-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8bf03685-d980-41f0-bbc5-84b9ae0ce1df" (UID: "8bf03685-d980-41f0-bbc5-84b9ae0ce1df"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:30:14 crc kubenswrapper[4778]: I0312 13:30:14.516247 4778 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8bf03685-d980-41f0-bbc5-84b9ae0ce1df-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 12 13:30:14 crc kubenswrapper[4778]: I0312 13:30:14.516281 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xsfh\" (UniqueName: \"kubernetes.io/projected/8bf03685-d980-41f0-bbc5-84b9ae0ce1df-kube-api-access-4xsfh\") on node \"crc\" DevicePath \"\"" Mar 12 13:30:15 crc kubenswrapper[4778]: I0312 13:30:15.090963 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555370-zcp5f" event={"ID":"8bf03685-d980-41f0-bbc5-84b9ae0ce1df","Type":"ContainerDied","Data":"1a5e91c9d02c7de2c4009f12029cabe072d8d04d8718617498c0566b1d22e0df"} Mar 12 13:30:15 crc kubenswrapper[4778]: I0312 13:30:15.091049 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a5e91c9d02c7de2c4009f12029cabe072d8d04d8718617498c0566b1d22e0df" Mar 12 13:30:15 crc kubenswrapper[4778]: I0312 13:30:15.091441 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555370-zcp5f" Mar 12 13:30:16 crc kubenswrapper[4778]: I0312 13:30:16.099552 4778 generic.go:334] "Generic (PLEG): container finished" podID="1c682acb-240b-44d4-a2be-0ea0cd913af1" containerID="8328194fef169053b3f39722ffd3e2d940869363b5142050b8e768ed01fab0c0" exitCode=0 Mar 12 13:30:16 crc kubenswrapper[4778]: I0312 13:30:16.099617 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555370-6zrgd" event={"ID":"1c682acb-240b-44d4-a2be-0ea0cd913af1","Type":"ContainerDied","Data":"8328194fef169053b3f39722ffd3e2d940869363b5142050b8e768ed01fab0c0"} Mar 12 13:30:17 crc kubenswrapper[4778]: I0312 13:30:17.386883 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555370-6zrgd" Mar 12 13:30:17 crc kubenswrapper[4778]: I0312 13:30:17.560830 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ks2n\" (UniqueName: \"kubernetes.io/projected/1c682acb-240b-44d4-a2be-0ea0cd913af1-kube-api-access-9ks2n\") pod \"1c682acb-240b-44d4-a2be-0ea0cd913af1\" (UID: \"1c682acb-240b-44d4-a2be-0ea0cd913af1\") " Mar 12 13:30:17 crc kubenswrapper[4778]: I0312 13:30:17.569838 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c682acb-240b-44d4-a2be-0ea0cd913af1-kube-api-access-9ks2n" (OuterVolumeSpecName: "kube-api-access-9ks2n") pod "1c682acb-240b-44d4-a2be-0ea0cd913af1" (UID: "1c682acb-240b-44d4-a2be-0ea0cd913af1"). InnerVolumeSpecName "kube-api-access-9ks2n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:30:17 crc kubenswrapper[4778]: I0312 13:30:17.662970 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ks2n\" (UniqueName: \"kubernetes.io/projected/1c682acb-240b-44d4-a2be-0ea0cd913af1-kube-api-access-9ks2n\") on node \"crc\" DevicePath \"\"" Mar 12 13:30:17 crc kubenswrapper[4778]: I0312 13:30:17.695760 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-557ccf57b7qq9w6" Mar 12 13:30:18 crc kubenswrapper[4778]: I0312 13:30:18.120551 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555370-6zrgd" event={"ID":"1c682acb-240b-44d4-a2be-0ea0cd913af1","Type":"ContainerDied","Data":"623f846d223ee24a6cba599fe831dbd5bf60da2ddff32e7acc90057e22b71876"} Mar 12 13:30:18 crc kubenswrapper[4778]: I0312 13:30:18.121179 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="623f846d223ee24a6cba599fe831dbd5bf60da2ddff32e7acc90057e22b71876" Mar 12 13:30:18 crc kubenswrapper[4778]: I0312 13:30:18.120652 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555370-6zrgd" Mar 12 13:30:18 crc kubenswrapper[4778]: I0312 13:30:18.455565 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555364-hrrdv"] Mar 12 13:30:18 crc kubenswrapper[4778]: I0312 13:30:18.461800 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555364-hrrdv"] Mar 12 13:30:18 crc kubenswrapper[4778]: I0312 13:30:18.530036 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-xm4cc" Mar 12 13:30:19 crc kubenswrapper[4778]: I0312 13:30:19.267607 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-9n6jv" Mar 12 13:30:19 crc kubenswrapper[4778]: I0312 13:30:19.377839 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-b7tkm" Mar 12 13:30:19 crc kubenswrapper[4778]: I0312 13:30:19.747443 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-4jgt8" Mar 12 13:30:19 crc kubenswrapper[4778]: I0312 13:30:19.990898 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-dd2ft" Mar 12 13:30:20 crc kubenswrapper[4778]: I0312 13:30:20.030293 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-686d5f9fbd-vv9rc" Mar 12 13:30:20 crc kubenswrapper[4778]: I0312 13:30:20.125593 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-qb8s8" Mar 12 13:30:20 crc kubenswrapper[4778]: I0312 13:30:20.261724 
4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c862c78c-5987-48cc-8b41-531755f319e9" path="/var/lib/kubelet/pods/c862c78c-5987-48cc-8b41-531755f319e9/volumes" Mar 12 13:30:20 crc kubenswrapper[4778]: I0312 13:30:20.282937 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-pn8tk" Mar 12 13:30:20 crc kubenswrapper[4778]: I0312 13:30:20.283009 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-7dxdh" Mar 12 13:30:20 crc kubenswrapper[4778]: I0312 13:30:20.376508 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-cdgg9" Mar 12 13:30:20 crc kubenswrapper[4778]: I0312 13:30:20.878793 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-bbgmb" Mar 12 13:30:20 crc kubenswrapper[4778]: I0312 13:30:20.979761 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-wvpf8" Mar 12 13:30:20 crc kubenswrapper[4778]: I0312 13:30:20.982739 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-wvpf8" Mar 12 13:30:21 crc kubenswrapper[4778]: I0312 13:30:21.091936 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-gfv5z" Mar 12 13:30:21 crc kubenswrapper[4778]: I0312 13:30:21.541175 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-2tjsk" Mar 12 13:30:25 crc kubenswrapper[4778]: I0312 13:30:25.167916 4778 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-5d6qz" Mar 12 13:30:41 crc kubenswrapper[4778]: I0312 13:30:41.743140 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-mcpvm"] Mar 12 13:30:41 crc kubenswrapper[4778]: E0312 13:30:41.744066 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c682acb-240b-44d4-a2be-0ea0cd913af1" containerName="oc" Mar 12 13:30:41 crc kubenswrapper[4778]: I0312 13:30:41.744078 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c682acb-240b-44d4-a2be-0ea0cd913af1" containerName="oc" Mar 12 13:30:41 crc kubenswrapper[4778]: E0312 13:30:41.744107 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bf03685-d980-41f0-bbc5-84b9ae0ce1df" containerName="collect-profiles" Mar 12 13:30:41 crc kubenswrapper[4778]: I0312 13:30:41.744113 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bf03685-d980-41f0-bbc5-84b9ae0ce1df" containerName="collect-profiles" Mar 12 13:30:41 crc kubenswrapper[4778]: I0312 13:30:41.744363 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bf03685-d980-41f0-bbc5-84b9ae0ce1df" containerName="collect-profiles" Mar 12 13:30:41 crc kubenswrapper[4778]: I0312 13:30:41.744381 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c682acb-240b-44d4-a2be-0ea0cd913af1" containerName="oc" Mar 12 13:30:41 crc kubenswrapper[4778]: I0312 13:30:41.745463 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-mcpvm" Mar 12 13:30:41 crc kubenswrapper[4778]: I0312 13:30:41.753839 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Mar 12 13:30:41 crc kubenswrapper[4778]: I0312 13:30:41.753862 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-4pjqn" Mar 12 13:30:41 crc kubenswrapper[4778]: I0312 13:30:41.753963 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Mar 12 13:30:41 crc kubenswrapper[4778]: I0312 13:30:41.753996 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Mar 12 13:30:41 crc kubenswrapper[4778]: I0312 13:30:41.763980 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-mcpvm"] Mar 12 13:30:41 crc kubenswrapper[4778]: I0312 13:30:41.781028 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-fr6p2"] Mar 12 13:30:41 crc kubenswrapper[4778]: I0312 13:30:41.782537 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-fr6p2" Mar 12 13:30:41 crc kubenswrapper[4778]: I0312 13:30:41.785252 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Mar 12 13:30:41 crc kubenswrapper[4778]: I0312 13:30:41.793435 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-fr6p2"] Mar 12 13:30:41 crc kubenswrapper[4778]: I0312 13:30:41.913736 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znmfx\" (UniqueName: \"kubernetes.io/projected/68c74642-7beb-4cb9-86bf-b12beafb4b68-kube-api-access-znmfx\") pod \"dnsmasq-dns-78dd6ddcc-fr6p2\" (UID: \"68c74642-7beb-4cb9-86bf-b12beafb4b68\") " pod="openstack/dnsmasq-dns-78dd6ddcc-fr6p2" Mar 12 13:30:41 crc kubenswrapper[4778]: I0312 13:30:41.913788 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68c74642-7beb-4cb9-86bf-b12beafb4b68-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-fr6p2\" (UID: \"68c74642-7beb-4cb9-86bf-b12beafb4b68\") " pod="openstack/dnsmasq-dns-78dd6ddcc-fr6p2" Mar 12 13:30:41 crc kubenswrapper[4778]: I0312 13:30:41.913821 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b217b876-3c50-4d5e-8c5b-40e3f1d95b6c-config\") pod \"dnsmasq-dns-675f4bcbfc-mcpvm\" (UID: \"b217b876-3c50-4d5e-8c5b-40e3f1d95b6c\") " pod="openstack/dnsmasq-dns-675f4bcbfc-mcpvm" Mar 12 13:30:41 crc kubenswrapper[4778]: I0312 13:30:41.913840 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68c74642-7beb-4cb9-86bf-b12beafb4b68-config\") pod \"dnsmasq-dns-78dd6ddcc-fr6p2\" (UID: \"68c74642-7beb-4cb9-86bf-b12beafb4b68\") " pod="openstack/dnsmasq-dns-78dd6ddcc-fr6p2" Mar 12 13:30:41 
crc kubenswrapper[4778]: I0312 13:30:41.913963 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsbqf\" (UniqueName: \"kubernetes.io/projected/b217b876-3c50-4d5e-8c5b-40e3f1d95b6c-kube-api-access-hsbqf\") pod \"dnsmasq-dns-675f4bcbfc-mcpvm\" (UID: \"b217b876-3c50-4d5e-8c5b-40e3f1d95b6c\") " pod="openstack/dnsmasq-dns-675f4bcbfc-mcpvm" Mar 12 13:30:42 crc kubenswrapper[4778]: I0312 13:30:42.015537 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znmfx\" (UniqueName: \"kubernetes.io/projected/68c74642-7beb-4cb9-86bf-b12beafb4b68-kube-api-access-znmfx\") pod \"dnsmasq-dns-78dd6ddcc-fr6p2\" (UID: \"68c74642-7beb-4cb9-86bf-b12beafb4b68\") " pod="openstack/dnsmasq-dns-78dd6ddcc-fr6p2" Mar 12 13:30:42 crc kubenswrapper[4778]: I0312 13:30:42.015627 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68c74642-7beb-4cb9-86bf-b12beafb4b68-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-fr6p2\" (UID: \"68c74642-7beb-4cb9-86bf-b12beafb4b68\") " pod="openstack/dnsmasq-dns-78dd6ddcc-fr6p2" Mar 12 13:30:42 crc kubenswrapper[4778]: I0312 13:30:42.015656 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b217b876-3c50-4d5e-8c5b-40e3f1d95b6c-config\") pod \"dnsmasq-dns-675f4bcbfc-mcpvm\" (UID: \"b217b876-3c50-4d5e-8c5b-40e3f1d95b6c\") " pod="openstack/dnsmasq-dns-675f4bcbfc-mcpvm" Mar 12 13:30:42 crc kubenswrapper[4778]: I0312 13:30:42.015674 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68c74642-7beb-4cb9-86bf-b12beafb4b68-config\") pod \"dnsmasq-dns-78dd6ddcc-fr6p2\" (UID: \"68c74642-7beb-4cb9-86bf-b12beafb4b68\") " pod="openstack/dnsmasq-dns-78dd6ddcc-fr6p2" Mar 12 13:30:42 crc kubenswrapper[4778]: I0312 13:30:42.015697 4778 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsbqf\" (UniqueName: \"kubernetes.io/projected/b217b876-3c50-4d5e-8c5b-40e3f1d95b6c-kube-api-access-hsbqf\") pod \"dnsmasq-dns-675f4bcbfc-mcpvm\" (UID: \"b217b876-3c50-4d5e-8c5b-40e3f1d95b6c\") " pod="openstack/dnsmasq-dns-675f4bcbfc-mcpvm" Mar 12 13:30:42 crc kubenswrapper[4778]: I0312 13:30:42.016710 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b217b876-3c50-4d5e-8c5b-40e3f1d95b6c-config\") pod \"dnsmasq-dns-675f4bcbfc-mcpvm\" (UID: \"b217b876-3c50-4d5e-8c5b-40e3f1d95b6c\") " pod="openstack/dnsmasq-dns-675f4bcbfc-mcpvm" Mar 12 13:30:42 crc kubenswrapper[4778]: I0312 13:30:42.016739 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68c74642-7beb-4cb9-86bf-b12beafb4b68-config\") pod \"dnsmasq-dns-78dd6ddcc-fr6p2\" (UID: \"68c74642-7beb-4cb9-86bf-b12beafb4b68\") " pod="openstack/dnsmasq-dns-78dd6ddcc-fr6p2" Mar 12 13:30:42 crc kubenswrapper[4778]: I0312 13:30:42.016768 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68c74642-7beb-4cb9-86bf-b12beafb4b68-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-fr6p2\" (UID: \"68c74642-7beb-4cb9-86bf-b12beafb4b68\") " pod="openstack/dnsmasq-dns-78dd6ddcc-fr6p2" Mar 12 13:30:42 crc kubenswrapper[4778]: I0312 13:30:42.033833 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsbqf\" (UniqueName: \"kubernetes.io/projected/b217b876-3c50-4d5e-8c5b-40e3f1d95b6c-kube-api-access-hsbqf\") pod \"dnsmasq-dns-675f4bcbfc-mcpvm\" (UID: \"b217b876-3c50-4d5e-8c5b-40e3f1d95b6c\") " pod="openstack/dnsmasq-dns-675f4bcbfc-mcpvm" Mar 12 13:30:42 crc kubenswrapper[4778]: I0312 13:30:42.034318 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znmfx\" (UniqueName: 
\"kubernetes.io/projected/68c74642-7beb-4cb9-86bf-b12beafb4b68-kube-api-access-znmfx\") pod \"dnsmasq-dns-78dd6ddcc-fr6p2\" (UID: \"68c74642-7beb-4cb9-86bf-b12beafb4b68\") " pod="openstack/dnsmasq-dns-78dd6ddcc-fr6p2" Mar 12 13:30:42 crc kubenswrapper[4778]: I0312 13:30:42.070431 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-mcpvm" Mar 12 13:30:42 crc kubenswrapper[4778]: I0312 13:30:42.097546 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-fr6p2" Mar 12 13:30:42 crc kubenswrapper[4778]: I0312 13:30:42.328548 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-mcpvm"] Mar 12 13:30:42 crc kubenswrapper[4778]: I0312 13:30:42.364916 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-fr6p2"] Mar 12 13:30:42 crc kubenswrapper[4778]: W0312 13:30:42.380034 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68c74642_7beb_4cb9_86bf_b12beafb4b68.slice/crio-f62bb5eb4a941e7684e59f1fde389da442d607aaf8bb4fcbc4a589c4e0b98935 WatchSource:0}: Error finding container f62bb5eb4a941e7684e59f1fde389da442d607aaf8bb4fcbc4a589c4e0b98935: Status 404 returned error can't find the container with id f62bb5eb4a941e7684e59f1fde389da442d607aaf8bb4fcbc4a589c4e0b98935 Mar 12 13:30:43 crc kubenswrapper[4778]: I0312 13:30:43.325996 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-fr6p2" event={"ID":"68c74642-7beb-4cb9-86bf-b12beafb4b68","Type":"ContainerStarted","Data":"f62bb5eb4a941e7684e59f1fde389da442d607aaf8bb4fcbc4a589c4e0b98935"} Mar 12 13:30:43 crc kubenswrapper[4778]: I0312 13:30:43.327016 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-mcpvm" 
event={"ID":"b217b876-3c50-4d5e-8c5b-40e3f1d95b6c","Type":"ContainerStarted","Data":"f54a79ac4265517d4761f6dc0e556ce29441b7767ec9275aa7bd3cc4d56d57eb"} Mar 12 13:30:44 crc kubenswrapper[4778]: I0312 13:30:44.530628 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-mcpvm"] Mar 12 13:30:44 crc kubenswrapper[4778]: I0312 13:30:44.555905 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-d9gsf"] Mar 12 13:30:44 crc kubenswrapper[4778]: I0312 13:30:44.557240 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-d9gsf" Mar 12 13:30:44 crc kubenswrapper[4778]: I0312 13:30:44.568083 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-d9gsf"] Mar 12 13:30:44 crc kubenswrapper[4778]: I0312 13:30:44.651299 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78c6f209-08e0-4789-be6e-8c319547338c-config\") pod \"dnsmasq-dns-5ccc8479f9-d9gsf\" (UID: \"78c6f209-08e0-4789-be6e-8c319547338c\") " pod="openstack/dnsmasq-dns-5ccc8479f9-d9gsf" Mar 12 13:30:44 crc kubenswrapper[4778]: I0312 13:30:44.651361 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwddk\" (UniqueName: \"kubernetes.io/projected/78c6f209-08e0-4789-be6e-8c319547338c-kube-api-access-hwddk\") pod \"dnsmasq-dns-5ccc8479f9-d9gsf\" (UID: \"78c6f209-08e0-4789-be6e-8c319547338c\") " pod="openstack/dnsmasq-dns-5ccc8479f9-d9gsf" Mar 12 13:30:44 crc kubenswrapper[4778]: I0312 13:30:44.651472 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78c6f209-08e0-4789-be6e-8c319547338c-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-d9gsf\" (UID: \"78c6f209-08e0-4789-be6e-8c319547338c\") " 
pod="openstack/dnsmasq-dns-5ccc8479f9-d9gsf" Mar 12 13:30:44 crc kubenswrapper[4778]: I0312 13:30:44.757139 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78c6f209-08e0-4789-be6e-8c319547338c-config\") pod \"dnsmasq-dns-5ccc8479f9-d9gsf\" (UID: \"78c6f209-08e0-4789-be6e-8c319547338c\") " pod="openstack/dnsmasq-dns-5ccc8479f9-d9gsf" Mar 12 13:30:44 crc kubenswrapper[4778]: I0312 13:30:44.757278 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwddk\" (UniqueName: \"kubernetes.io/projected/78c6f209-08e0-4789-be6e-8c319547338c-kube-api-access-hwddk\") pod \"dnsmasq-dns-5ccc8479f9-d9gsf\" (UID: \"78c6f209-08e0-4789-be6e-8c319547338c\") " pod="openstack/dnsmasq-dns-5ccc8479f9-d9gsf" Mar 12 13:30:44 crc kubenswrapper[4778]: I0312 13:30:44.757382 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78c6f209-08e0-4789-be6e-8c319547338c-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-d9gsf\" (UID: \"78c6f209-08e0-4789-be6e-8c319547338c\") " pod="openstack/dnsmasq-dns-5ccc8479f9-d9gsf" Mar 12 13:30:44 crc kubenswrapper[4778]: I0312 13:30:44.758468 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78c6f209-08e0-4789-be6e-8c319547338c-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-d9gsf\" (UID: \"78c6f209-08e0-4789-be6e-8c319547338c\") " pod="openstack/dnsmasq-dns-5ccc8479f9-d9gsf" Mar 12 13:30:44 crc kubenswrapper[4778]: I0312 13:30:44.758551 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78c6f209-08e0-4789-be6e-8c319547338c-config\") pod \"dnsmasq-dns-5ccc8479f9-d9gsf\" (UID: \"78c6f209-08e0-4789-be6e-8c319547338c\") " pod="openstack/dnsmasq-dns-5ccc8479f9-d9gsf" Mar 12 13:30:44 crc kubenswrapper[4778]: I0312 13:30:44.784466 4778 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwddk\" (UniqueName: \"kubernetes.io/projected/78c6f209-08e0-4789-be6e-8c319547338c-kube-api-access-hwddk\") pod \"dnsmasq-dns-5ccc8479f9-d9gsf\" (UID: \"78c6f209-08e0-4789-be6e-8c319547338c\") " pod="openstack/dnsmasq-dns-5ccc8479f9-d9gsf" Mar 12 13:30:44 crc kubenswrapper[4778]: I0312 13:30:44.862261 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-fr6p2"] Mar 12 13:30:44 crc kubenswrapper[4778]: I0312 13:30:44.885370 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2p4pj"] Mar 12 13:30:44 crc kubenswrapper[4778]: I0312 13:30:44.885659 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-d9gsf" Mar 12 13:30:44 crc kubenswrapper[4778]: I0312 13:30:44.891620 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-2p4pj" Mar 12 13:30:44 crc kubenswrapper[4778]: I0312 13:30:44.905651 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2p4pj"] Mar 12 13:30:45 crc kubenswrapper[4778]: I0312 13:30:45.062960 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c10b98ea-d832-471e-adb6-c22c4dbb0ab8-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-2p4pj\" (UID: \"c10b98ea-d832-471e-adb6-c22c4dbb0ab8\") " pod="openstack/dnsmasq-dns-57d769cc4f-2p4pj" Mar 12 13:30:45 crc kubenswrapper[4778]: I0312 13:30:45.063046 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c10b98ea-d832-471e-adb6-c22c4dbb0ab8-config\") pod \"dnsmasq-dns-57d769cc4f-2p4pj\" (UID: \"c10b98ea-d832-471e-adb6-c22c4dbb0ab8\") " pod="openstack/dnsmasq-dns-57d769cc4f-2p4pj" Mar 12 13:30:45 crc 
kubenswrapper[4778]: I0312 13:30:45.063214 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcntw\" (UniqueName: \"kubernetes.io/projected/c10b98ea-d832-471e-adb6-c22c4dbb0ab8-kube-api-access-zcntw\") pod \"dnsmasq-dns-57d769cc4f-2p4pj\" (UID: \"c10b98ea-d832-471e-adb6-c22c4dbb0ab8\") " pod="openstack/dnsmasq-dns-57d769cc4f-2p4pj" Mar 12 13:30:45 crc kubenswrapper[4778]: I0312 13:30:45.164093 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c10b98ea-d832-471e-adb6-c22c4dbb0ab8-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-2p4pj\" (UID: \"c10b98ea-d832-471e-adb6-c22c4dbb0ab8\") " pod="openstack/dnsmasq-dns-57d769cc4f-2p4pj" Mar 12 13:30:45 crc kubenswrapper[4778]: I0312 13:30:45.164174 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c10b98ea-d832-471e-adb6-c22c4dbb0ab8-config\") pod \"dnsmasq-dns-57d769cc4f-2p4pj\" (UID: \"c10b98ea-d832-471e-adb6-c22c4dbb0ab8\") " pod="openstack/dnsmasq-dns-57d769cc4f-2p4pj" Mar 12 13:30:45 crc kubenswrapper[4778]: I0312 13:30:45.164233 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcntw\" (UniqueName: \"kubernetes.io/projected/c10b98ea-d832-471e-adb6-c22c4dbb0ab8-kube-api-access-zcntw\") pod \"dnsmasq-dns-57d769cc4f-2p4pj\" (UID: \"c10b98ea-d832-471e-adb6-c22c4dbb0ab8\") " pod="openstack/dnsmasq-dns-57d769cc4f-2p4pj" Mar 12 13:30:45 crc kubenswrapper[4778]: I0312 13:30:45.165242 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c10b98ea-d832-471e-adb6-c22c4dbb0ab8-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-2p4pj\" (UID: \"c10b98ea-d832-471e-adb6-c22c4dbb0ab8\") " pod="openstack/dnsmasq-dns-57d769cc4f-2p4pj" Mar 12 13:30:45 crc kubenswrapper[4778]: I0312 13:30:45.165729 4778 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c10b98ea-d832-471e-adb6-c22c4dbb0ab8-config\") pod \"dnsmasq-dns-57d769cc4f-2p4pj\" (UID: \"c10b98ea-d832-471e-adb6-c22c4dbb0ab8\") " pod="openstack/dnsmasq-dns-57d769cc4f-2p4pj" Mar 12 13:30:45 crc kubenswrapper[4778]: I0312 13:30:45.188164 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcntw\" (UniqueName: \"kubernetes.io/projected/c10b98ea-d832-471e-adb6-c22c4dbb0ab8-kube-api-access-zcntw\") pod \"dnsmasq-dns-57d769cc4f-2p4pj\" (UID: \"c10b98ea-d832-471e-adb6-c22c4dbb0ab8\") " pod="openstack/dnsmasq-dns-57d769cc4f-2p4pj" Mar 12 13:30:45 crc kubenswrapper[4778]: I0312 13:30:45.210909 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-2p4pj" Mar 12 13:30:45 crc kubenswrapper[4778]: I0312 13:30:45.433100 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-d9gsf"] Mar 12 13:30:45 crc kubenswrapper[4778]: W0312 13:30:45.444120 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78c6f209_08e0_4789_be6e_8c319547338c.slice/crio-e3e2879145875855639170cdeac27dda0895e629f9eadf854b4f0adb8048db0a WatchSource:0}: Error finding container e3e2879145875855639170cdeac27dda0895e629f9eadf854b4f0adb8048db0a: Status 404 returned error can't find the container with id e3e2879145875855639170cdeac27dda0895e629f9eadf854b4f0adb8048db0a Mar 12 13:30:45 crc kubenswrapper[4778]: I0312 13:30:45.698739 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2p4pj"] Mar 12 13:30:45 crc kubenswrapper[4778]: I0312 13:30:45.703788 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 12 13:30:45 crc kubenswrapper[4778]: I0312 13:30:45.706197 4778 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 12 13:30:45 crc kubenswrapper[4778]: W0312 13:30:45.706474 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc10b98ea_d832_471e_adb6_c22c4dbb0ab8.slice/crio-0799732deec1f1a8aef551ea0f0b4139ada27fd6ead6c91498a4273deb0bea7d WatchSource:0}: Error finding container 0799732deec1f1a8aef551ea0f0b4139ada27fd6ead6c91498a4273deb0bea7d: Status 404 returned error can't find the container with id 0799732deec1f1a8aef551ea0f0b4139ada27fd6ead6c91498a4273deb0bea7d Mar 12 13:30:45 crc kubenswrapper[4778]: I0312 13:30:45.708118 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Mar 12 13:30:45 crc kubenswrapper[4778]: I0312 13:30:45.710401 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Mar 12 13:30:45 crc kubenswrapper[4778]: I0312 13:30:45.710441 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-n2fr8" Mar 12 13:30:45 crc kubenswrapper[4778]: I0312 13:30:45.710468 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Mar 12 13:30:45 crc kubenswrapper[4778]: I0312 13:30:45.710570 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Mar 12 13:30:45 crc kubenswrapper[4778]: I0312 13:30:45.710645 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Mar 12 13:30:45 crc kubenswrapper[4778]: I0312 13:30:45.710756 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Mar 12 13:30:45 crc kubenswrapper[4778]: I0312 13:30:45.714453 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/rabbitmq-cell1-server-0"] Mar 12 13:30:45 crc kubenswrapper[4778]: I0312 13:30:45.874102 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 13:30:45 crc kubenswrapper[4778]: I0312 13:30:45.874142 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 13:30:45 crc kubenswrapper[4778]: I0312 13:30:45.874177 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 13:30:45 crc kubenswrapper[4778]: I0312 13:30:45.874225 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kcbq\" (UniqueName: \"kubernetes.io/projected/629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03-kube-api-access-7kcbq\") pod \"rabbitmq-cell1-server-0\" (UID: \"629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 13:30:45 crc kubenswrapper[4778]: I0312 13:30:45.874255 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03\") " 
pod="openstack/rabbitmq-cell1-server-0" Mar 12 13:30:45 crc kubenswrapper[4778]: I0312 13:30:45.874278 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 13:30:45 crc kubenswrapper[4778]: I0312 13:30:45.874304 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 13:30:45 crc kubenswrapper[4778]: I0312 13:30:45.874327 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 13:30:45 crc kubenswrapper[4778]: I0312 13:30:45.874359 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 13:30:45 crc kubenswrapper[4778]: I0312 13:30:45.874380 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03\") " 
pod="openstack/rabbitmq-cell1-server-0" Mar 12 13:30:45 crc kubenswrapper[4778]: I0312 13:30:45.874404 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 13:30:45 crc kubenswrapper[4778]: I0312 13:30:45.977328 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kcbq\" (UniqueName: \"kubernetes.io/projected/629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03-kube-api-access-7kcbq\") pod \"rabbitmq-cell1-server-0\" (UID: \"629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 13:30:45 crc kubenswrapper[4778]: I0312 13:30:45.977483 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 13:30:45 crc kubenswrapper[4778]: I0312 13:30:45.977514 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 13:30:45 crc kubenswrapper[4778]: I0312 13:30:45.977564 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 13:30:45 crc 
kubenswrapper[4778]: I0312 13:30:45.977599 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 13:30:45 crc kubenswrapper[4778]: I0312 13:30:45.977670 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 13:30:45 crc kubenswrapper[4778]: I0312 13:30:45.977698 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 13:30:45 crc kubenswrapper[4778]: I0312 13:30:45.977751 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 13:30:45 crc kubenswrapper[4778]: I0312 13:30:45.977789 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 13:30:45 crc kubenswrapper[4778]: I0312 13:30:45.977810 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" 
(UniqueName: \"kubernetes.io/downward-api/629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 13:30:45 crc kubenswrapper[4778]: I0312 13:30:45.977857 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 13:30:45 crc kubenswrapper[4778]: I0312 13:30:45.978013 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 13:30:45 crc kubenswrapper[4778]: I0312 13:30:45.978283 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 13:30:45 crc kubenswrapper[4778]: I0312 13:30:45.978459 4778 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-cell1-server-0" Mar 12 13:30:45 crc kubenswrapper[4778]: I0312 13:30:45.979773 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03-server-conf\") 
pod \"rabbitmq-cell1-server-0\" (UID: \"629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 13:30:45 crc kubenswrapper[4778]: I0312 13:30:45.980324 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 13:30:45 crc kubenswrapper[4778]: I0312 13:30:45.980543 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 13:30:45 crc kubenswrapper[4778]: I0312 13:30:45.984937 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 13:30:45 crc kubenswrapper[4778]: I0312 13:30:45.985046 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 13:30:45 crc kubenswrapper[4778]: I0312 13:30:45.994235 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 13:30:45 crc 
kubenswrapper[4778]: I0312 13:30:45.994319 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 13:30:45 crc kubenswrapper[4778]: I0312 13:30:45.997320 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kcbq\" (UniqueName: \"kubernetes.io/projected/629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03-kube-api-access-7kcbq\") pod \"rabbitmq-cell1-server-0\" (UID: \"629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 13:30:46 crc kubenswrapper[4778]: I0312 13:30:46.011655 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Mar 12 13:30:46 crc kubenswrapper[4778]: I0312 13:30:46.013427 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 12 13:30:46 crc kubenswrapper[4778]: I0312 13:30:46.018594 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03\") " pod="openstack/rabbitmq-cell1-server-0" Mar 12 13:30:46 crc kubenswrapper[4778]: I0312 13:30:46.018759 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Mar 12 13:30:46 crc kubenswrapper[4778]: I0312 13:30:46.018956 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Mar 12 13:30:46 crc kubenswrapper[4778]: I0312 13:30:46.019093 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Mar 12 13:30:46 crc kubenswrapper[4778]: I0312 13:30:46.019296 4778 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Mar 12 13:30:46 crc kubenswrapper[4778]: I0312 13:30:46.019410 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Mar 12 13:30:46 crc kubenswrapper[4778]: I0312 13:30:46.019505 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Mar 12 13:30:46 crc kubenswrapper[4778]: I0312 13:30:46.019619 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-r9xhc" Mar 12 13:30:46 crc kubenswrapper[4778]: I0312 13:30:46.030784 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 12 13:30:46 crc kubenswrapper[4778]: I0312 13:30:46.037444 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 12 13:30:46 crc kubenswrapper[4778]: I0312 13:30:46.184000 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1e89dfcc-2ac3-444c-91e8-56991eae096b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1e89dfcc-2ac3-444c-91e8-56991eae096b\") " pod="openstack/rabbitmq-server-0" Mar 12 13:30:46 crc kubenswrapper[4778]: I0312 13:30:46.184116 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1e89dfcc-2ac3-444c-91e8-56991eae096b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1e89dfcc-2ac3-444c-91e8-56991eae096b\") " pod="openstack/rabbitmq-server-0" Mar 12 13:30:46 crc kubenswrapper[4778]: I0312 13:30:46.184219 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage19-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage19-crc\") pod \"rabbitmq-server-0\" (UID: 
\"1e89dfcc-2ac3-444c-91e8-56991eae096b\") " pod="openstack/rabbitmq-server-0" Mar 12 13:30:46 crc kubenswrapper[4778]: I0312 13:30:46.184265 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1e89dfcc-2ac3-444c-91e8-56991eae096b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1e89dfcc-2ac3-444c-91e8-56991eae096b\") " pod="openstack/rabbitmq-server-0" Mar 12 13:30:46 crc kubenswrapper[4778]: I0312 13:30:46.184313 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1e89dfcc-2ac3-444c-91e8-56991eae096b-config-data\") pod \"rabbitmq-server-0\" (UID: \"1e89dfcc-2ac3-444c-91e8-56991eae096b\") " pod="openstack/rabbitmq-server-0" Mar 12 13:30:46 crc kubenswrapper[4778]: I0312 13:30:46.184385 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1e89dfcc-2ac3-444c-91e8-56991eae096b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1e89dfcc-2ac3-444c-91e8-56991eae096b\") " pod="openstack/rabbitmq-server-0" Mar 12 13:30:46 crc kubenswrapper[4778]: I0312 13:30:46.184410 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1e89dfcc-2ac3-444c-91e8-56991eae096b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1e89dfcc-2ac3-444c-91e8-56991eae096b\") " pod="openstack/rabbitmq-server-0" Mar 12 13:30:46 crc kubenswrapper[4778]: I0312 13:30:46.184454 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1e89dfcc-2ac3-444c-91e8-56991eae096b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1e89dfcc-2ac3-444c-91e8-56991eae096b\") " pod="openstack/rabbitmq-server-0" 
Mar 12 13:30:46 crc kubenswrapper[4778]: I0312 13:30:46.184483 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1e89dfcc-2ac3-444c-91e8-56991eae096b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1e89dfcc-2ac3-444c-91e8-56991eae096b\") " pod="openstack/rabbitmq-server-0" Mar 12 13:30:46 crc kubenswrapper[4778]: I0312 13:30:46.184531 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1e89dfcc-2ac3-444c-91e8-56991eae096b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1e89dfcc-2ac3-444c-91e8-56991eae096b\") " pod="openstack/rabbitmq-server-0" Mar 12 13:30:46 crc kubenswrapper[4778]: I0312 13:30:46.184552 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kxn5\" (UniqueName: \"kubernetes.io/projected/1e89dfcc-2ac3-444c-91e8-56991eae096b-kube-api-access-4kxn5\") pod \"rabbitmq-server-0\" (UID: \"1e89dfcc-2ac3-444c-91e8-56991eae096b\") " pod="openstack/rabbitmq-server-0" Mar 12 13:30:46 crc kubenswrapper[4778]: I0312 13:30:46.285922 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage19-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage19-crc\") pod \"rabbitmq-server-0\" (UID: \"1e89dfcc-2ac3-444c-91e8-56991eae096b\") " pod="openstack/rabbitmq-server-0" Mar 12 13:30:46 crc kubenswrapper[4778]: I0312 13:30:46.286261 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1e89dfcc-2ac3-444c-91e8-56991eae096b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1e89dfcc-2ac3-444c-91e8-56991eae096b\") " pod="openstack/rabbitmq-server-0" Mar 12 13:30:46 crc kubenswrapper[4778]: I0312 13:30:46.286297 4778 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1e89dfcc-2ac3-444c-91e8-56991eae096b-config-data\") pod \"rabbitmq-server-0\" (UID: \"1e89dfcc-2ac3-444c-91e8-56991eae096b\") " pod="openstack/rabbitmq-server-0" Mar 12 13:30:46 crc kubenswrapper[4778]: I0312 13:30:46.286322 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1e89dfcc-2ac3-444c-91e8-56991eae096b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1e89dfcc-2ac3-444c-91e8-56991eae096b\") " pod="openstack/rabbitmq-server-0" Mar 12 13:30:46 crc kubenswrapper[4778]: I0312 13:30:46.286343 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1e89dfcc-2ac3-444c-91e8-56991eae096b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1e89dfcc-2ac3-444c-91e8-56991eae096b\") " pod="openstack/rabbitmq-server-0" Mar 12 13:30:46 crc kubenswrapper[4778]: I0312 13:30:46.286369 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1e89dfcc-2ac3-444c-91e8-56991eae096b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1e89dfcc-2ac3-444c-91e8-56991eae096b\") " pod="openstack/rabbitmq-server-0" Mar 12 13:30:46 crc kubenswrapper[4778]: I0312 13:30:46.286396 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1e89dfcc-2ac3-444c-91e8-56991eae096b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1e89dfcc-2ac3-444c-91e8-56991eae096b\") " pod="openstack/rabbitmq-server-0" Mar 12 13:30:46 crc kubenswrapper[4778]: I0312 13:30:46.286416 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1e89dfcc-2ac3-444c-91e8-56991eae096b-rabbitmq-plugins\") pod 
\"rabbitmq-server-0\" (UID: \"1e89dfcc-2ac3-444c-91e8-56991eae096b\") " pod="openstack/rabbitmq-server-0" Mar 12 13:30:46 crc kubenswrapper[4778]: I0312 13:30:46.286433 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kxn5\" (UniqueName: \"kubernetes.io/projected/1e89dfcc-2ac3-444c-91e8-56991eae096b-kube-api-access-4kxn5\") pod \"rabbitmq-server-0\" (UID: \"1e89dfcc-2ac3-444c-91e8-56991eae096b\") " pod="openstack/rabbitmq-server-0" Mar 12 13:30:46 crc kubenswrapper[4778]: I0312 13:30:46.286472 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1e89dfcc-2ac3-444c-91e8-56991eae096b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1e89dfcc-2ac3-444c-91e8-56991eae096b\") " pod="openstack/rabbitmq-server-0" Mar 12 13:30:46 crc kubenswrapper[4778]: I0312 13:30:46.286495 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1e89dfcc-2ac3-444c-91e8-56991eae096b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1e89dfcc-2ac3-444c-91e8-56991eae096b\") " pod="openstack/rabbitmq-server-0" Mar 12 13:30:46 crc kubenswrapper[4778]: I0312 13:30:46.287386 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1e89dfcc-2ac3-444c-91e8-56991eae096b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1e89dfcc-2ac3-444c-91e8-56991eae096b\") " pod="openstack/rabbitmq-server-0" Mar 12 13:30:46 crc kubenswrapper[4778]: I0312 13:30:46.287670 4778 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage19-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage19-crc\") pod \"rabbitmq-server-0\" (UID: \"1e89dfcc-2ac3-444c-91e8-56991eae096b\") device mount path \"/mnt/openstack/pv19\"" pod="openstack/rabbitmq-server-0" Mar 12 
13:30:46 crc kubenswrapper[4778]: I0312 13:30:46.299147 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1e89dfcc-2ac3-444c-91e8-56991eae096b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1e89dfcc-2ac3-444c-91e8-56991eae096b\") " pod="openstack/rabbitmq-server-0" Mar 12 13:30:46 crc kubenswrapper[4778]: I0312 13:30:46.300410 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1e89dfcc-2ac3-444c-91e8-56991eae096b-config-data\") pod \"rabbitmq-server-0\" (UID: \"1e89dfcc-2ac3-444c-91e8-56991eae096b\") " pod="openstack/rabbitmq-server-0" Mar 12 13:30:46 crc kubenswrapper[4778]: I0312 13:30:46.301583 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1e89dfcc-2ac3-444c-91e8-56991eae096b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1e89dfcc-2ac3-444c-91e8-56991eae096b\") " pod="openstack/rabbitmq-server-0" Mar 12 13:30:46 crc kubenswrapper[4778]: I0312 13:30:46.302148 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1e89dfcc-2ac3-444c-91e8-56991eae096b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1e89dfcc-2ac3-444c-91e8-56991eae096b\") " pod="openstack/rabbitmq-server-0" Mar 12 13:30:46 crc kubenswrapper[4778]: I0312 13:30:46.310279 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1e89dfcc-2ac3-444c-91e8-56991eae096b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1e89dfcc-2ac3-444c-91e8-56991eae096b\") " pod="openstack/rabbitmq-server-0" Mar 12 13:30:46 crc kubenswrapper[4778]: I0312 13:30:46.310471 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/1e89dfcc-2ac3-444c-91e8-56991eae096b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1e89dfcc-2ac3-444c-91e8-56991eae096b\") " pod="openstack/rabbitmq-server-0" Mar 12 13:30:46 crc kubenswrapper[4778]: I0312 13:30:46.316933 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1e89dfcc-2ac3-444c-91e8-56991eae096b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1e89dfcc-2ac3-444c-91e8-56991eae096b\") " pod="openstack/rabbitmq-server-0" Mar 12 13:30:46 crc kubenswrapper[4778]: I0312 13:30:46.320917 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1e89dfcc-2ac3-444c-91e8-56991eae096b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1e89dfcc-2ac3-444c-91e8-56991eae096b\") " pod="openstack/rabbitmq-server-0" Mar 12 13:30:46 crc kubenswrapper[4778]: I0312 13:30:46.331237 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kxn5\" (UniqueName: \"kubernetes.io/projected/1e89dfcc-2ac3-444c-91e8-56991eae096b-kube-api-access-4kxn5\") pod \"rabbitmq-server-0\" (UID: \"1e89dfcc-2ac3-444c-91e8-56991eae096b\") " pod="openstack/rabbitmq-server-0" Mar 12 13:30:46 crc kubenswrapper[4778]: I0312 13:30:46.383549 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage19-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage19-crc\") pod \"rabbitmq-server-0\" (UID: \"1e89dfcc-2ac3-444c-91e8-56991eae096b\") " pod="openstack/rabbitmq-server-0" Mar 12 13:30:46 crc kubenswrapper[4778]: I0312 13:30:46.402511 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-d9gsf" event={"ID":"78c6f209-08e0-4789-be6e-8c319547338c","Type":"ContainerStarted","Data":"e3e2879145875855639170cdeac27dda0895e629f9eadf854b4f0adb8048db0a"} Mar 12 13:30:46 crc kubenswrapper[4778]: I0312 13:30:46.432060 4778 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-2p4pj" event={"ID":"c10b98ea-d832-471e-adb6-c22c4dbb0ab8","Type":"ContainerStarted","Data":"0799732deec1f1a8aef551ea0f0b4139ada27fd6ead6c91498a4273deb0bea7d"}
Mar 12 13:30:46 crc kubenswrapper[4778]: I0312 13:30:46.670130 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Mar 12 13:30:46 crc kubenswrapper[4778]: W0312 13:30:46.694605 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod629c84c5_e6cf_4aa7_ba9a_5a5fe7f53a03.slice/crio-4136242d28b5d751ac159a0788e9e237a903f60a0018c9e2a52d0ac52cd311ca WatchSource:0}: Error finding container 4136242d28b5d751ac159a0788e9e237a903f60a0018c9e2a52d0ac52cd311ca: Status 404 returned error can't find the container with id 4136242d28b5d751ac159a0788e9e237a903f60a0018c9e2a52d0ac52cd311ca
Mar 12 13:30:46 crc kubenswrapper[4778]: I0312 13:30:46.695774 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Mar 12 13:30:47 crc kubenswrapper[4778]: I0312 13:30:47.172661 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Mar 12 13:30:47 crc kubenswrapper[4778]: I0312 13:30:47.353966 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Mar 12 13:30:47 crc kubenswrapper[4778]: I0312 13:30:47.355354 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Mar 12 13:30:47 crc kubenswrapper[4778]: I0312 13:30:47.359654 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Mar 12 13:30:47 crc kubenswrapper[4778]: I0312 13:30:47.359927 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Mar 12 13:30:47 crc kubenswrapper[4778]: I0312 13:30:47.360835 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-cj5pw"
Mar 12 13:30:47 crc kubenswrapper[4778]: I0312 13:30:47.361033 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Mar 12 13:30:47 crc kubenswrapper[4778]: I0312 13:30:47.365261 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Mar 12 13:30:47 crc kubenswrapper[4778]: I0312 13:30:47.370955 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Mar 12 13:30:47 crc kubenswrapper[4778]: I0312 13:30:47.474366 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1e89dfcc-2ac3-444c-91e8-56991eae096b","Type":"ContainerStarted","Data":"3c486631d5b69b991065345f5c9738213bf611e7c3f421f730777ed8f23e1701"}
Mar 12 13:30:47 crc kubenswrapper[4778]: I0312 13:30:47.477653 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03","Type":"ContainerStarted","Data":"4136242d28b5d751ac159a0788e9e237a903f60a0018c9e2a52d0ac52cd311ca"}
Mar 12 13:30:47 crc kubenswrapper[4778]: I0312 13:30:47.517709 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/663feb48-0ed1-4947-97c3-e0bac206fdb2-kolla-config\") pod \"openstack-galera-0\" (UID: \"663feb48-0ed1-4947-97c3-e0bac206fdb2\") " pod="openstack/openstack-galera-0"
Mar 12 13:30:47 crc kubenswrapper[4778]: I0312 13:30:47.518110 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/663feb48-0ed1-4947-97c3-e0bac206fdb2-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"663feb48-0ed1-4947-97c3-e0bac206fdb2\") " pod="openstack/openstack-galera-0"
Mar 12 13:30:47 crc kubenswrapper[4778]: I0312 13:30:47.518167 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"663feb48-0ed1-4947-97c3-e0bac206fdb2\") " pod="openstack/openstack-galera-0"
Mar 12 13:30:47 crc kubenswrapper[4778]: I0312 13:30:47.518254 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/663feb48-0ed1-4947-97c3-e0bac206fdb2-config-data-default\") pod \"openstack-galera-0\" (UID: \"663feb48-0ed1-4947-97c3-e0bac206fdb2\") " pod="openstack/openstack-galera-0"
Mar 12 13:30:47 crc kubenswrapper[4778]: I0312 13:30:47.518291 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/663feb48-0ed1-4947-97c3-e0bac206fdb2-config-data-generated\") pod \"openstack-galera-0\" (UID: \"663feb48-0ed1-4947-97c3-e0bac206fdb2\") " pod="openstack/openstack-galera-0"
Mar 12 13:30:47 crc kubenswrapper[4778]: I0312 13:30:47.518318 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9rpt\" (UniqueName: \"kubernetes.io/projected/663feb48-0ed1-4947-97c3-e0bac206fdb2-kube-api-access-k9rpt\") pod \"openstack-galera-0\" (UID: \"663feb48-0ed1-4947-97c3-e0bac206fdb2\") " pod="openstack/openstack-galera-0"
Mar 12 13:30:47 crc kubenswrapper[4778]: I0312 13:30:47.518344 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/663feb48-0ed1-4947-97c3-e0bac206fdb2-operator-scripts\") pod \"openstack-galera-0\" (UID: \"663feb48-0ed1-4947-97c3-e0bac206fdb2\") " pod="openstack/openstack-galera-0"
Mar 12 13:30:47 crc kubenswrapper[4778]: I0312 13:30:47.518371 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/663feb48-0ed1-4947-97c3-e0bac206fdb2-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"663feb48-0ed1-4947-97c3-e0bac206fdb2\") " pod="openstack/openstack-galera-0"
Mar 12 13:30:47 crc kubenswrapper[4778]: I0312 13:30:47.625512 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"663feb48-0ed1-4947-97c3-e0bac206fdb2\") " pod="openstack/openstack-galera-0"
Mar 12 13:30:47 crc kubenswrapper[4778]: I0312 13:30:47.625588 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/663feb48-0ed1-4947-97c3-e0bac206fdb2-config-data-default\") pod \"openstack-galera-0\" (UID: \"663feb48-0ed1-4947-97c3-e0bac206fdb2\") " pod="openstack/openstack-galera-0"
Mar 12 13:30:47 crc kubenswrapper[4778]: I0312 13:30:47.625616 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/663feb48-0ed1-4947-97c3-e0bac206fdb2-config-data-generated\") pod \"openstack-galera-0\" (UID: \"663feb48-0ed1-4947-97c3-e0bac206fdb2\") " pod="openstack/openstack-galera-0"
Mar 12 13:30:47 crc kubenswrapper[4778]: I0312 13:30:47.625636 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9rpt\" (UniqueName: \"kubernetes.io/projected/663feb48-0ed1-4947-97c3-e0bac206fdb2-kube-api-access-k9rpt\") pod \"openstack-galera-0\" (UID: \"663feb48-0ed1-4947-97c3-e0bac206fdb2\") " pod="openstack/openstack-galera-0"
Mar 12 13:30:47 crc kubenswrapper[4778]: I0312 13:30:47.625655 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/663feb48-0ed1-4947-97c3-e0bac206fdb2-operator-scripts\") pod \"openstack-galera-0\" (UID: \"663feb48-0ed1-4947-97c3-e0bac206fdb2\") " pod="openstack/openstack-galera-0"
Mar 12 13:30:47 crc kubenswrapper[4778]: I0312 13:30:47.625677 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/663feb48-0ed1-4947-97c3-e0bac206fdb2-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"663feb48-0ed1-4947-97c3-e0bac206fdb2\") " pod="openstack/openstack-galera-0"
Mar 12 13:30:47 crc kubenswrapper[4778]: I0312 13:30:47.625710 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/663feb48-0ed1-4947-97c3-e0bac206fdb2-kolla-config\") pod \"openstack-galera-0\" (UID: \"663feb48-0ed1-4947-97c3-e0bac206fdb2\") " pod="openstack/openstack-galera-0"
Mar 12 13:30:47 crc kubenswrapper[4778]: I0312 13:30:47.625740 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/663feb48-0ed1-4947-97c3-e0bac206fdb2-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"663feb48-0ed1-4947-97c3-e0bac206fdb2\") " pod="openstack/openstack-galera-0"
Mar 12 13:30:47 crc kubenswrapper[4778]: I0312 13:30:47.627371 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/663feb48-0ed1-4947-97c3-e0bac206fdb2-config-data-generated\") pod \"openstack-galera-0\" (UID: \"663feb48-0ed1-4947-97c3-e0bac206fdb2\") " pod="openstack/openstack-galera-0"
Mar 12 13:30:47 crc kubenswrapper[4778]: I0312 13:30:47.628631 4778 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"663feb48-0ed1-4947-97c3-e0bac206fdb2\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/openstack-galera-0"
Mar 12 13:30:47 crc kubenswrapper[4778]: I0312 13:30:47.630163 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/663feb48-0ed1-4947-97c3-e0bac206fdb2-kolla-config\") pod \"openstack-galera-0\" (UID: \"663feb48-0ed1-4947-97c3-e0bac206fdb2\") " pod="openstack/openstack-galera-0"
Mar 12 13:30:47 crc kubenswrapper[4778]: I0312 13:30:47.630998 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/663feb48-0ed1-4947-97c3-e0bac206fdb2-operator-scripts\") pod \"openstack-galera-0\" (UID: \"663feb48-0ed1-4947-97c3-e0bac206fdb2\") " pod="openstack/openstack-galera-0"
Mar 12 13:30:47 crc kubenswrapper[4778]: I0312 13:30:47.632997 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/663feb48-0ed1-4947-97c3-e0bac206fdb2-config-data-default\") pod \"openstack-galera-0\" (UID: \"663feb48-0ed1-4947-97c3-e0bac206fdb2\") " pod="openstack/openstack-galera-0"
Mar 12 13:30:47 crc kubenswrapper[4778]: I0312 13:30:47.640525 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/663feb48-0ed1-4947-97c3-e0bac206fdb2-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"663feb48-0ed1-4947-97c3-e0bac206fdb2\") " pod="openstack/openstack-galera-0"
Mar 12 13:30:47 crc kubenswrapper[4778]: I0312 13:30:47.645889 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/663feb48-0ed1-4947-97c3-e0bac206fdb2-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"663feb48-0ed1-4947-97c3-e0bac206fdb2\") " pod="openstack/openstack-galera-0"
Mar 12 13:30:47 crc kubenswrapper[4778]: I0312 13:30:47.647434 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9rpt\" (UniqueName: \"kubernetes.io/projected/663feb48-0ed1-4947-97c3-e0bac206fdb2-kube-api-access-k9rpt\") pod \"openstack-galera-0\" (UID: \"663feb48-0ed1-4947-97c3-e0bac206fdb2\") " pod="openstack/openstack-galera-0"
Mar 12 13:30:47 crc kubenswrapper[4778]: I0312 13:30:47.655772 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"663feb48-0ed1-4947-97c3-e0bac206fdb2\") " pod="openstack/openstack-galera-0"
Mar 12 13:30:47 crc kubenswrapper[4778]: I0312 13:30:47.742608 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Mar 12 13:30:48 crc kubenswrapper[4778]: I0312 13:30:48.330406 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Mar 12 13:30:48 crc kubenswrapper[4778]: I0312 13:30:48.755760 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Mar 12 13:30:48 crc kubenswrapper[4778]: I0312 13:30:48.757809 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Mar 12 13:30:48 crc kubenswrapper[4778]: I0312 13:30:48.759921 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-79jqq"
Mar 12 13:30:48 crc kubenswrapper[4778]: I0312 13:30:48.760143 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Mar 12 13:30:48 crc kubenswrapper[4778]: I0312 13:30:48.760390 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Mar 12 13:30:48 crc kubenswrapper[4778]: I0312 13:30:48.760420 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Mar 12 13:30:48 crc kubenswrapper[4778]: I0312 13:30:48.792004 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Mar 12 13:30:48 crc kubenswrapper[4778]: I0312 13:30:48.849103 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvkn6\" (UniqueName: \"kubernetes.io/projected/fe52f8ba-9053-4733-b2e3-8f1becf437c8-kube-api-access-hvkn6\") pod \"openstack-cell1-galera-0\" (UID: \"fe52f8ba-9053-4733-b2e3-8f1becf437c8\") " pod="openstack/openstack-cell1-galera-0"
Mar 12 13:30:48 crc kubenswrapper[4778]: I0312 13:30:48.849199 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"fe52f8ba-9053-4733-b2e3-8f1becf437c8\") " pod="openstack/openstack-cell1-galera-0"
Mar 12 13:30:48 crc kubenswrapper[4778]: I0312 13:30:48.849230 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe52f8ba-9053-4733-b2e3-8f1becf437c8-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"fe52f8ba-9053-4733-b2e3-8f1becf437c8\") " pod="openstack/openstack-cell1-galera-0"
Mar 12 13:30:48 crc kubenswrapper[4778]: I0312 13:30:48.849251 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/fe52f8ba-9053-4733-b2e3-8f1becf437c8-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"fe52f8ba-9053-4733-b2e3-8f1becf437c8\") " pod="openstack/openstack-cell1-galera-0"
Mar 12 13:30:48 crc kubenswrapper[4778]: I0312 13:30:48.849288 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe52f8ba-9053-4733-b2e3-8f1becf437c8-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"fe52f8ba-9053-4733-b2e3-8f1becf437c8\") " pod="openstack/openstack-cell1-galera-0"
Mar 12 13:30:48 crc kubenswrapper[4778]: I0312 13:30:48.849319 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fe52f8ba-9053-4733-b2e3-8f1becf437c8-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"fe52f8ba-9053-4733-b2e3-8f1becf437c8\") " pod="openstack/openstack-cell1-galera-0"
Mar 12 13:30:48 crc kubenswrapper[4778]: I0312 13:30:48.849342 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/fe52f8ba-9053-4733-b2e3-8f1becf437c8-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"fe52f8ba-9053-4733-b2e3-8f1becf437c8\") " pod="openstack/openstack-cell1-galera-0"
Mar 12 13:30:48 crc kubenswrapper[4778]: I0312 13:30:48.849357 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe52f8ba-9053-4733-b2e3-8f1becf437c8-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"fe52f8ba-9053-4733-b2e3-8f1becf437c8\") " pod="openstack/openstack-cell1-galera-0"
Mar 12 13:30:48 crc kubenswrapper[4778]: I0312 13:30:48.950880 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/fe52f8ba-9053-4733-b2e3-8f1becf437c8-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"fe52f8ba-9053-4733-b2e3-8f1becf437c8\") " pod="openstack/openstack-cell1-galera-0"
Mar 12 13:30:48 crc kubenswrapper[4778]: I0312 13:30:48.950961 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe52f8ba-9053-4733-b2e3-8f1becf437c8-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"fe52f8ba-9053-4733-b2e3-8f1becf437c8\") " pod="openstack/openstack-cell1-galera-0"
Mar 12 13:30:48 crc kubenswrapper[4778]: I0312 13:30:48.951014 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fe52f8ba-9053-4733-b2e3-8f1becf437c8-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"fe52f8ba-9053-4733-b2e3-8f1becf437c8\") " pod="openstack/openstack-cell1-galera-0"
Mar 12 13:30:48 crc kubenswrapper[4778]: I0312 13:30:48.951048 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/fe52f8ba-9053-4733-b2e3-8f1becf437c8-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"fe52f8ba-9053-4733-b2e3-8f1becf437c8\") " pod="openstack/openstack-cell1-galera-0"
Mar 12 13:30:48 crc kubenswrapper[4778]: I0312 13:30:48.951062 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe52f8ba-9053-4733-b2e3-8f1becf437c8-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"fe52f8ba-9053-4733-b2e3-8f1becf437c8\") " pod="openstack/openstack-cell1-galera-0"
Mar 12 13:30:48 crc kubenswrapper[4778]: I0312 13:30:48.951097 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvkn6\" (UniqueName: \"kubernetes.io/projected/fe52f8ba-9053-4733-b2e3-8f1becf437c8-kube-api-access-hvkn6\") pod \"openstack-cell1-galera-0\" (UID: \"fe52f8ba-9053-4733-b2e3-8f1becf437c8\") " pod="openstack/openstack-cell1-galera-0"
Mar 12 13:30:48 crc kubenswrapper[4778]: I0312 13:30:48.951148 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"fe52f8ba-9053-4733-b2e3-8f1becf437c8\") " pod="openstack/openstack-cell1-galera-0"
Mar 12 13:30:48 crc kubenswrapper[4778]: I0312 13:30:48.951551 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe52f8ba-9053-4733-b2e3-8f1becf437c8-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"fe52f8ba-9053-4733-b2e3-8f1becf437c8\") " pod="openstack/openstack-cell1-galera-0"
Mar 12 13:30:48 crc kubenswrapper[4778]: I0312 13:30:48.951878 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fe52f8ba-9053-4733-b2e3-8f1becf437c8-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"fe52f8ba-9053-4733-b2e3-8f1becf437c8\") " pod="openstack/openstack-cell1-galera-0"
Mar 12 13:30:48 crc kubenswrapper[4778]: I0312 13:30:48.952115 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/fe52f8ba-9053-4733-b2e3-8f1becf437c8-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"fe52f8ba-9053-4733-b2e3-8f1becf437c8\") " pod="openstack/openstack-cell1-galera-0"
Mar 12 13:30:48 crc kubenswrapper[4778]: I0312 13:30:48.953325 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe52f8ba-9053-4733-b2e3-8f1becf437c8-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"fe52f8ba-9053-4733-b2e3-8f1becf437c8\") " pod="openstack/openstack-cell1-galera-0"
Mar 12 13:30:48 crc kubenswrapper[4778]: I0312 13:30:48.953766 4778 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"fe52f8ba-9053-4733-b2e3-8f1becf437c8\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/openstack-cell1-galera-0"
Mar 12 13:30:48 crc kubenswrapper[4778]: I0312 13:30:48.956241 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/fe52f8ba-9053-4733-b2e3-8f1becf437c8-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"fe52f8ba-9053-4733-b2e3-8f1becf437c8\") " pod="openstack/openstack-cell1-galera-0"
Mar 12 13:30:48 crc kubenswrapper[4778]: I0312 13:30:48.957347 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe52f8ba-9053-4733-b2e3-8f1becf437c8-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"fe52f8ba-9053-4733-b2e3-8f1becf437c8\") " pod="openstack/openstack-cell1-galera-0"
Mar 12 13:30:48 crc kubenswrapper[4778]: I0312 13:30:48.961794 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe52f8ba-9053-4733-b2e3-8f1becf437c8-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"fe52f8ba-9053-4733-b2e3-8f1becf437c8\") " pod="openstack/openstack-cell1-galera-0"
Mar 12 13:30:48 crc kubenswrapper[4778]: I0312 13:30:48.976974 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvkn6\" (UniqueName: \"kubernetes.io/projected/fe52f8ba-9053-4733-b2e3-8f1becf437c8-kube-api-access-hvkn6\") pod \"openstack-cell1-galera-0\" (UID: \"fe52f8ba-9053-4733-b2e3-8f1becf437c8\") " pod="openstack/openstack-cell1-galera-0"
Mar 12 13:30:48 crc kubenswrapper[4778]: I0312 13:30:48.978705 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"fe52f8ba-9053-4733-b2e3-8f1becf437c8\") " pod="openstack/openstack-cell1-galera-0"
Mar 12 13:30:49 crc kubenswrapper[4778]: I0312 13:30:49.080677 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"]
Mar 12 13:30:49 crc kubenswrapper[4778]: I0312 13:30:49.091367 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Mar 12 13:30:49 crc kubenswrapper[4778]: I0312 13:30:49.096597 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Mar 12 13:30:49 crc kubenswrapper[4778]: I0312 13:30:49.097839 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-jrgp8"
Mar 12 13:30:49 crc kubenswrapper[4778]: I0312 13:30:49.097937 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Mar 12 13:30:49 crc kubenswrapper[4778]: I0312 13:30:49.101144 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Mar 12 13:30:49 crc kubenswrapper[4778]: I0312 13:30:49.103589 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc"
Mar 12 13:30:49 crc kubenswrapper[4778]: I0312 13:30:49.255700 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zj26\" (UniqueName: \"kubernetes.io/projected/ec63cc68-6fde-419b-973c-91fc982e6a49-kube-api-access-6zj26\") pod \"memcached-0\" (UID: \"ec63cc68-6fde-419b-973c-91fc982e6a49\") " pod="openstack/memcached-0"
Mar 12 13:30:49 crc kubenswrapper[4778]: I0312 13:30:49.255746 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec63cc68-6fde-419b-973c-91fc982e6a49-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ec63cc68-6fde-419b-973c-91fc982e6a49\") " pod="openstack/memcached-0"
Mar 12 13:30:49 crc kubenswrapper[4778]: I0312 13:30:49.255811 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ec63cc68-6fde-419b-973c-91fc982e6a49-config-data\") pod \"memcached-0\" (UID: \"ec63cc68-6fde-419b-973c-91fc982e6a49\") " pod="openstack/memcached-0"
Mar 12 13:30:49 crc kubenswrapper[4778]: I0312 13:30:49.255837 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ec63cc68-6fde-419b-973c-91fc982e6a49-kolla-config\") pod \"memcached-0\" (UID: \"ec63cc68-6fde-419b-973c-91fc982e6a49\") " pod="openstack/memcached-0"
Mar 12 13:30:49 crc kubenswrapper[4778]: I0312 13:30:49.255856 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec63cc68-6fde-419b-973c-91fc982e6a49-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ec63cc68-6fde-419b-973c-91fc982e6a49\") " pod="openstack/memcached-0"
Mar 12 13:30:49 crc kubenswrapper[4778]: I0312 13:30:49.357049 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec63cc68-6fde-419b-973c-91fc982e6a49-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ec63cc68-6fde-419b-973c-91fc982e6a49\") " pod="openstack/memcached-0"
Mar 12 13:30:49 crc kubenswrapper[4778]: I0312 13:30:49.357141 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ec63cc68-6fde-419b-973c-91fc982e6a49-config-data\") pod \"memcached-0\" (UID: \"ec63cc68-6fde-419b-973c-91fc982e6a49\") " pod="openstack/memcached-0"
Mar 12 13:30:49 crc kubenswrapper[4778]: I0312 13:30:49.357166 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ec63cc68-6fde-419b-973c-91fc982e6a49-kolla-config\") pod \"memcached-0\" (UID: \"ec63cc68-6fde-419b-973c-91fc982e6a49\") " pod="openstack/memcached-0"
Mar 12 13:30:49 crc kubenswrapper[4778]: I0312 13:30:49.357195 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec63cc68-6fde-419b-973c-91fc982e6a49-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ec63cc68-6fde-419b-973c-91fc982e6a49\") " pod="openstack/memcached-0"
Mar 12 13:30:49 crc kubenswrapper[4778]: I0312 13:30:49.357250 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zj26\" (UniqueName: \"kubernetes.io/projected/ec63cc68-6fde-419b-973c-91fc982e6a49-kube-api-access-6zj26\") pod \"memcached-0\" (UID: \"ec63cc68-6fde-419b-973c-91fc982e6a49\") " pod="openstack/memcached-0"
Mar 12 13:30:49 crc kubenswrapper[4778]: I0312 13:30:49.358364 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ec63cc68-6fde-419b-973c-91fc982e6a49-kolla-config\") pod \"memcached-0\" (UID: \"ec63cc68-6fde-419b-973c-91fc982e6a49\") " pod="openstack/memcached-0"
Mar 12 13:30:49 crc kubenswrapper[4778]: I0312 13:30:49.358831 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ec63cc68-6fde-419b-973c-91fc982e6a49-config-data\") pod \"memcached-0\" (UID: \"ec63cc68-6fde-419b-973c-91fc982e6a49\") " pod="openstack/memcached-0"
Mar 12 13:30:49 crc kubenswrapper[4778]: I0312 13:30:49.363135 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec63cc68-6fde-419b-973c-91fc982e6a49-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ec63cc68-6fde-419b-973c-91fc982e6a49\") " pod="openstack/memcached-0"
Mar 12 13:30:49 crc kubenswrapper[4778]: I0312 13:30:49.376421 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec63cc68-6fde-419b-973c-91fc982e6a49-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ec63cc68-6fde-419b-973c-91fc982e6a49\") " pod="openstack/memcached-0"
Mar 12 13:30:49 crc kubenswrapper[4778]: I0312 13:30:49.384748 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zj26\" (UniqueName: \"kubernetes.io/projected/ec63cc68-6fde-419b-973c-91fc982e6a49-kube-api-access-6zj26\") pod \"memcached-0\" (UID: \"ec63cc68-6fde-419b-973c-91fc982e6a49\") " pod="openstack/memcached-0"
Mar 12 13:30:49 crc kubenswrapper[4778]: I0312 13:30:49.428971 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Mar 12 13:30:51 crc kubenswrapper[4778]: I0312 13:30:51.270019 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Mar 12 13:30:51 crc kubenswrapper[4778]: I0312 13:30:51.273613 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Mar 12 13:30:51 crc kubenswrapper[4778]: I0312 13:30:51.276560 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-skn4q"
Mar 12 13:30:51 crc kubenswrapper[4778]: I0312 13:30:51.276700 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Mar 12 13:30:51 crc kubenswrapper[4778]: I0312 13:30:51.396095 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8s49\" (UniqueName: \"kubernetes.io/projected/66ed2760-88a0-4731-a0d1-52cb6cffa2b1-kube-api-access-m8s49\") pod \"kube-state-metrics-0\" (UID: \"66ed2760-88a0-4731-a0d1-52cb6cffa2b1\") " pod="openstack/kube-state-metrics-0"
Mar 12 13:30:51 crc kubenswrapper[4778]: I0312 13:30:51.498457 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8s49\" (UniqueName: \"kubernetes.io/projected/66ed2760-88a0-4731-a0d1-52cb6cffa2b1-kube-api-access-m8s49\") pod \"kube-state-metrics-0\" (UID: \"66ed2760-88a0-4731-a0d1-52cb6cffa2b1\") " pod="openstack/kube-state-metrics-0"
Mar 12 13:30:51 crc kubenswrapper[4778]: I0312 13:30:51.530980 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8s49\" (UniqueName: \"kubernetes.io/projected/66ed2760-88a0-4731-a0d1-52cb6cffa2b1-kube-api-access-m8s49\") pod \"kube-state-metrics-0\" (UID: \"66ed2760-88a0-4731-a0d1-52cb6cffa2b1\") " pod="openstack/kube-state-metrics-0"
Mar 12 13:30:51 crc kubenswrapper[4778]: I0312 13:30:51.606730 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Mar 12 13:30:54 crc kubenswrapper[4778]: I0312 13:30:54.421166 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-4wct6"]
Mar 12 13:30:54 crc kubenswrapper[4778]: I0312 13:30:54.422583 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4wct6"
Mar 12 13:30:54 crc kubenswrapper[4778]: I0312 13:30:54.425073 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-467ql"
Mar 12 13:30:54 crc kubenswrapper[4778]: I0312 13:30:54.425093 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs"
Mar 12 13:30:54 crc kubenswrapper[4778]: I0312 13:30:54.425463 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts"
Mar 12 13:30:54 crc kubenswrapper[4778]: I0312 13:30:54.436683 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-4wct6"]
Mar 12 13:30:54 crc kubenswrapper[4778]: I0312 13:30:54.450426 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-p67vh"]
Mar 12 13:30:54 crc kubenswrapper[4778]: I0312 13:30:54.452061 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-p67vh"
Mar 12 13:30:54 crc kubenswrapper[4778]: I0312 13:30:54.458268 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-p67vh"]
Mar 12 13:30:54 crc kubenswrapper[4778]: I0312 13:30:54.551898 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3b8efd1e-884d-4963-b69f-04ede0a92267-var-log-ovn\") pod \"ovn-controller-4wct6\" (UID: \"3b8efd1e-884d-4963-b69f-04ede0a92267\") " pod="openstack/ovn-controller-4wct6"
Mar 12 13:30:54 crc kubenswrapper[4778]: I0312 13:30:54.551943 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b8efd1e-884d-4963-b69f-04ede0a92267-scripts\") pod \"ovn-controller-4wct6\" (UID: \"3b8efd1e-884d-4963-b69f-04ede0a92267\") " pod="openstack/ovn-controller-4wct6"
Mar 12 13:30:54 crc kubenswrapper[4778]: I0312 13:30:54.551963 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bd159b65-0c66-4809-949e-0f1babbaa8e6-var-run\") pod \"ovn-controller-ovs-p67vh\" (UID: \"bd159b65-0c66-4809-949e-0f1babbaa8e6\") " pod="openstack/ovn-controller-ovs-p67vh"
Mar 12 13:30:54 crc kubenswrapper[4778]: I0312 13:30:54.551979 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h6ww\" (UniqueName: \"kubernetes.io/projected/3b8efd1e-884d-4963-b69f-04ede0a92267-kube-api-access-7h6ww\") pod \"ovn-controller-4wct6\" (UID: \"3b8efd1e-884d-4963-b69f-04ede0a92267\") " pod="openstack/ovn-controller-4wct6"
Mar 12 13:30:54 crc kubenswrapper[4778]: I0312 13:30:54.552005 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/bd159b65-0c66-4809-949e-0f1babbaa8e6-etc-ovs\") pod \"ovn-controller-ovs-p67vh\" (UID: \"bd159b65-0c66-4809-949e-0f1babbaa8e6\") " pod="openstack/ovn-controller-ovs-p67vh"
Mar 12 13:30:54 crc kubenswrapper[4778]: I0312 13:30:54.552037 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/bd159b65-0c66-4809-949e-0f1babbaa8e6-var-lib\") pod \"ovn-controller-ovs-p67vh\" (UID: \"bd159b65-0c66-4809-949e-0f1babbaa8e6\") " pod="openstack/ovn-controller-ovs-p67vh"
Mar 12 13:30:54 crc kubenswrapper[4778]: I0312 13:30:54.552060 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3b8efd1e-884d-4963-b69f-04ede0a92267-var-run-ovn\") pod \"ovn-controller-4wct6\" (UID: \"3b8efd1e-884d-4963-b69f-04ede0a92267\") " pod="openstack/ovn-controller-4wct6"
Mar 12 13:30:54 crc kubenswrapper[4778]: I0312 13:30:54.552096 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3b8efd1e-884d-4963-b69f-04ede0a92267-var-run\") pod \"ovn-controller-4wct6\" (UID: \"3b8efd1e-884d-4963-b69f-04ede0a92267\") " pod="openstack/ovn-controller-4wct6"
Mar 12 13:30:54 crc kubenswrapper[4778]: I0312 13:30:54.552113 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsnrr\" (UniqueName: \"kubernetes.io/projected/bd159b65-0c66-4809-949e-0f1babbaa8e6-kube-api-access-lsnrr\") pod \"ovn-controller-ovs-p67vh\" (UID: \"bd159b65-0c66-4809-949e-0f1babbaa8e6\") " pod="openstack/ovn-controller-ovs-p67vh"
Mar 12 13:30:54 crc kubenswrapper[4778]: I0312 13:30:54.552127 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b8efd1e-884d-4963-b69f-04ede0a92267-ovn-controller-tls-certs\") pod \"ovn-controller-4wct6\" (UID: \"3b8efd1e-884d-4963-b69f-04ede0a92267\") " pod="openstack/ovn-controller-4wct6"
Mar 12 13:30:54 crc kubenswrapper[4778]: I0312 13:30:54.552161 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bd159b65-0c66-4809-949e-0f1babbaa8e6-var-log\") pod \"ovn-controller-ovs-p67vh\" (UID: \"bd159b65-0c66-4809-949e-0f1babbaa8e6\") " pod="openstack/ovn-controller-ovs-p67vh"
Mar 12 13:30:54 crc kubenswrapper[4778]: I0312 13:30:54.552178 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd159b65-0c66-4809-949e-0f1babbaa8e6-scripts\") pod \"ovn-controller-ovs-p67vh\" (UID: \"bd159b65-0c66-4809-949e-0f1babbaa8e6\") " pod="openstack/ovn-controller-ovs-p67vh"
Mar 12 13:30:54 crc kubenswrapper[4778]: I0312 13:30:54.552210 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b8efd1e-884d-4963-b69f-04ede0a92267-combined-ca-bundle\") pod \"ovn-controller-4wct6\" (UID: \"3b8efd1e-884d-4963-b69f-04ede0a92267\") " pod="openstack/ovn-controller-4wct6"
Mar 12 13:30:54 crc kubenswrapper[4778]: I0312 13:30:54.568294 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"663feb48-0ed1-4947-97c3-e0bac206fdb2","Type":"ContainerStarted","Data":"a3429a93d8a521bc99b465d949c9e95c96869fd73c2a7bc50e7d8f8ffa485293"}
Mar 12 13:30:54 crc kubenswrapper[4778]: I0312 13:30:54.654093 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bd159b65-0c66-4809-949e-0f1babbaa8e6-var-log\") pod \"ovn-controller-ovs-p67vh\" (UID:
\"bd159b65-0c66-4809-949e-0f1babbaa8e6\") " pod="openstack/ovn-controller-ovs-p67vh" Mar 12 13:30:54 crc kubenswrapper[4778]: I0312 13:30:54.654168 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd159b65-0c66-4809-949e-0f1babbaa8e6-scripts\") pod \"ovn-controller-ovs-p67vh\" (UID: \"bd159b65-0c66-4809-949e-0f1babbaa8e6\") " pod="openstack/ovn-controller-ovs-p67vh" Mar 12 13:30:54 crc kubenswrapper[4778]: I0312 13:30:54.654216 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b8efd1e-884d-4963-b69f-04ede0a92267-combined-ca-bundle\") pod \"ovn-controller-4wct6\" (UID: \"3b8efd1e-884d-4963-b69f-04ede0a92267\") " pod="openstack/ovn-controller-4wct6" Mar 12 13:30:54 crc kubenswrapper[4778]: I0312 13:30:54.654248 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3b8efd1e-884d-4963-b69f-04ede0a92267-var-log-ovn\") pod \"ovn-controller-4wct6\" (UID: \"3b8efd1e-884d-4963-b69f-04ede0a92267\") " pod="openstack/ovn-controller-4wct6" Mar 12 13:30:54 crc kubenswrapper[4778]: I0312 13:30:54.654288 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b8efd1e-884d-4963-b69f-04ede0a92267-scripts\") pod \"ovn-controller-4wct6\" (UID: \"3b8efd1e-884d-4963-b69f-04ede0a92267\") " pod="openstack/ovn-controller-4wct6" Mar 12 13:30:54 crc kubenswrapper[4778]: I0312 13:30:54.654312 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bd159b65-0c66-4809-949e-0f1babbaa8e6-var-run\") pod \"ovn-controller-ovs-p67vh\" (UID: \"bd159b65-0c66-4809-949e-0f1babbaa8e6\") " pod="openstack/ovn-controller-ovs-p67vh" Mar 12 13:30:54 crc kubenswrapper[4778]: I0312 13:30:54.654335 4778 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7h6ww\" (UniqueName: \"kubernetes.io/projected/3b8efd1e-884d-4963-b69f-04ede0a92267-kube-api-access-7h6ww\") pod \"ovn-controller-4wct6\" (UID: \"3b8efd1e-884d-4963-b69f-04ede0a92267\") " pod="openstack/ovn-controller-4wct6" Mar 12 13:30:54 crc kubenswrapper[4778]: I0312 13:30:54.654363 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/bd159b65-0c66-4809-949e-0f1babbaa8e6-etc-ovs\") pod \"ovn-controller-ovs-p67vh\" (UID: \"bd159b65-0c66-4809-949e-0f1babbaa8e6\") " pod="openstack/ovn-controller-ovs-p67vh" Mar 12 13:30:54 crc kubenswrapper[4778]: I0312 13:30:54.654401 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/bd159b65-0c66-4809-949e-0f1babbaa8e6-var-lib\") pod \"ovn-controller-ovs-p67vh\" (UID: \"bd159b65-0c66-4809-949e-0f1babbaa8e6\") " pod="openstack/ovn-controller-ovs-p67vh" Mar 12 13:30:54 crc kubenswrapper[4778]: I0312 13:30:54.654432 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3b8efd1e-884d-4963-b69f-04ede0a92267-var-run-ovn\") pod \"ovn-controller-4wct6\" (UID: \"3b8efd1e-884d-4963-b69f-04ede0a92267\") " pod="openstack/ovn-controller-4wct6" Mar 12 13:30:54 crc kubenswrapper[4778]: I0312 13:30:54.654477 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3b8efd1e-884d-4963-b69f-04ede0a92267-var-run\") pod \"ovn-controller-4wct6\" (UID: \"3b8efd1e-884d-4963-b69f-04ede0a92267\") " pod="openstack/ovn-controller-4wct6" Mar 12 13:30:54 crc kubenswrapper[4778]: I0312 13:30:54.654501 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsnrr\" (UniqueName: 
\"kubernetes.io/projected/bd159b65-0c66-4809-949e-0f1babbaa8e6-kube-api-access-lsnrr\") pod \"ovn-controller-ovs-p67vh\" (UID: \"bd159b65-0c66-4809-949e-0f1babbaa8e6\") " pod="openstack/ovn-controller-ovs-p67vh" Mar 12 13:30:54 crc kubenswrapper[4778]: I0312 13:30:54.654523 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b8efd1e-884d-4963-b69f-04ede0a92267-ovn-controller-tls-certs\") pod \"ovn-controller-4wct6\" (UID: \"3b8efd1e-884d-4963-b69f-04ede0a92267\") " pod="openstack/ovn-controller-4wct6" Mar 12 13:30:54 crc kubenswrapper[4778]: I0312 13:30:54.655804 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bd159b65-0c66-4809-949e-0f1babbaa8e6-var-run\") pod \"ovn-controller-ovs-p67vh\" (UID: \"bd159b65-0c66-4809-949e-0f1babbaa8e6\") " pod="openstack/ovn-controller-ovs-p67vh" Mar 12 13:30:54 crc kubenswrapper[4778]: I0312 13:30:54.655941 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bd159b65-0c66-4809-949e-0f1babbaa8e6-var-log\") pod \"ovn-controller-ovs-p67vh\" (UID: \"bd159b65-0c66-4809-949e-0f1babbaa8e6\") " pod="openstack/ovn-controller-ovs-p67vh" Mar 12 13:30:54 crc kubenswrapper[4778]: I0312 13:30:54.657983 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd159b65-0c66-4809-949e-0f1babbaa8e6-scripts\") pod \"ovn-controller-ovs-p67vh\" (UID: \"bd159b65-0c66-4809-949e-0f1babbaa8e6\") " pod="openstack/ovn-controller-ovs-p67vh" Mar 12 13:30:54 crc kubenswrapper[4778]: I0312 13:30:54.658402 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3b8efd1e-884d-4963-b69f-04ede0a92267-var-log-ovn\") pod \"ovn-controller-4wct6\" (UID: \"3b8efd1e-884d-4963-b69f-04ede0a92267\") 
" pod="openstack/ovn-controller-4wct6" Mar 12 13:30:54 crc kubenswrapper[4778]: I0312 13:30:54.658546 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/bd159b65-0c66-4809-949e-0f1babbaa8e6-var-lib\") pod \"ovn-controller-ovs-p67vh\" (UID: \"bd159b65-0c66-4809-949e-0f1babbaa8e6\") " pod="openstack/ovn-controller-ovs-p67vh" Mar 12 13:30:54 crc kubenswrapper[4778]: I0312 13:30:54.658582 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3b8efd1e-884d-4963-b69f-04ede0a92267-var-run-ovn\") pod \"ovn-controller-4wct6\" (UID: \"3b8efd1e-884d-4963-b69f-04ede0a92267\") " pod="openstack/ovn-controller-4wct6" Mar 12 13:30:54 crc kubenswrapper[4778]: I0312 13:30:54.658714 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/bd159b65-0c66-4809-949e-0f1babbaa8e6-etc-ovs\") pod \"ovn-controller-ovs-p67vh\" (UID: \"bd159b65-0c66-4809-949e-0f1babbaa8e6\") " pod="openstack/ovn-controller-ovs-p67vh" Mar 12 13:30:54 crc kubenswrapper[4778]: I0312 13:30:54.662357 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b8efd1e-884d-4963-b69f-04ede0a92267-combined-ca-bundle\") pod \"ovn-controller-4wct6\" (UID: \"3b8efd1e-884d-4963-b69f-04ede0a92267\") " pod="openstack/ovn-controller-4wct6" Mar 12 13:30:54 crc kubenswrapper[4778]: I0312 13:30:54.662722 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b8efd1e-884d-4963-b69f-04ede0a92267-ovn-controller-tls-certs\") pod \"ovn-controller-4wct6\" (UID: \"3b8efd1e-884d-4963-b69f-04ede0a92267\") " pod="openstack/ovn-controller-4wct6" Mar 12 13:30:54 crc kubenswrapper[4778]: I0312 13:30:54.673376 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-7h6ww\" (UniqueName: \"kubernetes.io/projected/3b8efd1e-884d-4963-b69f-04ede0a92267-kube-api-access-7h6ww\") pod \"ovn-controller-4wct6\" (UID: \"3b8efd1e-884d-4963-b69f-04ede0a92267\") " pod="openstack/ovn-controller-4wct6" Mar 12 13:30:54 crc kubenswrapper[4778]: I0312 13:30:54.675524 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b8efd1e-884d-4963-b69f-04ede0a92267-scripts\") pod \"ovn-controller-4wct6\" (UID: \"3b8efd1e-884d-4963-b69f-04ede0a92267\") " pod="openstack/ovn-controller-4wct6" Mar 12 13:30:54 crc kubenswrapper[4778]: I0312 13:30:54.675653 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3b8efd1e-884d-4963-b69f-04ede0a92267-var-run\") pod \"ovn-controller-4wct6\" (UID: \"3b8efd1e-884d-4963-b69f-04ede0a92267\") " pod="openstack/ovn-controller-4wct6" Mar 12 13:30:54 crc kubenswrapper[4778]: I0312 13:30:54.677565 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsnrr\" (UniqueName: \"kubernetes.io/projected/bd159b65-0c66-4809-949e-0f1babbaa8e6-kube-api-access-lsnrr\") pod \"ovn-controller-ovs-p67vh\" (UID: \"bd159b65-0c66-4809-949e-0f1babbaa8e6\") " pod="openstack/ovn-controller-ovs-p67vh" Mar 12 13:30:54 crc kubenswrapper[4778]: I0312 13:30:54.747104 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4wct6" Mar 12 13:30:54 crc kubenswrapper[4778]: I0312 13:30:54.767266 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-p67vh" Mar 12 13:30:54 crc kubenswrapper[4778]: I0312 13:30:54.862576 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 12 13:30:54 crc kubenswrapper[4778]: I0312 13:30:54.954497 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 12 13:30:54 crc kubenswrapper[4778]: I0312 13:30:54.956068 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 12 13:30:54 crc kubenswrapper[4778]: I0312 13:30:54.958497 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Mar 12 13:30:54 crc kubenswrapper[4778]: I0312 13:30:54.959056 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Mar 12 13:30:54 crc kubenswrapper[4778]: I0312 13:30:54.959359 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Mar 12 13:30:54 crc kubenswrapper[4778]: I0312 13:30:54.959444 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-9h9p5" Mar 12 13:30:54 crc kubenswrapper[4778]: I0312 13:30:54.959543 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Mar 12 13:30:54 crc kubenswrapper[4778]: I0312 13:30:54.973667 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 12 13:30:55 crc kubenswrapper[4778]: I0312 13:30:55.058288 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7321e15e-673c-4e0d-80f8-6ac644c1940f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7321e15e-673c-4e0d-80f8-6ac644c1940f\") " pod="openstack/ovsdbserver-nb-0" Mar 12 13:30:55 crc 
kubenswrapper[4778]: I0312 13:30:55.058407 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") pod \"ovsdbserver-nb-0\" (UID: \"7321e15e-673c-4e0d-80f8-6ac644c1940f\") " pod="openstack/ovsdbserver-nb-0" Mar 12 13:30:55 crc kubenswrapper[4778]: I0312 13:30:55.058552 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7321e15e-673c-4e0d-80f8-6ac644c1940f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"7321e15e-673c-4e0d-80f8-6ac644c1940f\") " pod="openstack/ovsdbserver-nb-0" Mar 12 13:30:55 crc kubenswrapper[4778]: I0312 13:30:55.058590 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7321e15e-673c-4e0d-80f8-6ac644c1940f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"7321e15e-673c-4e0d-80f8-6ac644c1940f\") " pod="openstack/ovsdbserver-nb-0" Mar 12 13:30:55 crc kubenswrapper[4778]: I0312 13:30:55.058612 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5sss\" (UniqueName: \"kubernetes.io/projected/7321e15e-673c-4e0d-80f8-6ac644c1940f-kube-api-access-j5sss\") pod \"ovsdbserver-nb-0\" (UID: \"7321e15e-673c-4e0d-80f8-6ac644c1940f\") " pod="openstack/ovsdbserver-nb-0" Mar 12 13:30:55 crc kubenswrapper[4778]: I0312 13:30:55.058731 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7321e15e-673c-4e0d-80f8-6ac644c1940f-config\") pod \"ovsdbserver-nb-0\" (UID: \"7321e15e-673c-4e0d-80f8-6ac644c1940f\") " pod="openstack/ovsdbserver-nb-0" Mar 12 13:30:55 crc kubenswrapper[4778]: I0312 13:30:55.058775 4778 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7321e15e-673c-4e0d-80f8-6ac644c1940f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7321e15e-673c-4e0d-80f8-6ac644c1940f\") " pod="openstack/ovsdbserver-nb-0" Mar 12 13:30:55 crc kubenswrapper[4778]: I0312 13:30:55.058805 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7321e15e-673c-4e0d-80f8-6ac644c1940f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"7321e15e-673c-4e0d-80f8-6ac644c1940f\") " pod="openstack/ovsdbserver-nb-0" Mar 12 13:30:55 crc kubenswrapper[4778]: I0312 13:30:55.160075 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7321e15e-673c-4e0d-80f8-6ac644c1940f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"7321e15e-673c-4e0d-80f8-6ac644c1940f\") " pod="openstack/ovsdbserver-nb-0" Mar 12 13:30:55 crc kubenswrapper[4778]: I0312 13:30:55.160172 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7321e15e-673c-4e0d-80f8-6ac644c1940f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"7321e15e-673c-4e0d-80f8-6ac644c1940f\") " pod="openstack/ovsdbserver-nb-0" Mar 12 13:30:55 crc kubenswrapper[4778]: I0312 13:30:55.160257 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5sss\" (UniqueName: \"kubernetes.io/projected/7321e15e-673c-4e0d-80f8-6ac644c1940f-kube-api-access-j5sss\") pod \"ovsdbserver-nb-0\" (UID: \"7321e15e-673c-4e0d-80f8-6ac644c1940f\") " pod="openstack/ovsdbserver-nb-0" Mar 12 13:30:55 crc kubenswrapper[4778]: I0312 13:30:55.160382 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7321e15e-673c-4e0d-80f8-6ac644c1940f-config\") pod \"ovsdbserver-nb-0\" (UID: \"7321e15e-673c-4e0d-80f8-6ac644c1940f\") " pod="openstack/ovsdbserver-nb-0" Mar 12 13:30:55 crc kubenswrapper[4778]: I0312 13:30:55.160426 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7321e15e-673c-4e0d-80f8-6ac644c1940f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7321e15e-673c-4e0d-80f8-6ac644c1940f\") " pod="openstack/ovsdbserver-nb-0" Mar 12 13:30:55 crc kubenswrapper[4778]: I0312 13:30:55.160459 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7321e15e-673c-4e0d-80f8-6ac644c1940f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"7321e15e-673c-4e0d-80f8-6ac644c1940f\") " pod="openstack/ovsdbserver-nb-0" Mar 12 13:30:55 crc kubenswrapper[4778]: I0312 13:30:55.160552 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7321e15e-673c-4e0d-80f8-6ac644c1940f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7321e15e-673c-4e0d-80f8-6ac644c1940f\") " pod="openstack/ovsdbserver-nb-0" Mar 12 13:30:55 crc kubenswrapper[4778]: I0312 13:30:55.160610 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") pod \"ovsdbserver-nb-0\" (UID: \"7321e15e-673c-4e0d-80f8-6ac644c1940f\") " pod="openstack/ovsdbserver-nb-0" Mar 12 13:30:55 crc kubenswrapper[4778]: I0312 13:30:55.161119 4778 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") pod \"ovsdbserver-nb-0\" (UID: \"7321e15e-673c-4e0d-80f8-6ac644c1940f\") device mount path \"/mnt/openstack/pv14\"" 
pod="openstack/ovsdbserver-nb-0" Mar 12 13:30:55 crc kubenswrapper[4778]: I0312 13:30:55.161231 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7321e15e-673c-4e0d-80f8-6ac644c1940f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"7321e15e-673c-4e0d-80f8-6ac644c1940f\") " pod="openstack/ovsdbserver-nb-0" Mar 12 13:30:55 crc kubenswrapper[4778]: I0312 13:30:55.161447 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7321e15e-673c-4e0d-80f8-6ac644c1940f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"7321e15e-673c-4e0d-80f8-6ac644c1940f\") " pod="openstack/ovsdbserver-nb-0" Mar 12 13:30:55 crc kubenswrapper[4778]: I0312 13:30:55.165339 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7321e15e-673c-4e0d-80f8-6ac644c1940f-config\") pod \"ovsdbserver-nb-0\" (UID: \"7321e15e-673c-4e0d-80f8-6ac644c1940f\") " pod="openstack/ovsdbserver-nb-0" Mar 12 13:30:55 crc kubenswrapper[4778]: I0312 13:30:55.168106 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7321e15e-673c-4e0d-80f8-6ac644c1940f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"7321e15e-673c-4e0d-80f8-6ac644c1940f\") " pod="openstack/ovsdbserver-nb-0" Mar 12 13:30:55 crc kubenswrapper[4778]: I0312 13:30:55.178734 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7321e15e-673c-4e0d-80f8-6ac644c1940f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7321e15e-673c-4e0d-80f8-6ac644c1940f\") " pod="openstack/ovsdbserver-nb-0" Mar 12 13:30:55 crc kubenswrapper[4778]: I0312 13:30:55.178990 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/7321e15e-673c-4e0d-80f8-6ac644c1940f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7321e15e-673c-4e0d-80f8-6ac644c1940f\") " pod="openstack/ovsdbserver-nb-0" Mar 12 13:30:55 crc kubenswrapper[4778]: I0312 13:30:55.181717 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5sss\" (UniqueName: \"kubernetes.io/projected/7321e15e-673c-4e0d-80f8-6ac644c1940f-kube-api-access-j5sss\") pod \"ovsdbserver-nb-0\" (UID: \"7321e15e-673c-4e0d-80f8-6ac644c1940f\") " pod="openstack/ovsdbserver-nb-0" Mar 12 13:30:55 crc kubenswrapper[4778]: I0312 13:30:55.196515 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") pod \"ovsdbserver-nb-0\" (UID: \"7321e15e-673c-4e0d-80f8-6ac644c1940f\") " pod="openstack/ovsdbserver-nb-0" Mar 12 13:30:55 crc kubenswrapper[4778]: I0312 13:30:55.272636 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 12 13:30:58 crc kubenswrapper[4778]: I0312 13:30:58.698414 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 12 13:30:58 crc kubenswrapper[4778]: I0312 13:30:58.705256 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 12 13:30:58 crc kubenswrapper[4778]: I0312 13:30:58.707308 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Mar 12 13:30:58 crc kubenswrapper[4778]: I0312 13:30:58.707763 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Mar 12 13:30:58 crc kubenswrapper[4778]: I0312 13:30:58.707998 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Mar 12 13:30:58 crc kubenswrapper[4778]: I0312 13:30:58.708914 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-7hqfg" Mar 12 13:30:58 crc kubenswrapper[4778]: I0312 13:30:58.711843 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 12 13:30:58 crc kubenswrapper[4778]: I0312 13:30:58.820841 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"7c951c6f-06fd-4793-a95b-26b5c1400d73\") " pod="openstack/ovsdbserver-sb-0" Mar 12 13:30:58 crc kubenswrapper[4778]: I0312 13:30:58.820899 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm2k4\" (UniqueName: \"kubernetes.io/projected/7c951c6f-06fd-4793-a95b-26b5c1400d73-kube-api-access-nm2k4\") pod \"ovsdbserver-sb-0\" (UID: \"7c951c6f-06fd-4793-a95b-26b5c1400d73\") " pod="openstack/ovsdbserver-sb-0" Mar 12 13:30:58 crc kubenswrapper[4778]: I0312 13:30:58.820934 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c951c6f-06fd-4793-a95b-26b5c1400d73-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: 
\"7c951c6f-06fd-4793-a95b-26b5c1400d73\") " pod="openstack/ovsdbserver-sb-0" Mar 12 13:30:58 crc kubenswrapper[4778]: I0312 13:30:58.820980 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c951c6f-06fd-4793-a95b-26b5c1400d73-config\") pod \"ovsdbserver-sb-0\" (UID: \"7c951c6f-06fd-4793-a95b-26b5c1400d73\") " pod="openstack/ovsdbserver-sb-0" Mar 12 13:30:58 crc kubenswrapper[4778]: I0312 13:30:58.821021 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7c951c6f-06fd-4793-a95b-26b5c1400d73-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"7c951c6f-06fd-4793-a95b-26b5c1400d73\") " pod="openstack/ovsdbserver-sb-0" Mar 12 13:30:58 crc kubenswrapper[4778]: I0312 13:30:58.821057 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c951c6f-06fd-4793-a95b-26b5c1400d73-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"7c951c6f-06fd-4793-a95b-26b5c1400d73\") " pod="openstack/ovsdbserver-sb-0" Mar 12 13:30:58 crc kubenswrapper[4778]: I0312 13:30:58.821551 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c951c6f-06fd-4793-a95b-26b5c1400d73-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"7c951c6f-06fd-4793-a95b-26b5c1400d73\") " pod="openstack/ovsdbserver-sb-0" Mar 12 13:30:58 crc kubenswrapper[4778]: I0312 13:30:58.821647 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c951c6f-06fd-4793-a95b-26b5c1400d73-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7c951c6f-06fd-4793-a95b-26b5c1400d73\") " pod="openstack/ovsdbserver-sb-0" Mar 12 
13:30:58 crc kubenswrapper[4778]: I0312 13:30:58.923232 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c951c6f-06fd-4793-a95b-26b5c1400d73-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"7c951c6f-06fd-4793-a95b-26b5c1400d73\") " pod="openstack/ovsdbserver-sb-0" Mar 12 13:30:58 crc kubenswrapper[4778]: I0312 13:30:58.923320 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c951c6f-06fd-4793-a95b-26b5c1400d73-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"7c951c6f-06fd-4793-a95b-26b5c1400d73\") " pod="openstack/ovsdbserver-sb-0" Mar 12 13:30:58 crc kubenswrapper[4778]: I0312 13:30:58.923341 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c951c6f-06fd-4793-a95b-26b5c1400d73-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7c951c6f-06fd-4793-a95b-26b5c1400d73\") " pod="openstack/ovsdbserver-sb-0" Mar 12 13:30:58 crc kubenswrapper[4778]: I0312 13:30:58.923396 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"7c951c6f-06fd-4793-a95b-26b5c1400d73\") " pod="openstack/ovsdbserver-sb-0" Mar 12 13:30:58 crc kubenswrapper[4778]: I0312 13:30:58.923411 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nm2k4\" (UniqueName: \"kubernetes.io/projected/7c951c6f-06fd-4793-a95b-26b5c1400d73-kube-api-access-nm2k4\") pod \"ovsdbserver-sb-0\" (UID: \"7c951c6f-06fd-4793-a95b-26b5c1400d73\") " pod="openstack/ovsdbserver-sb-0" Mar 12 13:30:58 crc kubenswrapper[4778]: I0312 13:30:58.923427 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/7c951c6f-06fd-4793-a95b-26b5c1400d73-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7c951c6f-06fd-4793-a95b-26b5c1400d73\") " pod="openstack/ovsdbserver-sb-0" Mar 12 13:30:58 crc kubenswrapper[4778]: I0312 13:30:58.923445 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c951c6f-06fd-4793-a95b-26b5c1400d73-config\") pod \"ovsdbserver-sb-0\" (UID: \"7c951c6f-06fd-4793-a95b-26b5c1400d73\") " pod="openstack/ovsdbserver-sb-0" Mar 12 13:30:58 crc kubenswrapper[4778]: I0312 13:30:58.923472 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7c951c6f-06fd-4793-a95b-26b5c1400d73-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"7c951c6f-06fd-4793-a95b-26b5c1400d73\") " pod="openstack/ovsdbserver-sb-0" Mar 12 13:30:58 crc kubenswrapper[4778]: I0312 13:30:58.924071 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7c951c6f-06fd-4793-a95b-26b5c1400d73-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"7c951c6f-06fd-4793-a95b-26b5c1400d73\") " pod="openstack/ovsdbserver-sb-0" Mar 12 13:30:58 crc kubenswrapper[4778]: I0312 13:30:58.924356 4778 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"7c951c6f-06fd-4793-a95b-26b5c1400d73\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/ovsdbserver-sb-0" Mar 12 13:30:58 crc kubenswrapper[4778]: I0312 13:30:58.925346 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c951c6f-06fd-4793-a95b-26b5c1400d73-config\") pod \"ovsdbserver-sb-0\" (UID: \"7c951c6f-06fd-4793-a95b-26b5c1400d73\") " pod="openstack/ovsdbserver-sb-0" Mar 12 
13:30:58 crc kubenswrapper[4778]: I0312 13:30:58.925414 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c951c6f-06fd-4793-a95b-26b5c1400d73-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"7c951c6f-06fd-4793-a95b-26b5c1400d73\") " pod="openstack/ovsdbserver-sb-0" Mar 12 13:30:58 crc kubenswrapper[4778]: I0312 13:30:58.933235 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c951c6f-06fd-4793-a95b-26b5c1400d73-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"7c951c6f-06fd-4793-a95b-26b5c1400d73\") " pod="openstack/ovsdbserver-sb-0" Mar 12 13:30:58 crc kubenswrapper[4778]: I0312 13:30:58.934178 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c951c6f-06fd-4793-a95b-26b5c1400d73-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7c951c6f-06fd-4793-a95b-26b5c1400d73\") " pod="openstack/ovsdbserver-sb-0" Mar 12 13:30:58 crc kubenswrapper[4778]: I0312 13:30:58.940047 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c951c6f-06fd-4793-a95b-26b5c1400d73-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7c951c6f-06fd-4793-a95b-26b5c1400d73\") " pod="openstack/ovsdbserver-sb-0" Mar 12 13:30:58 crc kubenswrapper[4778]: I0312 13:30:58.943653 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"7c951c6f-06fd-4793-a95b-26b5c1400d73\") " pod="openstack/ovsdbserver-sb-0" Mar 12 13:30:58 crc kubenswrapper[4778]: I0312 13:30:58.943828 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nm2k4\" (UniqueName: 
\"kubernetes.io/projected/7c951c6f-06fd-4793-a95b-26b5c1400d73-kube-api-access-nm2k4\") pod \"ovsdbserver-sb-0\" (UID: \"7c951c6f-06fd-4793-a95b-26b5c1400d73\") " pod="openstack/ovsdbserver-sb-0" Mar 12 13:30:59 crc kubenswrapper[4778]: I0312 13:30:59.026405 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 12 13:31:02 crc kubenswrapper[4778]: W0312 13:31:02.838897 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe52f8ba_9053_4733_b2e3_8f1becf437c8.slice/crio-4ed3e2e6288dae34d36e51f67c70e30ae227720d9dfbf94561de11abc987edce WatchSource:0}: Error finding container 4ed3e2e6288dae34d36e51f67c70e30ae227720d9dfbf94561de11abc987edce: Status 404 returned error can't find the container with id 4ed3e2e6288dae34d36e51f67c70e30ae227720d9dfbf94561de11abc987edce Mar 12 13:31:02 crc kubenswrapper[4778]: E0312 13:31:02.858799 4778 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Mar 12 13:31:02 crc kubenswrapper[4778]: E0312 13:31:02.859021 4778 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 
-3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7kcbq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
rabbitmq-cell1-server-0_openstack(629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 13:31:02 crc kubenswrapper[4778]: E0312 13:31:02.860533 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03" Mar 12 13:31:02 crc kubenswrapper[4778]: E0312 13:31:02.873054 4778 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Mar 12 13:31:02 crc kubenswrapper[4778]: E0312 13:31:02.873424 4778 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4kxn5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(1e89dfcc-2ac3-444c-91e8-56991eae096b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 13:31:02 crc 
kubenswrapper[4778]: E0312 13:31:02.874614 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="1e89dfcc-2ac3-444c-91e8-56991eae096b" Mar 12 13:31:03 crc kubenswrapper[4778]: I0312 13:31:03.628515 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"fe52f8ba-9053-4733-b2e3-8f1becf437c8","Type":"ContainerStarted","Data":"4ed3e2e6288dae34d36e51f67c70e30ae227720d9dfbf94561de11abc987edce"} Mar 12 13:31:03 crc kubenswrapper[4778]: E0312 13:31:03.630290 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03" Mar 12 13:31:03 crc kubenswrapper[4778]: E0312 13:31:03.638791 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="1e89dfcc-2ac3-444c-91e8-56991eae096b" Mar 12 13:31:07 crc kubenswrapper[4778]: I0312 13:31:07.441108 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 12 13:31:07 crc kubenswrapper[4778]: E0312 13:31:07.908701 4778 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Mar 12 13:31:07 crc kubenswrapper[4778]: E0312 13:31:07.909159 4778 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-znmfx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevic
e{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-fr6p2_openstack(68c74642-7beb-4cb9-86bf-b12beafb4b68): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 13:31:07 crc kubenswrapper[4778]: E0312 13:31:07.910630 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-fr6p2" podUID="68c74642-7beb-4cb9-86bf-b12beafb4b68" Mar 12 13:31:08 crc kubenswrapper[4778]: I0312 13:31:08.302959 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 12 13:31:08 crc kubenswrapper[4778]: I0312 13:31:08.308892 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 12 13:31:09 crc kubenswrapper[4778]: I0312 13:31:09.148258 4778 scope.go:117] "RemoveContainer" containerID="a8f045f157371374b81f9a3098c61d715d2ce620fdfc3121b5f225672622998f" Mar 12 13:31:09 crc kubenswrapper[4778]: W0312 13:31:09.734444 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7321e15e_673c_4e0d_80f8_6ac644c1940f.slice/crio-f36f09305ca04ca2d01c8a383ad06c9da6a054fd0645f5cc5c51b83742f58fac WatchSource:0}: Error finding container f36f09305ca04ca2d01c8a383ad06c9da6a054fd0645f5cc5c51b83742f58fac: Status 404 returned error can't find the container with id f36f09305ca04ca2d01c8a383ad06c9da6a054fd0645f5cc5c51b83742f58fac Mar 12 13:31:09 crc kubenswrapper[4778]: W0312 13:31:09.751215 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec63cc68_6fde_419b_973c_91fc982e6a49.slice/crio-609ddd8d4d204300e5ea572a906e50c99bca89344b4ff9a98871002711d708b9 WatchSource:0}: Error finding 
container 609ddd8d4d204300e5ea572a906e50c99bca89344b4ff9a98871002711d708b9: Status 404 returned error can't find the container with id 609ddd8d4d204300e5ea572a906e50c99bca89344b4ff9a98871002711d708b9 Mar 12 13:31:09 crc kubenswrapper[4778]: W0312 13:31:09.752167 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod66ed2760_88a0_4731_a0d1_52cb6cffa2b1.slice/crio-4e9e8b87b4e8662cb5ee7f6527d7533b6383b322442ecf5f3470e33d6bb4be86 WatchSource:0}: Error finding container 4e9e8b87b4e8662cb5ee7f6527d7533b6383b322442ecf5f3470e33d6bb4be86: Status 404 returned error can't find the container with id 4e9e8b87b4e8662cb5ee7f6527d7533b6383b322442ecf5f3470e33d6bb4be86 Mar 12 13:31:09 crc kubenswrapper[4778]: E0312 13:31:09.813111 4778 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Mar 12 13:31:09 crc kubenswrapper[4778]: E0312 13:31:09.813641 4778 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hsbqf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-mcpvm_openstack(b217b876-3c50-4d5e-8c5b-40e3f1d95b6c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 13:31:09 crc kubenswrapper[4778]: E0312 13:31:09.815237 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-mcpvm" podUID="b217b876-3c50-4d5e-8c5b-40e3f1d95b6c" Mar 12 13:31:09 crc kubenswrapper[4778]: E0312 13:31:09.849338 4778 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Mar 12 13:31:09 crc kubenswrapper[4778]: E0312 13:31:09.849496 4778 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zcntw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPoli
cy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-2p4pj_openstack(c10b98ea-d832-471e-adb6-c22c4dbb0ab8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 13:31:09 crc kubenswrapper[4778]: E0312 13:31:09.851397 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-2p4pj" podUID="c10b98ea-d832-471e-adb6-c22c4dbb0ab8" Mar 12 13:31:09 crc kubenswrapper[4778]: E0312 13:31:09.890910 4778 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Mar 12 13:31:09 crc kubenswrapper[4778]: E0312 13:31:09.891059 4778 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfdh5dfhb6h64h676hc4h78h97h669h54chfbh696hb5h54bh5d4h6bh64h644h677h584h5cbh698h9dh5bbh5f8h5b8hcdh644h5c7h694hbfh589q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hwddk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5ccc8479f9-d9gsf_openstack(78c6f209-08e0-4789-be6e-8c319547338c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 13:31:09 crc kubenswrapper[4778]: E0312 13:31:09.892603 4778 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5ccc8479f9-d9gsf" podUID="78c6f209-08e0-4789-be6e-8c319547338c" Mar 12 13:31:09 crc kubenswrapper[4778]: I0312 13:31:09.905222 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-fr6p2" Mar 12 13:31:09 crc kubenswrapper[4778]: I0312 13:31:09.952031 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68c74642-7beb-4cb9-86bf-b12beafb4b68-config\") pod \"68c74642-7beb-4cb9-86bf-b12beafb4b68\" (UID: \"68c74642-7beb-4cb9-86bf-b12beafb4b68\") " Mar 12 13:31:09 crc kubenswrapper[4778]: I0312 13:31:09.952234 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68c74642-7beb-4cb9-86bf-b12beafb4b68-dns-svc\") pod \"68c74642-7beb-4cb9-86bf-b12beafb4b68\" (UID: \"68c74642-7beb-4cb9-86bf-b12beafb4b68\") " Mar 12 13:31:09 crc kubenswrapper[4778]: I0312 13:31:09.952291 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znmfx\" (UniqueName: \"kubernetes.io/projected/68c74642-7beb-4cb9-86bf-b12beafb4b68-kube-api-access-znmfx\") pod \"68c74642-7beb-4cb9-86bf-b12beafb4b68\" (UID: \"68c74642-7beb-4cb9-86bf-b12beafb4b68\") " Mar 12 13:31:09 crc kubenswrapper[4778]: I0312 13:31:09.953079 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68c74642-7beb-4cb9-86bf-b12beafb4b68-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "68c74642-7beb-4cb9-86bf-b12beafb4b68" (UID: "68c74642-7beb-4cb9-86bf-b12beafb4b68"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:31:09 crc kubenswrapper[4778]: I0312 13:31:09.953065 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68c74642-7beb-4cb9-86bf-b12beafb4b68-config" (OuterVolumeSpecName: "config") pod "68c74642-7beb-4cb9-86bf-b12beafb4b68" (UID: "68c74642-7beb-4cb9-86bf-b12beafb4b68"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:31:09 crc kubenswrapper[4778]: I0312 13:31:09.955699 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68c74642-7beb-4cb9-86bf-b12beafb4b68-kube-api-access-znmfx" (OuterVolumeSpecName: "kube-api-access-znmfx") pod "68c74642-7beb-4cb9-86bf-b12beafb4b68" (UID: "68c74642-7beb-4cb9-86bf-b12beafb4b68"). InnerVolumeSpecName "kube-api-access-znmfx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:31:10 crc kubenswrapper[4778]: I0312 13:31:10.054122 4778 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68c74642-7beb-4cb9-86bf-b12beafb4b68-config\") on node \"crc\" DevicePath \"\"" Mar 12 13:31:10 crc kubenswrapper[4778]: I0312 13:31:10.054547 4778 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68c74642-7beb-4cb9-86bf-b12beafb4b68-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 12 13:31:10 crc kubenswrapper[4778]: I0312 13:31:10.054562 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-znmfx\" (UniqueName: \"kubernetes.io/projected/68c74642-7beb-4cb9-86bf-b12beafb4b68-kube-api-access-znmfx\") on node \"crc\" DevicePath \"\"" Mar 12 13:31:10 crc kubenswrapper[4778]: I0312 13:31:10.321568 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-4wct6"] Mar 12 13:31:10 crc kubenswrapper[4778]: I0312 13:31:10.356387 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ovn-controller-ovs-p67vh"] Mar 12 13:31:10 crc kubenswrapper[4778]: I0312 13:31:10.443888 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 12 13:31:10 crc kubenswrapper[4778]: W0312 13:31:10.448814 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c951c6f_06fd_4793_a95b_26b5c1400d73.slice/crio-2c2cc1bf1d75f1a721912ce94ba922071438325eb2db84fb97c7b161e7ac53fa WatchSource:0}: Error finding container 2c2cc1bf1d75f1a721912ce94ba922071438325eb2db84fb97c7b161e7ac53fa: Status 404 returned error can't find the container with id 2c2cc1bf1d75f1a721912ce94ba922071438325eb2db84fb97c7b161e7ac53fa Mar 12 13:31:10 crc kubenswrapper[4778]: I0312 13:31:10.686090 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4wct6" event={"ID":"3b8efd1e-884d-4963-b69f-04ede0a92267","Type":"ContainerStarted","Data":"464be4b7c2eaf4085b25163a77ba143b69ebcc719930ac53a1cbfdbbc77387a5"} Mar 12 13:31:10 crc kubenswrapper[4778]: I0312 13:31:10.687584 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-fr6p2" event={"ID":"68c74642-7beb-4cb9-86bf-b12beafb4b68","Type":"ContainerDied","Data":"f62bb5eb4a941e7684e59f1fde389da442d607aaf8bb4fcbc4a589c4e0b98935"} Mar 12 13:31:10 crc kubenswrapper[4778]: I0312 13:31:10.687669 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-fr6p2" Mar 12 13:31:10 crc kubenswrapper[4778]: I0312 13:31:10.688910 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"7c951c6f-06fd-4793-a95b-26b5c1400d73","Type":"ContainerStarted","Data":"2c2cc1bf1d75f1a721912ce94ba922071438325eb2db84fb97c7b161e7ac53fa"} Mar 12 13:31:10 crc kubenswrapper[4778]: I0312 13:31:10.691399 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"7321e15e-673c-4e0d-80f8-6ac644c1940f","Type":"ContainerStarted","Data":"f36f09305ca04ca2d01c8a383ad06c9da6a054fd0645f5cc5c51b83742f58fac"} Mar 12 13:31:10 crc kubenswrapper[4778]: I0312 13:31:10.692469 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"66ed2760-88a0-4731-a0d1-52cb6cffa2b1","Type":"ContainerStarted","Data":"4e9e8b87b4e8662cb5ee7f6527d7533b6383b322442ecf5f3470e33d6bb4be86"} Mar 12 13:31:10 crc kubenswrapper[4778]: I0312 13:31:10.695239 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"663feb48-0ed1-4947-97c3-e0bac206fdb2","Type":"ContainerStarted","Data":"73048822b2ee0f9b1c1f7f4661f73503814f16f141df2b6300c8112edc68f8fa"} Mar 12 13:31:10 crc kubenswrapper[4778]: I0312 13:31:10.697627 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"fe52f8ba-9053-4733-b2e3-8f1becf437c8","Type":"ContainerStarted","Data":"c915e53efe3d1f80e839ccda66e0bb16e04555c81f0a30ce27190c773550f885"} Mar 12 13:31:10 crc kubenswrapper[4778]: I0312 13:31:10.700977 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-p67vh" event={"ID":"bd159b65-0c66-4809-949e-0f1babbaa8e6","Type":"ContainerStarted","Data":"f3170f6c74a1959c49c161a18d30bea16a54ddcbcbf5342404e2b1ea295b59dd"} Mar 12 13:31:10 crc kubenswrapper[4778]: I0312 13:31:10.702231 4778 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"ec63cc68-6fde-419b-973c-91fc982e6a49","Type":"ContainerStarted","Data":"609ddd8d4d204300e5ea572a906e50c99bca89344b4ff9a98871002711d708b9"} Mar 12 13:31:10 crc kubenswrapper[4778]: E0312 13:31:10.703706 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-2p4pj" podUID="c10b98ea-d832-471e-adb6-c22c4dbb0ab8" Mar 12 13:31:10 crc kubenswrapper[4778]: E0312 13:31:10.703973 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-5ccc8479f9-d9gsf" podUID="78c6f209-08e0-4789-be6e-8c319547338c" Mar 12 13:31:10 crc kubenswrapper[4778]: I0312 13:31:10.739238 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-fr6p2"] Mar 12 13:31:10 crc kubenswrapper[4778]: I0312 13:31:10.762329 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-fr6p2"] Mar 12 13:31:11 crc kubenswrapper[4778]: I0312 13:31:11.340519 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-mcpvm" Mar 12 13:31:11 crc kubenswrapper[4778]: I0312 13:31:11.389377 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b217b876-3c50-4d5e-8c5b-40e3f1d95b6c-config\") pod \"b217b876-3c50-4d5e-8c5b-40e3f1d95b6c\" (UID: \"b217b876-3c50-4d5e-8c5b-40e3f1d95b6c\") " Mar 12 13:31:11 crc kubenswrapper[4778]: I0312 13:31:11.389452 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsbqf\" (UniqueName: \"kubernetes.io/projected/b217b876-3c50-4d5e-8c5b-40e3f1d95b6c-kube-api-access-hsbqf\") pod \"b217b876-3c50-4d5e-8c5b-40e3f1d95b6c\" (UID: \"b217b876-3c50-4d5e-8c5b-40e3f1d95b6c\") " Mar 12 13:31:11 crc kubenswrapper[4778]: I0312 13:31:11.390300 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b217b876-3c50-4d5e-8c5b-40e3f1d95b6c-config" (OuterVolumeSpecName: "config") pod "b217b876-3c50-4d5e-8c5b-40e3f1d95b6c" (UID: "b217b876-3c50-4d5e-8c5b-40e3f1d95b6c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:31:11 crc kubenswrapper[4778]: I0312 13:31:11.401485 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b217b876-3c50-4d5e-8c5b-40e3f1d95b6c-kube-api-access-hsbqf" (OuterVolumeSpecName: "kube-api-access-hsbqf") pod "b217b876-3c50-4d5e-8c5b-40e3f1d95b6c" (UID: "b217b876-3c50-4d5e-8c5b-40e3f1d95b6c"). InnerVolumeSpecName "kube-api-access-hsbqf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:31:11 crc kubenswrapper[4778]: I0312 13:31:11.491489 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsbqf\" (UniqueName: \"kubernetes.io/projected/b217b876-3c50-4d5e-8c5b-40e3f1d95b6c-kube-api-access-hsbqf\") on node \"crc\" DevicePath \"\"" Mar 12 13:31:11 crc kubenswrapper[4778]: I0312 13:31:11.491522 4778 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b217b876-3c50-4d5e-8c5b-40e3f1d95b6c-config\") on node \"crc\" DevicePath \"\"" Mar 12 13:31:11 crc kubenswrapper[4778]: I0312 13:31:11.710441 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-mcpvm" event={"ID":"b217b876-3c50-4d5e-8c5b-40e3f1d95b6c","Type":"ContainerDied","Data":"f54a79ac4265517d4761f6dc0e556ce29441b7767ec9275aa7bd3cc4d56d57eb"} Mar 12 13:31:11 crc kubenswrapper[4778]: I0312 13:31:11.710510 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-mcpvm" Mar 12 13:31:11 crc kubenswrapper[4778]: I0312 13:31:11.780462 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-mcpvm"] Mar 12 13:31:11 crc kubenswrapper[4778]: I0312 13:31:11.790360 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-mcpvm"] Mar 12 13:31:12 crc kubenswrapper[4778]: I0312 13:31:12.263772 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68c74642-7beb-4cb9-86bf-b12beafb4b68" path="/var/lib/kubelet/pods/68c74642-7beb-4cb9-86bf-b12beafb4b68/volumes" Mar 12 13:31:12 crc kubenswrapper[4778]: I0312 13:31:12.264387 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b217b876-3c50-4d5e-8c5b-40e3f1d95b6c" path="/var/lib/kubelet/pods/b217b876-3c50-4d5e-8c5b-40e3f1d95b6c/volumes" Mar 12 13:31:13 crc kubenswrapper[4778]: I0312 13:31:13.730787 4778 generic.go:334] "Generic (PLEG): container finished" podID="663feb48-0ed1-4947-97c3-e0bac206fdb2" containerID="73048822b2ee0f9b1c1f7f4661f73503814f16f141df2b6300c8112edc68f8fa" exitCode=0 Mar 12 13:31:13 crc kubenswrapper[4778]: I0312 13:31:13.730914 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"663feb48-0ed1-4947-97c3-e0bac206fdb2","Type":"ContainerDied","Data":"73048822b2ee0f9b1c1f7f4661f73503814f16f141df2b6300c8112edc68f8fa"} Mar 12 13:31:13 crc kubenswrapper[4778]: I0312 13:31:13.733993 4778 generic.go:334] "Generic (PLEG): container finished" podID="fe52f8ba-9053-4733-b2e3-8f1becf437c8" containerID="c915e53efe3d1f80e839ccda66e0bb16e04555c81f0a30ce27190c773550f885" exitCode=0 Mar 12 13:31:13 crc kubenswrapper[4778]: I0312 13:31:13.734055 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"fe52f8ba-9053-4733-b2e3-8f1becf437c8","Type":"ContainerDied","Data":"c915e53efe3d1f80e839ccda66e0bb16e04555c81f0a30ce27190c773550f885"} Mar 12 13:31:15 crc kubenswrapper[4778]: I0312 13:31:15.755970 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"fe52f8ba-9053-4733-b2e3-8f1becf437c8","Type":"ContainerStarted","Data":"6b923e83418fc8dd0c8a9c1863eed3de992759ea4fff0ea9e932755b38e6a24f"} Mar 12 13:31:15 crc kubenswrapper[4778]: I0312 13:31:15.758815 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"7c951c6f-06fd-4793-a95b-26b5c1400d73","Type":"ContainerStarted","Data":"03daa5981785dae12ef34bc2860b3ebbb499a62f1e5266d3cbe2c29b9bf0010a"} Mar 12 13:31:15 crc kubenswrapper[4778]: I0312 13:31:15.763881 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"7321e15e-673c-4e0d-80f8-6ac644c1940f","Type":"ContainerStarted","Data":"6d8fb734a26f01a2db24b32fcf641a12cbbff7e4c8e2774e61faa1124eea8858"} Mar 12 13:31:15 crc kubenswrapper[4778]: I0312 13:31:15.770669 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"66ed2760-88a0-4731-a0d1-52cb6cffa2b1","Type":"ContainerStarted","Data":"6addcbc9f6e1bd0c36c2127749a9343943bce9503688868083bfb8596a8eda94"} Mar 12 13:31:15 crc kubenswrapper[4778]: I0312 13:31:15.770800 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Mar 12 13:31:15 crc kubenswrapper[4778]: I0312 13:31:15.774359 4778 generic.go:334] "Generic (PLEG): container finished" podID="bd159b65-0c66-4809-949e-0f1babbaa8e6" containerID="3d94d19b11275c13d335ced4fca61c11564e4abcc6e74ebdb428d3364a8ab591" exitCode=0 Mar 12 13:31:15 crc kubenswrapper[4778]: I0312 13:31:15.774430 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-p67vh" 
event={"ID":"bd159b65-0c66-4809-949e-0f1babbaa8e6","Type":"ContainerDied","Data":"3d94d19b11275c13d335ced4fca61c11564e4abcc6e74ebdb428d3364a8ab591"} Mar 12 13:31:15 crc kubenswrapper[4778]: I0312 13:31:15.793822 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"ec63cc68-6fde-419b-973c-91fc982e6a49","Type":"ContainerStarted","Data":"cfc94a4126c33c3c862f26c73b0369926b16397e57b29a1116690c8f54a89d03"} Mar 12 13:31:15 crc kubenswrapper[4778]: I0312 13:31:15.794251 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Mar 12 13:31:15 crc kubenswrapper[4778]: I0312 13:31:15.824070 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"663feb48-0ed1-4947-97c3-e0bac206fdb2","Type":"ContainerStarted","Data":"c75e427a712dc07d0e96cd91a97f38ade2a32cd7a1f6258673afbbaa743ba85d"} Mar 12 13:31:15 crc kubenswrapper[4778]: I0312 13:31:15.840836 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=21.789657983 podStartE2EDuration="28.840818621s" podCreationTimestamp="2026-03-12 13:30:47 +0000 UTC" firstStartedPulling="2026-03-12 13:31:02.842499956 +0000 UTC m=+1281.291195362" lastFinishedPulling="2026-03-12 13:31:09.893660614 +0000 UTC m=+1288.342356000" observedRunningTime="2026-03-12 13:31:15.787038202 +0000 UTC m=+1294.235733598" watchObservedRunningTime="2026-03-12 13:31:15.840818621 +0000 UTC m=+1294.289514017" Mar 12 13:31:15 crc kubenswrapper[4778]: I0312 13:31:15.843215 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4wct6" event={"ID":"3b8efd1e-884d-4963-b69f-04ede0a92267","Type":"ContainerStarted","Data":"891e4089d91ffd634099bdf5e4625738b33178ff8f1fcc2a63c8701621639047"} Mar 12 13:31:15 crc kubenswrapper[4778]: I0312 13:31:15.843365 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ovn-controller-4wct6" Mar 12 13:31:15 crc kubenswrapper[4778]: I0312 13:31:15.877335 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=19.255888923 podStartE2EDuration="24.877313499s" podCreationTimestamp="2026-03-12 13:30:51 +0000 UTC" firstStartedPulling="2026-03-12 13:31:09.788524144 +0000 UTC m=+1288.237219540" lastFinishedPulling="2026-03-12 13:31:15.40994872 +0000 UTC m=+1293.858644116" observedRunningTime="2026-03-12 13:31:15.872612865 +0000 UTC m=+1294.321308261" watchObservedRunningTime="2026-03-12 13:31:15.877313499 +0000 UTC m=+1294.326008885" Mar 12 13:31:15 crc kubenswrapper[4778]: I0312 13:31:15.909011 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=13.611291679 podStartE2EDuration="29.908992979s" podCreationTimestamp="2026-03-12 13:30:46 +0000 UTC" firstStartedPulling="2026-03-12 13:30:53.582983245 +0000 UTC m=+1272.031678641" lastFinishedPulling="2026-03-12 13:31:09.880684545 +0000 UTC m=+1288.329379941" observedRunningTime="2026-03-12 13:31:15.906339914 +0000 UTC m=+1294.355035320" watchObservedRunningTime="2026-03-12 13:31:15.908992979 +0000 UTC m=+1294.357688375" Mar 12 13:31:15 crc kubenswrapper[4778]: I0312 13:31:15.999047 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=22.212567884 podStartE2EDuration="26.999022229s" podCreationTimestamp="2026-03-12 13:30:49 +0000 UTC" firstStartedPulling="2026-03-12 13:31:09.780687341 +0000 UTC m=+1288.229382737" lastFinishedPulling="2026-03-12 13:31:14.567141686 +0000 UTC m=+1293.015837082" observedRunningTime="2026-03-12 13:31:15.942571214 +0000 UTC m=+1294.391266620" watchObservedRunningTime="2026-03-12 13:31:15.999022229 +0000 UTC m=+1294.447717645" Mar 12 13:31:16 crc kubenswrapper[4778]: I0312 13:31:16.001311 4778 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack/ovn-controller-4wct6" podStartSLOduration=17.711347796 podStartE2EDuration="22.001301244s" podCreationTimestamp="2026-03-12 13:30:54 +0000 UTC" firstStartedPulling="2026-03-12 13:31:10.343672048 +0000 UTC m=+1288.792367444" lastFinishedPulling="2026-03-12 13:31:14.633625506 +0000 UTC m=+1293.082320892" observedRunningTime="2026-03-12 13:31:15.975623354 +0000 UTC m=+1294.424318760" watchObservedRunningTime="2026-03-12 13:31:16.001301244 +0000 UTC m=+1294.449996650" Mar 12 13:31:16 crc kubenswrapper[4778]: I0312 13:31:16.848245 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-p67vh" event={"ID":"bd159b65-0c66-4809-949e-0f1babbaa8e6","Type":"ContainerStarted","Data":"66bdc50de15bbeb963b21f1399497b80530e6a48047c84bd0860d72204092943"} Mar 12 13:31:16 crc kubenswrapper[4778]: I0312 13:31:16.848594 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-p67vh" event={"ID":"bd159b65-0c66-4809-949e-0f1babbaa8e6","Type":"ContainerStarted","Data":"ddac7e774dda104b000b6e9560b67447ae0e9b37ef71be096f71fee4dc4966de"} Mar 12 13:31:16 crc kubenswrapper[4778]: E0312 13:31:16.920948 4778 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.32:41514->38.129.56.32:35979: write tcp 38.129.56.32:41514->38.129.56.32:35979: write: broken pipe Mar 12 13:31:17 crc kubenswrapper[4778]: I0312 13:31:17.744089 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Mar 12 13:31:17 crc kubenswrapper[4778]: I0312 13:31:17.744497 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Mar 12 13:31:17 crc kubenswrapper[4778]: I0312 13:31:17.859770 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"1e89dfcc-2ac3-444c-91e8-56991eae096b","Type":"ContainerStarted","Data":"491cf83ea2b0803c619e4110e5a18dd9c9b6e2cc2bfd596357f59a6a18312dee"} Mar 12 13:31:17 crc kubenswrapper[4778]: I0312 13:31:17.859835 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-p67vh" Mar 12 13:31:17 crc kubenswrapper[4778]: I0312 13:31:17.859872 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-p67vh" Mar 12 13:31:17 crc kubenswrapper[4778]: I0312 13:31:17.893868 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-p67vh" podStartSLOduration=19.618817742 podStartE2EDuration="23.893848435s" podCreationTimestamp="2026-03-12 13:30:54 +0000 UTC" firstStartedPulling="2026-03-12 13:31:10.360497447 +0000 UTC m=+1288.809192843" lastFinishedPulling="2026-03-12 13:31:14.63552814 +0000 UTC m=+1293.084223536" observedRunningTime="2026-03-12 13:31:16.869882221 +0000 UTC m=+1295.318577637" watchObservedRunningTime="2026-03-12 13:31:17.893848435 +0000 UTC m=+1296.342543831" Mar 12 13:31:18 crc kubenswrapper[4778]: I0312 13:31:18.868477 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"7321e15e-673c-4e0d-80f8-6ac644c1940f","Type":"ContainerStarted","Data":"9d1bd71357006dd049f38bd4772731fe6910d7e7f99da405a22aae4c83d47a42"} Mar 12 13:31:18 crc kubenswrapper[4778]: I0312 13:31:18.872923 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"7c951c6f-06fd-4793-a95b-26b5c1400d73","Type":"ContainerStarted","Data":"efc582c388e483fa01b639f70125dc0c94e861db5b78afcdb00e31bf1cc42a61"} Mar 12 13:31:18 crc kubenswrapper[4778]: I0312 13:31:18.893726 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=17.142606981 podStartE2EDuration="25.893700614s" podCreationTimestamp="2026-03-12 
13:30:53 +0000 UTC" firstStartedPulling="2026-03-12 13:31:09.737676338 +0000 UTC m=+1288.186371734" lastFinishedPulling="2026-03-12 13:31:18.488769971 +0000 UTC m=+1296.937465367" observedRunningTime="2026-03-12 13:31:18.887267251 +0000 UTC m=+1297.335962677" watchObservedRunningTime="2026-03-12 13:31:18.893700614 +0000 UTC m=+1297.342396020" Mar 12 13:31:18 crc kubenswrapper[4778]: I0312 13:31:18.911748 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=13.887111491 podStartE2EDuration="21.911726467s" podCreationTimestamp="2026-03-12 13:30:57 +0000 UTC" firstStartedPulling="2026-03-12 13:31:10.451872145 +0000 UTC m=+1288.900567541" lastFinishedPulling="2026-03-12 13:31:18.476487131 +0000 UTC m=+1296.925182517" observedRunningTime="2026-03-12 13:31:18.909677448 +0000 UTC m=+1297.358372834" watchObservedRunningTime="2026-03-12 13:31:18.911726467 +0000 UTC m=+1297.360421873" Mar 12 13:31:19 crc kubenswrapper[4778]: I0312 13:31:19.026573 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Mar 12 13:31:19 crc kubenswrapper[4778]: I0312 13:31:19.097365 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Mar 12 13:31:19 crc kubenswrapper[4778]: I0312 13:31:19.097441 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Mar 12 13:31:19 crc kubenswrapper[4778]: I0312 13:31:19.272944 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Mar 12 13:31:19 crc kubenswrapper[4778]: I0312 13:31:19.324320 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Mar 12 13:31:19 crc kubenswrapper[4778]: I0312 13:31:19.431423 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Mar 12 
13:31:19 crc kubenswrapper[4778]: I0312 13:31:19.880053 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Mar 12 13:31:19 crc kubenswrapper[4778]: I0312 13:31:19.932896 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.026574 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.164403 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.204797 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2p4pj"] Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.246948 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-vtt4z"] Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.248241 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-vtt4z" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.252348 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.278390 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-kzfk7"] Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.279855 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-vtt4z"] Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.279978 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-kzfk7" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.282468 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.287738 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-kzfk7"] Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.355068 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/199c7ab7-ef93-4b96-a76c-2476f21795ae-config\") pod \"dnsmasq-dns-7fd796d7df-kzfk7\" (UID: \"199c7ab7-ef93-4b96-a76c-2476f21795ae\") " pod="openstack/dnsmasq-dns-7fd796d7df-kzfk7" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.355458 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8484e5d-6f77-407c-81db-0d9b2a6b37fd-combined-ca-bundle\") pod \"ovn-controller-metrics-vtt4z\" (UID: \"a8484e5d-6f77-407c-81db-0d9b2a6b37fd\") " pod="openstack/ovn-controller-metrics-vtt4z" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.355527 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a8484e5d-6f77-407c-81db-0d9b2a6b37fd-ovn-rundir\") pod \"ovn-controller-metrics-vtt4z\" (UID: \"a8484e5d-6f77-407c-81db-0d9b2a6b37fd\") " pod="openstack/ovn-controller-metrics-vtt4z" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.355569 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmjgm\" (UniqueName: \"kubernetes.io/projected/199c7ab7-ef93-4b96-a76c-2476f21795ae-kube-api-access-jmjgm\") pod \"dnsmasq-dns-7fd796d7df-kzfk7\" (UID: \"199c7ab7-ef93-4b96-a76c-2476f21795ae\") " 
pod="openstack/dnsmasq-dns-7fd796d7df-kzfk7" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.355644 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/199c7ab7-ef93-4b96-a76c-2476f21795ae-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-kzfk7\" (UID: \"199c7ab7-ef93-4b96-a76c-2476f21795ae\") " pod="openstack/dnsmasq-dns-7fd796d7df-kzfk7" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.355672 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdxh5\" (UniqueName: \"kubernetes.io/projected/a8484e5d-6f77-407c-81db-0d9b2a6b37fd-kube-api-access-jdxh5\") pod \"ovn-controller-metrics-vtt4z\" (UID: \"a8484e5d-6f77-407c-81db-0d9b2a6b37fd\") " pod="openstack/ovn-controller-metrics-vtt4z" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.355701 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/199c7ab7-ef93-4b96-a76c-2476f21795ae-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-kzfk7\" (UID: \"199c7ab7-ef93-4b96-a76c-2476f21795ae\") " pod="openstack/dnsmasq-dns-7fd796d7df-kzfk7" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.355727 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8484e5d-6f77-407c-81db-0d9b2a6b37fd-config\") pod \"ovn-controller-metrics-vtt4z\" (UID: \"a8484e5d-6f77-407c-81db-0d9b2a6b37fd\") " pod="openstack/ovn-controller-metrics-vtt4z" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.355783 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a8484e5d-6f77-407c-81db-0d9b2a6b37fd-ovs-rundir\") pod \"ovn-controller-metrics-vtt4z\" (UID: 
\"a8484e5d-6f77-407c-81db-0d9b2a6b37fd\") " pod="openstack/ovn-controller-metrics-vtt4z" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.355809 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8484e5d-6f77-407c-81db-0d9b2a6b37fd-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-vtt4z\" (UID: \"a8484e5d-6f77-407c-81db-0d9b2a6b37fd\") " pod="openstack/ovn-controller-metrics-vtt4z" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.388179 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.458024 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a8484e5d-6f77-407c-81db-0d9b2a6b37fd-ovn-rundir\") pod \"ovn-controller-metrics-vtt4z\" (UID: \"a8484e5d-6f77-407c-81db-0d9b2a6b37fd\") " pod="openstack/ovn-controller-metrics-vtt4z" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.458117 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmjgm\" (UniqueName: \"kubernetes.io/projected/199c7ab7-ef93-4b96-a76c-2476f21795ae-kube-api-access-jmjgm\") pod \"dnsmasq-dns-7fd796d7df-kzfk7\" (UID: \"199c7ab7-ef93-4b96-a76c-2476f21795ae\") " pod="openstack/dnsmasq-dns-7fd796d7df-kzfk7" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.458212 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/199c7ab7-ef93-4b96-a76c-2476f21795ae-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-kzfk7\" (UID: \"199c7ab7-ef93-4b96-a76c-2476f21795ae\") " pod="openstack/dnsmasq-dns-7fd796d7df-kzfk7" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.458247 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-jdxh5\" (UniqueName: \"kubernetes.io/projected/a8484e5d-6f77-407c-81db-0d9b2a6b37fd-kube-api-access-jdxh5\") pod \"ovn-controller-metrics-vtt4z\" (UID: \"a8484e5d-6f77-407c-81db-0d9b2a6b37fd\") " pod="openstack/ovn-controller-metrics-vtt4z" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.458310 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/199c7ab7-ef93-4b96-a76c-2476f21795ae-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-kzfk7\" (UID: \"199c7ab7-ef93-4b96-a76c-2476f21795ae\") " pod="openstack/dnsmasq-dns-7fd796d7df-kzfk7" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.458342 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8484e5d-6f77-407c-81db-0d9b2a6b37fd-config\") pod \"ovn-controller-metrics-vtt4z\" (UID: \"a8484e5d-6f77-407c-81db-0d9b2a6b37fd\") " pod="openstack/ovn-controller-metrics-vtt4z" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.458499 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a8484e5d-6f77-407c-81db-0d9b2a6b37fd-ovs-rundir\") pod \"ovn-controller-metrics-vtt4z\" (UID: \"a8484e5d-6f77-407c-81db-0d9b2a6b37fd\") " pod="openstack/ovn-controller-metrics-vtt4z" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.458564 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8484e5d-6f77-407c-81db-0d9b2a6b37fd-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-vtt4z\" (UID: \"a8484e5d-6f77-407c-81db-0d9b2a6b37fd\") " pod="openstack/ovn-controller-metrics-vtt4z" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.458686 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/199c7ab7-ef93-4b96-a76c-2476f21795ae-config\") pod \"dnsmasq-dns-7fd796d7df-kzfk7\" (UID: \"199c7ab7-ef93-4b96-a76c-2476f21795ae\") " pod="openstack/dnsmasq-dns-7fd796d7df-kzfk7" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.458706 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a8484e5d-6f77-407c-81db-0d9b2a6b37fd-ovn-rundir\") pod \"ovn-controller-metrics-vtt4z\" (UID: \"a8484e5d-6f77-407c-81db-0d9b2a6b37fd\") " pod="openstack/ovn-controller-metrics-vtt4z" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.458737 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8484e5d-6f77-407c-81db-0d9b2a6b37fd-combined-ca-bundle\") pod \"ovn-controller-metrics-vtt4z\" (UID: \"a8484e5d-6f77-407c-81db-0d9b2a6b37fd\") " pod="openstack/ovn-controller-metrics-vtt4z" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.459141 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a8484e5d-6f77-407c-81db-0d9b2a6b37fd-ovs-rundir\") pod \"ovn-controller-metrics-vtt4z\" (UID: \"a8484e5d-6f77-407c-81db-0d9b2a6b37fd\") " pod="openstack/ovn-controller-metrics-vtt4z" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.459180 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/199c7ab7-ef93-4b96-a76c-2476f21795ae-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-kzfk7\" (UID: \"199c7ab7-ef93-4b96-a76c-2476f21795ae\") " pod="openstack/dnsmasq-dns-7fd796d7df-kzfk7" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.459666 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/199c7ab7-ef93-4b96-a76c-2476f21795ae-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-kzfk7\" 
(UID: \"199c7ab7-ef93-4b96-a76c-2476f21795ae\") " pod="openstack/dnsmasq-dns-7fd796d7df-kzfk7" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.459900 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8484e5d-6f77-407c-81db-0d9b2a6b37fd-config\") pod \"ovn-controller-metrics-vtt4z\" (UID: \"a8484e5d-6f77-407c-81db-0d9b2a6b37fd\") " pod="openstack/ovn-controller-metrics-vtt4z" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.459931 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/199c7ab7-ef93-4b96-a76c-2476f21795ae-config\") pod \"dnsmasq-dns-7fd796d7df-kzfk7\" (UID: \"199c7ab7-ef93-4b96-a76c-2476f21795ae\") " pod="openstack/dnsmasq-dns-7fd796d7df-kzfk7" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.468803 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8484e5d-6f77-407c-81db-0d9b2a6b37fd-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-vtt4z\" (UID: \"a8484e5d-6f77-407c-81db-0d9b2a6b37fd\") " pod="openstack/ovn-controller-metrics-vtt4z" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.468837 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8484e5d-6f77-407c-81db-0d9b2a6b37fd-combined-ca-bundle\") pod \"ovn-controller-metrics-vtt4z\" (UID: \"a8484e5d-6f77-407c-81db-0d9b2a6b37fd\") " pod="openstack/ovn-controller-metrics-vtt4z" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.479028 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdxh5\" (UniqueName: \"kubernetes.io/projected/a8484e5d-6f77-407c-81db-0d9b2a6b37fd-kube-api-access-jdxh5\") pod \"ovn-controller-metrics-vtt4z\" (UID: \"a8484e5d-6f77-407c-81db-0d9b2a6b37fd\") " 
pod="openstack/ovn-controller-metrics-vtt4z" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.479531 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmjgm\" (UniqueName: \"kubernetes.io/projected/199c7ab7-ef93-4b96-a76c-2476f21795ae-kube-api-access-jmjgm\") pod \"dnsmasq-dns-7fd796d7df-kzfk7\" (UID: \"199c7ab7-ef93-4b96-a76c-2476f21795ae\") " pod="openstack/dnsmasq-dns-7fd796d7df-kzfk7" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.489724 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.558247 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-d9gsf"] Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.579710 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-jsqnb"] Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.582105 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-vtt4z" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.584087 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-jsqnb" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.588015 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.623310 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-kzfk7" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.638704 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-jsqnb"] Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.661758 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5589g\" (UniqueName: \"kubernetes.io/projected/124fc095-41fd-4e2d-86a1-0aada5c7447f-kube-api-access-5589g\") pod \"dnsmasq-dns-86db49b7ff-jsqnb\" (UID: \"124fc095-41fd-4e2d-86a1-0aada5c7447f\") " pod="openstack/dnsmasq-dns-86db49b7ff-jsqnb" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.661840 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/124fc095-41fd-4e2d-86a1-0aada5c7447f-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-jsqnb\" (UID: \"124fc095-41fd-4e2d-86a1-0aada5c7447f\") " pod="openstack/dnsmasq-dns-86db49b7ff-jsqnb" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.661894 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/124fc095-41fd-4e2d-86a1-0aada5c7447f-config\") pod \"dnsmasq-dns-86db49b7ff-jsqnb\" (UID: \"124fc095-41fd-4e2d-86a1-0aada5c7447f\") " pod="openstack/dnsmasq-dns-86db49b7ff-jsqnb" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.662044 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/124fc095-41fd-4e2d-86a1-0aada5c7447f-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-jsqnb\" (UID: \"124fc095-41fd-4e2d-86a1-0aada5c7447f\") " pod="openstack/dnsmasq-dns-86db49b7ff-jsqnb" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.662098 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/124fc095-41fd-4e2d-86a1-0aada5c7447f-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-jsqnb\" (UID: \"124fc095-41fd-4e2d-86a1-0aada5c7447f\") " pod="openstack/dnsmasq-dns-86db49b7ff-jsqnb" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.740035 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-2p4pj" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.756432 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-886c-account-create-update-c7kqb"] Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.759748 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-886c-account-create-update-c7kqb" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.762712 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.763379 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/124fc095-41fd-4e2d-86a1-0aada5c7447f-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-jsqnb\" (UID: \"124fc095-41fd-4e2d-86a1-0aada5c7447f\") " pod="openstack/dnsmasq-dns-86db49b7ff-jsqnb" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.763447 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/124fc095-41fd-4e2d-86a1-0aada5c7447f-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-jsqnb\" (UID: \"124fc095-41fd-4e2d-86a1-0aada5c7447f\") " pod="openstack/dnsmasq-dns-86db49b7ff-jsqnb" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.763483 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5589g\" (UniqueName: 
\"kubernetes.io/projected/124fc095-41fd-4e2d-86a1-0aada5c7447f-kube-api-access-5589g\") pod \"dnsmasq-dns-86db49b7ff-jsqnb\" (UID: \"124fc095-41fd-4e2d-86a1-0aada5c7447f\") " pod="openstack/dnsmasq-dns-86db49b7ff-jsqnb" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.763525 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/124fc095-41fd-4e2d-86a1-0aada5c7447f-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-jsqnb\" (UID: \"124fc095-41fd-4e2d-86a1-0aada5c7447f\") " pod="openstack/dnsmasq-dns-86db49b7ff-jsqnb" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.763566 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/124fc095-41fd-4e2d-86a1-0aada5c7447f-config\") pod \"dnsmasq-dns-86db49b7ff-jsqnb\" (UID: \"124fc095-41fd-4e2d-86a1-0aada5c7447f\") " pod="openstack/dnsmasq-dns-86db49b7ff-jsqnb" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.764619 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/124fc095-41fd-4e2d-86a1-0aada5c7447f-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-jsqnb\" (UID: \"124fc095-41fd-4e2d-86a1-0aada5c7447f\") " pod="openstack/dnsmasq-dns-86db49b7ff-jsqnb" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.764642 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/124fc095-41fd-4e2d-86a1-0aada5c7447f-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-jsqnb\" (UID: \"124fc095-41fd-4e2d-86a1-0aada5c7447f\") " pod="openstack/dnsmasq-dns-86db49b7ff-jsqnb" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.764922 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/124fc095-41fd-4e2d-86a1-0aada5c7447f-ovsdbserver-nb\") pod 
\"dnsmasq-dns-86db49b7ff-jsqnb\" (UID: \"124fc095-41fd-4e2d-86a1-0aada5c7447f\") " pod="openstack/dnsmasq-dns-86db49b7ff-jsqnb" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.765767 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/124fc095-41fd-4e2d-86a1-0aada5c7447f-config\") pod \"dnsmasq-dns-86db49b7ff-jsqnb\" (UID: \"124fc095-41fd-4e2d-86a1-0aada5c7447f\") " pod="openstack/dnsmasq-dns-86db49b7ff-jsqnb" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.765819 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-886c-account-create-update-c7kqb"] Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.817038 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5589g\" (UniqueName: \"kubernetes.io/projected/124fc095-41fd-4e2d-86a1-0aada5c7447f-kube-api-access-5589g\") pod \"dnsmasq-dns-86db49b7ff-jsqnb\" (UID: \"124fc095-41fd-4e2d-86a1-0aada5c7447f\") " pod="openstack/dnsmasq-dns-86db49b7ff-jsqnb" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.820975 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-gccjh"] Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.821980 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-gccjh" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.829343 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-gccjh"] Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.865792 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c10b98ea-d832-471e-adb6-c22c4dbb0ab8-config\") pod \"c10b98ea-d832-471e-adb6-c22c4dbb0ab8\" (UID: \"c10b98ea-d832-471e-adb6-c22c4dbb0ab8\") " Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.865953 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcntw\" (UniqueName: \"kubernetes.io/projected/c10b98ea-d832-471e-adb6-c22c4dbb0ab8-kube-api-access-zcntw\") pod \"c10b98ea-d832-471e-adb6-c22c4dbb0ab8\" (UID: \"c10b98ea-d832-471e-adb6-c22c4dbb0ab8\") " Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.866012 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c10b98ea-d832-471e-adb6-c22c4dbb0ab8-dns-svc\") pod \"c10b98ea-d832-471e-adb6-c22c4dbb0ab8\" (UID: \"c10b98ea-d832-471e-adb6-c22c4dbb0ab8\") " Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.866162 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfr4k\" (UniqueName: \"kubernetes.io/projected/fc051b32-4b28-4011-9a00-49caa730f074-kube-api-access-wfr4k\") pod \"placement-db-create-gccjh\" (UID: \"fc051b32-4b28-4011-9a00-49caa730f074\") " pod="openstack/placement-db-create-gccjh" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.866279 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc051b32-4b28-4011-9a00-49caa730f074-operator-scripts\") pod 
\"placement-db-create-gccjh\" (UID: \"fc051b32-4b28-4011-9a00-49caa730f074\") " pod="openstack/placement-db-create-gccjh" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.866304 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjpbn\" (UniqueName: \"kubernetes.io/projected/7b329f80-bb88-4c5c-91eb-24394cdcc492-kube-api-access-bjpbn\") pod \"placement-886c-account-create-update-c7kqb\" (UID: \"7b329f80-bb88-4c5c-91eb-24394cdcc492\") " pod="openstack/placement-886c-account-create-update-c7kqb" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.866342 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b329f80-bb88-4c5c-91eb-24394cdcc492-operator-scripts\") pod \"placement-886c-account-create-update-c7kqb\" (UID: \"7b329f80-bb88-4c5c-91eb-24394cdcc492\") " pod="openstack/placement-886c-account-create-update-c7kqb" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.867177 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c10b98ea-d832-471e-adb6-c22c4dbb0ab8-config" (OuterVolumeSpecName: "config") pod "c10b98ea-d832-471e-adb6-c22c4dbb0ab8" (UID: "c10b98ea-d832-471e-adb6-c22c4dbb0ab8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.867229 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c10b98ea-d832-471e-adb6-c22c4dbb0ab8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c10b98ea-d832-471e-adb6-c22c4dbb0ab8" (UID: "c10b98ea-d832-471e-adb6-c22c4dbb0ab8"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.871817 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c10b98ea-d832-471e-adb6-c22c4dbb0ab8-kube-api-access-zcntw" (OuterVolumeSpecName: "kube-api-access-zcntw") pod "c10b98ea-d832-471e-adb6-c22c4dbb0ab8" (UID: "c10b98ea-d832-471e-adb6-c22c4dbb0ab8"). InnerVolumeSpecName "kube-api-access-zcntw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.887630 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-2p4pj" event={"ID":"c10b98ea-d832-471e-adb6-c22c4dbb0ab8","Type":"ContainerDied","Data":"0799732deec1f1a8aef551ea0f0b4139ada27fd6ead6c91498a4273deb0bea7d"} Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.887719 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-2p4pj" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.894375 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03","Type":"ContainerStarted","Data":"4fe9b07cd1599e91138683ca30e9da84b4bd93250ce15e01fd43967606252649"} Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.944146 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-jsqnb" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.949265 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.970551 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc051b32-4b28-4011-9a00-49caa730f074-operator-scripts\") pod \"placement-db-create-gccjh\" (UID: \"fc051b32-4b28-4011-9a00-49caa730f074\") " pod="openstack/placement-db-create-gccjh" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.970616 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjpbn\" (UniqueName: \"kubernetes.io/projected/7b329f80-bb88-4c5c-91eb-24394cdcc492-kube-api-access-bjpbn\") pod \"placement-886c-account-create-update-c7kqb\" (UID: \"7b329f80-bb88-4c5c-91eb-24394cdcc492\") " pod="openstack/placement-886c-account-create-update-c7kqb" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.970728 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b329f80-bb88-4c5c-91eb-24394cdcc492-operator-scripts\") pod \"placement-886c-account-create-update-c7kqb\" (UID: \"7b329f80-bb88-4c5c-91eb-24394cdcc492\") " pod="openstack/placement-886c-account-create-update-c7kqb" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.970790 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfr4k\" (UniqueName: \"kubernetes.io/projected/fc051b32-4b28-4011-9a00-49caa730f074-kube-api-access-wfr4k\") pod \"placement-db-create-gccjh\" (UID: \"fc051b32-4b28-4011-9a00-49caa730f074\") " pod="openstack/placement-db-create-gccjh" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.970894 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-57d769cc4f-2p4pj"] Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.970917 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zcntw\" (UniqueName: \"kubernetes.io/projected/c10b98ea-d832-471e-adb6-c22c4dbb0ab8-kube-api-access-zcntw\") on node \"crc\" DevicePath \"\"" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.970934 4778 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c10b98ea-d832-471e-adb6-c22c4dbb0ab8-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.970975 4778 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c10b98ea-d832-471e-adb6-c22c4dbb0ab8-config\") on node \"crc\" DevicePath \"\"" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.973531 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc051b32-4b28-4011-9a00-49caa730f074-operator-scripts\") pod \"placement-db-create-gccjh\" (UID: \"fc051b32-4b28-4011-9a00-49caa730f074\") " pod="openstack/placement-db-create-gccjh" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.981402 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b329f80-bb88-4c5c-91eb-24394cdcc492-operator-scripts\") pod \"placement-886c-account-create-update-c7kqb\" (UID: \"7b329f80-bb88-4c5c-91eb-24394cdcc492\") " pod="openstack/placement-886c-account-create-update-c7kqb" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.993003 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjpbn\" (UniqueName: \"kubernetes.io/projected/7b329f80-bb88-4c5c-91eb-24394cdcc492-kube-api-access-bjpbn\") pod \"placement-886c-account-create-update-c7kqb\" (UID: \"7b329f80-bb88-4c5c-91eb-24394cdcc492\") " 
pod="openstack/placement-886c-account-create-update-c7kqb" Mar 12 13:31:20 crc kubenswrapper[4778]: I0312 13:31:20.996901 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-d9gsf" Mar 12 13:31:21 crc kubenswrapper[4778]: I0312 13:31:21.001732 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfr4k\" (UniqueName: \"kubernetes.io/projected/fc051b32-4b28-4011-9a00-49caa730f074-kube-api-access-wfr4k\") pod \"placement-db-create-gccjh\" (UID: \"fc051b32-4b28-4011-9a00-49caa730f074\") " pod="openstack/placement-db-create-gccjh" Mar 12 13:31:21 crc kubenswrapper[4778]: I0312 13:31:21.018127 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2p4pj"] Mar 12 13:31:21 crc kubenswrapper[4778]: I0312 13:31:21.072875 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78c6f209-08e0-4789-be6e-8c319547338c-dns-svc\") pod \"78c6f209-08e0-4789-be6e-8c319547338c\" (UID: \"78c6f209-08e0-4789-be6e-8c319547338c\") " Mar 12 13:31:21 crc kubenswrapper[4778]: I0312 13:31:21.072947 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwddk\" (UniqueName: \"kubernetes.io/projected/78c6f209-08e0-4789-be6e-8c319547338c-kube-api-access-hwddk\") pod \"78c6f209-08e0-4789-be6e-8c319547338c\" (UID: \"78c6f209-08e0-4789-be6e-8c319547338c\") " Mar 12 13:31:21 crc kubenswrapper[4778]: I0312 13:31:21.073141 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78c6f209-08e0-4789-be6e-8c319547338c-config\") pod \"78c6f209-08e0-4789-be6e-8c319547338c\" (UID: \"78c6f209-08e0-4789-be6e-8c319547338c\") " Mar 12 13:31:21 crc kubenswrapper[4778]: I0312 13:31:21.074392 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/78c6f209-08e0-4789-be6e-8c319547338c-config" (OuterVolumeSpecName: "config") pod "78c6f209-08e0-4789-be6e-8c319547338c" (UID: "78c6f209-08e0-4789-be6e-8c319547338c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:31:21 crc kubenswrapper[4778]: I0312 13:31:21.076177 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78c6f209-08e0-4789-be6e-8c319547338c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "78c6f209-08e0-4789-be6e-8c319547338c" (UID: "78c6f209-08e0-4789-be6e-8c319547338c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:31:21 crc kubenswrapper[4778]: I0312 13:31:21.081420 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78c6f209-08e0-4789-be6e-8c319547338c-kube-api-access-hwddk" (OuterVolumeSpecName: "kube-api-access-hwddk") pod "78c6f209-08e0-4789-be6e-8c319547338c" (UID: "78c6f209-08e0-4789-be6e-8c319547338c"). InnerVolumeSpecName "kube-api-access-hwddk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:31:21 crc kubenswrapper[4778]: I0312 13:31:21.106652 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Mar 12 13:31:21 crc kubenswrapper[4778]: I0312 13:31:21.108046 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Mar 12 13:31:21 crc kubenswrapper[4778]: I0312 13:31:21.112615 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Mar 12 13:31:21 crc kubenswrapper[4778]: I0312 13:31:21.112797 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Mar 12 13:31:21 crc kubenswrapper[4778]: I0312 13:31:21.112942 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Mar 12 13:31:21 crc kubenswrapper[4778]: I0312 13:31:21.113447 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-5tksr" Mar 12 13:31:21 crc kubenswrapper[4778]: I0312 13:31:21.133824 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 12 13:31:21 crc kubenswrapper[4778]: I0312 13:31:21.161574 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-886c-account-create-update-c7kqb" Mar 12 13:31:21 crc kubenswrapper[4778]: I0312 13:31:21.175167 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b25f9c9-784a-4a52-9bb3-02c6c4592702-config\") pod \"ovn-northd-0\" (UID: \"1b25f9c9-784a-4a52-9bb3-02c6c4592702\") " pod="openstack/ovn-northd-0" Mar 12 13:31:21 crc kubenswrapper[4778]: I0312 13:31:21.175287 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1b25f9c9-784a-4a52-9bb3-02c6c4592702-scripts\") pod \"ovn-northd-0\" (UID: \"1b25f9c9-784a-4a52-9bb3-02c6c4592702\") " pod="openstack/ovn-northd-0" Mar 12 13:31:21 crc kubenswrapper[4778]: I0312 13:31:21.175343 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/1b25f9c9-784a-4a52-9bb3-02c6c4592702-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"1b25f9c9-784a-4a52-9bb3-02c6c4592702\") " pod="openstack/ovn-northd-0" Mar 12 13:31:21 crc kubenswrapper[4778]: I0312 13:31:21.175373 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b25f9c9-784a-4a52-9bb3-02c6c4592702-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"1b25f9c9-784a-4a52-9bb3-02c6c4592702\") " pod="openstack/ovn-northd-0" Mar 12 13:31:21 crc kubenswrapper[4778]: I0312 13:31:21.175450 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1b25f9c9-784a-4a52-9bb3-02c6c4592702-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1b25f9c9-784a-4a52-9bb3-02c6c4592702\") " pod="openstack/ovn-northd-0" Mar 12 13:31:21 crc kubenswrapper[4778]: I0312 13:31:21.175488 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rptgc\" (UniqueName: \"kubernetes.io/projected/1b25f9c9-784a-4a52-9bb3-02c6c4592702-kube-api-access-rptgc\") pod \"ovn-northd-0\" (UID: \"1b25f9c9-784a-4a52-9bb3-02c6c4592702\") " pod="openstack/ovn-northd-0" Mar 12 13:31:21 crc kubenswrapper[4778]: I0312 13:31:21.175526 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b25f9c9-784a-4a52-9bb3-02c6c4592702-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1b25f9c9-784a-4a52-9bb3-02c6c4592702\") " pod="openstack/ovn-northd-0" Mar 12 13:31:21 crc kubenswrapper[4778]: I0312 13:31:21.175584 4778 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78c6f209-08e0-4789-be6e-8c319547338c-config\") on node \"crc\" DevicePath \"\"" Mar 12 13:31:21 crc 
kubenswrapper[4778]: I0312 13:31:21.175608 4778 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78c6f209-08e0-4789-be6e-8c319547338c-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 12 13:31:21 crc kubenswrapper[4778]: I0312 13:31:21.175622 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwddk\" (UniqueName: \"kubernetes.io/projected/78c6f209-08e0-4789-be6e-8c319547338c-kube-api-access-hwddk\") on node \"crc\" DevicePath \"\"" Mar 12 13:31:21 crc kubenswrapper[4778]: I0312 13:31:21.181666 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-gccjh" Mar 12 13:31:21 crc kubenswrapper[4778]: I0312 13:31:21.250655 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-kzfk7"] Mar 12 13:31:21 crc kubenswrapper[4778]: W0312 13:31:21.264376 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod199c7ab7_ef93_4b96_a76c_2476f21795ae.slice/crio-5f1e3433b2aa51609fb7a612d3a57e0e4f8b5c1392d5055466098db076137d25 WatchSource:0}: Error finding container 5f1e3433b2aa51609fb7a612d3a57e0e4f8b5c1392d5055466098db076137d25: Status 404 returned error can't find the container with id 5f1e3433b2aa51609fb7a612d3a57e0e4f8b5c1392d5055466098db076137d25 Mar 12 13:31:21 crc kubenswrapper[4778]: I0312 13:31:21.277724 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b25f9c9-784a-4a52-9bb3-02c6c4592702-config\") pod \"ovn-northd-0\" (UID: \"1b25f9c9-784a-4a52-9bb3-02c6c4592702\") " pod="openstack/ovn-northd-0" Mar 12 13:31:21 crc kubenswrapper[4778]: I0312 13:31:21.277785 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1b25f9c9-784a-4a52-9bb3-02c6c4592702-scripts\") pod 
\"ovn-northd-0\" (UID: \"1b25f9c9-784a-4a52-9bb3-02c6c4592702\") " pod="openstack/ovn-northd-0" Mar 12 13:31:21 crc kubenswrapper[4778]: I0312 13:31:21.277831 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b25f9c9-784a-4a52-9bb3-02c6c4592702-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"1b25f9c9-784a-4a52-9bb3-02c6c4592702\") " pod="openstack/ovn-northd-0" Mar 12 13:31:21 crc kubenswrapper[4778]: I0312 13:31:21.277861 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b25f9c9-784a-4a52-9bb3-02c6c4592702-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"1b25f9c9-784a-4a52-9bb3-02c6c4592702\") " pod="openstack/ovn-northd-0" Mar 12 13:31:21 crc kubenswrapper[4778]: I0312 13:31:21.277916 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1b25f9c9-784a-4a52-9bb3-02c6c4592702-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1b25f9c9-784a-4a52-9bb3-02c6c4592702\") " pod="openstack/ovn-northd-0" Mar 12 13:31:21 crc kubenswrapper[4778]: I0312 13:31:21.277948 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rptgc\" (UniqueName: \"kubernetes.io/projected/1b25f9c9-784a-4a52-9bb3-02c6c4592702-kube-api-access-rptgc\") pod \"ovn-northd-0\" (UID: \"1b25f9c9-784a-4a52-9bb3-02c6c4592702\") " pod="openstack/ovn-northd-0" Mar 12 13:31:21 crc kubenswrapper[4778]: I0312 13:31:21.277985 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b25f9c9-784a-4a52-9bb3-02c6c4592702-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1b25f9c9-784a-4a52-9bb3-02c6c4592702\") " pod="openstack/ovn-northd-0" Mar 12 13:31:21 crc kubenswrapper[4778]: I0312 13:31:21.280049 4778 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b25f9c9-784a-4a52-9bb3-02c6c4592702-config\") pod \"ovn-northd-0\" (UID: \"1b25f9c9-784a-4a52-9bb3-02c6c4592702\") " pod="openstack/ovn-northd-0" Mar 12 13:31:21 crc kubenswrapper[4778]: I0312 13:31:21.281840 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1b25f9c9-784a-4a52-9bb3-02c6c4592702-scripts\") pod \"ovn-northd-0\" (UID: \"1b25f9c9-784a-4a52-9bb3-02c6c4592702\") " pod="openstack/ovn-northd-0" Mar 12 13:31:21 crc kubenswrapper[4778]: I0312 13:31:21.282460 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b25f9c9-784a-4a52-9bb3-02c6c4592702-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1b25f9c9-784a-4a52-9bb3-02c6c4592702\") " pod="openstack/ovn-northd-0" Mar 12 13:31:21 crc kubenswrapper[4778]: I0312 13:31:21.282471 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b25f9c9-784a-4a52-9bb3-02c6c4592702-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"1b25f9c9-784a-4a52-9bb3-02c6c4592702\") " pod="openstack/ovn-northd-0" Mar 12 13:31:21 crc kubenswrapper[4778]: I0312 13:31:21.283094 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1b25f9c9-784a-4a52-9bb3-02c6c4592702-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1b25f9c9-784a-4a52-9bb3-02c6c4592702\") " pod="openstack/ovn-northd-0" Mar 12 13:31:21 crc kubenswrapper[4778]: I0312 13:31:21.288436 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b25f9c9-784a-4a52-9bb3-02c6c4592702-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"1b25f9c9-784a-4a52-9bb3-02c6c4592702\") " 
pod="openstack/ovn-northd-0" Mar 12 13:31:21 crc kubenswrapper[4778]: I0312 13:31:21.303939 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rptgc\" (UniqueName: \"kubernetes.io/projected/1b25f9c9-784a-4a52-9bb3-02c6c4592702-kube-api-access-rptgc\") pod \"ovn-northd-0\" (UID: \"1b25f9c9-784a-4a52-9bb3-02c6c4592702\") " pod="openstack/ovn-northd-0" Mar 12 13:31:21 crc kubenswrapper[4778]: I0312 13:31:21.389547 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-vtt4z"] Mar 12 13:31:21 crc kubenswrapper[4778]: I0312 13:31:21.426948 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-886c-account-create-update-c7kqb"] Mar 12 13:31:21 crc kubenswrapper[4778]: I0312 13:31:21.434733 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Mar 12 13:31:21 crc kubenswrapper[4778]: I0312 13:31:21.542650 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-jsqnb"] Mar 12 13:31:21 crc kubenswrapper[4778]: W0312 13:31:21.581348 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod124fc095_41fd_4e2d_86a1_0aada5c7447f.slice/crio-a6bf49931f5dff6309127ba2cd8c8cdab17c7cf48d3d19393ec4e377e3b30b19 WatchSource:0}: Error finding container a6bf49931f5dff6309127ba2cd8c8cdab17c7cf48d3d19393ec4e377e3b30b19: Status 404 returned error can't find the container with id a6bf49931f5dff6309127ba2cd8c8cdab17c7cf48d3d19393ec4e377e3b30b19 Mar 12 13:31:21 crc kubenswrapper[4778]: I0312 13:31:21.618547 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Mar 12 13:31:21 crc kubenswrapper[4778]: I0312 13:31:21.631411 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-gccjh"] Mar 12 13:31:21 crc kubenswrapper[4778]: I0312 
13:31:21.734910 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-kzfk7"] Mar 12 13:31:21 crc kubenswrapper[4778]: I0312 13:31:21.774144 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-8rkss"] Mar 12 13:31:21 crc kubenswrapper[4778]: I0312 13:31:21.782127 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-8rkss"] Mar 12 13:31:21 crc kubenswrapper[4778]: I0312 13:31:21.782247 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-8rkss" Mar 12 13:31:22 crc kubenswrapper[4778]: I0312 13:31:22.005619 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-vtt4z" event={"ID":"a8484e5d-6f77-407c-81db-0d9b2a6b37fd","Type":"ContainerStarted","Data":"3d9fbf21819b1d517e03213dbd3933cd631f8d1c8b563683a6040fea42ca66f5"} Mar 12 13:31:22 crc kubenswrapper[4778]: I0312 13:31:22.009257 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-gccjh" event={"ID":"fc051b32-4b28-4011-9a00-49caa730f074","Type":"ContainerStarted","Data":"58e88409e0c9f35402bf7f7052b8bfab0894113d1308d624a95e9e78a692921f"} Mar 12 13:31:22 crc kubenswrapper[4778]: I0312 13:31:22.011366 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-kzfk7" event={"ID":"199c7ab7-ef93-4b96-a76c-2476f21795ae","Type":"ContainerStarted","Data":"5f1e3433b2aa51609fb7a612d3a57e0e4f8b5c1392d5055466098db076137d25"} Mar 12 13:31:22 crc kubenswrapper[4778]: I0312 13:31:22.014888 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-jsqnb" event={"ID":"124fc095-41fd-4e2d-86a1-0aada5c7447f","Type":"ContainerStarted","Data":"a6bf49931f5dff6309127ba2cd8c8cdab17c7cf48d3d19393ec4e377e3b30b19"} Mar 12 13:31:22 crc kubenswrapper[4778]: I0312 13:31:22.016897 4778 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/placement-886c-account-create-update-c7kqb" event={"ID":"7b329f80-bb88-4c5c-91eb-24394cdcc492","Type":"ContainerStarted","Data":"4931bee01e54e487e69bca458cb118194d8730ef5cb3f2125d81402dd58d404b"} Mar 12 13:31:22 crc kubenswrapper[4778]: I0312 13:31:22.021880 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-d9gsf" Mar 12 13:31:22 crc kubenswrapper[4778]: I0312 13:31:22.021873 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-d9gsf" event={"ID":"78c6f209-08e0-4789-be6e-8c319547338c","Type":"ContainerDied","Data":"e3e2879145875855639170cdeac27dda0895e629f9eadf854b4f0adb8048db0a"} Mar 12 13:31:22 crc kubenswrapper[4778]: I0312 13:31:22.022752 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Mar 12 13:31:22 crc kubenswrapper[4778]: I0312 13:31:22.090270 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r26t5\" (UniqueName: \"kubernetes.io/projected/5dd405d8-c82b-49d0-a871-1c7c847638df-kube-api-access-r26t5\") pod \"dnsmasq-dns-698758b865-8rkss\" (UID: \"5dd405d8-c82b-49d0-a871-1c7c847638df\") " pod="openstack/dnsmasq-dns-698758b865-8rkss" Mar 12 13:31:22 crc kubenswrapper[4778]: I0312 13:31:22.090690 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5dd405d8-c82b-49d0-a871-1c7c847638df-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-8rkss\" (UID: \"5dd405d8-c82b-49d0-a871-1c7c847638df\") " pod="openstack/dnsmasq-dns-698758b865-8rkss" Mar 12 13:31:22 crc kubenswrapper[4778]: I0312 13:31:22.090730 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5dd405d8-c82b-49d0-a871-1c7c847638df-dns-svc\") pod 
\"dnsmasq-dns-698758b865-8rkss\" (UID: \"5dd405d8-c82b-49d0-a871-1c7c847638df\") " pod="openstack/dnsmasq-dns-698758b865-8rkss" Mar 12 13:31:22 crc kubenswrapper[4778]: I0312 13:31:22.090775 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5dd405d8-c82b-49d0-a871-1c7c847638df-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-8rkss\" (UID: \"5dd405d8-c82b-49d0-a871-1c7c847638df\") " pod="openstack/dnsmasq-dns-698758b865-8rkss" Mar 12 13:31:22 crc kubenswrapper[4778]: I0312 13:31:22.090866 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5dd405d8-c82b-49d0-a871-1c7c847638df-config\") pod \"dnsmasq-dns-698758b865-8rkss\" (UID: \"5dd405d8-c82b-49d0-a871-1c7c847638df\") " pod="openstack/dnsmasq-dns-698758b865-8rkss" Mar 12 13:31:22 crc kubenswrapper[4778]: I0312 13:31:22.144458 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-d9gsf"] Mar 12 13:31:22 crc kubenswrapper[4778]: I0312 13:31:22.153661 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 12 13:31:22 crc kubenswrapper[4778]: I0312 13:31:22.160751 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-d9gsf"] Mar 12 13:31:22 crc kubenswrapper[4778]: I0312 13:31:22.192276 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5dd405d8-c82b-49d0-a871-1c7c847638df-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-8rkss\" (UID: \"5dd405d8-c82b-49d0-a871-1c7c847638df\") " pod="openstack/dnsmasq-dns-698758b865-8rkss" Mar 12 13:31:22 crc kubenswrapper[4778]: I0312 13:31:22.192333 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/5dd405d8-c82b-49d0-a871-1c7c847638df-dns-svc\") pod \"dnsmasq-dns-698758b865-8rkss\" (UID: \"5dd405d8-c82b-49d0-a871-1c7c847638df\") " pod="openstack/dnsmasq-dns-698758b865-8rkss" Mar 12 13:31:22 crc kubenswrapper[4778]: I0312 13:31:22.192409 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5dd405d8-c82b-49d0-a871-1c7c847638df-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-8rkss\" (UID: \"5dd405d8-c82b-49d0-a871-1c7c847638df\") " pod="openstack/dnsmasq-dns-698758b865-8rkss" Mar 12 13:31:22 crc kubenswrapper[4778]: I0312 13:31:22.192544 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5dd405d8-c82b-49d0-a871-1c7c847638df-config\") pod \"dnsmasq-dns-698758b865-8rkss\" (UID: \"5dd405d8-c82b-49d0-a871-1c7c847638df\") " pod="openstack/dnsmasq-dns-698758b865-8rkss" Mar 12 13:31:22 crc kubenswrapper[4778]: I0312 13:31:22.192616 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r26t5\" (UniqueName: \"kubernetes.io/projected/5dd405d8-c82b-49d0-a871-1c7c847638df-kube-api-access-r26t5\") pod \"dnsmasq-dns-698758b865-8rkss\" (UID: \"5dd405d8-c82b-49d0-a871-1c7c847638df\") " pod="openstack/dnsmasq-dns-698758b865-8rkss" Mar 12 13:31:22 crc kubenswrapper[4778]: I0312 13:31:22.195336 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5dd405d8-c82b-49d0-a871-1c7c847638df-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-8rkss\" (UID: \"5dd405d8-c82b-49d0-a871-1c7c847638df\") " pod="openstack/dnsmasq-dns-698758b865-8rkss" Mar 12 13:31:22 crc kubenswrapper[4778]: I0312 13:31:22.196022 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5dd405d8-c82b-49d0-a871-1c7c847638df-dns-svc\") pod 
\"dnsmasq-dns-698758b865-8rkss\" (UID: \"5dd405d8-c82b-49d0-a871-1c7c847638df\") " pod="openstack/dnsmasq-dns-698758b865-8rkss" Mar 12 13:31:22 crc kubenswrapper[4778]: I0312 13:31:22.202359 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5dd405d8-c82b-49d0-a871-1c7c847638df-config\") pod \"dnsmasq-dns-698758b865-8rkss\" (UID: \"5dd405d8-c82b-49d0-a871-1c7c847638df\") " pod="openstack/dnsmasq-dns-698758b865-8rkss" Mar 12 13:31:22 crc kubenswrapper[4778]: I0312 13:31:22.202472 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5dd405d8-c82b-49d0-a871-1c7c847638df-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-8rkss\" (UID: \"5dd405d8-c82b-49d0-a871-1c7c847638df\") " pod="openstack/dnsmasq-dns-698758b865-8rkss" Mar 12 13:31:22 crc kubenswrapper[4778]: I0312 13:31:22.216540 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Mar 12 13:31:22 crc kubenswrapper[4778]: I0312 13:31:22.221763 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r26t5\" (UniqueName: \"kubernetes.io/projected/5dd405d8-c82b-49d0-a871-1c7c847638df-kube-api-access-r26t5\") pod \"dnsmasq-dns-698758b865-8rkss\" (UID: \"5dd405d8-c82b-49d0-a871-1c7c847638df\") " pod="openstack/dnsmasq-dns-698758b865-8rkss" Mar 12 13:31:22 crc kubenswrapper[4778]: I0312 13:31:22.274651 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78c6f209-08e0-4789-be6e-8c319547338c" path="/var/lib/kubelet/pods/78c6f209-08e0-4789-be6e-8c319547338c/volumes" Mar 12 13:31:22 crc kubenswrapper[4778]: I0312 13:31:22.276580 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c10b98ea-d832-471e-adb6-c22c4dbb0ab8" path="/var/lib/kubelet/pods/c10b98ea-d832-471e-adb6-c22c4dbb0ab8/volumes" Mar 12 13:31:22 crc kubenswrapper[4778]: I0312 
13:31:22.362163 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-8rkss" Mar 12 13:31:22 crc kubenswrapper[4778]: I0312 13:31:22.870478 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-8rkss"] Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.034131 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1b25f9c9-784a-4a52-9bb3-02c6c4592702","Type":"ContainerStarted","Data":"6b6d08caea52a77ace32f1739d7850b5128d6bc8c08fd6572fc7842be0f72e21"} Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.044698 4778 generic.go:334] "Generic (PLEG): container finished" podID="124fc095-41fd-4e2d-86a1-0aada5c7447f" containerID="2c4c2a78bfb0c622d1943285b717a26c265f52cecb8566f14114c1ff4b03e4c9" exitCode=0 Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.044798 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-jsqnb" event={"ID":"124fc095-41fd-4e2d-86a1-0aada5c7447f","Type":"ContainerDied","Data":"2c4c2a78bfb0c622d1943285b717a26c265f52cecb8566f14114c1ff4b03e4c9"} Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.048222 4778 generic.go:334] "Generic (PLEG): container finished" podID="7b329f80-bb88-4c5c-91eb-24394cdcc492" containerID="2c45c4ddf823adba305999f51111b5e3abaff88105a2366fb93304b13b53f40d" exitCode=0 Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.048411 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-886c-account-create-update-c7kqb" event={"ID":"7b329f80-bb88-4c5c-91eb-24394cdcc492","Type":"ContainerDied","Data":"2c45c4ddf823adba305999f51111b5e3abaff88105a2366fb93304b13b53f40d"} Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.053003 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-8rkss" 
event={"ID":"5dd405d8-c82b-49d0-a871-1c7c847638df","Type":"ContainerStarted","Data":"0cf55f4c77e0e83cbfd4fa4c9d04d1940beb400c64b78ffec689c21b7bd18ebf"} Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.054338 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-vtt4z" event={"ID":"a8484e5d-6f77-407c-81db-0d9b2a6b37fd","Type":"ContainerStarted","Data":"53eaf5195f1cd4ce28bf1a0b3cd6d8b74553ecbdd4b4f588c17d138d4544e87a"} Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.058463 4778 generic.go:334] "Generic (PLEG): container finished" podID="fc051b32-4b28-4011-9a00-49caa730f074" containerID="d8bdc9c2c4e5e8d5384ac13e3814d6ad0bf996923ba03462051d4c078107d461" exitCode=0 Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.058618 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-gccjh" event={"ID":"fc051b32-4b28-4011-9a00-49caa730f074","Type":"ContainerDied","Data":"d8bdc9c2c4e5e8d5384ac13e3814d6ad0bf996923ba03462051d4c078107d461"} Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.060398 4778 generic.go:334] "Generic (PLEG): container finished" podID="199c7ab7-ef93-4b96-a76c-2476f21795ae" containerID="4d8c29e71c21a3ab92dc60d0b4d6da588df8f7c85dce7dcda5be210ad42005de" exitCode=0 Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.061413 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-kzfk7" event={"ID":"199c7ab7-ef93-4b96-a76c-2476f21795ae","Type":"ContainerDied","Data":"4d8c29e71c21a3ab92dc60d0b4d6da588df8f7c85dce7dcda5be210ad42005de"} Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.119379 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.136678 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.136816 4778 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/swift-storage-0" Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.142394 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.145865 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-kfb99" Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.151544 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.153637 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.179319 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-vtt4z" podStartSLOduration=3.179292738 podStartE2EDuration="3.179292738s" podCreationTimestamp="2026-03-12 13:31:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:31:23.158683042 +0000 UTC m=+1301.607378428" watchObservedRunningTime="2026-03-12 13:31:23.179292738 +0000 UTC m=+1301.627988134" Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.313061 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c01f943c-e09c-4727-8cf7-eec58a56b363-lock\") pod \"swift-storage-0\" (UID: \"c01f943c-e09c-4727-8cf7-eec58a56b363\") " pod="openstack/swift-storage-0" Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.313178 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c01f943c-e09c-4727-8cf7-eec58a56b363-cache\") pod \"swift-storage-0\" (UID: \"c01f943c-e09c-4727-8cf7-eec58a56b363\") " 
pod="openstack/swift-storage-0" Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.313220 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"swift-storage-0\" (UID: \"c01f943c-e09c-4727-8cf7-eec58a56b363\") " pod="openstack/swift-storage-0" Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.313246 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c01f943c-e09c-4727-8cf7-eec58a56b363-etc-swift\") pod \"swift-storage-0\" (UID: \"c01f943c-e09c-4727-8cf7-eec58a56b363\") " pod="openstack/swift-storage-0" Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.313266 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c01f943c-e09c-4727-8cf7-eec58a56b363-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"c01f943c-e09c-4727-8cf7-eec58a56b363\") " pod="openstack/swift-storage-0" Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.313524 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xltnr\" (UniqueName: \"kubernetes.io/projected/c01f943c-e09c-4727-8cf7-eec58a56b363-kube-api-access-xltnr\") pod \"swift-storage-0\" (UID: \"c01f943c-e09c-4727-8cf7-eec58a56b363\") " pod="openstack/swift-storage-0" Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.414918 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c01f943c-e09c-4727-8cf7-eec58a56b363-lock\") pod \"swift-storage-0\" (UID: \"c01f943c-e09c-4727-8cf7-eec58a56b363\") " pod="openstack/swift-storage-0" Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.415058 4778 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c01f943c-e09c-4727-8cf7-eec58a56b363-cache\") pod \"swift-storage-0\" (UID: \"c01f943c-e09c-4727-8cf7-eec58a56b363\") " pod="openstack/swift-storage-0" Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.415105 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"swift-storage-0\" (UID: \"c01f943c-e09c-4727-8cf7-eec58a56b363\") " pod="openstack/swift-storage-0" Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.415130 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c01f943c-e09c-4727-8cf7-eec58a56b363-etc-swift\") pod \"swift-storage-0\" (UID: \"c01f943c-e09c-4727-8cf7-eec58a56b363\") " pod="openstack/swift-storage-0" Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.415164 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c01f943c-e09c-4727-8cf7-eec58a56b363-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"c01f943c-e09c-4727-8cf7-eec58a56b363\") " pod="openstack/swift-storage-0" Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.415201 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xltnr\" (UniqueName: \"kubernetes.io/projected/c01f943c-e09c-4727-8cf7-eec58a56b363-kube-api-access-xltnr\") pod \"swift-storage-0\" (UID: \"c01f943c-e09c-4727-8cf7-eec58a56b363\") " pod="openstack/swift-storage-0" Mar 12 13:31:23 crc kubenswrapper[4778]: E0312 13:31:23.417022 4778 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 12 13:31:23 crc kubenswrapper[4778]: E0312 13:31:23.417050 4778 projected.go:194] Error preparing data for projected volume 
etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 12 13:31:23 crc kubenswrapper[4778]: E0312 13:31:23.417099 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c01f943c-e09c-4727-8cf7-eec58a56b363-etc-swift podName:c01f943c-e09c-4727-8cf7-eec58a56b363 nodeName:}" failed. No retries permitted until 2026-03-12 13:31:23.917083119 +0000 UTC m=+1302.365778515 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c01f943c-e09c-4727-8cf7-eec58a56b363-etc-swift") pod "swift-storage-0" (UID: "c01f943c-e09c-4727-8cf7-eec58a56b363") : configmap "swift-ring-files" not found Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.417101 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c01f943c-e09c-4727-8cf7-eec58a56b363-lock\") pod \"swift-storage-0\" (UID: \"c01f943c-e09c-4727-8cf7-eec58a56b363\") " pod="openstack/swift-storage-0" Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.417268 4778 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"swift-storage-0\" (UID: \"c01f943c-e09c-4727-8cf7-eec58a56b363\") device mount path \"/mnt/openstack/pv18\"" pod="openstack/swift-storage-0" Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.417629 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c01f943c-e09c-4727-8cf7-eec58a56b363-cache\") pod \"swift-storage-0\" (UID: \"c01f943c-e09c-4727-8cf7-eec58a56b363\") " pod="openstack/swift-storage-0" Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.422874 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c01f943c-e09c-4727-8cf7-eec58a56b363-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"c01f943c-e09c-4727-8cf7-eec58a56b363\") " pod="openstack/swift-storage-0" Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.432193 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xltnr\" (UniqueName: \"kubernetes.io/projected/c01f943c-e09c-4727-8cf7-eec58a56b363-kube-api-access-xltnr\") pod \"swift-storage-0\" (UID: \"c01f943c-e09c-4727-8cf7-eec58a56b363\") " pod="openstack/swift-storage-0" Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.437994 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"swift-storage-0\" (UID: \"c01f943c-e09c-4727-8cf7-eec58a56b363\") " pod="openstack/swift-storage-0" Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.562261 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-5zdpc"] Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.563380 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-5zdpc" Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.573320 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.573526 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.573632 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.573957 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-5zdpc"] Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.603706 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-5zdpc"] Mar 12 13:31:23 crc kubenswrapper[4778]: E0312 13:31:23.604332 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-wjwff ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-wjwff ring-data-devices scripts swiftconf]: context canceled" pod="openstack/swift-ring-rebalance-5zdpc" podUID="55dae060-74ee-451e-9352-daec701140b2" Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.614327 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-5knbg"] Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.615543 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-5knbg" Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.625858 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-5knbg"] Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.720656 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2edc2c90-f91e-402d-809c-514e9d8a5e04-combined-ca-bundle\") pod \"swift-ring-rebalance-5knbg\" (UID: \"2edc2c90-f91e-402d-809c-514e9d8a5e04\") " pod="openstack/swift-ring-rebalance-5knbg" Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.720702 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2edc2c90-f91e-402d-809c-514e9d8a5e04-dispersionconf\") pod \"swift-ring-rebalance-5knbg\" (UID: \"2edc2c90-f91e-402d-809c-514e9d8a5e04\") " pod="openstack/swift-ring-rebalance-5knbg" Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.720723 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/55dae060-74ee-451e-9352-daec701140b2-dispersionconf\") pod \"swift-ring-rebalance-5zdpc\" (UID: \"55dae060-74ee-451e-9352-daec701140b2\") " pod="openstack/swift-ring-rebalance-5zdpc" Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.720748 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2edc2c90-f91e-402d-809c-514e9d8a5e04-swiftconf\") pod \"swift-ring-rebalance-5knbg\" (UID: \"2edc2c90-f91e-402d-809c-514e9d8a5e04\") " pod="openstack/swift-ring-rebalance-5knbg" Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.720871 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-jlzct\" (UniqueName: \"kubernetes.io/projected/2edc2c90-f91e-402d-809c-514e9d8a5e04-kube-api-access-jlzct\") pod \"swift-ring-rebalance-5knbg\" (UID: \"2edc2c90-f91e-402d-809c-514e9d8a5e04\") " pod="openstack/swift-ring-rebalance-5knbg" Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.720910 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/55dae060-74ee-451e-9352-daec701140b2-etc-swift\") pod \"swift-ring-rebalance-5zdpc\" (UID: \"55dae060-74ee-451e-9352-daec701140b2\") " pod="openstack/swift-ring-rebalance-5zdpc" Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.720973 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55dae060-74ee-451e-9352-daec701140b2-scripts\") pod \"swift-ring-rebalance-5zdpc\" (UID: \"55dae060-74ee-451e-9352-daec701140b2\") " pod="openstack/swift-ring-rebalance-5zdpc" Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.721083 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2edc2c90-f91e-402d-809c-514e9d8a5e04-scripts\") pod \"swift-ring-rebalance-5knbg\" (UID: \"2edc2c90-f91e-402d-809c-514e9d8a5e04\") " pod="openstack/swift-ring-rebalance-5knbg" Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.721188 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2edc2c90-f91e-402d-809c-514e9d8a5e04-etc-swift\") pod \"swift-ring-rebalance-5knbg\" (UID: \"2edc2c90-f91e-402d-809c-514e9d8a5e04\") " pod="openstack/swift-ring-rebalance-5knbg" Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.721329 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2edc2c90-f91e-402d-809c-514e9d8a5e04-ring-data-devices\") pod \"swift-ring-rebalance-5knbg\" (UID: \"2edc2c90-f91e-402d-809c-514e9d8a5e04\") " pod="openstack/swift-ring-rebalance-5knbg" Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.721366 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/55dae060-74ee-451e-9352-daec701140b2-ring-data-devices\") pod \"swift-ring-rebalance-5zdpc\" (UID: \"55dae060-74ee-451e-9352-daec701140b2\") " pod="openstack/swift-ring-rebalance-5zdpc" Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.721450 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/55dae060-74ee-451e-9352-daec701140b2-swiftconf\") pod \"swift-ring-rebalance-5zdpc\" (UID: \"55dae060-74ee-451e-9352-daec701140b2\") " pod="openstack/swift-ring-rebalance-5zdpc" Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.721471 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55dae060-74ee-451e-9352-daec701140b2-combined-ca-bundle\") pod \"swift-ring-rebalance-5zdpc\" (UID: \"55dae060-74ee-451e-9352-daec701140b2\") " pod="openstack/swift-ring-rebalance-5zdpc" Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.721503 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjwff\" (UniqueName: \"kubernetes.io/projected/55dae060-74ee-451e-9352-daec701140b2-kube-api-access-wjwff\") pod \"swift-ring-rebalance-5zdpc\" (UID: \"55dae060-74ee-451e-9352-daec701140b2\") " pod="openstack/swift-ring-rebalance-5zdpc" Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.823041 4778 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2edc2c90-f91e-402d-809c-514e9d8a5e04-scripts\") pod \"swift-ring-rebalance-5knbg\" (UID: \"2edc2c90-f91e-402d-809c-514e9d8a5e04\") " pod="openstack/swift-ring-rebalance-5knbg"
Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.823104 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2edc2c90-f91e-402d-809c-514e9d8a5e04-etc-swift\") pod \"swift-ring-rebalance-5knbg\" (UID: \"2edc2c90-f91e-402d-809c-514e9d8a5e04\") " pod="openstack/swift-ring-rebalance-5knbg"
Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.823143 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2edc2c90-f91e-402d-809c-514e9d8a5e04-ring-data-devices\") pod \"swift-ring-rebalance-5knbg\" (UID: \"2edc2c90-f91e-402d-809c-514e9d8a5e04\") " pod="openstack/swift-ring-rebalance-5knbg"
Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.823163 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/55dae060-74ee-451e-9352-daec701140b2-ring-data-devices\") pod \"swift-ring-rebalance-5zdpc\" (UID: \"55dae060-74ee-451e-9352-daec701140b2\") " pod="openstack/swift-ring-rebalance-5zdpc"
Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.823197 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/55dae060-74ee-451e-9352-daec701140b2-swiftconf\") pod \"swift-ring-rebalance-5zdpc\" (UID: \"55dae060-74ee-451e-9352-daec701140b2\") " pod="openstack/swift-ring-rebalance-5zdpc"
Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.823850 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2edc2c90-f91e-402d-809c-514e9d8a5e04-etc-swift\") pod \"swift-ring-rebalance-5knbg\" (UID: \"2edc2c90-f91e-402d-809c-514e9d8a5e04\") " pod="openstack/swift-ring-rebalance-5knbg"
Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.824040 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2edc2c90-f91e-402d-809c-514e9d8a5e04-scripts\") pod \"swift-ring-rebalance-5knbg\" (UID: \"2edc2c90-f91e-402d-809c-514e9d8a5e04\") " pod="openstack/swift-ring-rebalance-5knbg"
Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.824053 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/55dae060-74ee-451e-9352-daec701140b2-ring-data-devices\") pod \"swift-ring-rebalance-5zdpc\" (UID: \"55dae060-74ee-451e-9352-daec701140b2\") " pod="openstack/swift-ring-rebalance-5zdpc"
Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.824051 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2edc2c90-f91e-402d-809c-514e9d8a5e04-ring-data-devices\") pod \"swift-ring-rebalance-5knbg\" (UID: \"2edc2c90-f91e-402d-809c-514e9d8a5e04\") " pod="openstack/swift-ring-rebalance-5knbg"
Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.824096 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55dae060-74ee-451e-9352-daec701140b2-combined-ca-bundle\") pod \"swift-ring-rebalance-5zdpc\" (UID: \"55dae060-74ee-451e-9352-daec701140b2\") " pod="openstack/swift-ring-rebalance-5zdpc"
Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.824167 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjwff\" (UniqueName: \"kubernetes.io/projected/55dae060-74ee-451e-9352-daec701140b2-kube-api-access-wjwff\") pod \"swift-ring-rebalance-5zdpc\" (UID: \"55dae060-74ee-451e-9352-daec701140b2\") " pod="openstack/swift-ring-rebalance-5zdpc"
Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.824637 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2edc2c90-f91e-402d-809c-514e9d8a5e04-combined-ca-bundle\") pod \"swift-ring-rebalance-5knbg\" (UID: \"2edc2c90-f91e-402d-809c-514e9d8a5e04\") " pod="openstack/swift-ring-rebalance-5knbg"
Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.824712 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2edc2c90-f91e-402d-809c-514e9d8a5e04-dispersionconf\") pod \"swift-ring-rebalance-5knbg\" (UID: \"2edc2c90-f91e-402d-809c-514e9d8a5e04\") " pod="openstack/swift-ring-rebalance-5knbg"
Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.824752 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/55dae060-74ee-451e-9352-daec701140b2-dispersionconf\") pod \"swift-ring-rebalance-5zdpc\" (UID: \"55dae060-74ee-451e-9352-daec701140b2\") " pod="openstack/swift-ring-rebalance-5zdpc"
Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.824816 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2edc2c90-f91e-402d-809c-514e9d8a5e04-swiftconf\") pod \"swift-ring-rebalance-5knbg\" (UID: \"2edc2c90-f91e-402d-809c-514e9d8a5e04\") " pod="openstack/swift-ring-rebalance-5knbg"
Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.824874 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/55dae060-74ee-451e-9352-daec701140b2-etc-swift\") pod \"swift-ring-rebalance-5zdpc\" (UID: \"55dae060-74ee-451e-9352-daec701140b2\") " pod="openstack/swift-ring-rebalance-5zdpc"
Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.824900 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlzct\" (UniqueName: \"kubernetes.io/projected/2edc2c90-f91e-402d-809c-514e9d8a5e04-kube-api-access-jlzct\") pod \"swift-ring-rebalance-5knbg\" (UID: \"2edc2c90-f91e-402d-809c-514e9d8a5e04\") " pod="openstack/swift-ring-rebalance-5knbg"
Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.824942 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55dae060-74ee-451e-9352-daec701140b2-scripts\") pod \"swift-ring-rebalance-5zdpc\" (UID: \"55dae060-74ee-451e-9352-daec701140b2\") " pod="openstack/swift-ring-rebalance-5zdpc"
Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.825164 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/55dae060-74ee-451e-9352-daec701140b2-etc-swift\") pod \"swift-ring-rebalance-5zdpc\" (UID: \"55dae060-74ee-451e-9352-daec701140b2\") " pod="openstack/swift-ring-rebalance-5zdpc"
Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.825628 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55dae060-74ee-451e-9352-daec701140b2-scripts\") pod \"swift-ring-rebalance-5zdpc\" (UID: \"55dae060-74ee-451e-9352-daec701140b2\") " pod="openstack/swift-ring-rebalance-5zdpc"
Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.827758 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/55dae060-74ee-451e-9352-daec701140b2-swiftconf\") pod \"swift-ring-rebalance-5zdpc\" (UID: \"55dae060-74ee-451e-9352-daec701140b2\") " pod="openstack/swift-ring-rebalance-5zdpc"
Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.828814 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/55dae060-74ee-451e-9352-daec701140b2-dispersionconf\") pod \"swift-ring-rebalance-5zdpc\" (UID: \"55dae060-74ee-451e-9352-daec701140b2\") " pod="openstack/swift-ring-rebalance-5zdpc"
Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.828833 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2edc2c90-f91e-402d-809c-514e9d8a5e04-dispersionconf\") pod \"swift-ring-rebalance-5knbg\" (UID: \"2edc2c90-f91e-402d-809c-514e9d8a5e04\") " pod="openstack/swift-ring-rebalance-5knbg"
Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.847940 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55dae060-74ee-451e-9352-daec701140b2-combined-ca-bundle\") pod \"swift-ring-rebalance-5zdpc\" (UID: \"55dae060-74ee-451e-9352-daec701140b2\") " pod="openstack/swift-ring-rebalance-5zdpc"
Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.848287 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2edc2c90-f91e-402d-809c-514e9d8a5e04-swiftconf\") pod \"swift-ring-rebalance-5knbg\" (UID: \"2edc2c90-f91e-402d-809c-514e9d8a5e04\") " pod="openstack/swift-ring-rebalance-5knbg"
Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.848356 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2edc2c90-f91e-402d-809c-514e9d8a5e04-combined-ca-bundle\") pod \"swift-ring-rebalance-5knbg\" (UID: \"2edc2c90-f91e-402d-809c-514e9d8a5e04\") " pod="openstack/swift-ring-rebalance-5knbg"
Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.851747 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlzct\" (UniqueName: \"kubernetes.io/projected/2edc2c90-f91e-402d-809c-514e9d8a5e04-kube-api-access-jlzct\") pod \"swift-ring-rebalance-5knbg\" (UID: \"2edc2c90-f91e-402d-809c-514e9d8a5e04\") " pod="openstack/swift-ring-rebalance-5knbg"
Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.859356 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjwff\" (UniqueName: \"kubernetes.io/projected/55dae060-74ee-451e-9352-daec701140b2-kube-api-access-wjwff\") pod \"swift-ring-rebalance-5zdpc\" (UID: \"55dae060-74ee-451e-9352-daec701140b2\") " pod="openstack/swift-ring-rebalance-5zdpc"
Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.926736 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c01f943c-e09c-4727-8cf7-eec58a56b363-etc-swift\") pod \"swift-storage-0\" (UID: \"c01f943c-e09c-4727-8cf7-eec58a56b363\") " pod="openstack/swift-storage-0"
Mar 12 13:31:23 crc kubenswrapper[4778]: E0312 13:31:23.927021 4778 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Mar 12 13:31:23 crc kubenswrapper[4778]: E0312 13:31:23.927035 4778 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Mar 12 13:31:23 crc kubenswrapper[4778]: E0312 13:31:23.927080 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c01f943c-e09c-4727-8cf7-eec58a56b363-etc-swift podName:c01f943c-e09c-4727-8cf7-eec58a56b363 nodeName:}" failed. No retries permitted until 2026-03-12 13:31:24.9270658 +0000 UTC m=+1303.375761196 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c01f943c-e09c-4727-8cf7-eec58a56b363-etc-swift") pod "swift-storage-0" (UID: "c01f943c-e09c-4727-8cf7-eec58a56b363") : configmap "swift-ring-files" not found
Mar 12 13:31:23 crc kubenswrapper[4778]: I0312 13:31:23.936570 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-5knbg"
Mar 12 13:31:24 crc kubenswrapper[4778]: I0312 13:31:24.087132 4778 generic.go:334] "Generic (PLEG): container finished" podID="5dd405d8-c82b-49d0-a871-1c7c847638df" containerID="94ff3282c8f419818bdb2d0b93c2c285da1b67c9dafa1b0134a3349197ba9d96" exitCode=0
Mar 12 13:31:24 crc kubenswrapper[4778]: I0312 13:31:24.088335 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-8rkss" event={"ID":"5dd405d8-c82b-49d0-a871-1c7c847638df","Type":"ContainerDied","Data":"94ff3282c8f419818bdb2d0b93c2c285da1b67c9dafa1b0134a3349197ba9d96"}
Mar 12 13:31:24 crc kubenswrapper[4778]: I0312 13:31:24.088672 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-5zdpc"
Mar 12 13:31:24 crc kubenswrapper[4778]: I0312 13:31:24.112954 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-5zdpc"
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:24.233598 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/55dae060-74ee-451e-9352-daec701140b2-swiftconf\") pod \"55dae060-74ee-451e-9352-daec701140b2\" (UID: \"55dae060-74ee-451e-9352-daec701140b2\") "
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:24.233661 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55dae060-74ee-451e-9352-daec701140b2-scripts\") pod \"55dae060-74ee-451e-9352-daec701140b2\" (UID: \"55dae060-74ee-451e-9352-daec701140b2\") "
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:24.233703 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55dae060-74ee-451e-9352-daec701140b2-combined-ca-bundle\") pod \"55dae060-74ee-451e-9352-daec701140b2\" (UID: \"55dae060-74ee-451e-9352-daec701140b2\") "
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:24.233725 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/55dae060-74ee-451e-9352-daec701140b2-etc-swift\") pod \"55dae060-74ee-451e-9352-daec701140b2\" (UID: \"55dae060-74ee-451e-9352-daec701140b2\") "
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:24.233752 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/55dae060-74ee-451e-9352-daec701140b2-ring-data-devices\") pod \"55dae060-74ee-451e-9352-daec701140b2\" (UID: \"55dae060-74ee-451e-9352-daec701140b2\") "
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:24.233779 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjwff\" (UniqueName: \"kubernetes.io/projected/55dae060-74ee-451e-9352-daec701140b2-kube-api-access-wjwff\") pod \"55dae060-74ee-451e-9352-daec701140b2\" (UID: \"55dae060-74ee-451e-9352-daec701140b2\") "
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:24.233806 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/55dae060-74ee-451e-9352-daec701140b2-dispersionconf\") pod \"55dae060-74ee-451e-9352-daec701140b2\" (UID: \"55dae060-74ee-451e-9352-daec701140b2\") "
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:24.234268 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55dae060-74ee-451e-9352-daec701140b2-scripts" (OuterVolumeSpecName: "scripts") pod "55dae060-74ee-451e-9352-daec701140b2" (UID: "55dae060-74ee-451e-9352-daec701140b2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:24.236851 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55dae060-74ee-451e-9352-daec701140b2-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "55dae060-74ee-451e-9352-daec701140b2" (UID: "55dae060-74ee-451e-9352-daec701140b2"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:24.252324 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55dae060-74ee-451e-9352-daec701140b2-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "55dae060-74ee-451e-9352-daec701140b2" (UID: "55dae060-74ee-451e-9352-daec701140b2"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:24.253918 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55dae060-74ee-451e-9352-daec701140b2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "55dae060-74ee-451e-9352-daec701140b2" (UID: "55dae060-74ee-451e-9352-daec701140b2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:24.254631 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55dae060-74ee-451e-9352-daec701140b2-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "55dae060-74ee-451e-9352-daec701140b2" (UID: "55dae060-74ee-451e-9352-daec701140b2"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:24.259692 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55dae060-74ee-451e-9352-daec701140b2-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "55dae060-74ee-451e-9352-daec701140b2" (UID: "55dae060-74ee-451e-9352-daec701140b2"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:24.268823 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55dae060-74ee-451e-9352-daec701140b2-kube-api-access-wjwff" (OuterVolumeSpecName: "kube-api-access-wjwff") pod "55dae060-74ee-451e-9352-daec701140b2" (UID: "55dae060-74ee-451e-9352-daec701140b2"). InnerVolumeSpecName "kube-api-access-wjwff". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:24.337799 4778 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55dae060-74ee-451e-9352-daec701140b2-scripts\") on node \"crc\" DevicePath \"\""
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:24.337839 4778 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55dae060-74ee-451e-9352-daec701140b2-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:24.337854 4778 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/55dae060-74ee-451e-9352-daec701140b2-etc-swift\") on node \"crc\" DevicePath \"\""
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:24.337865 4778 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/55dae060-74ee-451e-9352-daec701140b2-ring-data-devices\") on node \"crc\" DevicePath \"\""
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:24.337877 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjwff\" (UniqueName: \"kubernetes.io/projected/55dae060-74ee-451e-9352-daec701140b2-kube-api-access-wjwff\") on node \"crc\" DevicePath \"\""
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:24.337889 4778 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/55dae060-74ee-451e-9352-daec701140b2-dispersionconf\") on node \"crc\" DevicePath \"\""
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:24.337899 4778 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/55dae060-74ee-451e-9352-daec701140b2-swiftconf\") on node \"crc\" DevicePath \"\""
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:24.947367 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c01f943c-e09c-4727-8cf7-eec58a56b363-etc-swift\") pod \"swift-storage-0\" (UID: \"c01f943c-e09c-4727-8cf7-eec58a56b363\") " pod="openstack/swift-storage-0"
Mar 12 13:31:31 crc kubenswrapper[4778]: E0312 13:31:24.947616 4778 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Mar 12 13:31:31 crc kubenswrapper[4778]: E0312 13:31:24.947892 4778 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Mar 12 13:31:31 crc kubenswrapper[4778]: E0312 13:31:24.947973 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c01f943c-e09c-4727-8cf7-eec58a56b363-etc-swift podName:c01f943c-e09c-4727-8cf7-eec58a56b363 nodeName:}" failed. No retries permitted until 2026-03-12 13:31:26.947949716 +0000 UTC m=+1305.396645122 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c01f943c-e09c-4727-8cf7-eec58a56b363-etc-swift") pod "swift-storage-0" (UID: "c01f943c-e09c-4727-8cf7-eec58a56b363") : configmap "swift-ring-files" not found
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:25.096726 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-5zdpc"
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:25.150232 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-5zdpc"]
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:25.156777 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-5zdpc"]
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:26.243808 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-vsbqv"]
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:26.244963 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-vsbqv"
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:26.247225 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:26.263021 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55dae060-74ee-451e-9352-daec701140b2" path="/var/lib/kubelet/pods/55dae060-74ee-451e-9352-daec701140b2/volumes"
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:26.263443 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-vsbqv"]
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:26.386960 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkg2s\" (UniqueName: \"kubernetes.io/projected/7dba49cb-b897-4877-83f4-72e0d731a1b1-kube-api-access-bkg2s\") pod \"root-account-create-update-vsbqv\" (UID: \"7dba49cb-b897-4877-83f4-72e0d731a1b1\") " pod="openstack/root-account-create-update-vsbqv"
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:26.387068 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dba49cb-b897-4877-83f4-72e0d731a1b1-operator-scripts\") pod \"root-account-create-update-vsbqv\" (UID: \"7dba49cb-b897-4877-83f4-72e0d731a1b1\") " pod="openstack/root-account-create-update-vsbqv"
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:26.489410 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkg2s\" (UniqueName: \"kubernetes.io/projected/7dba49cb-b897-4877-83f4-72e0d731a1b1-kube-api-access-bkg2s\") pod \"root-account-create-update-vsbqv\" (UID: \"7dba49cb-b897-4877-83f4-72e0d731a1b1\") " pod="openstack/root-account-create-update-vsbqv"
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:26.489476 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dba49cb-b897-4877-83f4-72e0d731a1b1-operator-scripts\") pod \"root-account-create-update-vsbqv\" (UID: \"7dba49cb-b897-4877-83f4-72e0d731a1b1\") " pod="openstack/root-account-create-update-vsbqv"
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:26.490416 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dba49cb-b897-4877-83f4-72e0d731a1b1-operator-scripts\") pod \"root-account-create-update-vsbqv\" (UID: \"7dba49cb-b897-4877-83f4-72e0d731a1b1\") " pod="openstack/root-account-create-update-vsbqv"
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:26.510736 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkg2s\" (UniqueName: \"kubernetes.io/projected/7dba49cb-b897-4877-83f4-72e0d731a1b1-kube-api-access-bkg2s\") pod \"root-account-create-update-vsbqv\" (UID: \"7dba49cb-b897-4877-83f4-72e0d731a1b1\") " pod="openstack/root-account-create-update-vsbqv"
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:26.599436 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-vsbqv"
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:26.998963 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c01f943c-e09c-4727-8cf7-eec58a56b363-etc-swift\") pod \"swift-storage-0\" (UID: \"c01f943c-e09c-4727-8cf7-eec58a56b363\") " pod="openstack/swift-storage-0"
Mar 12 13:31:31 crc kubenswrapper[4778]: E0312 13:31:26.999176 4778 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Mar 12 13:31:31 crc kubenswrapper[4778]: E0312 13:31:26.999216 4778 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Mar 12 13:31:31 crc kubenswrapper[4778]: E0312 13:31:26.999281 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c01f943c-e09c-4727-8cf7-eec58a56b363-etc-swift podName:c01f943c-e09c-4727-8cf7-eec58a56b363 nodeName:}" failed. No retries permitted until 2026-03-12 13:31:30.999258953 +0000 UTC m=+1309.447954349 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c01f943c-e09c-4727-8cf7-eec58a56b363-etc-swift") pod "swift-storage-0" (UID: "c01f943c-e09c-4727-8cf7-eec58a56b363") : configmap "swift-ring-files" not found
Mar 12 13:31:31 crc kubenswrapper[4778]: E0312 13:31:29.141219 4778 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
Mar 12 13:31:31 crc kubenswrapper[4778]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/124fc095-41fd-4e2d-86a1-0aada5c7447f/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory
Mar 12 13:31:31 crc kubenswrapper[4778]: > podSandboxID="a6bf49931f5dff6309127ba2cd8c8cdab17c7cf48d3d19393ec4e377e3b30b19"
Mar 12 13:31:31 crc kubenswrapper[4778]: E0312 13:31:29.141910 4778 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Mar 12 13:31:31 crc kubenswrapper[4778]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n599h5cbh7ch5d4h66fh676hdbh546h95h88h5ffh55ch7fhch57ch687hddhc7h5fdh57dh674h56fh64ch98h9bh557h55dh646h54ch54fh5c4h597q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5589g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-86db49b7ff-jsqnb_openstack(124fc095-41fd-4e2d-86a1-0aada5c7447f): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/124fc095-41fd-4e2d-86a1-0aada5c7447f/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory
Mar 12 13:31:31 crc kubenswrapper[4778]: > logger="UnhandledError"
Mar 12 13:31:31 crc kubenswrapper[4778]: E0312 13:31:29.142969 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/124fc095-41fd-4e2d-86a1-0aada5c7447f/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-86db49b7ff-jsqnb" podUID="124fc095-41fd-4e2d-86a1-0aada5c7447f"
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:29.273652 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-gccjh"
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:29.280235 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-kzfk7"
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:29.440838 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmjgm\" (UniqueName: \"kubernetes.io/projected/199c7ab7-ef93-4b96-a76c-2476f21795ae-kube-api-access-jmjgm\") pod \"199c7ab7-ef93-4b96-a76c-2476f21795ae\" (UID: \"199c7ab7-ef93-4b96-a76c-2476f21795ae\") "
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:29.440946 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/199c7ab7-ef93-4b96-a76c-2476f21795ae-config\") pod \"199c7ab7-ef93-4b96-a76c-2476f21795ae\" (UID: \"199c7ab7-ef93-4b96-a76c-2476f21795ae\") "
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:29.440978 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc051b32-4b28-4011-9a00-49caa730f074-operator-scripts\") pod \"fc051b32-4b28-4011-9a00-49caa730f074\" (UID: \"fc051b32-4b28-4011-9a00-49caa730f074\") "
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:29.441076 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/199c7ab7-ef93-4b96-a76c-2476f21795ae-ovsdbserver-nb\") pod \"199c7ab7-ef93-4b96-a76c-2476f21795ae\" (UID: \"199c7ab7-ef93-4b96-a76c-2476f21795ae\") "
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:29.441137 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/199c7ab7-ef93-4b96-a76c-2476f21795ae-dns-svc\") pod \"199c7ab7-ef93-4b96-a76c-2476f21795ae\" (UID: \"199c7ab7-ef93-4b96-a76c-2476f21795ae\") "
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:29.441160 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfr4k\" (UniqueName: \"kubernetes.io/projected/fc051b32-4b28-4011-9a00-49caa730f074-kube-api-access-wfr4k\") pod \"fc051b32-4b28-4011-9a00-49caa730f074\" (UID: \"fc051b32-4b28-4011-9a00-49caa730f074\") "
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:29.441921 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc051b32-4b28-4011-9a00-49caa730f074-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fc051b32-4b28-4011-9a00-49caa730f074" (UID: "fc051b32-4b28-4011-9a00-49caa730f074"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:29.454469 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc051b32-4b28-4011-9a00-49caa730f074-kube-api-access-wfr4k" (OuterVolumeSpecName: "kube-api-access-wfr4k") pod "fc051b32-4b28-4011-9a00-49caa730f074" (UID: "fc051b32-4b28-4011-9a00-49caa730f074"). InnerVolumeSpecName "kube-api-access-wfr4k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:29.454520 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/199c7ab7-ef93-4b96-a76c-2476f21795ae-kube-api-access-jmjgm" (OuterVolumeSpecName: "kube-api-access-jmjgm") pod "199c7ab7-ef93-4b96-a76c-2476f21795ae" (UID: "199c7ab7-ef93-4b96-a76c-2476f21795ae"). InnerVolumeSpecName "kube-api-access-jmjgm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:29.461402 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/199c7ab7-ef93-4b96-a76c-2476f21795ae-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "199c7ab7-ef93-4b96-a76c-2476f21795ae" (UID: "199c7ab7-ef93-4b96-a76c-2476f21795ae"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:29.464523 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/199c7ab7-ef93-4b96-a76c-2476f21795ae-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "199c7ab7-ef93-4b96-a76c-2476f21795ae" (UID: "199c7ab7-ef93-4b96-a76c-2476f21795ae"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:29.465018 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/199c7ab7-ef93-4b96-a76c-2476f21795ae-config" (OuterVolumeSpecName: "config") pod "199c7ab7-ef93-4b96-a76c-2476f21795ae" (UID: "199c7ab7-ef93-4b96-a76c-2476f21795ae"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:29.544300 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmjgm\" (UniqueName: \"kubernetes.io/projected/199c7ab7-ef93-4b96-a76c-2476f21795ae-kube-api-access-jmjgm\") on node \"crc\" DevicePath \"\""
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:29.544335 4778 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/199c7ab7-ef93-4b96-a76c-2476f21795ae-config\") on node \"crc\" DevicePath \"\""
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:29.544349 4778 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc051b32-4b28-4011-9a00-49caa730f074-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:29.544359 4778 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/199c7ab7-ef93-4b96-a76c-2476f21795ae-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:29.544370 4778 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/199c7ab7-ef93-4b96-a76c-2476f21795ae-dns-svc\") on node \"crc\" DevicePath \"\""
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:29.544380 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfr4k\" (UniqueName: \"kubernetes.io/projected/fc051b32-4b28-4011-9a00-49caa730f074-kube-api-access-wfr4k\") on node \"crc\" DevicePath \"\""
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:29.602091 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-hpkvd"]
Mar 12 13:31:31 crc kubenswrapper[4778]: E0312 13:31:29.602715 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc051b32-4b28-4011-9a00-49caa730f074" containerName="mariadb-database-create"
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:29.602740 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc051b32-4b28-4011-9a00-49caa730f074" containerName="mariadb-database-create"
Mar 12 13:31:31 crc kubenswrapper[4778]: E0312 13:31:29.602778 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="199c7ab7-ef93-4b96-a76c-2476f21795ae" containerName="init"
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:29.602790 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="199c7ab7-ef93-4b96-a76c-2476f21795ae" containerName="init"
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:29.603043 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="199c7ab7-ef93-4b96-a76c-2476f21795ae" containerName="init"
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:29.603065 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc051b32-4b28-4011-9a00-49caa730f074" containerName="mariadb-database-create"
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:29.603939 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-hpkvd"
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:29.615383 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-hpkvd"]
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:29.701279 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-3148-account-create-update-zkztc"]
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:29.703239 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-3148-account-create-update-zkztc"
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:29.705737 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:29.713398 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-3148-account-create-update-zkztc"]
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:29.748769 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9cwn\" (UniqueName: \"kubernetes.io/projected/18cd7d9a-1f17-4797-a94f-4692b1180508-kube-api-access-x9cwn\") pod \"glance-db-create-hpkvd\" (UID: \"18cd7d9a-1f17-4797-a94f-4692b1180508\") " pod="openstack/glance-db-create-hpkvd"
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:29.749138 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18cd7d9a-1f17-4797-a94f-4692b1180508-operator-scripts\") pod \"glance-db-create-hpkvd\" (UID: \"18cd7d9a-1f17-4797-a94f-4692b1180508\") " pod="openstack/glance-db-create-hpkvd"
Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:29.851268 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gjr5\" (UniqueName:
\"kubernetes.io/projected/e76971eb-34f0-4a33-b657-508e01eed5d1-kube-api-access-4gjr5\") pod \"glance-3148-account-create-update-zkztc\" (UID: \"e76971eb-34f0-4a33-b657-508e01eed5d1\") " pod="openstack/glance-3148-account-create-update-zkztc" Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:29.851353 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18cd7d9a-1f17-4797-a94f-4692b1180508-operator-scripts\") pod \"glance-db-create-hpkvd\" (UID: \"18cd7d9a-1f17-4797-a94f-4692b1180508\") " pod="openstack/glance-db-create-hpkvd" Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:29.851821 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e76971eb-34f0-4a33-b657-508e01eed5d1-operator-scripts\") pod \"glance-3148-account-create-update-zkztc\" (UID: \"e76971eb-34f0-4a33-b657-508e01eed5d1\") " pod="openstack/glance-3148-account-create-update-zkztc" Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:29.852089 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9cwn\" (UniqueName: \"kubernetes.io/projected/18cd7d9a-1f17-4797-a94f-4692b1180508-kube-api-access-x9cwn\") pod \"glance-db-create-hpkvd\" (UID: \"18cd7d9a-1f17-4797-a94f-4692b1180508\") " pod="openstack/glance-db-create-hpkvd" Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:29.852257 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18cd7d9a-1f17-4797-a94f-4692b1180508-operator-scripts\") pod \"glance-db-create-hpkvd\" (UID: \"18cd7d9a-1f17-4797-a94f-4692b1180508\") " pod="openstack/glance-db-create-hpkvd" Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:29.874096 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9cwn\" (UniqueName: 
\"kubernetes.io/projected/18cd7d9a-1f17-4797-a94f-4692b1180508-kube-api-access-x9cwn\") pod \"glance-db-create-hpkvd\" (UID: \"18cd7d9a-1f17-4797-a94f-4692b1180508\") " pod="openstack/glance-db-create-hpkvd" Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:29.928954 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-hpkvd" Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:29.953354 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gjr5\" (UniqueName: \"kubernetes.io/projected/e76971eb-34f0-4a33-b657-508e01eed5d1-kube-api-access-4gjr5\") pod \"glance-3148-account-create-update-zkztc\" (UID: \"e76971eb-34f0-4a33-b657-508e01eed5d1\") " pod="openstack/glance-3148-account-create-update-zkztc" Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:29.953552 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e76971eb-34f0-4a33-b657-508e01eed5d1-operator-scripts\") pod \"glance-3148-account-create-update-zkztc\" (UID: \"e76971eb-34f0-4a33-b657-508e01eed5d1\") " pod="openstack/glance-3148-account-create-update-zkztc" Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:29.954559 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e76971eb-34f0-4a33-b657-508e01eed5d1-operator-scripts\") pod \"glance-3148-account-create-update-zkztc\" (UID: \"e76971eb-34f0-4a33-b657-508e01eed5d1\") " pod="openstack/glance-3148-account-create-update-zkztc" Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:29.973240 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gjr5\" (UniqueName: \"kubernetes.io/projected/e76971eb-34f0-4a33-b657-508e01eed5d1-kube-api-access-4gjr5\") pod \"glance-3148-account-create-update-zkztc\" (UID: \"e76971eb-34f0-4a33-b657-508e01eed5d1\") " 
pod="openstack/glance-3148-account-create-update-zkztc" Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:30.023826 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-3148-account-create-update-zkztc" Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:30.142726 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-8rkss" event={"ID":"5dd405d8-c82b-49d0-a871-1c7c847638df","Type":"ContainerStarted","Data":"cea929c8344637e6d5422f933285cd8d16eba93f79935ed8a5b3e6067be52dcb"} Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:30.144709 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-gccjh" event={"ID":"fc051b32-4b28-4011-9a00-49caa730f074","Type":"ContainerDied","Data":"58e88409e0c9f35402bf7f7052b8bfab0894113d1308d624a95e9e78a692921f"} Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:30.144729 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58e88409e0c9f35402bf7f7052b8bfab0894113d1308d624a95e9e78a692921f" Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:30.144780 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-gccjh" Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:30.154880 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-kzfk7" Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:30.157034 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-kzfk7" event={"ID":"199c7ab7-ef93-4b96-a76c-2476f21795ae","Type":"ContainerDied","Data":"5f1e3433b2aa51609fb7a612d3a57e0e4f8b5c1392d5055466098db076137d25"} Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:30.157116 4778 scope.go:117] "RemoveContainer" containerID="4d8c29e71c21a3ab92dc60d0b4d6da588df8f7c85dce7dcda5be210ad42005de" Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:30.296303 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-kzfk7"] Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:30.298411 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-kzfk7"] Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:30.514588 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-79rjc"] Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:30.515845 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-79rjc" Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:30.524661 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-79rjc"] Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:30.609704 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6816-account-create-update-574cj"] Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:30.611448 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6816-account-create-update-574cj" Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:30.615848 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:30.620292 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6816-account-create-update-574cj"] Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:30.682852 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/280f8bcd-f8e0-451d-8c9c-b733f2b62a23-operator-scripts\") pod \"keystone-db-create-79rjc\" (UID: \"280f8bcd-f8e0-451d-8c9c-b733f2b62a23\") " pod="openstack/keystone-db-create-79rjc" Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:30.683185 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnmjq\" (UniqueName: \"kubernetes.io/projected/280f8bcd-f8e0-451d-8c9c-b733f2b62a23-kube-api-access-fnmjq\") pod \"keystone-db-create-79rjc\" (UID: \"280f8bcd-f8e0-451d-8c9c-b733f2b62a23\") " pod="openstack/keystone-db-create-79rjc" Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:30.786001 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d015b15d-96d2-4b95-9778-8f4175a840a1-operator-scripts\") pod \"keystone-6816-account-create-update-574cj\" (UID: \"d015b15d-96d2-4b95-9778-8f4175a840a1\") " pod="openstack/keystone-6816-account-create-update-574cj" Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:30.786217 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98q5d\" (UniqueName: \"kubernetes.io/projected/d015b15d-96d2-4b95-9778-8f4175a840a1-kube-api-access-98q5d\") pod 
\"keystone-6816-account-create-update-574cj\" (UID: \"d015b15d-96d2-4b95-9778-8f4175a840a1\") " pod="openstack/keystone-6816-account-create-update-574cj" Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:30.786310 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/280f8bcd-f8e0-451d-8c9c-b733f2b62a23-operator-scripts\") pod \"keystone-db-create-79rjc\" (UID: \"280f8bcd-f8e0-451d-8c9c-b733f2b62a23\") " pod="openstack/keystone-db-create-79rjc" Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:30.786349 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnmjq\" (UniqueName: \"kubernetes.io/projected/280f8bcd-f8e0-451d-8c9c-b733f2b62a23-kube-api-access-fnmjq\") pod \"keystone-db-create-79rjc\" (UID: \"280f8bcd-f8e0-451d-8c9c-b733f2b62a23\") " pod="openstack/keystone-db-create-79rjc" Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:30.787242 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/280f8bcd-f8e0-451d-8c9c-b733f2b62a23-operator-scripts\") pod \"keystone-db-create-79rjc\" (UID: \"280f8bcd-f8e0-451d-8c9c-b733f2b62a23\") " pod="openstack/keystone-db-create-79rjc" Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:30.814481 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnmjq\" (UniqueName: \"kubernetes.io/projected/280f8bcd-f8e0-451d-8c9c-b733f2b62a23-kube-api-access-fnmjq\") pod \"keystone-db-create-79rjc\" (UID: \"280f8bcd-f8e0-451d-8c9c-b733f2b62a23\") " pod="openstack/keystone-db-create-79rjc" Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:30.840783 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-79rjc" Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:30.887722 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d015b15d-96d2-4b95-9778-8f4175a840a1-operator-scripts\") pod \"keystone-6816-account-create-update-574cj\" (UID: \"d015b15d-96d2-4b95-9778-8f4175a840a1\") " pod="openstack/keystone-6816-account-create-update-574cj" Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:30.887821 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98q5d\" (UniqueName: \"kubernetes.io/projected/d015b15d-96d2-4b95-9778-8f4175a840a1-kube-api-access-98q5d\") pod \"keystone-6816-account-create-update-574cj\" (UID: \"d015b15d-96d2-4b95-9778-8f4175a840a1\") " pod="openstack/keystone-6816-account-create-update-574cj" Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:30.888806 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d015b15d-96d2-4b95-9778-8f4175a840a1-operator-scripts\") pod \"keystone-6816-account-create-update-574cj\" (UID: \"d015b15d-96d2-4b95-9778-8f4175a840a1\") " pod="openstack/keystone-6816-account-create-update-574cj" Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:30.903921 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98q5d\" (UniqueName: \"kubernetes.io/projected/d015b15d-96d2-4b95-9778-8f4175a840a1-kube-api-access-98q5d\") pod \"keystone-6816-account-create-update-574cj\" (UID: \"d015b15d-96d2-4b95-9778-8f4175a840a1\") " pod="openstack/keystone-6816-account-create-update-574cj" Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:30.930226 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6816-account-create-update-574cj" Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:31.091383 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c01f943c-e09c-4727-8cf7-eec58a56b363-etc-swift\") pod \"swift-storage-0\" (UID: \"c01f943c-e09c-4727-8cf7-eec58a56b363\") " pod="openstack/swift-storage-0" Mar 12 13:31:31 crc kubenswrapper[4778]: E0312 13:31:31.091596 4778 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 12 13:31:31 crc kubenswrapper[4778]: E0312 13:31:31.091625 4778 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 12 13:31:31 crc kubenswrapper[4778]: E0312 13:31:31.091684 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c01f943c-e09c-4727-8cf7-eec58a56b363-etc-swift podName:c01f943c-e09c-4727-8cf7-eec58a56b363 nodeName:}" failed. No retries permitted until 2026-03-12 13:31:39.091664123 +0000 UTC m=+1317.540359519 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c01f943c-e09c-4727-8cf7-eec58a56b363-etc-swift") pod "swift-storage-0" (UID: "c01f943c-e09c-4727-8cf7-eec58a56b363") : configmap "swift-ring-files" not found Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:31.165425 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-jsqnb" event={"ID":"124fc095-41fd-4e2d-86a1-0aada5c7447f","Type":"ContainerStarted","Data":"8df5a4af2891ad2a0c3dc58f39282cbd926977260f1e62c77d95d627bea99c7d"} Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:31.166024 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-jsqnb" Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:31.167399 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-8rkss" Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:31.195769 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-jsqnb" podStartSLOduration=10.631083928 podStartE2EDuration="11.195747933s" podCreationTimestamp="2026-03-12 13:31:20 +0000 UTC" firstStartedPulling="2026-03-12 13:31:21.600420575 +0000 UTC m=+1300.049115971" lastFinishedPulling="2026-03-12 13:31:22.16508458 +0000 UTC m=+1300.613779976" observedRunningTime="2026-03-12 13:31:31.186304604 +0000 UTC m=+1309.635000030" watchObservedRunningTime="2026-03-12 13:31:31.195747933 +0000 UTC m=+1309.644443339" Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:31.215523 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-8rkss" podStartSLOduration=10.215503944 podStartE2EDuration="10.215503944s" podCreationTimestamp="2026-03-12 13:31:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 
13:31:31.208737042 +0000 UTC m=+1309.657432458" watchObservedRunningTime="2026-03-12 13:31:31.215503944 +0000 UTC m=+1309.664199340" Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:31.720527 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-886c-account-create-update-c7kqb" Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:31.908350 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b329f80-bb88-4c5c-91eb-24394cdcc492-operator-scripts\") pod \"7b329f80-bb88-4c5c-91eb-24394cdcc492\" (UID: \"7b329f80-bb88-4c5c-91eb-24394cdcc492\") " Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:31.908465 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjpbn\" (UniqueName: \"kubernetes.io/projected/7b329f80-bb88-4c5c-91eb-24394cdcc492-kube-api-access-bjpbn\") pod \"7b329f80-bb88-4c5c-91eb-24394cdcc492\" (UID: \"7b329f80-bb88-4c5c-91eb-24394cdcc492\") " Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:31.909300 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b329f80-bb88-4c5c-91eb-24394cdcc492-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7b329f80-bb88-4c5c-91eb-24394cdcc492" (UID: "7b329f80-bb88-4c5c-91eb-24394cdcc492"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:31.910956 4778 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b329f80-bb88-4c5c-91eb-24394cdcc492-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:31.914976 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b329f80-bb88-4c5c-91eb-24394cdcc492-kube-api-access-bjpbn" (OuterVolumeSpecName: "kube-api-access-bjpbn") pod "7b329f80-bb88-4c5c-91eb-24394cdcc492" (UID: "7b329f80-bb88-4c5c-91eb-24394cdcc492"). InnerVolumeSpecName "kube-api-access-bjpbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:31.956463 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-vsbqv"] Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:31.964007 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-3148-account-create-update-zkztc"] Mar 12 13:31:31 crc kubenswrapper[4778]: W0312 13:31:31.975581 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode76971eb_34f0_4a33_b657_508e01eed5d1.slice/crio-ef3bc8062b86e00dd2d40134a2cf733c4aea86e6cbc2bdc709b2200ccc77ba9a WatchSource:0}: Error finding container ef3bc8062b86e00dd2d40134a2cf733c4aea86e6cbc2bdc709b2200ccc77ba9a: Status 404 returned error can't find the container with id ef3bc8062b86e00dd2d40134a2cf733c4aea86e6cbc2bdc709b2200ccc77ba9a Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:31.980770 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-79rjc"] Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:31.987333 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-5knbg"] 
Mar 12 13:31:31 crc kubenswrapper[4778]: W0312 13:31:31.989177 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod280f8bcd_f8e0_451d_8c9c_b733f2b62a23.slice/crio-5cdd4d026622cdc64efa7747c4f6697d32618ba64ea2916988cc0aef60712b93 WatchSource:0}: Error finding container 5cdd4d026622cdc64efa7747c4f6697d32618ba64ea2916988cc0aef60712b93: Status 404 returned error can't find the container with id 5cdd4d026622cdc64efa7747c4f6697d32618ba64ea2916988cc0aef60712b93 Mar 12 13:31:31 crc kubenswrapper[4778]: I0312 13:31:31.994749 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6816-account-create-update-574cj"] Mar 12 13:31:31 crc kubenswrapper[4778]: W0312 13:31:31.997145 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2edc2c90_f91e_402d_809c_514e9d8a5e04.slice/crio-36959ba975d8524cefb7a8390e0e734aa9fdb408ce3b4f7319c4d627f50986f6 WatchSource:0}: Error finding container 36959ba975d8524cefb7a8390e0e734aa9fdb408ce3b4f7319c4d627f50986f6: Status 404 returned error can't find the container with id 36959ba975d8524cefb7a8390e0e734aa9fdb408ce3b4f7319c4d627f50986f6 Mar 12 13:31:32 crc kubenswrapper[4778]: I0312 13:31:32.002627 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-hpkvd"] Mar 12 13:31:32 crc kubenswrapper[4778]: W0312 13:31:32.008759 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18cd7d9a_1f17_4797_a94f_4692b1180508.slice/crio-89192c0d3b3df7a1a0c9315e55ae046a6770c86eb0446cd32a50a504ab553bed WatchSource:0}: Error finding container 89192c0d3b3df7a1a0c9315e55ae046a6770c86eb0446cd32a50a504ab553bed: Status 404 returned error can't find the container with id 89192c0d3b3df7a1a0c9315e55ae046a6770c86eb0446cd32a50a504ab553bed Mar 12 13:31:32 crc kubenswrapper[4778]: 
I0312 13:31:32.011827 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjpbn\" (UniqueName: \"kubernetes.io/projected/7b329f80-bb88-4c5c-91eb-24394cdcc492-kube-api-access-bjpbn\") on node \"crc\" DevicePath \"\"" Mar 12 13:31:32 crc kubenswrapper[4778]: I0312 13:31:32.179253 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3148-account-create-update-zkztc" event={"ID":"e76971eb-34f0-4a33-b657-508e01eed5d1","Type":"ContainerStarted","Data":"ef3bc8062b86e00dd2d40134a2cf733c4aea86e6cbc2bdc709b2200ccc77ba9a"} Mar 12 13:31:32 crc kubenswrapper[4778]: I0312 13:31:32.181242 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vsbqv" event={"ID":"7dba49cb-b897-4877-83f4-72e0d731a1b1","Type":"ContainerStarted","Data":"f3f7a33c33e8b6e5c107976dcfe1137727c3f5d14f498dcea6e9df482aee564a"} Mar 12 13:31:32 crc kubenswrapper[4778]: I0312 13:31:32.181282 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vsbqv" event={"ID":"7dba49cb-b897-4877-83f4-72e0d731a1b1","Type":"ContainerStarted","Data":"bedc9fba69e3f28bbfd7d4f69a0cadd95acf08c6bd39f0e86608134517a0bb5c"} Mar 12 13:31:32 crc kubenswrapper[4778]: I0312 13:31:32.184573 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6816-account-create-update-574cj" event={"ID":"d015b15d-96d2-4b95-9778-8f4175a840a1","Type":"ContainerStarted","Data":"f4bb8c6e00b5e03bcc01c6649d1104fc5ef38426458fa36f98588fb6167dbe07"} Mar 12 13:31:32 crc kubenswrapper[4778]: I0312 13:31:32.184640 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6816-account-create-update-574cj" event={"ID":"d015b15d-96d2-4b95-9778-8f4175a840a1","Type":"ContainerStarted","Data":"9bb61e64fae43b699f3efd9be919423ad2bf3a2faceb2aa570b27cb8710da6c0"} Mar 12 13:31:32 crc kubenswrapper[4778]: I0312 13:31:32.186239 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-db-create-79rjc" event={"ID":"280f8bcd-f8e0-451d-8c9c-b733f2b62a23","Type":"ContainerStarted","Data":"5cdd4d026622cdc64efa7747c4f6697d32618ba64ea2916988cc0aef60712b93"} Mar 12 13:31:32 crc kubenswrapper[4778]: I0312 13:31:32.188792 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-5knbg" event={"ID":"2edc2c90-f91e-402d-809c-514e9d8a5e04","Type":"ContainerStarted","Data":"36959ba975d8524cefb7a8390e0e734aa9fdb408ce3b4f7319c4d627f50986f6"} Mar 12 13:31:32 crc kubenswrapper[4778]: I0312 13:31:32.190936 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-hpkvd" event={"ID":"18cd7d9a-1f17-4797-a94f-4692b1180508","Type":"ContainerStarted","Data":"89192c0d3b3df7a1a0c9315e55ae046a6770c86eb0446cd32a50a504ab553bed"} Mar 12 13:31:32 crc kubenswrapper[4778]: I0312 13:31:32.193921 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1b25f9c9-784a-4a52-9bb3-02c6c4592702","Type":"ContainerStarted","Data":"f8ecf9c7642de7714bf2b00724fe551c713a4b7cdd00502f8c955ee95a91b067"} Mar 12 13:31:32 crc kubenswrapper[4778]: I0312 13:31:32.193960 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1b25f9c9-784a-4a52-9bb3-02c6c4592702","Type":"ContainerStarted","Data":"1aabf1656062aeb27eb7042fd97a5654699971b77c5202dbaf7a6937510839a0"} Mar 12 13:31:32 crc kubenswrapper[4778]: I0312 13:31:32.193975 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Mar 12 13:31:32 crc kubenswrapper[4778]: I0312 13:31:32.199619 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-vsbqv" podStartSLOduration=6.199600865 podStartE2EDuration="6.199600865s" podCreationTimestamp="2026-03-12 13:31:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-12 13:31:32.198306728 +0000 UTC m=+1310.647002124" watchObservedRunningTime="2026-03-12 13:31:32.199600865 +0000 UTC m=+1310.648296261" Mar 12 13:31:32 crc kubenswrapper[4778]: I0312 13:31:32.201226 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-886c-account-create-update-c7kqb" event={"ID":"7b329f80-bb88-4c5c-91eb-24394cdcc492","Type":"ContainerDied","Data":"4931bee01e54e487e69bca458cb118194d8730ef5cb3f2125d81402dd58d404b"} Mar 12 13:31:32 crc kubenswrapper[4778]: I0312 13:31:32.201275 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4931bee01e54e487e69bca458cb118194d8730ef5cb3f2125d81402dd58d404b" Mar 12 13:31:32 crc kubenswrapper[4778]: I0312 13:31:32.201251 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-886c-account-create-update-c7kqb" Mar 12 13:31:32 crc kubenswrapper[4778]: I0312 13:31:32.231280 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.112331773 podStartE2EDuration="11.231256915s" podCreationTimestamp="2026-03-12 13:31:21 +0000 UTC" firstStartedPulling="2026-03-12 13:31:22.162365753 +0000 UTC m=+1300.611061149" lastFinishedPulling="2026-03-12 13:31:31.281290875 +0000 UTC m=+1309.729986291" observedRunningTime="2026-03-12 13:31:32.217114223 +0000 UTC m=+1310.665809639" watchObservedRunningTime="2026-03-12 13:31:32.231256915 +0000 UTC m=+1310.679952501" Mar 12 13:31:32 crc kubenswrapper[4778]: I0312 13:31:32.260683 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-79rjc" podStartSLOduration=2.260669311 podStartE2EDuration="2.260669311s" podCreationTimestamp="2026-03-12 13:31:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:31:32.247997641 +0000 UTC 
m=+1310.696693037" watchObservedRunningTime="2026-03-12 13:31:32.260669311 +0000 UTC m=+1310.709364697" Mar 12 13:31:32 crc kubenswrapper[4778]: I0312 13:31:32.274911 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-6816-account-create-update-574cj" podStartSLOduration=2.274893216 podStartE2EDuration="2.274893216s" podCreationTimestamp="2026-03-12 13:31:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:31:32.269442621 +0000 UTC m=+1310.718138017" watchObservedRunningTime="2026-03-12 13:31:32.274893216 +0000 UTC m=+1310.723588612" Mar 12 13:31:32 crc kubenswrapper[4778]: I0312 13:31:32.298115 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="199c7ab7-ef93-4b96-a76c-2476f21795ae" path="/var/lib/kubelet/pods/199c7ab7-ef93-4b96-a76c-2476f21795ae/volumes" Mar 12 13:31:33 crc kubenswrapper[4778]: I0312 13:31:33.209569 4778 generic.go:334] "Generic (PLEG): container finished" podID="7dba49cb-b897-4877-83f4-72e0d731a1b1" containerID="f3f7a33c33e8b6e5c107976dcfe1137727c3f5d14f498dcea6e9df482aee564a" exitCode=0 Mar 12 13:31:33 crc kubenswrapper[4778]: I0312 13:31:33.209645 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vsbqv" event={"ID":"7dba49cb-b897-4877-83f4-72e0d731a1b1","Type":"ContainerDied","Data":"f3f7a33c33e8b6e5c107976dcfe1137727c3f5d14f498dcea6e9df482aee564a"} Mar 12 13:31:33 crc kubenswrapper[4778]: I0312 13:31:33.212417 4778 generic.go:334] "Generic (PLEG): container finished" podID="d015b15d-96d2-4b95-9778-8f4175a840a1" containerID="f4bb8c6e00b5e03bcc01c6649d1104fc5ef38426458fa36f98588fb6167dbe07" exitCode=0 Mar 12 13:31:33 crc kubenswrapper[4778]: I0312 13:31:33.212551 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6816-account-create-update-574cj" 
event={"ID":"d015b15d-96d2-4b95-9778-8f4175a840a1","Type":"ContainerDied","Data":"f4bb8c6e00b5e03bcc01c6649d1104fc5ef38426458fa36f98588fb6167dbe07"} Mar 12 13:31:33 crc kubenswrapper[4778]: I0312 13:31:33.214799 4778 generic.go:334] "Generic (PLEG): container finished" podID="280f8bcd-f8e0-451d-8c9c-b733f2b62a23" containerID="8443f2894188b4c3d976d78d2d647409527ab07f04b215d8b647fc560059ba2f" exitCode=0 Mar 12 13:31:33 crc kubenswrapper[4778]: I0312 13:31:33.214896 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-79rjc" event={"ID":"280f8bcd-f8e0-451d-8c9c-b733f2b62a23","Type":"ContainerDied","Data":"8443f2894188b4c3d976d78d2d647409527ab07f04b215d8b647fc560059ba2f"} Mar 12 13:31:33 crc kubenswrapper[4778]: I0312 13:31:33.215974 4778 generic.go:334] "Generic (PLEG): container finished" podID="18cd7d9a-1f17-4797-a94f-4692b1180508" containerID="451301ebd2071510b670f3a924d5fcd2f28fbcc4aa60d4224906bca0e09aa5be" exitCode=0 Mar 12 13:31:33 crc kubenswrapper[4778]: I0312 13:31:33.216022 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-hpkvd" event={"ID":"18cd7d9a-1f17-4797-a94f-4692b1180508","Type":"ContainerDied","Data":"451301ebd2071510b670f3a924d5fcd2f28fbcc4aa60d4224906bca0e09aa5be"} Mar 12 13:31:33 crc kubenswrapper[4778]: I0312 13:31:33.219948 4778 generic.go:334] "Generic (PLEG): container finished" podID="e76971eb-34f0-4a33-b657-508e01eed5d1" containerID="5106184b767437cea31a6a61b3a1991b36587ddd28250ecc1207af703f368fda" exitCode=0 Mar 12 13:31:33 crc kubenswrapper[4778]: I0312 13:31:33.220042 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3148-account-create-update-zkztc" event={"ID":"e76971eb-34f0-4a33-b657-508e01eed5d1","Type":"ContainerDied","Data":"5106184b767437cea31a6a61b3a1991b36587ddd28250ecc1207af703f368fda"} Mar 12 13:31:35 crc kubenswrapper[4778]: I0312 13:31:35.159204 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-vsbqv" Mar 12 13:31:35 crc kubenswrapper[4778]: I0312 13:31:35.167147 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-79rjc" Mar 12 13:31:35 crc kubenswrapper[4778]: I0312 13:31:35.189674 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dba49cb-b897-4877-83f4-72e0d731a1b1-operator-scripts\") pod \"7dba49cb-b897-4877-83f4-72e0d731a1b1\" (UID: \"7dba49cb-b897-4877-83f4-72e0d731a1b1\") " Mar 12 13:31:35 crc kubenswrapper[4778]: I0312 13:31:35.189800 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkg2s\" (UniqueName: \"kubernetes.io/projected/7dba49cb-b897-4877-83f4-72e0d731a1b1-kube-api-access-bkg2s\") pod \"7dba49cb-b897-4877-83f4-72e0d731a1b1\" (UID: \"7dba49cb-b897-4877-83f4-72e0d731a1b1\") " Mar 12 13:31:35 crc kubenswrapper[4778]: I0312 13:31:35.189835 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-3148-account-create-update-zkztc" Mar 12 13:31:35 crc kubenswrapper[4778]: I0312 13:31:35.194826 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6816-account-create-update-574cj" Mar 12 13:31:35 crc kubenswrapper[4778]: I0312 13:31:35.194849 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dba49cb-b897-4877-83f4-72e0d731a1b1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7dba49cb-b897-4877-83f4-72e0d731a1b1" (UID: "7dba49cb-b897-4877-83f4-72e0d731a1b1"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:31:35 crc kubenswrapper[4778]: I0312 13:31:35.207216 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dba49cb-b897-4877-83f4-72e0d731a1b1-kube-api-access-bkg2s" (OuterVolumeSpecName: "kube-api-access-bkg2s") pod "7dba49cb-b897-4877-83f4-72e0d731a1b1" (UID: "7dba49cb-b897-4877-83f4-72e0d731a1b1"). InnerVolumeSpecName "kube-api-access-bkg2s". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:31:35 crc kubenswrapper[4778]: I0312 13:31:35.211162 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-hpkvd" Mar 12 13:31:35 crc kubenswrapper[4778]: I0312 13:31:35.261619 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3148-account-create-update-zkztc" event={"ID":"e76971eb-34f0-4a33-b657-508e01eed5d1","Type":"ContainerDied","Data":"ef3bc8062b86e00dd2d40134a2cf733c4aea86e6cbc2bdc709b2200ccc77ba9a"} Mar 12 13:31:35 crc kubenswrapper[4778]: I0312 13:31:35.261663 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef3bc8062b86e00dd2d40134a2cf733c4aea86e6cbc2bdc709b2200ccc77ba9a" Mar 12 13:31:35 crc kubenswrapper[4778]: I0312 13:31:35.261772 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-3148-account-create-update-zkztc" Mar 12 13:31:35 crc kubenswrapper[4778]: I0312 13:31:35.263493 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vsbqv" event={"ID":"7dba49cb-b897-4877-83f4-72e0d731a1b1","Type":"ContainerDied","Data":"bedc9fba69e3f28bbfd7d4f69a0cadd95acf08c6bd39f0e86608134517a0bb5c"} Mar 12 13:31:35 crc kubenswrapper[4778]: I0312 13:31:35.263517 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bedc9fba69e3f28bbfd7d4f69a0cadd95acf08c6bd39f0e86608134517a0bb5c" Mar 12 13:31:35 crc kubenswrapper[4778]: I0312 13:31:35.263583 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-vsbqv" Mar 12 13:31:35 crc kubenswrapper[4778]: I0312 13:31:35.274156 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6816-account-create-update-574cj" event={"ID":"d015b15d-96d2-4b95-9778-8f4175a840a1","Type":"ContainerDied","Data":"9bb61e64fae43b699f3efd9be919423ad2bf3a2faceb2aa570b27cb8710da6c0"} Mar 12 13:31:35 crc kubenswrapper[4778]: I0312 13:31:35.274244 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9bb61e64fae43b699f3efd9be919423ad2bf3a2faceb2aa570b27cb8710da6c0" Mar 12 13:31:35 crc kubenswrapper[4778]: I0312 13:31:35.274295 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6816-account-create-update-574cj" Mar 12 13:31:35 crc kubenswrapper[4778]: I0312 13:31:35.276118 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-79rjc" event={"ID":"280f8bcd-f8e0-451d-8c9c-b733f2b62a23","Type":"ContainerDied","Data":"5cdd4d026622cdc64efa7747c4f6697d32618ba64ea2916988cc0aef60712b93"} Mar 12 13:31:35 crc kubenswrapper[4778]: I0312 13:31:35.276140 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-79rjc" Mar 12 13:31:35 crc kubenswrapper[4778]: I0312 13:31:35.276142 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5cdd4d026622cdc64efa7747c4f6697d32618ba64ea2916988cc0aef60712b93" Mar 12 13:31:35 crc kubenswrapper[4778]: I0312 13:31:35.277144 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-hpkvd" event={"ID":"18cd7d9a-1f17-4797-a94f-4692b1180508","Type":"ContainerDied","Data":"89192c0d3b3df7a1a0c9315e55ae046a6770c86eb0446cd32a50a504ab553bed"} Mar 12 13:31:35 crc kubenswrapper[4778]: I0312 13:31:35.277163 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89192c0d3b3df7a1a0c9315e55ae046a6770c86eb0446cd32a50a504ab553bed" Mar 12 13:31:35 crc kubenswrapper[4778]: I0312 13:31:35.277167 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-hpkvd" Mar 12 13:31:35 crc kubenswrapper[4778]: I0312 13:31:35.292918 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gjr5\" (UniqueName: \"kubernetes.io/projected/e76971eb-34f0-4a33-b657-508e01eed5d1-kube-api-access-4gjr5\") pod \"e76971eb-34f0-4a33-b657-508e01eed5d1\" (UID: \"e76971eb-34f0-4a33-b657-508e01eed5d1\") " Mar 12 13:31:35 crc kubenswrapper[4778]: I0312 13:31:35.292992 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnmjq\" (UniqueName: \"kubernetes.io/projected/280f8bcd-f8e0-451d-8c9c-b733f2b62a23-kube-api-access-fnmjq\") pod \"280f8bcd-f8e0-451d-8c9c-b733f2b62a23\" (UID: \"280f8bcd-f8e0-451d-8c9c-b733f2b62a23\") " Mar 12 13:31:35 crc kubenswrapper[4778]: I0312 13:31:35.293030 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18cd7d9a-1f17-4797-a94f-4692b1180508-operator-scripts\") pod 
\"18cd7d9a-1f17-4797-a94f-4692b1180508\" (UID: \"18cd7d9a-1f17-4797-a94f-4692b1180508\") " Mar 12 13:31:35 crc kubenswrapper[4778]: I0312 13:31:35.293106 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9cwn\" (UniqueName: \"kubernetes.io/projected/18cd7d9a-1f17-4797-a94f-4692b1180508-kube-api-access-x9cwn\") pod \"18cd7d9a-1f17-4797-a94f-4692b1180508\" (UID: \"18cd7d9a-1f17-4797-a94f-4692b1180508\") " Mar 12 13:31:35 crc kubenswrapper[4778]: I0312 13:31:35.293143 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98q5d\" (UniqueName: \"kubernetes.io/projected/d015b15d-96d2-4b95-9778-8f4175a840a1-kube-api-access-98q5d\") pod \"d015b15d-96d2-4b95-9778-8f4175a840a1\" (UID: \"d015b15d-96d2-4b95-9778-8f4175a840a1\") " Mar 12 13:31:35 crc kubenswrapper[4778]: I0312 13:31:35.293591 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18cd7d9a-1f17-4797-a94f-4692b1180508-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "18cd7d9a-1f17-4797-a94f-4692b1180508" (UID: "18cd7d9a-1f17-4797-a94f-4692b1180508"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:31:35 crc kubenswrapper[4778]: I0312 13:31:35.296741 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/280f8bcd-f8e0-451d-8c9c-b733f2b62a23-operator-scripts\") pod \"280f8bcd-f8e0-451d-8c9c-b733f2b62a23\" (UID: \"280f8bcd-f8e0-451d-8c9c-b733f2b62a23\") " Mar 12 13:31:35 crc kubenswrapper[4778]: I0312 13:31:35.296822 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e76971eb-34f0-4a33-b657-508e01eed5d1-operator-scripts\") pod \"e76971eb-34f0-4a33-b657-508e01eed5d1\" (UID: \"e76971eb-34f0-4a33-b657-508e01eed5d1\") " Mar 12 13:31:35 crc kubenswrapper[4778]: I0312 13:31:35.296806 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d015b15d-96d2-4b95-9778-8f4175a840a1-kube-api-access-98q5d" (OuterVolumeSpecName: "kube-api-access-98q5d") pod "d015b15d-96d2-4b95-9778-8f4175a840a1" (UID: "d015b15d-96d2-4b95-9778-8f4175a840a1"). InnerVolumeSpecName "kube-api-access-98q5d". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:31:35 crc kubenswrapper[4778]: I0312 13:31:35.296860 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d015b15d-96d2-4b95-9778-8f4175a840a1-operator-scripts\") pod \"d015b15d-96d2-4b95-9778-8f4175a840a1\" (UID: \"d015b15d-96d2-4b95-9778-8f4175a840a1\") " Mar 12 13:31:35 crc kubenswrapper[4778]: I0312 13:31:35.297266 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/280f8bcd-f8e0-451d-8c9c-b733f2b62a23-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "280f8bcd-f8e0-451d-8c9c-b733f2b62a23" (UID: "280f8bcd-f8e0-451d-8c9c-b733f2b62a23"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:31:35 crc kubenswrapper[4778]: I0312 13:31:35.297279 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e76971eb-34f0-4a33-b657-508e01eed5d1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e76971eb-34f0-4a33-b657-508e01eed5d1" (UID: "e76971eb-34f0-4a33-b657-508e01eed5d1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:31:35 crc kubenswrapper[4778]: I0312 13:31:35.297347 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d015b15d-96d2-4b95-9778-8f4175a840a1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d015b15d-96d2-4b95-9778-8f4175a840a1" (UID: "d015b15d-96d2-4b95-9778-8f4175a840a1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:31:35 crc kubenswrapper[4778]: I0312 13:31:35.297505 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e76971eb-34f0-4a33-b657-508e01eed5d1-kube-api-access-4gjr5" (OuterVolumeSpecName: "kube-api-access-4gjr5") pod "e76971eb-34f0-4a33-b657-508e01eed5d1" (UID: "e76971eb-34f0-4a33-b657-508e01eed5d1"). InnerVolumeSpecName "kube-api-access-4gjr5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:31:35 crc kubenswrapper[4778]: I0312 13:31:35.298521 4778 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/280f8bcd-f8e0-451d-8c9c-b733f2b62a23-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 13:31:35 crc kubenswrapper[4778]: I0312 13:31:35.298577 4778 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dba49cb-b897-4877-83f4-72e0d731a1b1-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 13:31:35 crc kubenswrapper[4778]: I0312 13:31:35.298596 4778 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e76971eb-34f0-4a33-b657-508e01eed5d1-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 13:31:35 crc kubenswrapper[4778]: I0312 13:31:35.298610 4778 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d015b15d-96d2-4b95-9778-8f4175a840a1-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 13:31:35 crc kubenswrapper[4778]: I0312 13:31:35.298623 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkg2s\" (UniqueName: \"kubernetes.io/projected/7dba49cb-b897-4877-83f4-72e0d731a1b1-kube-api-access-bkg2s\") on node \"crc\" DevicePath \"\"" Mar 12 13:31:35 crc kubenswrapper[4778]: I0312 13:31:35.298679 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gjr5\" (UniqueName: \"kubernetes.io/projected/e76971eb-34f0-4a33-b657-508e01eed5d1-kube-api-access-4gjr5\") on node \"crc\" DevicePath \"\"" Mar 12 13:31:35 crc kubenswrapper[4778]: I0312 13:31:35.298704 4778 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18cd7d9a-1f17-4797-a94f-4692b1180508-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 13:31:35 crc 
kubenswrapper[4778]: I0312 13:31:35.298751 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98q5d\" (UniqueName: \"kubernetes.io/projected/d015b15d-96d2-4b95-9778-8f4175a840a1-kube-api-access-98q5d\") on node \"crc\" DevicePath \"\"" Mar 12 13:31:35 crc kubenswrapper[4778]: I0312 13:31:35.298874 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/280f8bcd-f8e0-451d-8c9c-b733f2b62a23-kube-api-access-fnmjq" (OuterVolumeSpecName: "kube-api-access-fnmjq") pod "280f8bcd-f8e0-451d-8c9c-b733f2b62a23" (UID: "280f8bcd-f8e0-451d-8c9c-b733f2b62a23"). InnerVolumeSpecName "kube-api-access-fnmjq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:31:35 crc kubenswrapper[4778]: I0312 13:31:35.300393 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18cd7d9a-1f17-4797-a94f-4692b1180508-kube-api-access-x9cwn" (OuterVolumeSpecName: "kube-api-access-x9cwn") pod "18cd7d9a-1f17-4797-a94f-4692b1180508" (UID: "18cd7d9a-1f17-4797-a94f-4692b1180508"). InnerVolumeSpecName "kube-api-access-x9cwn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:31:35 crc kubenswrapper[4778]: I0312 13:31:35.399560 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnmjq\" (UniqueName: \"kubernetes.io/projected/280f8bcd-f8e0-451d-8c9c-b733f2b62a23-kube-api-access-fnmjq\") on node \"crc\" DevicePath \"\"" Mar 12 13:31:35 crc kubenswrapper[4778]: I0312 13:31:35.399629 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9cwn\" (UniqueName: \"kubernetes.io/projected/18cd7d9a-1f17-4797-a94f-4692b1180508-kube-api-access-x9cwn\") on node \"crc\" DevicePath \"\"" Mar 12 13:31:36 crc kubenswrapper[4778]: I0312 13:31:36.286382 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-5knbg" event={"ID":"2edc2c90-f91e-402d-809c-514e9d8a5e04","Type":"ContainerStarted","Data":"112c5296361c82469b890fc71a2c6b309a06a72b7d67b5062a1ead56745507c2"} Mar 12 13:31:36 crc kubenswrapper[4778]: I0312 13:31:36.305340 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-5knbg" podStartSLOduration=10.293351913 podStartE2EDuration="13.305315174s" podCreationTimestamp="2026-03-12 13:31:23 +0000 UTC" firstStartedPulling="2026-03-12 13:31:31.999566767 +0000 UTC m=+1310.448262173" lastFinishedPulling="2026-03-12 13:31:35.011530028 +0000 UTC m=+1313.460225434" observedRunningTime="2026-03-12 13:31:36.301244078 +0000 UTC m=+1314.749939484" watchObservedRunningTime="2026-03-12 13:31:36.305315174 +0000 UTC m=+1314.754010570" Mar 12 13:31:37 crc kubenswrapper[4778]: I0312 13:31:37.364409 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-8rkss" Mar 12 13:31:37 crc kubenswrapper[4778]: I0312 13:31:37.422501 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-jsqnb"] Mar 12 13:31:37 crc kubenswrapper[4778]: I0312 13:31:37.422770 4778 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-jsqnb" podUID="124fc095-41fd-4e2d-86a1-0aada5c7447f" containerName="dnsmasq-dns" containerID="cri-o://8df5a4af2891ad2a0c3dc58f39282cbd926977260f1e62c77d95d627bea99c7d" gracePeriod=10 Mar 12 13:31:37 crc kubenswrapper[4778]: I0312 13:31:37.424452 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-jsqnb" Mar 12 13:31:37 crc kubenswrapper[4778]: I0312 13:31:37.746073 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-vsbqv"] Mar 12 13:31:37 crc kubenswrapper[4778]: I0312 13:31:37.755500 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-vsbqv"] Mar 12 13:31:37 crc kubenswrapper[4778]: I0312 13:31:37.849357 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-jsqnb" Mar 12 13:31:37 crc kubenswrapper[4778]: I0312 13:31:37.939930 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/124fc095-41fd-4e2d-86a1-0aada5c7447f-config\") pod \"124fc095-41fd-4e2d-86a1-0aada5c7447f\" (UID: \"124fc095-41fd-4e2d-86a1-0aada5c7447f\") " Mar 12 13:31:37 crc kubenswrapper[4778]: I0312 13:31:37.940245 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/124fc095-41fd-4e2d-86a1-0aada5c7447f-ovsdbserver-nb\") pod \"124fc095-41fd-4e2d-86a1-0aada5c7447f\" (UID: \"124fc095-41fd-4e2d-86a1-0aada5c7447f\") " Mar 12 13:31:37 crc kubenswrapper[4778]: I0312 13:31:37.940466 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/124fc095-41fd-4e2d-86a1-0aada5c7447f-dns-svc\") pod \"124fc095-41fd-4e2d-86a1-0aada5c7447f\" (UID: 
\"124fc095-41fd-4e2d-86a1-0aada5c7447f\") " Mar 12 13:31:37 crc kubenswrapper[4778]: I0312 13:31:37.940884 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5589g\" (UniqueName: \"kubernetes.io/projected/124fc095-41fd-4e2d-86a1-0aada5c7447f-kube-api-access-5589g\") pod \"124fc095-41fd-4e2d-86a1-0aada5c7447f\" (UID: \"124fc095-41fd-4e2d-86a1-0aada5c7447f\") " Mar 12 13:31:37 crc kubenswrapper[4778]: I0312 13:31:37.944605 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/124fc095-41fd-4e2d-86a1-0aada5c7447f-ovsdbserver-sb\") pod \"124fc095-41fd-4e2d-86a1-0aada5c7447f\" (UID: \"124fc095-41fd-4e2d-86a1-0aada5c7447f\") " Mar 12 13:31:37 crc kubenswrapper[4778]: I0312 13:31:37.973511 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/124fc095-41fd-4e2d-86a1-0aada5c7447f-kube-api-access-5589g" (OuterVolumeSpecName: "kube-api-access-5589g") pod "124fc095-41fd-4e2d-86a1-0aada5c7447f" (UID: "124fc095-41fd-4e2d-86a1-0aada5c7447f"). InnerVolumeSpecName "kube-api-access-5589g". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:31:37 crc kubenswrapper[4778]: I0312 13:31:37.990719 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/124fc095-41fd-4e2d-86a1-0aada5c7447f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "124fc095-41fd-4e2d-86a1-0aada5c7447f" (UID: "124fc095-41fd-4e2d-86a1-0aada5c7447f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:31:37 crc kubenswrapper[4778]: I0312 13:31:37.993344 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/124fc095-41fd-4e2d-86a1-0aada5c7447f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "124fc095-41fd-4e2d-86a1-0aada5c7447f" (UID: "124fc095-41fd-4e2d-86a1-0aada5c7447f"). 
InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:31:38 crc kubenswrapper[4778]: I0312 13:31:38.007643 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/124fc095-41fd-4e2d-86a1-0aada5c7447f-config" (OuterVolumeSpecName: "config") pod "124fc095-41fd-4e2d-86a1-0aada5c7447f" (UID: "124fc095-41fd-4e2d-86a1-0aada5c7447f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:31:38 crc kubenswrapper[4778]: I0312 13:31:38.008341 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/124fc095-41fd-4e2d-86a1-0aada5c7447f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "124fc095-41fd-4e2d-86a1-0aada5c7447f" (UID: "124fc095-41fd-4e2d-86a1-0aada5c7447f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:31:38 crc kubenswrapper[4778]: I0312 13:31:38.052881 4778 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/124fc095-41fd-4e2d-86a1-0aada5c7447f-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 12 13:31:38 crc kubenswrapper[4778]: I0312 13:31:38.052927 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5589g\" (UniqueName: \"kubernetes.io/projected/124fc095-41fd-4e2d-86a1-0aada5c7447f-kube-api-access-5589g\") on node \"crc\" DevicePath \"\"" Mar 12 13:31:38 crc kubenswrapper[4778]: I0312 13:31:38.052944 4778 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/124fc095-41fd-4e2d-86a1-0aada5c7447f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 12 13:31:38 crc kubenswrapper[4778]: I0312 13:31:38.052957 4778 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/124fc095-41fd-4e2d-86a1-0aada5c7447f-config\") on node \"crc\" 
DevicePath \"\"" Mar 12 13:31:38 crc kubenswrapper[4778]: I0312 13:31:38.052968 4778 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/124fc095-41fd-4e2d-86a1-0aada5c7447f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 12 13:31:38 crc kubenswrapper[4778]: I0312 13:31:38.261981 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dba49cb-b897-4877-83f4-72e0d731a1b1" path="/var/lib/kubelet/pods/7dba49cb-b897-4877-83f4-72e0d731a1b1/volumes" Mar 12 13:31:38 crc kubenswrapper[4778]: I0312 13:31:38.301656 4778 generic.go:334] "Generic (PLEG): container finished" podID="124fc095-41fd-4e2d-86a1-0aada5c7447f" containerID="8df5a4af2891ad2a0c3dc58f39282cbd926977260f1e62c77d95d627bea99c7d" exitCode=0 Mar 12 13:31:38 crc kubenswrapper[4778]: I0312 13:31:38.301705 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-jsqnb" event={"ID":"124fc095-41fd-4e2d-86a1-0aada5c7447f","Type":"ContainerDied","Data":"8df5a4af2891ad2a0c3dc58f39282cbd926977260f1e62c77d95d627bea99c7d"} Mar 12 13:31:38 crc kubenswrapper[4778]: I0312 13:31:38.301720 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-jsqnb" Mar 12 13:31:38 crc kubenswrapper[4778]: I0312 13:31:38.301730 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-jsqnb" event={"ID":"124fc095-41fd-4e2d-86a1-0aada5c7447f","Type":"ContainerDied","Data":"a6bf49931f5dff6309127ba2cd8c8cdab17c7cf48d3d19393ec4e377e3b30b19"} Mar 12 13:31:38 crc kubenswrapper[4778]: I0312 13:31:38.301746 4778 scope.go:117] "RemoveContainer" containerID="8df5a4af2891ad2a0c3dc58f39282cbd926977260f1e62c77d95d627bea99c7d" Mar 12 13:31:38 crc kubenswrapper[4778]: I0312 13:31:38.321561 4778 scope.go:117] "RemoveContainer" containerID="2c4c2a78bfb0c622d1943285b717a26c265f52cecb8566f14114c1ff4b03e4c9" Mar 12 13:31:38 crc kubenswrapper[4778]: I0312 13:31:38.328650 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-jsqnb"] Mar 12 13:31:38 crc kubenswrapper[4778]: I0312 13:31:38.334022 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-jsqnb"] Mar 12 13:31:38 crc kubenswrapper[4778]: I0312 13:31:38.338959 4778 scope.go:117] "RemoveContainer" containerID="8df5a4af2891ad2a0c3dc58f39282cbd926977260f1e62c77d95d627bea99c7d" Mar 12 13:31:38 crc kubenswrapper[4778]: E0312 13:31:38.340239 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8df5a4af2891ad2a0c3dc58f39282cbd926977260f1e62c77d95d627bea99c7d\": container with ID starting with 8df5a4af2891ad2a0c3dc58f39282cbd926977260f1e62c77d95d627bea99c7d not found: ID does not exist" containerID="8df5a4af2891ad2a0c3dc58f39282cbd926977260f1e62c77d95d627bea99c7d" Mar 12 13:31:38 crc kubenswrapper[4778]: I0312 13:31:38.340290 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8df5a4af2891ad2a0c3dc58f39282cbd926977260f1e62c77d95d627bea99c7d"} err="failed to get container status 
\"8df5a4af2891ad2a0c3dc58f39282cbd926977260f1e62c77d95d627bea99c7d\": rpc error: code = NotFound desc = could not find container \"8df5a4af2891ad2a0c3dc58f39282cbd926977260f1e62c77d95d627bea99c7d\": container with ID starting with 8df5a4af2891ad2a0c3dc58f39282cbd926977260f1e62c77d95d627bea99c7d not found: ID does not exist" Mar 12 13:31:38 crc kubenswrapper[4778]: I0312 13:31:38.340316 4778 scope.go:117] "RemoveContainer" containerID="2c4c2a78bfb0c622d1943285b717a26c265f52cecb8566f14114c1ff4b03e4c9" Mar 12 13:31:38 crc kubenswrapper[4778]: E0312 13:31:38.340575 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c4c2a78bfb0c622d1943285b717a26c265f52cecb8566f14114c1ff4b03e4c9\": container with ID starting with 2c4c2a78bfb0c622d1943285b717a26c265f52cecb8566f14114c1ff4b03e4c9 not found: ID does not exist" containerID="2c4c2a78bfb0c622d1943285b717a26c265f52cecb8566f14114c1ff4b03e4c9" Mar 12 13:31:38 crc kubenswrapper[4778]: I0312 13:31:38.340604 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c4c2a78bfb0c622d1943285b717a26c265f52cecb8566f14114c1ff4b03e4c9"} err="failed to get container status \"2c4c2a78bfb0c622d1943285b717a26c265f52cecb8566f14114c1ff4b03e4c9\": rpc error: code = NotFound desc = could not find container \"2c4c2a78bfb0c622d1943285b717a26c265f52cecb8566f14114c1ff4b03e4c9\": container with ID starting with 2c4c2a78bfb0c622d1943285b717a26c265f52cecb8566f14114c1ff4b03e4c9 not found: ID does not exist" Mar 12 13:31:39 crc kubenswrapper[4778]: I0312 13:31:39.174267 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c01f943c-e09c-4727-8cf7-eec58a56b363-etc-swift\") pod \"swift-storage-0\" (UID: \"c01f943c-e09c-4727-8cf7-eec58a56b363\") " pod="openstack/swift-storage-0" Mar 12 13:31:39 crc kubenswrapper[4778]: E0312 13:31:39.174544 4778 
projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 12 13:31:39 crc kubenswrapper[4778]: E0312 13:31:39.174574 4778 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 12 13:31:39 crc kubenswrapper[4778]: E0312 13:31:39.174646 4778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c01f943c-e09c-4727-8cf7-eec58a56b363-etc-swift podName:c01f943c-e09c-4727-8cf7-eec58a56b363 nodeName:}" failed. No retries permitted until 2026-03-12 13:31:55.174623028 +0000 UTC m=+1333.623318434 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c01f943c-e09c-4727-8cf7-eec58a56b363-etc-swift") pod "swift-storage-0" (UID: "c01f943c-e09c-4727-8cf7-eec58a56b363") : configmap "swift-ring-files" not found Mar 12 13:31:39 crc kubenswrapper[4778]: I0312 13:31:39.868283 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-xg6z4"] Mar 12 13:31:39 crc kubenswrapper[4778]: E0312 13:31:39.868771 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d015b15d-96d2-4b95-9778-8f4175a840a1" containerName="mariadb-account-create-update" Mar 12 13:31:39 crc kubenswrapper[4778]: I0312 13:31:39.868796 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="d015b15d-96d2-4b95-9778-8f4175a840a1" containerName="mariadb-account-create-update" Mar 12 13:31:39 crc kubenswrapper[4778]: E0312 13:31:39.868825 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18cd7d9a-1f17-4797-a94f-4692b1180508" containerName="mariadb-database-create" Mar 12 13:31:39 crc kubenswrapper[4778]: I0312 13:31:39.868833 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="18cd7d9a-1f17-4797-a94f-4692b1180508" containerName="mariadb-database-create" Mar 12 13:31:39 crc kubenswrapper[4778]: E0312 13:31:39.868851 
4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="124fc095-41fd-4e2d-86a1-0aada5c7447f" containerName="init" Mar 12 13:31:39 crc kubenswrapper[4778]: I0312 13:31:39.868860 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="124fc095-41fd-4e2d-86a1-0aada5c7447f" containerName="init" Mar 12 13:31:39 crc kubenswrapper[4778]: E0312 13:31:39.868873 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="280f8bcd-f8e0-451d-8c9c-b733f2b62a23" containerName="mariadb-database-create" Mar 12 13:31:39 crc kubenswrapper[4778]: I0312 13:31:39.868884 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="280f8bcd-f8e0-451d-8c9c-b733f2b62a23" containerName="mariadb-database-create" Mar 12 13:31:39 crc kubenswrapper[4778]: E0312 13:31:39.868897 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b329f80-bb88-4c5c-91eb-24394cdcc492" containerName="mariadb-account-create-update" Mar 12 13:31:39 crc kubenswrapper[4778]: I0312 13:31:39.868905 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b329f80-bb88-4c5c-91eb-24394cdcc492" containerName="mariadb-account-create-update" Mar 12 13:31:39 crc kubenswrapper[4778]: E0312 13:31:39.868921 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="124fc095-41fd-4e2d-86a1-0aada5c7447f" containerName="dnsmasq-dns" Mar 12 13:31:39 crc kubenswrapper[4778]: I0312 13:31:39.868928 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="124fc095-41fd-4e2d-86a1-0aada5c7447f" containerName="dnsmasq-dns" Mar 12 13:31:39 crc kubenswrapper[4778]: E0312 13:31:39.868951 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e76971eb-34f0-4a33-b657-508e01eed5d1" containerName="mariadb-account-create-update" Mar 12 13:31:39 crc kubenswrapper[4778]: I0312 13:31:39.868958 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="e76971eb-34f0-4a33-b657-508e01eed5d1" containerName="mariadb-account-create-update" Mar 12 13:31:39 crc 
kubenswrapper[4778]: E0312 13:31:39.868971 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dba49cb-b897-4877-83f4-72e0d731a1b1" containerName="mariadb-account-create-update" Mar 12 13:31:39 crc kubenswrapper[4778]: I0312 13:31:39.868979 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dba49cb-b897-4877-83f4-72e0d731a1b1" containerName="mariadb-account-create-update" Mar 12 13:31:39 crc kubenswrapper[4778]: I0312 13:31:39.869177 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="e76971eb-34f0-4a33-b657-508e01eed5d1" containerName="mariadb-account-create-update" Mar 12 13:31:39 crc kubenswrapper[4778]: I0312 13:31:39.869233 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="18cd7d9a-1f17-4797-a94f-4692b1180508" containerName="mariadb-database-create" Mar 12 13:31:39 crc kubenswrapper[4778]: I0312 13:31:39.869242 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="d015b15d-96d2-4b95-9778-8f4175a840a1" containerName="mariadb-account-create-update" Mar 12 13:31:39 crc kubenswrapper[4778]: I0312 13:31:39.869255 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="124fc095-41fd-4e2d-86a1-0aada5c7447f" containerName="dnsmasq-dns" Mar 12 13:31:39 crc kubenswrapper[4778]: I0312 13:31:39.869267 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b329f80-bb88-4c5c-91eb-24394cdcc492" containerName="mariadb-account-create-update" Mar 12 13:31:39 crc kubenswrapper[4778]: I0312 13:31:39.869278 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dba49cb-b897-4877-83f4-72e0d731a1b1" containerName="mariadb-account-create-update" Mar 12 13:31:39 crc kubenswrapper[4778]: I0312 13:31:39.869291 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="280f8bcd-f8e0-451d-8c9c-b733f2b62a23" containerName="mariadb-database-create" Mar 12 13:31:39 crc kubenswrapper[4778]: I0312 13:31:39.869859 4778 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/glance-db-sync-xg6z4" Mar 12 13:31:39 crc kubenswrapper[4778]: I0312 13:31:39.873091 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Mar 12 13:31:39 crc kubenswrapper[4778]: I0312 13:31:39.873114 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-l7l5j" Mar 12 13:31:39 crc kubenswrapper[4778]: I0312 13:31:39.901011 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-xg6z4"] Mar 12 13:31:39 crc kubenswrapper[4778]: I0312 13:31:39.988222 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/befeb973-a1de-48f9-8de0-5559f75472dc-config-data\") pod \"glance-db-sync-xg6z4\" (UID: \"befeb973-a1de-48f9-8de0-5559f75472dc\") " pod="openstack/glance-db-sync-xg6z4" Mar 12 13:31:39 crc kubenswrapper[4778]: I0312 13:31:39.988299 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/befeb973-a1de-48f9-8de0-5559f75472dc-db-sync-config-data\") pod \"glance-db-sync-xg6z4\" (UID: \"befeb973-a1de-48f9-8de0-5559f75472dc\") " pod="openstack/glance-db-sync-xg6z4" Mar 12 13:31:39 crc kubenswrapper[4778]: I0312 13:31:39.988362 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/befeb973-a1de-48f9-8de0-5559f75472dc-combined-ca-bundle\") pod \"glance-db-sync-xg6z4\" (UID: \"befeb973-a1de-48f9-8de0-5559f75472dc\") " pod="openstack/glance-db-sync-xg6z4" Mar 12 13:31:39 crc kubenswrapper[4778]: I0312 13:31:39.988420 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crgvn\" (UniqueName: 
\"kubernetes.io/projected/befeb973-a1de-48f9-8de0-5559f75472dc-kube-api-access-crgvn\") pod \"glance-db-sync-xg6z4\" (UID: \"befeb973-a1de-48f9-8de0-5559f75472dc\") " pod="openstack/glance-db-sync-xg6z4" Mar 12 13:31:40 crc kubenswrapper[4778]: I0312 13:31:40.090119 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/befeb973-a1de-48f9-8de0-5559f75472dc-config-data\") pod \"glance-db-sync-xg6z4\" (UID: \"befeb973-a1de-48f9-8de0-5559f75472dc\") " pod="openstack/glance-db-sync-xg6z4" Mar 12 13:31:40 crc kubenswrapper[4778]: I0312 13:31:40.090207 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/befeb973-a1de-48f9-8de0-5559f75472dc-db-sync-config-data\") pod \"glance-db-sync-xg6z4\" (UID: \"befeb973-a1de-48f9-8de0-5559f75472dc\") " pod="openstack/glance-db-sync-xg6z4" Mar 12 13:31:40 crc kubenswrapper[4778]: I0312 13:31:40.090269 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/befeb973-a1de-48f9-8de0-5559f75472dc-combined-ca-bundle\") pod \"glance-db-sync-xg6z4\" (UID: \"befeb973-a1de-48f9-8de0-5559f75472dc\") " pod="openstack/glance-db-sync-xg6z4" Mar 12 13:31:40 crc kubenswrapper[4778]: I0312 13:31:40.090300 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crgvn\" (UniqueName: \"kubernetes.io/projected/befeb973-a1de-48f9-8de0-5559f75472dc-kube-api-access-crgvn\") pod \"glance-db-sync-xg6z4\" (UID: \"befeb973-a1de-48f9-8de0-5559f75472dc\") " pod="openstack/glance-db-sync-xg6z4" Mar 12 13:31:40 crc kubenswrapper[4778]: I0312 13:31:40.095494 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/befeb973-a1de-48f9-8de0-5559f75472dc-db-sync-config-data\") pod \"glance-db-sync-xg6z4\" 
(UID: \"befeb973-a1de-48f9-8de0-5559f75472dc\") " pod="openstack/glance-db-sync-xg6z4" Mar 12 13:31:40 crc kubenswrapper[4778]: I0312 13:31:40.095667 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/befeb973-a1de-48f9-8de0-5559f75472dc-combined-ca-bundle\") pod \"glance-db-sync-xg6z4\" (UID: \"befeb973-a1de-48f9-8de0-5559f75472dc\") " pod="openstack/glance-db-sync-xg6z4" Mar 12 13:31:40 crc kubenswrapper[4778]: I0312 13:31:40.095779 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/befeb973-a1de-48f9-8de0-5559f75472dc-config-data\") pod \"glance-db-sync-xg6z4\" (UID: \"befeb973-a1de-48f9-8de0-5559f75472dc\") " pod="openstack/glance-db-sync-xg6z4" Mar 12 13:31:40 crc kubenswrapper[4778]: I0312 13:31:40.108431 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crgvn\" (UniqueName: \"kubernetes.io/projected/befeb973-a1de-48f9-8de0-5559f75472dc-kube-api-access-crgvn\") pod \"glance-db-sync-xg6z4\" (UID: \"befeb973-a1de-48f9-8de0-5559f75472dc\") " pod="openstack/glance-db-sync-xg6z4" Mar 12 13:31:40 crc kubenswrapper[4778]: I0312 13:31:40.190058 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-xg6z4" Mar 12 13:31:40 crc kubenswrapper[4778]: I0312 13:31:40.270506 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="124fc095-41fd-4e2d-86a1-0aada5c7447f" path="/var/lib/kubelet/pods/124fc095-41fd-4e2d-86a1-0aada5c7447f/volumes" Mar 12 13:31:40 crc kubenswrapper[4778]: I0312 13:31:40.715366 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-xg6z4"] Mar 12 13:31:40 crc kubenswrapper[4778]: W0312 13:31:40.736645 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbefeb973_a1de_48f9_8de0_5559f75472dc.slice/crio-f4635ea2bc5d2d0cce58645ef33f0143795167ef564ca2829fbc3740cec61b52 WatchSource:0}: Error finding container f4635ea2bc5d2d0cce58645ef33f0143795167ef564ca2829fbc3740cec61b52: Status 404 returned error can't find the container with id f4635ea2bc5d2d0cce58645ef33f0143795167ef564ca2829fbc3740cec61b52 Mar 12 13:31:41 crc kubenswrapper[4778]: I0312 13:31:41.328831 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-xg6z4" event={"ID":"befeb973-a1de-48f9-8de0-5559f75472dc","Type":"ContainerStarted","Data":"f4635ea2bc5d2d0cce58645ef33f0143795167ef564ca2829fbc3740cec61b52"} Mar 12 13:31:41 crc kubenswrapper[4778]: I0312 13:31:41.495174 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Mar 12 13:31:42 crc kubenswrapper[4778]: I0312 13:31:42.344708 4778 generic.go:334] "Generic (PLEG): container finished" podID="2edc2c90-f91e-402d-809c-514e9d8a5e04" containerID="112c5296361c82469b890fc71a2c6b309a06a72b7d67b5062a1ead56745507c2" exitCode=0 Mar 12 13:31:42 crc kubenswrapper[4778]: I0312 13:31:42.344763 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-5knbg" 
event={"ID":"2edc2c90-f91e-402d-809c-514e9d8a5e04","Type":"ContainerDied","Data":"112c5296361c82469b890fc71a2c6b309a06a72b7d67b5062a1ead56745507c2"} Mar 12 13:31:42 crc kubenswrapper[4778]: I0312 13:31:42.773307 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-7kt6z"] Mar 12 13:31:42 crc kubenswrapper[4778]: I0312 13:31:42.774736 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-7kt6z" Mar 12 13:31:42 crc kubenswrapper[4778]: I0312 13:31:42.776620 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Mar 12 13:31:42 crc kubenswrapper[4778]: I0312 13:31:42.779166 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-7kt6z"] Mar 12 13:31:42 crc kubenswrapper[4778]: I0312 13:31:42.843939 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf8gm\" (UniqueName: \"kubernetes.io/projected/dd5a0cd9-113c-4313-8d66-90487bd90cd3-kube-api-access-wf8gm\") pod \"root-account-create-update-7kt6z\" (UID: \"dd5a0cd9-113c-4313-8d66-90487bd90cd3\") " pod="openstack/root-account-create-update-7kt6z" Mar 12 13:31:42 crc kubenswrapper[4778]: I0312 13:31:42.844064 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd5a0cd9-113c-4313-8d66-90487bd90cd3-operator-scripts\") pod \"root-account-create-update-7kt6z\" (UID: \"dd5a0cd9-113c-4313-8d66-90487bd90cd3\") " pod="openstack/root-account-create-update-7kt6z" Mar 12 13:31:42 crc kubenswrapper[4778]: I0312 13:31:42.946104 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd5a0cd9-113c-4313-8d66-90487bd90cd3-operator-scripts\") pod 
\"root-account-create-update-7kt6z\" (UID: \"dd5a0cd9-113c-4313-8d66-90487bd90cd3\") " pod="openstack/root-account-create-update-7kt6z" Mar 12 13:31:42 crc kubenswrapper[4778]: I0312 13:31:42.946244 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf8gm\" (UniqueName: \"kubernetes.io/projected/dd5a0cd9-113c-4313-8d66-90487bd90cd3-kube-api-access-wf8gm\") pod \"root-account-create-update-7kt6z\" (UID: \"dd5a0cd9-113c-4313-8d66-90487bd90cd3\") " pod="openstack/root-account-create-update-7kt6z" Mar 12 13:31:42 crc kubenswrapper[4778]: I0312 13:31:42.946936 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd5a0cd9-113c-4313-8d66-90487bd90cd3-operator-scripts\") pod \"root-account-create-update-7kt6z\" (UID: \"dd5a0cd9-113c-4313-8d66-90487bd90cd3\") " pod="openstack/root-account-create-update-7kt6z" Mar 12 13:31:42 crc kubenswrapper[4778]: I0312 13:31:42.964076 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf8gm\" (UniqueName: \"kubernetes.io/projected/dd5a0cd9-113c-4313-8d66-90487bd90cd3-kube-api-access-wf8gm\") pod \"root-account-create-update-7kt6z\" (UID: \"dd5a0cd9-113c-4313-8d66-90487bd90cd3\") " pod="openstack/root-account-create-update-7kt6z" Mar 12 13:31:43 crc kubenswrapper[4778]: I0312 13:31:43.091962 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-7kt6z" Mar 12 13:31:43 crc kubenswrapper[4778]: I0312 13:31:43.539619 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-7kt6z"] Mar 12 13:31:43 crc kubenswrapper[4778]: W0312 13:31:43.555205 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd5a0cd9_113c_4313_8d66_90487bd90cd3.slice/crio-052938a160ec540bb5a700bb99d2b7290aaa6a277baedbccdd9c5a8f16111da3 WatchSource:0}: Error finding container 052938a160ec540bb5a700bb99d2b7290aaa6a277baedbccdd9c5a8f16111da3: Status 404 returned error can't find the container with id 052938a160ec540bb5a700bb99d2b7290aaa6a277baedbccdd9c5a8f16111da3 Mar 12 13:31:43 crc kubenswrapper[4778]: I0312 13:31:43.668410 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-5knbg" Mar 12 13:31:43 crc kubenswrapper[4778]: I0312 13:31:43.760960 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2edc2c90-f91e-402d-809c-514e9d8a5e04-ring-data-devices\") pod \"2edc2c90-f91e-402d-809c-514e9d8a5e04\" (UID: \"2edc2c90-f91e-402d-809c-514e9d8a5e04\") " Mar 12 13:31:43 crc kubenswrapper[4778]: I0312 13:31:43.761537 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2edc2c90-f91e-402d-809c-514e9d8a5e04-dispersionconf\") pod \"2edc2c90-f91e-402d-809c-514e9d8a5e04\" (UID: \"2edc2c90-f91e-402d-809c-514e9d8a5e04\") " Mar 12 13:31:43 crc kubenswrapper[4778]: I0312 13:31:43.761625 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2edc2c90-f91e-402d-809c-514e9d8a5e04-scripts\") pod \"2edc2c90-f91e-402d-809c-514e9d8a5e04\" (UID: 
\"2edc2c90-f91e-402d-809c-514e9d8a5e04\") " Mar 12 13:31:43 crc kubenswrapper[4778]: I0312 13:31:43.761699 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlzct\" (UniqueName: \"kubernetes.io/projected/2edc2c90-f91e-402d-809c-514e9d8a5e04-kube-api-access-jlzct\") pod \"2edc2c90-f91e-402d-809c-514e9d8a5e04\" (UID: \"2edc2c90-f91e-402d-809c-514e9d8a5e04\") " Mar 12 13:31:43 crc kubenswrapper[4778]: I0312 13:31:43.761718 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2edc2c90-f91e-402d-809c-514e9d8a5e04-combined-ca-bundle\") pod \"2edc2c90-f91e-402d-809c-514e9d8a5e04\" (UID: \"2edc2c90-f91e-402d-809c-514e9d8a5e04\") " Mar 12 13:31:43 crc kubenswrapper[4778]: I0312 13:31:43.761773 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2edc2c90-f91e-402d-809c-514e9d8a5e04-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "2edc2c90-f91e-402d-809c-514e9d8a5e04" (UID: "2edc2c90-f91e-402d-809c-514e9d8a5e04"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:31:43 crc kubenswrapper[4778]: I0312 13:31:43.761784 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2edc2c90-f91e-402d-809c-514e9d8a5e04-etc-swift\") pod \"2edc2c90-f91e-402d-809c-514e9d8a5e04\" (UID: \"2edc2c90-f91e-402d-809c-514e9d8a5e04\") " Mar 12 13:31:43 crc kubenswrapper[4778]: I0312 13:31:43.761868 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2edc2c90-f91e-402d-809c-514e9d8a5e04-swiftconf\") pod \"2edc2c90-f91e-402d-809c-514e9d8a5e04\" (UID: \"2edc2c90-f91e-402d-809c-514e9d8a5e04\") " Mar 12 13:31:43 crc kubenswrapper[4778]: I0312 13:31:43.762475 4778 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2edc2c90-f91e-402d-809c-514e9d8a5e04-ring-data-devices\") on node \"crc\" DevicePath \"\"" Mar 12 13:31:43 crc kubenswrapper[4778]: I0312 13:31:43.763172 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2edc2c90-f91e-402d-809c-514e9d8a5e04-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "2edc2c90-f91e-402d-809c-514e9d8a5e04" (UID: "2edc2c90-f91e-402d-809c-514e9d8a5e04"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:31:43 crc kubenswrapper[4778]: I0312 13:31:43.770065 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2edc2c90-f91e-402d-809c-514e9d8a5e04-kube-api-access-jlzct" (OuterVolumeSpecName: "kube-api-access-jlzct") pod "2edc2c90-f91e-402d-809c-514e9d8a5e04" (UID: "2edc2c90-f91e-402d-809c-514e9d8a5e04"). InnerVolumeSpecName "kube-api-access-jlzct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:31:43 crc kubenswrapper[4778]: I0312 13:31:43.777619 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2edc2c90-f91e-402d-809c-514e9d8a5e04-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "2edc2c90-f91e-402d-809c-514e9d8a5e04" (UID: "2edc2c90-f91e-402d-809c-514e9d8a5e04"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:31:43 crc kubenswrapper[4778]: I0312 13:31:43.792688 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2edc2c90-f91e-402d-809c-514e9d8a5e04-scripts" (OuterVolumeSpecName: "scripts") pod "2edc2c90-f91e-402d-809c-514e9d8a5e04" (UID: "2edc2c90-f91e-402d-809c-514e9d8a5e04"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:31:43 crc kubenswrapper[4778]: I0312 13:31:43.800018 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2edc2c90-f91e-402d-809c-514e9d8a5e04-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "2edc2c90-f91e-402d-809c-514e9d8a5e04" (UID: "2edc2c90-f91e-402d-809c-514e9d8a5e04"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:31:43 crc kubenswrapper[4778]: I0312 13:31:43.800779 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2edc2c90-f91e-402d-809c-514e9d8a5e04-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2edc2c90-f91e-402d-809c-514e9d8a5e04" (UID: "2edc2c90-f91e-402d-809c-514e9d8a5e04"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:31:43 crc kubenswrapper[4778]: I0312 13:31:43.864608 4778 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2edc2c90-f91e-402d-809c-514e9d8a5e04-swiftconf\") on node \"crc\" DevicePath \"\"" Mar 12 13:31:43 crc kubenswrapper[4778]: I0312 13:31:43.864655 4778 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2edc2c90-f91e-402d-809c-514e9d8a5e04-dispersionconf\") on node \"crc\" DevicePath \"\"" Mar 12 13:31:43 crc kubenswrapper[4778]: I0312 13:31:43.864672 4778 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2edc2c90-f91e-402d-809c-514e9d8a5e04-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 13:31:43 crc kubenswrapper[4778]: I0312 13:31:43.864755 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlzct\" (UniqueName: \"kubernetes.io/projected/2edc2c90-f91e-402d-809c-514e9d8a5e04-kube-api-access-jlzct\") on node \"crc\" DevicePath \"\"" Mar 12 13:31:43 crc kubenswrapper[4778]: I0312 13:31:43.864771 4778 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2edc2c90-f91e-402d-809c-514e9d8a5e04-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:31:43 crc kubenswrapper[4778]: I0312 13:31:43.864785 4778 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2edc2c90-f91e-402d-809c-514e9d8a5e04-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 12 13:31:44 crc kubenswrapper[4778]: I0312 13:31:44.377362 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-5knbg" event={"ID":"2edc2c90-f91e-402d-809c-514e9d8a5e04","Type":"ContainerDied","Data":"36959ba975d8524cefb7a8390e0e734aa9fdb408ce3b4f7319c4d627f50986f6"} Mar 12 13:31:44 crc kubenswrapper[4778]: I0312 
13:31:44.377434 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36959ba975d8524cefb7a8390e0e734aa9fdb408ce3b4f7319c4d627f50986f6" Mar 12 13:31:44 crc kubenswrapper[4778]: I0312 13:31:44.377531 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-5knbg" Mar 12 13:31:44 crc kubenswrapper[4778]: I0312 13:31:44.394137 4778 generic.go:334] "Generic (PLEG): container finished" podID="dd5a0cd9-113c-4313-8d66-90487bd90cd3" containerID="af7a0409b1470d33d558b70c98a397f0b5c99782ac9578ab1f379f9cb685947f" exitCode=0 Mar 12 13:31:44 crc kubenswrapper[4778]: I0312 13:31:44.395153 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-7kt6z" event={"ID":"dd5a0cd9-113c-4313-8d66-90487bd90cd3","Type":"ContainerDied","Data":"af7a0409b1470d33d558b70c98a397f0b5c99782ac9578ab1f379f9cb685947f"} Mar 12 13:31:44 crc kubenswrapper[4778]: I0312 13:31:44.399814 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-7kt6z" event={"ID":"dd5a0cd9-113c-4313-8d66-90487bd90cd3","Type":"ContainerStarted","Data":"052938a160ec540bb5a700bb99d2b7290aaa6a277baedbccdd9c5a8f16111da3"} Mar 12 13:31:44 crc kubenswrapper[4778]: I0312 13:31:44.797868 4778 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-4wct6" podUID="3b8efd1e-884d-4963-b69f-04ede0a92267" containerName="ovn-controller" probeResult="failure" output=< Mar 12 13:31:44 crc kubenswrapper[4778]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Mar 12 13:31:44 crc kubenswrapper[4778]: > Mar 12 13:31:45 crc kubenswrapper[4778]: I0312 13:31:45.749836 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-7kt6z" Mar 12 13:31:45 crc kubenswrapper[4778]: I0312 13:31:45.796872 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wf8gm\" (UniqueName: \"kubernetes.io/projected/dd5a0cd9-113c-4313-8d66-90487bd90cd3-kube-api-access-wf8gm\") pod \"dd5a0cd9-113c-4313-8d66-90487bd90cd3\" (UID: \"dd5a0cd9-113c-4313-8d66-90487bd90cd3\") " Mar 12 13:31:45 crc kubenswrapper[4778]: I0312 13:31:45.796947 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd5a0cd9-113c-4313-8d66-90487bd90cd3-operator-scripts\") pod \"dd5a0cd9-113c-4313-8d66-90487bd90cd3\" (UID: \"dd5a0cd9-113c-4313-8d66-90487bd90cd3\") " Mar 12 13:31:45 crc kubenswrapper[4778]: I0312 13:31:45.797792 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd5a0cd9-113c-4313-8d66-90487bd90cd3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dd5a0cd9-113c-4313-8d66-90487bd90cd3" (UID: "dd5a0cd9-113c-4313-8d66-90487bd90cd3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:31:45 crc kubenswrapper[4778]: I0312 13:31:45.802010 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd5a0cd9-113c-4313-8d66-90487bd90cd3-kube-api-access-wf8gm" (OuterVolumeSpecName: "kube-api-access-wf8gm") pod "dd5a0cd9-113c-4313-8d66-90487bd90cd3" (UID: "dd5a0cd9-113c-4313-8d66-90487bd90cd3"). InnerVolumeSpecName "kube-api-access-wf8gm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:31:45 crc kubenswrapper[4778]: I0312 13:31:45.899303 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wf8gm\" (UniqueName: \"kubernetes.io/projected/dd5a0cd9-113c-4313-8d66-90487bd90cd3-kube-api-access-wf8gm\") on node \"crc\" DevicePath \"\"" Mar 12 13:31:45 crc kubenswrapper[4778]: I0312 13:31:45.899337 4778 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd5a0cd9-113c-4313-8d66-90487bd90cd3-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 13:31:46 crc kubenswrapper[4778]: I0312 13:31:46.417332 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-7kt6z" event={"ID":"dd5a0cd9-113c-4313-8d66-90487bd90cd3","Type":"ContainerDied","Data":"052938a160ec540bb5a700bb99d2b7290aaa6a277baedbccdd9c5a8f16111da3"} Mar 12 13:31:46 crc kubenswrapper[4778]: I0312 13:31:46.417738 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="052938a160ec540bb5a700bb99d2b7290aaa6a277baedbccdd9c5a8f16111da3" Mar 12 13:31:46 crc kubenswrapper[4778]: I0312 13:31:46.417405 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-7kt6z" Mar 12 13:31:49 crc kubenswrapper[4778]: I0312 13:31:49.444764 4778 generic.go:334] "Generic (PLEG): container finished" podID="1e89dfcc-2ac3-444c-91e8-56991eae096b" containerID="491cf83ea2b0803c619e4110e5a18dd9c9b6e2cc2bfd596357f59a6a18312dee" exitCode=0 Mar 12 13:31:49 crc kubenswrapper[4778]: I0312 13:31:49.444863 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1e89dfcc-2ac3-444c-91e8-56991eae096b","Type":"ContainerDied","Data":"491cf83ea2b0803c619e4110e5a18dd9c9b6e2cc2bfd596357f59a6a18312dee"} Mar 12 13:31:49 crc kubenswrapper[4778]: I0312 13:31:49.792250 4778 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-4wct6" podUID="3b8efd1e-884d-4963-b69f-04ede0a92267" containerName="ovn-controller" probeResult="failure" output=< Mar 12 13:31:49 crc kubenswrapper[4778]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Mar 12 13:31:49 crc kubenswrapper[4778]: > Mar 12 13:31:49 crc kubenswrapper[4778]: I0312 13:31:49.813512 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-p67vh" Mar 12 13:31:49 crc kubenswrapper[4778]: I0312 13:31:49.815170 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-p67vh" Mar 12 13:31:50 crc kubenswrapper[4778]: I0312 13:31:50.069720 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-4wct6-config-6fwv4"] Mar 12 13:31:50 crc kubenswrapper[4778]: E0312 13:31:50.070441 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2edc2c90-f91e-402d-809c-514e9d8a5e04" containerName="swift-ring-rebalance" Mar 12 13:31:50 crc kubenswrapper[4778]: I0312 13:31:50.070471 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="2edc2c90-f91e-402d-809c-514e9d8a5e04" containerName="swift-ring-rebalance" 
Mar 12 13:31:50 crc kubenswrapper[4778]: E0312 13:31:50.070485 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd5a0cd9-113c-4313-8d66-90487bd90cd3" containerName="mariadb-account-create-update"
Mar 12 13:31:50 crc kubenswrapper[4778]: I0312 13:31:50.070495 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd5a0cd9-113c-4313-8d66-90487bd90cd3" containerName="mariadb-account-create-update"
Mar 12 13:31:50 crc kubenswrapper[4778]: I0312 13:31:50.070774 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="2edc2c90-f91e-402d-809c-514e9d8a5e04" containerName="swift-ring-rebalance"
Mar 12 13:31:50 crc kubenswrapper[4778]: I0312 13:31:50.070821 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd5a0cd9-113c-4313-8d66-90487bd90cd3" containerName="mariadb-account-create-update"
Mar 12 13:31:50 crc kubenswrapper[4778]: I0312 13:31:50.071962 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4wct6-config-6fwv4"
Mar 12 13:31:50 crc kubenswrapper[4778]: I0312 13:31:50.074874 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Mar 12 13:31:50 crc kubenswrapper[4778]: I0312 13:31:50.078394 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-4wct6-config-6fwv4"]
Mar 12 13:31:50 crc kubenswrapper[4778]: I0312 13:31:50.186682 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/20836760-c025-4e65-bf24-34fc17f3c649-additional-scripts\") pod \"ovn-controller-4wct6-config-6fwv4\" (UID: \"20836760-c025-4e65-bf24-34fc17f3c649\") " pod="openstack/ovn-controller-4wct6-config-6fwv4"
Mar 12 13:31:50 crc kubenswrapper[4778]: I0312 13:31:50.186737 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/20836760-c025-4e65-bf24-34fc17f3c649-var-log-ovn\") pod \"ovn-controller-4wct6-config-6fwv4\" (UID: \"20836760-c025-4e65-bf24-34fc17f3c649\") " pod="openstack/ovn-controller-4wct6-config-6fwv4"
Mar 12 13:31:50 crc kubenswrapper[4778]: I0312 13:31:50.186764 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/20836760-c025-4e65-bf24-34fc17f3c649-var-run-ovn\") pod \"ovn-controller-4wct6-config-6fwv4\" (UID: \"20836760-c025-4e65-bf24-34fc17f3c649\") " pod="openstack/ovn-controller-4wct6-config-6fwv4"
Mar 12 13:31:50 crc kubenswrapper[4778]: I0312 13:31:50.186861 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/20836760-c025-4e65-bf24-34fc17f3c649-var-run\") pod \"ovn-controller-4wct6-config-6fwv4\" (UID: \"20836760-c025-4e65-bf24-34fc17f3c649\") " pod="openstack/ovn-controller-4wct6-config-6fwv4"
Mar 12 13:31:50 crc kubenswrapper[4778]: I0312 13:31:50.186899 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t2nc\" (UniqueName: \"kubernetes.io/projected/20836760-c025-4e65-bf24-34fc17f3c649-kube-api-access-4t2nc\") pod \"ovn-controller-4wct6-config-6fwv4\" (UID: \"20836760-c025-4e65-bf24-34fc17f3c649\") " pod="openstack/ovn-controller-4wct6-config-6fwv4"
Mar 12 13:31:50 crc kubenswrapper[4778]: I0312 13:31:50.187030 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/20836760-c025-4e65-bf24-34fc17f3c649-scripts\") pod \"ovn-controller-4wct6-config-6fwv4\" (UID: \"20836760-c025-4e65-bf24-34fc17f3c649\") " pod="openstack/ovn-controller-4wct6-config-6fwv4"
Mar 12 13:31:50 crc kubenswrapper[4778]: I0312 13:31:50.289036 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/20836760-c025-4e65-bf24-34fc17f3c649-scripts\") pod \"ovn-controller-4wct6-config-6fwv4\" (UID: \"20836760-c025-4e65-bf24-34fc17f3c649\") " pod="openstack/ovn-controller-4wct6-config-6fwv4"
Mar 12 13:31:50 crc kubenswrapper[4778]: I0312 13:31:50.289109 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/20836760-c025-4e65-bf24-34fc17f3c649-additional-scripts\") pod \"ovn-controller-4wct6-config-6fwv4\" (UID: \"20836760-c025-4e65-bf24-34fc17f3c649\") " pod="openstack/ovn-controller-4wct6-config-6fwv4"
Mar 12 13:31:50 crc kubenswrapper[4778]: I0312 13:31:50.289138 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/20836760-c025-4e65-bf24-34fc17f3c649-var-log-ovn\") pod \"ovn-controller-4wct6-config-6fwv4\" (UID: \"20836760-c025-4e65-bf24-34fc17f3c649\") " pod="openstack/ovn-controller-4wct6-config-6fwv4"
Mar 12 13:31:50 crc kubenswrapper[4778]: I0312 13:31:50.289153 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/20836760-c025-4e65-bf24-34fc17f3c649-var-run-ovn\") pod \"ovn-controller-4wct6-config-6fwv4\" (UID: \"20836760-c025-4e65-bf24-34fc17f3c649\") " pod="openstack/ovn-controller-4wct6-config-6fwv4"
Mar 12 13:31:50 crc kubenswrapper[4778]: I0312 13:31:50.289196 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/20836760-c025-4e65-bf24-34fc17f3c649-var-run\") pod \"ovn-controller-4wct6-config-6fwv4\" (UID: \"20836760-c025-4e65-bf24-34fc17f3c649\") " pod="openstack/ovn-controller-4wct6-config-6fwv4"
Mar 12 13:31:50 crc kubenswrapper[4778]: I0312 13:31:50.289249 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4t2nc\" (UniqueName: \"kubernetes.io/projected/20836760-c025-4e65-bf24-34fc17f3c649-kube-api-access-4t2nc\") pod \"ovn-controller-4wct6-config-6fwv4\" (UID: \"20836760-c025-4e65-bf24-34fc17f3c649\") " pod="openstack/ovn-controller-4wct6-config-6fwv4"
Mar 12 13:31:50 crc kubenswrapper[4778]: I0312 13:31:50.289673 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/20836760-c025-4e65-bf24-34fc17f3c649-var-run-ovn\") pod \"ovn-controller-4wct6-config-6fwv4\" (UID: \"20836760-c025-4e65-bf24-34fc17f3c649\") " pod="openstack/ovn-controller-4wct6-config-6fwv4"
Mar 12 13:31:50 crc kubenswrapper[4778]: I0312 13:31:50.289691 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/20836760-c025-4e65-bf24-34fc17f3c649-var-log-ovn\") pod \"ovn-controller-4wct6-config-6fwv4\" (UID: \"20836760-c025-4e65-bf24-34fc17f3c649\") " pod="openstack/ovn-controller-4wct6-config-6fwv4"
Mar 12 13:31:50 crc kubenswrapper[4778]: I0312 13:31:50.290396 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/20836760-c025-4e65-bf24-34fc17f3c649-additional-scripts\") pod \"ovn-controller-4wct6-config-6fwv4\" (UID: \"20836760-c025-4e65-bf24-34fc17f3c649\") " pod="openstack/ovn-controller-4wct6-config-6fwv4"
Mar 12 13:31:50 crc kubenswrapper[4778]: I0312 13:31:50.290688 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/20836760-c025-4e65-bf24-34fc17f3c649-var-run\") pod \"ovn-controller-4wct6-config-6fwv4\" (UID: \"20836760-c025-4e65-bf24-34fc17f3c649\") " pod="openstack/ovn-controller-4wct6-config-6fwv4"
Mar 12 13:31:50 crc kubenswrapper[4778]: I0312 13:31:50.293927 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/20836760-c025-4e65-bf24-34fc17f3c649-scripts\") pod \"ovn-controller-4wct6-config-6fwv4\" (UID: \"20836760-c025-4e65-bf24-34fc17f3c649\") " pod="openstack/ovn-controller-4wct6-config-6fwv4"
Mar 12 13:31:50 crc kubenswrapper[4778]: I0312 13:31:50.335017 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4t2nc\" (UniqueName: \"kubernetes.io/projected/20836760-c025-4e65-bf24-34fc17f3c649-kube-api-access-4t2nc\") pod \"ovn-controller-4wct6-config-6fwv4\" (UID: \"20836760-c025-4e65-bf24-34fc17f3c649\") " pod="openstack/ovn-controller-4wct6-config-6fwv4"
Mar 12 13:31:50 crc kubenswrapper[4778]: I0312 13:31:50.391555 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4wct6-config-6fwv4"
Mar 12 13:31:52 crc kubenswrapper[4778]: I0312 13:31:52.483169 4778 generic.go:334] "Generic (PLEG): container finished" podID="629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03" containerID="4fe9b07cd1599e91138683ca30e9da84b4bd93250ce15e01fd43967606252649" exitCode=0
Mar 12 13:31:52 crc kubenswrapper[4778]: I0312 13:31:52.483263 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03","Type":"ContainerDied","Data":"4fe9b07cd1599e91138683ca30e9da84b4bd93250ce15e01fd43967606252649"}
Mar 12 13:31:54 crc kubenswrapper[4778]: I0312 13:31:54.791013 4778 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-4wct6" podUID="3b8efd1e-884d-4963-b69f-04ede0a92267" containerName="ovn-controller" probeResult="failure" output=<
Mar 12 13:31:54 crc kubenswrapper[4778]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Mar 12 13:31:54 crc kubenswrapper[4778]: >
Mar 12 13:31:55 crc kubenswrapper[4778]: I0312 13:31:55.180819 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c01f943c-e09c-4727-8cf7-eec58a56b363-etc-swift\") pod \"swift-storage-0\" (UID: \"c01f943c-e09c-4727-8cf7-eec58a56b363\") " pod="openstack/swift-storage-0"
Mar 12 13:31:55 crc kubenswrapper[4778]: I0312 13:31:55.192882 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c01f943c-e09c-4727-8cf7-eec58a56b363-etc-swift\") pod \"swift-storage-0\" (UID: \"c01f943c-e09c-4727-8cf7-eec58a56b363\") " pod="openstack/swift-storage-0"
Mar 12 13:31:55 crc kubenswrapper[4778]: I0312 13:31:55.268078 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Mar 12 13:31:56 crc kubenswrapper[4778]: E0312 13:31:56.599133 4778 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified"
Mar 12 13:31:56 crc kubenswrapper[4778]: E0312 13:31:56.599649 4778 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-crgvn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-xg6z4_openstack(befeb973-a1de-48f9-8de0-5559f75472dc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Mar 12 13:31:56 crc kubenswrapper[4778]: E0312 13:31:56.600968 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-xg6z4" podUID="befeb973-a1de-48f9-8de0-5559f75472dc"
Mar 12 13:31:57 crc kubenswrapper[4778]: W0312 13:31:57.134452 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20836760_c025_4e65_bf24_34fc17f3c649.slice/crio-17a996fa52ce91166da093384fbf949661bd1de6ff1a9aade6af20fcb9e834ba WatchSource:0}: Error finding container 17a996fa52ce91166da093384fbf949661bd1de6ff1a9aade6af20fcb9e834ba: Status 404 returned error can't find the container with id 17a996fa52ce91166da093384fbf949661bd1de6ff1a9aade6af20fcb9e834ba
Mar 12 13:31:57 crc kubenswrapper[4778]: I0312 13:31:57.147654 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-4wct6-config-6fwv4"]
Mar 12 13:31:57 crc kubenswrapper[4778]: I0312 13:31:57.161441 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Mar 12 13:31:57 crc kubenswrapper[4778]: W0312 13:31:57.163659 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc01f943c_e09c_4727_8cf7_eec58a56b363.slice/crio-0697d96e46b4a903431abbb68854c041327e624887e1614c134f5e36c81aaabd WatchSource:0}: Error finding container 0697d96e46b4a903431abbb68854c041327e624887e1614c134f5e36c81aaabd: Status 404 returned error can't find the container with id 0697d96e46b4a903431abbb68854c041327e624887e1614c134f5e36c81aaabd
Mar 12 13:31:57 crc kubenswrapper[4778]: I0312 13:31:57.529313 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03","Type":"ContainerStarted","Data":"69b48fd30717bc2fb32adf7bd553ace0508be9fb5806d39e1e63b2dff302e279"}
Mar 12 13:31:57 crc kubenswrapper[4778]: I0312 13:31:57.529674 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Mar 12 13:31:57 crc kubenswrapper[4778]: I0312 13:31:57.531325 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1e89dfcc-2ac3-444c-91e8-56991eae096b","Type":"ContainerStarted","Data":"94375f4a2a4703567f32833cc33058ee531a8b1219141eaf7fa5f176bf09075b"}
Mar 12 13:31:57 crc kubenswrapper[4778]: I0312 13:31:57.531523 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Mar 12 13:31:57 crc kubenswrapper[4778]: I0312 13:31:57.532818 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c01f943c-e09c-4727-8cf7-eec58a56b363","Type":"ContainerStarted","Data":"0697d96e46b4a903431abbb68854c041327e624887e1614c134f5e36c81aaabd"}
Mar 12 13:31:57 crc kubenswrapper[4778]: I0312 13:31:57.534598 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4wct6-config-6fwv4" event={"ID":"20836760-c025-4e65-bf24-34fc17f3c649","Type":"ContainerStarted","Data":"f28e6c324c83dfc76a63ecc641dd7e634a485b3faa88e8b16a2e55fc0961b3a8"}
Mar 12 13:31:57 crc kubenswrapper[4778]: I0312 13:31:57.534634 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4wct6-config-6fwv4" event={"ID":"20836760-c025-4e65-bf24-34fc17f3c649","Type":"ContainerStarted","Data":"17a996fa52ce91166da093384fbf949661bd1de6ff1a9aade6af20fcb9e834ba"}
Mar 12 13:31:57 crc kubenswrapper[4778]: E0312 13:31:57.535861 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-xg6z4" podUID="befeb973-a1de-48f9-8de0-5559f75472dc"
Mar 12 13:31:57 crc kubenswrapper[4778]: I0312 13:31:57.559028 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371963.295773 podStartE2EDuration="1m13.559002997s" podCreationTimestamp="2026-03-12 13:30:44 +0000 UTC" firstStartedPulling="2026-03-12 13:30:46.696662484 +0000 UTC m=+1265.145357870" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:31:57.55595695 +0000 UTC m=+1336.004652366" watchObservedRunningTime="2026-03-12 13:31:57.559002997 +0000 UTC m=+1336.007698393"
Mar 12 13:31:57 crc kubenswrapper[4778]: I0312 13:31:57.582036 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-4wct6-config-6fwv4" podStartSLOduration=7.582017931 podStartE2EDuration="7.582017931s" podCreationTimestamp="2026-03-12 13:31:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:31:57.574531418 +0000 UTC m=+1336.023226814" watchObservedRunningTime="2026-03-12 13:31:57.582017931 +0000 UTC m=+1336.030713327"
Mar 12 13:31:57 crc kubenswrapper[4778]: I0312 13:31:57.635239 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=45.083803531 podStartE2EDuration="1m13.635220294s" podCreationTimestamp="2026-03-12 13:30:44 +0000 UTC" firstStartedPulling="2026-03-12 13:30:47.191925946 +0000 UTC m=+1265.640621342" lastFinishedPulling="2026-03-12 13:31:15.743342699 +0000 UTC m=+1294.192038105" observedRunningTime="2026-03-12 13:31:57.626612959 +0000 UTC m=+1336.075308355" watchObservedRunningTime="2026-03-12 13:31:57.635220294 +0000 UTC m=+1336.083915690"
Mar 12 13:31:58 crc kubenswrapper[4778]: I0312 13:31:58.548814 4778 generic.go:334] "Generic (PLEG): container finished" podID="20836760-c025-4e65-bf24-34fc17f3c649" containerID="f28e6c324c83dfc76a63ecc641dd7e634a485b3faa88e8b16a2e55fc0961b3a8" exitCode=0
Mar 12 13:31:58 crc kubenswrapper[4778]: I0312 13:31:58.549111 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4wct6-config-6fwv4" event={"ID":"20836760-c025-4e65-bf24-34fc17f3c649","Type":"ContainerDied","Data":"f28e6c324c83dfc76a63ecc641dd7e634a485b3faa88e8b16a2e55fc0961b3a8"}
Mar 12 13:31:59 crc kubenswrapper[4778]: I0312 13:31:59.558304 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c01f943c-e09c-4727-8cf7-eec58a56b363","Type":"ContainerStarted","Data":"e51fda036eef5819395ec6d3fd22dab55d18db138a7f80e940b67e8bd1c9c47e"}
Mar 12 13:31:59 crc kubenswrapper[4778]: I0312 13:31:59.558347 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c01f943c-e09c-4727-8cf7-eec58a56b363","Type":"ContainerStarted","Data":"351be818ddd688aa88eb9700af5b628218d060ad816d893916ac63f3c5333bda"}
Mar 12 13:31:59 crc kubenswrapper[4778]: I0312 13:31:59.558359 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c01f943c-e09c-4727-8cf7-eec58a56b363","Type":"ContainerStarted","Data":"4599c0849140ef045537c5194ed1a3292aa5e8691910d4d423b2a3c0ad5ec9eb"}
Mar 12 13:31:59 crc kubenswrapper[4778]: I0312 13:31:59.558369 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c01f943c-e09c-4727-8cf7-eec58a56b363","Type":"ContainerStarted","Data":"b32340b64ad970ce89b19270074c18e85d0cbf22f3d3f70d70250fdd561c2684"}
Mar 12 13:31:59 crc kubenswrapper[4778]: I0312 13:31:59.830940 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-4wct6"
Mar 12 13:31:59 crc kubenswrapper[4778]: I0312 13:31:59.913851 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4wct6-config-6fwv4"
Mar 12 13:32:00 crc kubenswrapper[4778]: I0312 13:32:00.084595 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4t2nc\" (UniqueName: \"kubernetes.io/projected/20836760-c025-4e65-bf24-34fc17f3c649-kube-api-access-4t2nc\") pod \"20836760-c025-4e65-bf24-34fc17f3c649\" (UID: \"20836760-c025-4e65-bf24-34fc17f3c649\") "
Mar 12 13:32:00 crc kubenswrapper[4778]: I0312 13:32:00.084754 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/20836760-c025-4e65-bf24-34fc17f3c649-var-log-ovn\") pod \"20836760-c025-4e65-bf24-34fc17f3c649\" (UID: \"20836760-c025-4e65-bf24-34fc17f3c649\") "
Mar 12 13:32:00 crc kubenswrapper[4778]: I0312 13:32:00.084843 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/20836760-c025-4e65-bf24-34fc17f3c649-scripts\") pod \"20836760-c025-4e65-bf24-34fc17f3c649\" (UID: \"20836760-c025-4e65-bf24-34fc17f3c649\") "
Mar 12 13:32:00 crc kubenswrapper[4778]: I0312 13:32:00.084875 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/20836760-c025-4e65-bf24-34fc17f3c649-additional-scripts\") pod \"20836760-c025-4e65-bf24-34fc17f3c649\" (UID: \"20836760-c025-4e65-bf24-34fc17f3c649\") "
Mar 12 13:32:00 crc kubenswrapper[4778]: I0312 13:32:00.084916 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/20836760-c025-4e65-bf24-34fc17f3c649-var-run\") pod \"20836760-c025-4e65-bf24-34fc17f3c649\" (UID: \"20836760-c025-4e65-bf24-34fc17f3c649\") "
Mar 12 13:32:00 crc kubenswrapper[4778]: I0312 13:32:00.084936 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/20836760-c025-4e65-bf24-34fc17f3c649-var-run-ovn\") pod \"20836760-c025-4e65-bf24-34fc17f3c649\" (UID: \"20836760-c025-4e65-bf24-34fc17f3c649\") "
Mar 12 13:32:00 crc kubenswrapper[4778]: I0312 13:32:00.085363 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20836760-c025-4e65-bf24-34fc17f3c649-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "20836760-c025-4e65-bf24-34fc17f3c649" (UID: "20836760-c025-4e65-bf24-34fc17f3c649"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 13:32:00 crc kubenswrapper[4778]: I0312 13:32:00.086399 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20836760-c025-4e65-bf24-34fc17f3c649-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "20836760-c025-4e65-bf24-34fc17f3c649" (UID: "20836760-c025-4e65-bf24-34fc17f3c649"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 13:32:00 crc kubenswrapper[4778]: I0312 13:32:00.086436 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20836760-c025-4e65-bf24-34fc17f3c649-var-run" (OuterVolumeSpecName: "var-run") pod "20836760-c025-4e65-bf24-34fc17f3c649" (UID: "20836760-c025-4e65-bf24-34fc17f3c649"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 13:32:00 crc kubenswrapper[4778]: I0312 13:32:00.087020 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20836760-c025-4e65-bf24-34fc17f3c649-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "20836760-c025-4e65-bf24-34fc17f3c649" (UID: "20836760-c025-4e65-bf24-34fc17f3c649"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 13:32:00 crc kubenswrapper[4778]: I0312 13:32:00.087303 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20836760-c025-4e65-bf24-34fc17f3c649-scripts" (OuterVolumeSpecName: "scripts") pod "20836760-c025-4e65-bf24-34fc17f3c649" (UID: "20836760-c025-4e65-bf24-34fc17f3c649"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 13:32:00 crc kubenswrapper[4778]: I0312 13:32:00.091429 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20836760-c025-4e65-bf24-34fc17f3c649-kube-api-access-4t2nc" (OuterVolumeSpecName: "kube-api-access-4t2nc") pod "20836760-c025-4e65-bf24-34fc17f3c649" (UID: "20836760-c025-4e65-bf24-34fc17f3c649"). InnerVolumeSpecName "kube-api-access-4t2nc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 13:32:00 crc kubenswrapper[4778]: I0312 13:32:00.132489 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555372-rddbg"]
Mar 12 13:32:00 crc kubenswrapper[4778]: E0312 13:32:00.132924 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20836760-c025-4e65-bf24-34fc17f3c649" containerName="ovn-config"
Mar 12 13:32:00 crc kubenswrapper[4778]: I0312 13:32:00.132948 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="20836760-c025-4e65-bf24-34fc17f3c649" containerName="ovn-config"
Mar 12 13:32:00 crc kubenswrapper[4778]: I0312 13:32:00.133142 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="20836760-c025-4e65-bf24-34fc17f3c649" containerName="ovn-config"
Mar 12 13:32:00 crc kubenswrapper[4778]: I0312 13:32:00.133781 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555372-rddbg"
Mar 12 13:32:00 crc kubenswrapper[4778]: I0312 13:32:00.136666 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 12 13:32:00 crc kubenswrapper[4778]: I0312 13:32:00.136831 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 12 13:32:00 crc kubenswrapper[4778]: I0312 13:32:00.137463 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl"
Mar 12 13:32:00 crc kubenswrapper[4778]: I0312 13:32:00.147598 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555372-rddbg"]
Mar 12 13:32:00 crc kubenswrapper[4778]: I0312 13:32:00.187308 4778 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/20836760-c025-4e65-bf24-34fc17f3c649-var-log-ovn\") on node \"crc\" DevicePath \"\""
Mar 12 13:32:00 crc kubenswrapper[4778]: I0312 13:32:00.187656 4778 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/20836760-c025-4e65-bf24-34fc17f3c649-scripts\") on node \"crc\" DevicePath \"\""
Mar 12 13:32:00 crc kubenswrapper[4778]: I0312 13:32:00.187670 4778 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/20836760-c025-4e65-bf24-34fc17f3c649-additional-scripts\") on node \"crc\" DevicePath \"\""
Mar 12 13:32:00 crc kubenswrapper[4778]: I0312 13:32:00.187686 4778 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/20836760-c025-4e65-bf24-34fc17f3c649-var-run\") on node \"crc\" DevicePath \"\""
Mar 12 13:32:00 crc kubenswrapper[4778]: I0312 13:32:00.187698 4778 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/20836760-c025-4e65-bf24-34fc17f3c649-var-run-ovn\") on node \"crc\" DevicePath \"\""
Mar 12 13:32:00 crc kubenswrapper[4778]: I0312 13:32:00.187709 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4t2nc\" (UniqueName: \"kubernetes.io/projected/20836760-c025-4e65-bf24-34fc17f3c649-kube-api-access-4t2nc\") on node \"crc\" DevicePath \"\""
Mar 12 13:32:00 crc kubenswrapper[4778]: I0312 13:32:00.216429 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-4wct6-config-6fwv4"]
Mar 12 13:32:00 crc kubenswrapper[4778]: I0312 13:32:00.230390 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-4wct6-config-6fwv4"]
Mar 12 13:32:00 crc kubenswrapper[4778]: I0312 13:32:00.265395 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20836760-c025-4e65-bf24-34fc17f3c649" path="/var/lib/kubelet/pods/20836760-c025-4e65-bf24-34fc17f3c649/volumes"
Mar 12 13:32:00 crc kubenswrapper[4778]: I0312 13:32:00.289765 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl9zr\" (UniqueName: \"kubernetes.io/projected/c0b7e295-a151-42b0-a8d6-d062d9a42e88-kube-api-access-cl9zr\") pod \"auto-csr-approver-29555372-rddbg\" (UID: \"c0b7e295-a151-42b0-a8d6-d062d9a42e88\") " pod="openshift-infra/auto-csr-approver-29555372-rddbg"
Mar 12 13:32:00 crc kubenswrapper[4778]: I0312 13:32:00.391920 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cl9zr\" (UniqueName: \"kubernetes.io/projected/c0b7e295-a151-42b0-a8d6-d062d9a42e88-kube-api-access-cl9zr\") pod \"auto-csr-approver-29555372-rddbg\" (UID: \"c0b7e295-a151-42b0-a8d6-d062d9a42e88\") " pod="openshift-infra/auto-csr-approver-29555372-rddbg"
Mar 12 13:32:00 crc kubenswrapper[4778]: I0312 13:32:00.429840 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cl9zr\" (UniqueName: \"kubernetes.io/projected/c0b7e295-a151-42b0-a8d6-d062d9a42e88-kube-api-access-cl9zr\") pod \"auto-csr-approver-29555372-rddbg\" (UID: \"c0b7e295-a151-42b0-a8d6-d062d9a42e88\") " pod="openshift-infra/auto-csr-approver-29555372-rddbg"
Mar 12 13:32:00 crc kubenswrapper[4778]: I0312 13:32:00.455857 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555372-rddbg"
Mar 12 13:32:00 crc kubenswrapper[4778]: I0312 13:32:00.578725 4778 scope.go:117] "RemoveContainer" containerID="f28e6c324c83dfc76a63ecc641dd7e634a485b3faa88e8b16a2e55fc0961b3a8"
Mar 12 13:32:00 crc kubenswrapper[4778]: I0312 13:32:00.578791 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4wct6-config-6fwv4"
Mar 12 13:32:00 crc kubenswrapper[4778]: I0312 13:32:00.894753 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555372-rddbg"]
Mar 12 13:32:01 crc kubenswrapper[4778]: I0312 13:32:01.591799 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c01f943c-e09c-4727-8cf7-eec58a56b363","Type":"ContainerStarted","Data":"f68fe853c5f2eff9d19ab2850f709aa98efa3ecb7134c5c1e61c852033409890"}
Mar 12 13:32:01 crc kubenswrapper[4778]: I0312 13:32:01.592043 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c01f943c-e09c-4727-8cf7-eec58a56b363","Type":"ContainerStarted","Data":"f236f31858d7f8903c2548d01c7eb7e051fb9c7e772aea262b1dc82f06888421"}
Mar 12 13:32:01 crc kubenswrapper[4778]: I0312 13:32:01.592054 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c01f943c-e09c-4727-8cf7-eec58a56b363","Type":"ContainerStarted","Data":"4a599b99dd2b156167a96adf804cb32cee43a8a4c720de16059ea78b2fd6e31c"}
Mar 12 13:32:01 crc kubenswrapper[4778]: I0312 13:32:01.592065 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c01f943c-e09c-4727-8cf7-eec58a56b363","Type":"ContainerStarted","Data":"f28d138f1fd7bc3661dfeeb07a61f60a94fa3f6a200c91f560af4ae4c400dd78"}
Mar 12 13:32:01 crc kubenswrapper[4778]: I0312 13:32:01.595413 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555372-rddbg" event={"ID":"c0b7e295-a151-42b0-a8d6-d062d9a42e88","Type":"ContainerStarted","Data":"b90b0c8b3d3e98566eca6281f35c9f6e9b84e5323c00deb50365e0df5d3b91e3"}
Mar 12 13:32:02 crc kubenswrapper[4778]: I0312 13:32:02.602994 4778 generic.go:334] "Generic (PLEG): container finished" podID="c0b7e295-a151-42b0-a8d6-d062d9a42e88" containerID="83e30e12aea92ff26adeced3b96dea20e98c42e4bd6fda29118e167bf1eeb711" exitCode=0
Mar 12 13:32:02 crc kubenswrapper[4778]: I0312 13:32:02.603050 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555372-rddbg" event={"ID":"c0b7e295-a151-42b0-a8d6-d062d9a42e88","Type":"ContainerDied","Data":"83e30e12aea92ff26adeced3b96dea20e98c42e4bd6fda29118e167bf1eeb711"}
Mar 12 13:32:03 crc kubenswrapper[4778]: I0312 13:32:03.626520 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c01f943c-e09c-4727-8cf7-eec58a56b363","Type":"ContainerStarted","Data":"71c934b93dc81fcb4c6490ad1cb01c2110aadcacc2c2b6331fb1f63216515fae"}
Mar 12 13:32:03 crc kubenswrapper[4778]: I0312 13:32:03.626602 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c01f943c-e09c-4727-8cf7-eec58a56b363","Type":"ContainerStarted","Data":"d537ea6c9c04cfb9e576ff7eb8061b47046893f4ff7b34df3bb0907f800a376a"}
Mar 12 13:32:03 crc kubenswrapper[4778]: I0312 13:32:03.626620 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c01f943c-e09c-4727-8cf7-eec58a56b363","Type":"ContainerStarted","Data":"0b84c79df75a4a2e3a8aebea9c2fd3414a940b4674e32f45e81b08a68fe8d46f"}
Mar 12 13:32:03 crc kubenswrapper[4778]: I0312 13:32:03.626633 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c01f943c-e09c-4727-8cf7-eec58a56b363","Type":"ContainerStarted","Data":"e15d95880cc0b746ab57af968115fecac856e5aedb268b4dae48ba5038796103"}
Mar 12 13:32:03 crc kubenswrapper[4778]: I0312 13:32:03.626667 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c01f943c-e09c-4727-8cf7-eec58a56b363","Type":"ContainerStarted","Data":"908dae9220f9ffa18b60246e05ac363811858baf5deb16715e04c852b4af38f9"}
Mar 12 13:32:03 crc kubenswrapper[4778]: I0312 13:32:03.626680 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c01f943c-e09c-4727-8cf7-eec58a56b363","Type":"ContainerStarted","Data":"a41cbc225bec1ba427df8fdd647922f8cb16681229c182bed96fec2cb17265b5"}
Mar 12 13:32:03 crc kubenswrapper[4778]: I0312 13:32:03.933011 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555372-rddbg"
Mar 12 13:32:04 crc kubenswrapper[4778]: I0312 13:32:04.053702 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cl9zr\" (UniqueName: \"kubernetes.io/projected/c0b7e295-a151-42b0-a8d6-d062d9a42e88-kube-api-access-cl9zr\") pod \"c0b7e295-a151-42b0-a8d6-d062d9a42e88\" (UID: \"c0b7e295-a151-42b0-a8d6-d062d9a42e88\") "
Mar 12 13:32:04 crc kubenswrapper[4778]: I0312 13:32:04.059895 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0b7e295-a151-42b0-a8d6-d062d9a42e88-kube-api-access-cl9zr" (OuterVolumeSpecName: "kube-api-access-cl9zr") pod "c0b7e295-a151-42b0-a8d6-d062d9a42e88" (UID: "c0b7e295-a151-42b0-a8d6-d062d9a42e88"). InnerVolumeSpecName "kube-api-access-cl9zr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 13:32:04 crc kubenswrapper[4778]: I0312 13:32:04.155916 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cl9zr\" (UniqueName: \"kubernetes.io/projected/c0b7e295-a151-42b0-a8d6-d062d9a42e88-kube-api-access-cl9zr\") on node \"crc\" DevicePath \"\""
Mar 12 13:32:04 crc kubenswrapper[4778]: I0312 13:32:04.648705 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555372-rddbg" event={"ID":"c0b7e295-a151-42b0-a8d6-d062d9a42e88","Type":"ContainerDied","Data":"b90b0c8b3d3e98566eca6281f35c9f6e9b84e5323c00deb50365e0df5d3b91e3"}
Mar 12 13:32:04 crc kubenswrapper[4778]: I0312 13:32:04.648958 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b90b0c8b3d3e98566eca6281f35c9f6e9b84e5323c00deb50365e0df5d3b91e3"
Mar 12 13:32:04 crc kubenswrapper[4778]: I0312 13:32:04.649019 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555372-rddbg"
Mar 12 13:32:04 crc kubenswrapper[4778]: I0312 13:32:04.658421 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c01f943c-e09c-4727-8cf7-eec58a56b363","Type":"ContainerStarted","Data":"90a3afd022e6450b75eb74b1a883ba0a851eec230b95297ba3e7d6063ef22c6a"}
Mar 12 13:32:04 crc kubenswrapper[4778]: I0312 13:32:04.713605 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=37.351960756 podStartE2EDuration="42.713584985s" podCreationTimestamp="2026-03-12 13:31:22 +0000 UTC" firstStartedPulling="2026-03-12 13:31:57.166338432 +0000 UTC m=+1335.615033838" lastFinishedPulling="2026-03-12 13:32:02.527962671 +0000 UTC m=+1340.976658067" observedRunningTime="2026-03-12 13:32:04.70813717 +0000 UTC m=+1343.156832576" watchObservedRunningTime="2026-03-12 13:32:04.713584985 +0000 UTC m=+1343.162280391"
Mar 12 13:32:04 crc kubenswrapper[4778]: I0312 13:32:04.985008 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-56bl9"]
Mar 12 13:32:04 crc kubenswrapper[4778]: E0312 13:32:04.985426 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0b7e295-a151-42b0-a8d6-d062d9a42e88" containerName="oc"
Mar 12 13:32:04 crc kubenswrapper[4778]: I0312 13:32:04.985446 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0b7e295-a151-42b0-a8d6-d062d9a42e88" containerName="oc"
Mar 12 13:32:04 crc kubenswrapper[4778]: I0312 13:32:04.985636 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0b7e295-a151-42b0-a8d6-d062d9a42e88" containerName="oc"
Mar 12 13:32:04 crc kubenswrapper[4778]: I0312 13:32:04.986715 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-56bl9" Mar 12 13:32:04 crc kubenswrapper[4778]: I0312 13:32:04.988486 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Mar 12 13:32:05 crc kubenswrapper[4778]: I0312 13:32:05.000326 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-56bl9"] Mar 12 13:32:05 crc kubenswrapper[4778]: I0312 13:32:05.050247 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555366-zt5bk"] Mar 12 13:32:05 crc kubenswrapper[4778]: I0312 13:32:05.073100 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555366-zt5bk"] Mar 12 13:32:05 crc kubenswrapper[4778]: I0312 13:32:05.073488 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/811bc15c-050c-4d37-a19f-095086748286-dns-svc\") pod \"dnsmasq-dns-764c5664d7-56bl9\" (UID: \"811bc15c-050c-4d37-a19f-095086748286\") " pod="openstack/dnsmasq-dns-764c5664d7-56bl9" Mar 12 13:32:05 crc kubenswrapper[4778]: I0312 13:32:05.073564 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/811bc15c-050c-4d37-a19f-095086748286-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-56bl9\" (UID: \"811bc15c-050c-4d37-a19f-095086748286\") " pod="openstack/dnsmasq-dns-764c5664d7-56bl9" Mar 12 13:32:05 crc kubenswrapper[4778]: I0312 13:32:05.073643 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/811bc15c-050c-4d37-a19f-095086748286-config\") pod \"dnsmasq-dns-764c5664d7-56bl9\" (UID: \"811bc15c-050c-4d37-a19f-095086748286\") " pod="openstack/dnsmasq-dns-764c5664d7-56bl9" Mar 12 13:32:05 crc kubenswrapper[4778]: I0312 
13:32:05.073667 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/811bc15c-050c-4d37-a19f-095086748286-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-56bl9\" (UID: \"811bc15c-050c-4d37-a19f-095086748286\") " pod="openstack/dnsmasq-dns-764c5664d7-56bl9" Mar 12 13:32:05 crc kubenswrapper[4778]: I0312 13:32:05.073688 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsg69\" (UniqueName: \"kubernetes.io/projected/811bc15c-050c-4d37-a19f-095086748286-kube-api-access-bsg69\") pod \"dnsmasq-dns-764c5664d7-56bl9\" (UID: \"811bc15c-050c-4d37-a19f-095086748286\") " pod="openstack/dnsmasq-dns-764c5664d7-56bl9" Mar 12 13:32:05 crc kubenswrapper[4778]: I0312 13:32:05.073721 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/811bc15c-050c-4d37-a19f-095086748286-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-56bl9\" (UID: \"811bc15c-050c-4d37-a19f-095086748286\") " pod="openstack/dnsmasq-dns-764c5664d7-56bl9" Mar 12 13:32:05 crc kubenswrapper[4778]: I0312 13:32:05.175084 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/811bc15c-050c-4d37-a19f-095086748286-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-56bl9\" (UID: \"811bc15c-050c-4d37-a19f-095086748286\") " pod="openstack/dnsmasq-dns-764c5664d7-56bl9" Mar 12 13:32:05 crc kubenswrapper[4778]: I0312 13:32:05.175257 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/811bc15c-050c-4d37-a19f-095086748286-config\") pod \"dnsmasq-dns-764c5664d7-56bl9\" (UID: \"811bc15c-050c-4d37-a19f-095086748286\") " pod="openstack/dnsmasq-dns-764c5664d7-56bl9" Mar 12 13:32:05 crc 
kubenswrapper[4778]: I0312 13:32:05.175288 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/811bc15c-050c-4d37-a19f-095086748286-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-56bl9\" (UID: \"811bc15c-050c-4d37-a19f-095086748286\") " pod="openstack/dnsmasq-dns-764c5664d7-56bl9" Mar 12 13:32:05 crc kubenswrapper[4778]: I0312 13:32:05.175322 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsg69\" (UniqueName: \"kubernetes.io/projected/811bc15c-050c-4d37-a19f-095086748286-kube-api-access-bsg69\") pod \"dnsmasq-dns-764c5664d7-56bl9\" (UID: \"811bc15c-050c-4d37-a19f-095086748286\") " pod="openstack/dnsmasq-dns-764c5664d7-56bl9" Mar 12 13:32:05 crc kubenswrapper[4778]: I0312 13:32:05.175370 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/811bc15c-050c-4d37-a19f-095086748286-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-56bl9\" (UID: \"811bc15c-050c-4d37-a19f-095086748286\") " pod="openstack/dnsmasq-dns-764c5664d7-56bl9" Mar 12 13:32:05 crc kubenswrapper[4778]: I0312 13:32:05.175440 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/811bc15c-050c-4d37-a19f-095086748286-dns-svc\") pod \"dnsmasq-dns-764c5664d7-56bl9\" (UID: \"811bc15c-050c-4d37-a19f-095086748286\") " pod="openstack/dnsmasq-dns-764c5664d7-56bl9" Mar 12 13:32:05 crc kubenswrapper[4778]: I0312 13:32:05.176225 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/811bc15c-050c-4d37-a19f-095086748286-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-56bl9\" (UID: \"811bc15c-050c-4d37-a19f-095086748286\") " pod="openstack/dnsmasq-dns-764c5664d7-56bl9" Mar 12 13:32:05 crc kubenswrapper[4778]: I0312 13:32:05.176275 4778 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/811bc15c-050c-4d37-a19f-095086748286-config\") pod \"dnsmasq-dns-764c5664d7-56bl9\" (UID: \"811bc15c-050c-4d37-a19f-095086748286\") " pod="openstack/dnsmasq-dns-764c5664d7-56bl9" Mar 12 13:32:05 crc kubenswrapper[4778]: I0312 13:32:05.176294 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/811bc15c-050c-4d37-a19f-095086748286-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-56bl9\" (UID: \"811bc15c-050c-4d37-a19f-095086748286\") " pod="openstack/dnsmasq-dns-764c5664d7-56bl9" Mar 12 13:32:05 crc kubenswrapper[4778]: I0312 13:32:05.176515 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/811bc15c-050c-4d37-a19f-095086748286-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-56bl9\" (UID: \"811bc15c-050c-4d37-a19f-095086748286\") " pod="openstack/dnsmasq-dns-764c5664d7-56bl9" Mar 12 13:32:05 crc kubenswrapper[4778]: I0312 13:32:05.176559 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/811bc15c-050c-4d37-a19f-095086748286-dns-svc\") pod \"dnsmasq-dns-764c5664d7-56bl9\" (UID: \"811bc15c-050c-4d37-a19f-095086748286\") " pod="openstack/dnsmasq-dns-764c5664d7-56bl9" Mar 12 13:32:05 crc kubenswrapper[4778]: I0312 13:32:05.195008 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsg69\" (UniqueName: \"kubernetes.io/projected/811bc15c-050c-4d37-a19f-095086748286-kube-api-access-bsg69\") pod \"dnsmasq-dns-764c5664d7-56bl9\" (UID: \"811bc15c-050c-4d37-a19f-095086748286\") " pod="openstack/dnsmasq-dns-764c5664d7-56bl9" Mar 12 13:32:05 crc kubenswrapper[4778]: I0312 13:32:05.304702 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-56bl9" Mar 12 13:32:05 crc kubenswrapper[4778]: I0312 13:32:05.780779 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-56bl9"] Mar 12 13:32:05 crc kubenswrapper[4778]: W0312 13:32:05.791237 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod811bc15c_050c_4d37_a19f_095086748286.slice/crio-9e0eacf82432587cd58359c3985b8def0ae32125ba66b4e86532ed5c793bbd04 WatchSource:0}: Error finding container 9e0eacf82432587cd58359c3985b8def0ae32125ba66b4e86532ed5c793bbd04: Status 404 returned error can't find the container with id 9e0eacf82432587cd58359c3985b8def0ae32125ba66b4e86532ed5c793bbd04 Mar 12 13:32:06 crc kubenswrapper[4778]: I0312 13:32:06.039371 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Mar 12 13:32:06 crc kubenswrapper[4778]: I0312 13:32:06.267694 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d48c598c-314b-4dc6-af90-7772a2ca7f2d" path="/var/lib/kubelet/pods/d48c598c-314b-4dc6-af90-7772a2ca7f2d/volumes" Mar 12 13:32:06 crc kubenswrapper[4778]: I0312 13:32:06.672761 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Mar 12 13:32:06 crc kubenswrapper[4778]: I0312 13:32:06.675620 4778 generic.go:334] "Generic (PLEG): container finished" podID="811bc15c-050c-4d37-a19f-095086748286" containerID="52a29e484c375a20ac3f8fc8c2aa037eb3038bed507119d164be5bd117815abc" exitCode=0 Mar 12 13:32:06 crc kubenswrapper[4778]: I0312 13:32:06.675667 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-56bl9" event={"ID":"811bc15c-050c-4d37-a19f-095086748286","Type":"ContainerDied","Data":"52a29e484c375a20ac3f8fc8c2aa037eb3038bed507119d164be5bd117815abc"} Mar 12 13:32:06 crc kubenswrapper[4778]: I0312 
13:32:06.675696 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-56bl9" event={"ID":"811bc15c-050c-4d37-a19f-095086748286","Type":"ContainerStarted","Data":"9e0eacf82432587cd58359c3985b8def0ae32125ba66b4e86532ed5c793bbd04"} Mar 12 13:32:07 crc kubenswrapper[4778]: I0312 13:32:07.688524 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-56bl9" event={"ID":"811bc15c-050c-4d37-a19f-095086748286","Type":"ContainerStarted","Data":"512c2c0cf187f0ee46cccf1da3f29d083846818126627409ab7b1bb5fa1ef052"} Mar 12 13:32:07 crc kubenswrapper[4778]: I0312 13:32:07.688889 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-56bl9" Mar 12 13:32:07 crc kubenswrapper[4778]: I0312 13:32:07.710602 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-56bl9" podStartSLOduration=3.7105742 podStartE2EDuration="3.7105742s" podCreationTimestamp="2026-03-12 13:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:32:07.70740525 +0000 UTC m=+1346.156100656" watchObservedRunningTime="2026-03-12 13:32:07.7105742 +0000 UTC m=+1346.159269606" Mar 12 13:32:07 crc kubenswrapper[4778]: I0312 13:32:07.979402 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-nh9xs"] Mar 12 13:32:07 crc kubenswrapper[4778]: I0312 13:32:07.980613 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-nh9xs" Mar 12 13:32:07 crc kubenswrapper[4778]: I0312 13:32:07.993516 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-nh9xs"] Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.134565 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdrjg\" (UniqueName: \"kubernetes.io/projected/3800be73-3a09-42b6-8d01-592ccbc6aaa3-kube-api-access-tdrjg\") pod \"cinder-db-create-nh9xs\" (UID: \"3800be73-3a09-42b6-8d01-592ccbc6aaa3\") " pod="openstack/cinder-db-create-nh9xs" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.134648 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3800be73-3a09-42b6-8d01-592ccbc6aaa3-operator-scripts\") pod \"cinder-db-create-nh9xs\" (UID: \"3800be73-3a09-42b6-8d01-592ccbc6aaa3\") " pod="openstack/cinder-db-create-nh9xs" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.192714 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-31ed-account-create-update-h8bhm"] Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.193939 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-31ed-account-create-update-h8bhm" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.196650 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.205764 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-31ed-account-create-update-h8bhm"] Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.235960 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdrjg\" (UniqueName: \"kubernetes.io/projected/3800be73-3a09-42b6-8d01-592ccbc6aaa3-kube-api-access-tdrjg\") pod \"cinder-db-create-nh9xs\" (UID: \"3800be73-3a09-42b6-8d01-592ccbc6aaa3\") " pod="openstack/cinder-db-create-nh9xs" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.236047 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3800be73-3a09-42b6-8d01-592ccbc6aaa3-operator-scripts\") pod \"cinder-db-create-nh9xs\" (UID: \"3800be73-3a09-42b6-8d01-592ccbc6aaa3\") " pod="openstack/cinder-db-create-nh9xs" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.236766 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3800be73-3a09-42b6-8d01-592ccbc6aaa3-operator-scripts\") pod \"cinder-db-create-nh9xs\" (UID: \"3800be73-3a09-42b6-8d01-592ccbc6aaa3\") " pod="openstack/cinder-db-create-nh9xs" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.254265 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdrjg\" (UniqueName: \"kubernetes.io/projected/3800be73-3a09-42b6-8d01-592ccbc6aaa3-kube-api-access-tdrjg\") pod \"cinder-db-create-nh9xs\" (UID: \"3800be73-3a09-42b6-8d01-592ccbc6aaa3\") " pod="openstack/cinder-db-create-nh9xs" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 
13:32:08.288776 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-gxsm6"] Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.293727 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-gxsm6" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.306981 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-gxsm6"] Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.337132 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsvdw\" (UniqueName: \"kubernetes.io/projected/4b694c81-3b07-45a1-9ca1-1e47e7430f1f-kube-api-access-vsvdw\") pod \"cinder-31ed-account-create-update-h8bhm\" (UID: \"4b694c81-3b07-45a1-9ca1-1e47e7430f1f\") " pod="openstack/cinder-31ed-account-create-update-h8bhm" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.337298 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b694c81-3b07-45a1-9ca1-1e47e7430f1f-operator-scripts\") pod \"cinder-31ed-account-create-update-h8bhm\" (UID: \"4b694c81-3b07-45a1-9ca1-1e47e7430f1f\") " pod="openstack/cinder-31ed-account-create-update-h8bhm" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.340513 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-nh9xs" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.388813 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-2abd-account-create-update-chtfz"] Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.389887 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-2abd-account-create-update-chtfz" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.391127 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.408324 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-2abd-account-create-update-chtfz"] Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.438660 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsvdw\" (UniqueName: \"kubernetes.io/projected/4b694c81-3b07-45a1-9ca1-1e47e7430f1f-kube-api-access-vsvdw\") pod \"cinder-31ed-account-create-update-h8bhm\" (UID: \"4b694c81-3b07-45a1-9ca1-1e47e7430f1f\") " pod="openstack/cinder-31ed-account-create-update-h8bhm" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.438750 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnm5n\" (UniqueName: \"kubernetes.io/projected/79ff3988-1976-4049-8277-0acb36da44c5-kube-api-access-rnm5n\") pod \"barbican-db-create-gxsm6\" (UID: \"79ff3988-1976-4049-8277-0acb36da44c5\") " pod="openstack/barbican-db-create-gxsm6" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.438863 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b694c81-3b07-45a1-9ca1-1e47e7430f1f-operator-scripts\") pod \"cinder-31ed-account-create-update-h8bhm\" (UID: \"4b694c81-3b07-45a1-9ca1-1e47e7430f1f\") " pod="openstack/cinder-31ed-account-create-update-h8bhm" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.438937 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79ff3988-1976-4049-8277-0acb36da44c5-operator-scripts\") pod \"barbican-db-create-gxsm6\" (UID: 
\"79ff3988-1976-4049-8277-0acb36da44c5\") " pod="openstack/barbican-db-create-gxsm6" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.439783 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b694c81-3b07-45a1-9ca1-1e47e7430f1f-operator-scripts\") pod \"cinder-31ed-account-create-update-h8bhm\" (UID: \"4b694c81-3b07-45a1-9ca1-1e47e7430f1f\") " pod="openstack/cinder-31ed-account-create-update-h8bhm" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.451534 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-57cfm"] Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.452492 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-57cfm" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.471285 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.471704 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.471861 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-rjpsk" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.472047 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.487350 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-57cfm"] Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.496706 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsvdw\" (UniqueName: \"kubernetes.io/projected/4b694c81-3b07-45a1-9ca1-1e47e7430f1f-kube-api-access-vsvdw\") pod \"cinder-31ed-account-create-update-h8bhm\" (UID: \"4b694c81-3b07-45a1-9ca1-1e47e7430f1f\") 
" pod="openstack/cinder-31ed-account-create-update-h8bhm" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.510930 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-31ed-account-create-update-h8bhm" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.519561 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-e25a-account-create-update-vs6zm"] Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.520525 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-e25a-account-create-update-vs6zm" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.530121 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.540582 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-e25a-account-create-update-vs6zm"] Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.553381 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnm5n\" (UniqueName: \"kubernetes.io/projected/79ff3988-1976-4049-8277-0acb36da44c5-kube-api-access-rnm5n\") pod \"barbican-db-create-gxsm6\" (UID: \"79ff3988-1976-4049-8277-0acb36da44c5\") " pod="openstack/barbican-db-create-gxsm6" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.553739 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79ff3988-1976-4049-8277-0acb36da44c5-operator-scripts\") pod \"barbican-db-create-gxsm6\" (UID: \"79ff3988-1976-4049-8277-0acb36da44c5\") " pod="openstack/barbican-db-create-gxsm6" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.553839 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ec77eae6-4dac-4535-b0d3-98bd3422e4de-config-data\") pod \"keystone-db-sync-57cfm\" (UID: \"ec77eae6-4dac-4535-b0d3-98bd3422e4de\") " pod="openstack/keystone-db-sync-57cfm" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.553928 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfc2k\" (UniqueName: \"kubernetes.io/projected/729468a8-fded-4564-96c8-471d3cf48825-kube-api-access-tfc2k\") pod \"neutron-2abd-account-create-update-chtfz\" (UID: \"729468a8-fded-4564-96c8-471d3cf48825\") " pod="openstack/neutron-2abd-account-create-update-chtfz" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.554013 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/729468a8-fded-4564-96c8-471d3cf48825-operator-scripts\") pod \"neutron-2abd-account-create-update-chtfz\" (UID: \"729468a8-fded-4564-96c8-471d3cf48825\") " pod="openstack/neutron-2abd-account-create-update-chtfz" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.554107 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec77eae6-4dac-4535-b0d3-98bd3422e4de-combined-ca-bundle\") pod \"keystone-db-sync-57cfm\" (UID: \"ec77eae6-4dac-4535-b0d3-98bd3422e4de\") " pod="openstack/keystone-db-sync-57cfm" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.554176 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlp2n\" (UniqueName: \"kubernetes.io/projected/ec77eae6-4dac-4535-b0d3-98bd3422e4de-kube-api-access-xlp2n\") pod \"keystone-db-sync-57cfm\" (UID: \"ec77eae6-4dac-4535-b0d3-98bd3422e4de\") " pod="openstack/keystone-db-sync-57cfm" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.554977 4778 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79ff3988-1976-4049-8277-0acb36da44c5-operator-scripts\") pod \"barbican-db-create-gxsm6\" (UID: \"79ff3988-1976-4049-8277-0acb36da44c5\") " pod="openstack/barbican-db-create-gxsm6" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.576966 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnm5n\" (UniqueName: \"kubernetes.io/projected/79ff3988-1976-4049-8277-0acb36da44c5-kube-api-access-rnm5n\") pod \"barbican-db-create-gxsm6\" (UID: \"79ff3988-1976-4049-8277-0acb36da44c5\") " pod="openstack/barbican-db-create-gxsm6" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.589474 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-thsh7"] Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.594113 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-thsh7" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.610444 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-thsh7"] Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.614121 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-gxsm6" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.655612 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31f8bb53-a8a8-448f-8f42-349232e383ec-operator-scripts\") pod \"barbican-e25a-account-create-update-vs6zm\" (UID: \"31f8bb53-a8a8-448f-8f42-349232e383ec\") " pod="openstack/barbican-e25a-account-create-update-vs6zm" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.655742 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec77eae6-4dac-4535-b0d3-98bd3422e4de-config-data\") pod \"keystone-db-sync-57cfm\" (UID: \"ec77eae6-4dac-4535-b0d3-98bd3422e4de\") " pod="openstack/keystone-db-sync-57cfm" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.655779 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfc2k\" (UniqueName: \"kubernetes.io/projected/729468a8-fded-4564-96c8-471d3cf48825-kube-api-access-tfc2k\") pod \"neutron-2abd-account-create-update-chtfz\" (UID: \"729468a8-fded-4564-96c8-471d3cf48825\") " pod="openstack/neutron-2abd-account-create-update-chtfz" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.655807 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/729468a8-fded-4564-96c8-471d3cf48825-operator-scripts\") pod \"neutron-2abd-account-create-update-chtfz\" (UID: \"729468a8-fded-4564-96c8-471d3cf48825\") " pod="openstack/neutron-2abd-account-create-update-chtfz" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.655841 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9793dfb5-c2a5-4dc1-993d-9e024a810ce8-operator-scripts\") pod 
\"neutron-db-create-thsh7\" (UID: \"9793dfb5-c2a5-4dc1-993d-9e024a810ce8\") " pod="openstack/neutron-db-create-thsh7" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.655878 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec77eae6-4dac-4535-b0d3-98bd3422e4de-combined-ca-bundle\") pod \"keystone-db-sync-57cfm\" (UID: \"ec77eae6-4dac-4535-b0d3-98bd3422e4de\") " pod="openstack/keystone-db-sync-57cfm" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.655904 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlp2n\" (UniqueName: \"kubernetes.io/projected/ec77eae6-4dac-4535-b0d3-98bd3422e4de-kube-api-access-xlp2n\") pod \"keystone-db-sync-57cfm\" (UID: \"ec77eae6-4dac-4535-b0d3-98bd3422e4de\") " pod="openstack/keystone-db-sync-57cfm" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.656030 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtxwj\" (UniqueName: \"kubernetes.io/projected/9793dfb5-c2a5-4dc1-993d-9e024a810ce8-kube-api-access-dtxwj\") pod \"neutron-db-create-thsh7\" (UID: \"9793dfb5-c2a5-4dc1-993d-9e024a810ce8\") " pod="openstack/neutron-db-create-thsh7" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.656061 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbpjb\" (UniqueName: \"kubernetes.io/projected/31f8bb53-a8a8-448f-8f42-349232e383ec-kube-api-access-wbpjb\") pod \"barbican-e25a-account-create-update-vs6zm\" (UID: \"31f8bb53-a8a8-448f-8f42-349232e383ec\") " pod="openstack/barbican-e25a-account-create-update-vs6zm" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.659505 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/729468a8-fded-4564-96c8-471d3cf48825-operator-scripts\") pod \"neutron-2abd-account-create-update-chtfz\" (UID: \"729468a8-fded-4564-96c8-471d3cf48825\") " pod="openstack/neutron-2abd-account-create-update-chtfz" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.660279 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec77eae6-4dac-4535-b0d3-98bd3422e4de-config-data\") pod \"keystone-db-sync-57cfm\" (UID: \"ec77eae6-4dac-4535-b0d3-98bd3422e4de\") " pod="openstack/keystone-db-sync-57cfm" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.663489 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec77eae6-4dac-4535-b0d3-98bd3422e4de-combined-ca-bundle\") pod \"keystone-db-sync-57cfm\" (UID: \"ec77eae6-4dac-4535-b0d3-98bd3422e4de\") " pod="openstack/keystone-db-sync-57cfm" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.678958 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfc2k\" (UniqueName: \"kubernetes.io/projected/729468a8-fded-4564-96c8-471d3cf48825-kube-api-access-tfc2k\") pod \"neutron-2abd-account-create-update-chtfz\" (UID: \"729468a8-fded-4564-96c8-471d3cf48825\") " pod="openstack/neutron-2abd-account-create-update-chtfz" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.685568 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlp2n\" (UniqueName: \"kubernetes.io/projected/ec77eae6-4dac-4535-b0d3-98bd3422e4de-kube-api-access-xlp2n\") pod \"keystone-db-sync-57cfm\" (UID: \"ec77eae6-4dac-4535-b0d3-98bd3422e4de\") " pod="openstack/keystone-db-sync-57cfm" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.744887 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-2abd-account-create-update-chtfz" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.758101 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtxwj\" (UniqueName: \"kubernetes.io/projected/9793dfb5-c2a5-4dc1-993d-9e024a810ce8-kube-api-access-dtxwj\") pod \"neutron-db-create-thsh7\" (UID: \"9793dfb5-c2a5-4dc1-993d-9e024a810ce8\") " pod="openstack/neutron-db-create-thsh7" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.758139 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbpjb\" (UniqueName: \"kubernetes.io/projected/31f8bb53-a8a8-448f-8f42-349232e383ec-kube-api-access-wbpjb\") pod \"barbican-e25a-account-create-update-vs6zm\" (UID: \"31f8bb53-a8a8-448f-8f42-349232e383ec\") " pod="openstack/barbican-e25a-account-create-update-vs6zm" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.758278 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31f8bb53-a8a8-448f-8f42-349232e383ec-operator-scripts\") pod \"barbican-e25a-account-create-update-vs6zm\" (UID: \"31f8bb53-a8a8-448f-8f42-349232e383ec\") " pod="openstack/barbican-e25a-account-create-update-vs6zm" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.758341 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9793dfb5-c2a5-4dc1-993d-9e024a810ce8-operator-scripts\") pod \"neutron-db-create-thsh7\" (UID: \"9793dfb5-c2a5-4dc1-993d-9e024a810ce8\") " pod="openstack/neutron-db-create-thsh7" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.759251 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31f8bb53-a8a8-448f-8f42-349232e383ec-operator-scripts\") pod 
\"barbican-e25a-account-create-update-vs6zm\" (UID: \"31f8bb53-a8a8-448f-8f42-349232e383ec\") " pod="openstack/barbican-e25a-account-create-update-vs6zm" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.762230 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9793dfb5-c2a5-4dc1-993d-9e024a810ce8-operator-scripts\") pod \"neutron-db-create-thsh7\" (UID: \"9793dfb5-c2a5-4dc1-993d-9e024a810ce8\") " pod="openstack/neutron-db-create-thsh7" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.775170 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbpjb\" (UniqueName: \"kubernetes.io/projected/31f8bb53-a8a8-448f-8f42-349232e383ec-kube-api-access-wbpjb\") pod \"barbican-e25a-account-create-update-vs6zm\" (UID: \"31f8bb53-a8a8-448f-8f42-349232e383ec\") " pod="openstack/barbican-e25a-account-create-update-vs6zm" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.778742 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtxwj\" (UniqueName: \"kubernetes.io/projected/9793dfb5-c2a5-4dc1-993d-9e024a810ce8-kube-api-access-dtxwj\") pod \"neutron-db-create-thsh7\" (UID: \"9793dfb5-c2a5-4dc1-993d-9e024a810ce8\") " pod="openstack/neutron-db-create-thsh7" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.815854 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-57cfm" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.896435 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-e25a-account-create-update-vs6zm" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.922233 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-thsh7" Mar 12 13:32:08 crc kubenswrapper[4778]: I0312 13:32:08.947856 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-nh9xs"] Mar 12 13:32:09 crc kubenswrapper[4778]: I0312 13:32:09.030887 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-31ed-account-create-update-h8bhm"] Mar 12 13:32:09 crc kubenswrapper[4778]: I0312 13:32:09.166428 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-gxsm6"] Mar 12 13:32:09 crc kubenswrapper[4778]: I0312 13:32:09.301474 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-2abd-account-create-update-chtfz"] Mar 12 13:32:09 crc kubenswrapper[4778]: I0312 13:32:09.413569 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-57cfm"] Mar 12 13:32:09 crc kubenswrapper[4778]: W0312 13:32:09.417521 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec77eae6_4dac_4535_b0d3_98bd3422e4de.slice/crio-e416653bf42b2d593f800644adfdb9dd57501c2cbfadea4a9a6bd3bc9f20d011 WatchSource:0}: Error finding container e416653bf42b2d593f800644adfdb9dd57501c2cbfadea4a9a6bd3bc9f20d011: Status 404 returned error can't find the container with id e416653bf42b2d593f800644adfdb9dd57501c2cbfadea4a9a6bd3bc9f20d011 Mar 12 13:32:09 crc kubenswrapper[4778]: I0312 13:32:09.504109 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-thsh7"] Mar 12 13:32:09 crc kubenswrapper[4778]: I0312 13:32:09.518830 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-e25a-account-create-update-vs6zm"] Mar 12 13:32:09 crc kubenswrapper[4778]: W0312 13:32:09.544018 4778 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31f8bb53_a8a8_448f_8f42_349232e383ec.slice/crio-7af48946578bd61f228568d72f3eb420af6ca8e366fa1043f54b2e44d5cf2462 WatchSource:0}: Error finding container 7af48946578bd61f228568d72f3eb420af6ca8e366fa1043f54b2e44d5cf2462: Status 404 returned error can't find the container with id 7af48946578bd61f228568d72f3eb420af6ca8e366fa1043f54b2e44d5cf2462 Mar 12 13:32:09 crc kubenswrapper[4778]: W0312 13:32:09.544511 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9793dfb5_c2a5_4dc1_993d_9e024a810ce8.slice/crio-f4df7f9c0ba0bb575b3f8143a0a2a4366ed16f5aa6229ee4e07f76139f3bdb21 WatchSource:0}: Error finding container f4df7f9c0ba0bb575b3f8143a0a2a4366ed16f5aa6229ee4e07f76139f3bdb21: Status 404 returned error can't find the container with id f4df7f9c0ba0bb575b3f8143a0a2a4366ed16f5aa6229ee4e07f76139f3bdb21 Mar 12 13:32:09 crc kubenswrapper[4778]: I0312 13:32:09.719202 4778 generic.go:334] "Generic (PLEG): container finished" podID="79ff3988-1976-4049-8277-0acb36da44c5" containerID="93602c5ae72cfd4f9a42c4921524905037c8077ce8260918d72d9601b072dd59" exitCode=0 Mar 12 13:32:09 crc kubenswrapper[4778]: I0312 13:32:09.719327 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-gxsm6" event={"ID":"79ff3988-1976-4049-8277-0acb36da44c5","Type":"ContainerDied","Data":"93602c5ae72cfd4f9a42c4921524905037c8077ce8260918d72d9601b072dd59"} Mar 12 13:32:09 crc kubenswrapper[4778]: I0312 13:32:09.719378 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-gxsm6" event={"ID":"79ff3988-1976-4049-8277-0acb36da44c5","Type":"ContainerStarted","Data":"8a4949a97a8077a1a4d6a1981effa75bacbbaf8684ec8ee1397aba5987fcdd14"} Mar 12 13:32:09 crc kubenswrapper[4778]: I0312 13:32:09.721493 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-2abd-account-create-update-chtfz" event={"ID":"729468a8-fded-4564-96c8-471d3cf48825","Type":"ContainerStarted","Data":"a62186594073bc08d5194d8b9ce9a46d1a29b359b5ca56b7c0f8fed38f9c7470"} Mar 12 13:32:09 crc kubenswrapper[4778]: I0312 13:32:09.721526 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2abd-account-create-update-chtfz" event={"ID":"729468a8-fded-4564-96c8-471d3cf48825","Type":"ContainerStarted","Data":"9dc728b0f4cc0eafb30fa27920b49b62f951fc64e849dafb2097b41e077244e3"} Mar 12 13:32:09 crc kubenswrapper[4778]: I0312 13:32:09.726661 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-thsh7" event={"ID":"9793dfb5-c2a5-4dc1-993d-9e024a810ce8","Type":"ContainerStarted","Data":"3bf3addaa75cf85838ea1739e9760ca68c0ed5921fd1bd5da9e4725715df9a99"} Mar 12 13:32:09 crc kubenswrapper[4778]: I0312 13:32:09.726713 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-thsh7" event={"ID":"9793dfb5-c2a5-4dc1-993d-9e024a810ce8","Type":"ContainerStarted","Data":"f4df7f9c0ba0bb575b3f8143a0a2a4366ed16f5aa6229ee4e07f76139f3bdb21"} Mar 12 13:32:09 crc kubenswrapper[4778]: I0312 13:32:09.728829 4778 generic.go:334] "Generic (PLEG): container finished" podID="3800be73-3a09-42b6-8d01-592ccbc6aaa3" containerID="70fc2c631648b6cf05ce7c564c8a25d897ce94ea350c4d6a8a0ccacb6c5f16b4" exitCode=0 Mar 12 13:32:09 crc kubenswrapper[4778]: I0312 13:32:09.728887 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-nh9xs" event={"ID":"3800be73-3a09-42b6-8d01-592ccbc6aaa3","Type":"ContainerDied","Data":"70fc2c631648b6cf05ce7c564c8a25d897ce94ea350c4d6a8a0ccacb6c5f16b4"} Mar 12 13:32:09 crc kubenswrapper[4778]: I0312 13:32:09.728910 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-nh9xs" 
event={"ID":"3800be73-3a09-42b6-8d01-592ccbc6aaa3","Type":"ContainerStarted","Data":"95d2ed0897cf2bbe34932a1f54f1dbbea9a78e7b987d5a18fb616fc0888d29bf"} Mar 12 13:32:09 crc kubenswrapper[4778]: I0312 13:32:09.729971 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-57cfm" event={"ID":"ec77eae6-4dac-4535-b0d3-98bd3422e4de","Type":"ContainerStarted","Data":"e416653bf42b2d593f800644adfdb9dd57501c2cbfadea4a9a6bd3bc9f20d011"} Mar 12 13:32:09 crc kubenswrapper[4778]: I0312 13:32:09.733171 4778 generic.go:334] "Generic (PLEG): container finished" podID="4b694c81-3b07-45a1-9ca1-1e47e7430f1f" containerID="13ffa46dd0ede6f8f4fd6e787f1d2948d8a5e96a8e47df52e40147817681f0f7" exitCode=0 Mar 12 13:32:09 crc kubenswrapper[4778]: I0312 13:32:09.733233 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-31ed-account-create-update-h8bhm" event={"ID":"4b694c81-3b07-45a1-9ca1-1e47e7430f1f","Type":"ContainerDied","Data":"13ffa46dd0ede6f8f4fd6e787f1d2948d8a5e96a8e47df52e40147817681f0f7"} Mar 12 13:32:09 crc kubenswrapper[4778]: I0312 13:32:09.733372 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-31ed-account-create-update-h8bhm" event={"ID":"4b694c81-3b07-45a1-9ca1-1e47e7430f1f","Type":"ContainerStarted","Data":"f7445a7703568cb1d2b72115b07ec7b8761dc64a43181a517180c2f025e9efc9"} Mar 12 13:32:09 crc kubenswrapper[4778]: I0312 13:32:09.734837 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-e25a-account-create-update-vs6zm" event={"ID":"31f8bb53-a8a8-448f-8f42-349232e383ec","Type":"ContainerStarted","Data":"7af48946578bd61f228568d72f3eb420af6ca8e366fa1043f54b2e44d5cf2462"} Mar 12 13:32:09 crc kubenswrapper[4778]: I0312 13:32:09.761131 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-thsh7" podStartSLOduration=1.761109894 podStartE2EDuration="1.761109894s" podCreationTimestamp="2026-03-12 13:32:08 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:32:09.752471688 +0000 UTC m=+1348.201167084" watchObservedRunningTime="2026-03-12 13:32:09.761109894 +0000 UTC m=+1348.209805290" Mar 12 13:32:09 crc kubenswrapper[4778]: I0312 13:32:09.777018 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-2abd-account-create-update-chtfz" podStartSLOduration=1.777001316 podStartE2EDuration="1.777001316s" podCreationTimestamp="2026-03-12 13:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:32:09.773487696 +0000 UTC m=+1348.222183092" watchObservedRunningTime="2026-03-12 13:32:09.777001316 +0000 UTC m=+1348.225696702" Mar 12 13:32:09 crc kubenswrapper[4778]: I0312 13:32:09.795354 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-e25a-account-create-update-vs6zm" podStartSLOduration=1.795334947 podStartE2EDuration="1.795334947s" podCreationTimestamp="2026-03-12 13:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:32:09.789544742 +0000 UTC m=+1348.238240138" watchObservedRunningTime="2026-03-12 13:32:09.795334947 +0000 UTC m=+1348.244030343" Mar 12 13:32:09 crc kubenswrapper[4778]: I0312 13:32:09.860213 4778 scope.go:117] "RemoveContainer" containerID="59816c72d24ee82ad1e212a580fdeb3c8cd671c1f79b421c31d995678ebec873" Mar 12 13:32:10 crc kubenswrapper[4778]: I0312 13:32:10.747855 4778 generic.go:334] "Generic (PLEG): container finished" podID="31f8bb53-a8a8-448f-8f42-349232e383ec" containerID="60b2242b65665faad21e5afc28edb1788f01dc784524abe26ac1b4cb9a5296a5" exitCode=0 Mar 12 13:32:10 crc kubenswrapper[4778]: I0312 13:32:10.747946 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-e25a-account-create-update-vs6zm" event={"ID":"31f8bb53-a8a8-448f-8f42-349232e383ec","Type":"ContainerDied","Data":"60b2242b65665faad21e5afc28edb1788f01dc784524abe26ac1b4cb9a5296a5"} Mar 12 13:32:10 crc kubenswrapper[4778]: I0312 13:32:10.749997 4778 generic.go:334] "Generic (PLEG): container finished" podID="729468a8-fded-4564-96c8-471d3cf48825" containerID="a62186594073bc08d5194d8b9ce9a46d1a29b359b5ca56b7c0f8fed38f9c7470" exitCode=0 Mar 12 13:32:10 crc kubenswrapper[4778]: I0312 13:32:10.750096 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2abd-account-create-update-chtfz" event={"ID":"729468a8-fded-4564-96c8-471d3cf48825","Type":"ContainerDied","Data":"a62186594073bc08d5194d8b9ce9a46d1a29b359b5ca56b7c0f8fed38f9c7470"} Mar 12 13:32:10 crc kubenswrapper[4778]: I0312 13:32:10.751781 4778 generic.go:334] "Generic (PLEG): container finished" podID="9793dfb5-c2a5-4dc1-993d-9e024a810ce8" containerID="3bf3addaa75cf85838ea1739e9760ca68c0ed5921fd1bd5da9e4725715df9a99" exitCode=0 Mar 12 13:32:10 crc kubenswrapper[4778]: I0312 13:32:10.751822 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-thsh7" event={"ID":"9793dfb5-c2a5-4dc1-993d-9e024a810ce8","Type":"ContainerDied","Data":"3bf3addaa75cf85838ea1739e9760ca68c0ed5921fd1bd5da9e4725715df9a99"} Mar 12 13:32:11 crc kubenswrapper[4778]: I0312 13:32:11.089589 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-nh9xs" Mar 12 13:32:11 crc kubenswrapper[4778]: I0312 13:32:11.184570 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-31ed-account-create-update-h8bhm" Mar 12 13:32:11 crc kubenswrapper[4778]: I0312 13:32:11.187815 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-gxsm6" Mar 12 13:32:11 crc kubenswrapper[4778]: I0312 13:32:11.232252 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdrjg\" (UniqueName: \"kubernetes.io/projected/3800be73-3a09-42b6-8d01-592ccbc6aaa3-kube-api-access-tdrjg\") pod \"3800be73-3a09-42b6-8d01-592ccbc6aaa3\" (UID: \"3800be73-3a09-42b6-8d01-592ccbc6aaa3\") " Mar 12 13:32:11 crc kubenswrapper[4778]: I0312 13:32:11.232385 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3800be73-3a09-42b6-8d01-592ccbc6aaa3-operator-scripts\") pod \"3800be73-3a09-42b6-8d01-592ccbc6aaa3\" (UID: \"3800be73-3a09-42b6-8d01-592ccbc6aaa3\") " Mar 12 13:32:11 crc kubenswrapper[4778]: I0312 13:32:11.233311 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3800be73-3a09-42b6-8d01-592ccbc6aaa3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3800be73-3a09-42b6-8d01-592ccbc6aaa3" (UID: "3800be73-3a09-42b6-8d01-592ccbc6aaa3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:32:11 crc kubenswrapper[4778]: I0312 13:32:11.239937 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3800be73-3a09-42b6-8d01-592ccbc6aaa3-kube-api-access-tdrjg" (OuterVolumeSpecName: "kube-api-access-tdrjg") pod "3800be73-3a09-42b6-8d01-592ccbc6aaa3" (UID: "3800be73-3a09-42b6-8d01-592ccbc6aaa3"). InnerVolumeSpecName "kube-api-access-tdrjg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:32:11 crc kubenswrapper[4778]: I0312 13:32:11.333474 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsvdw\" (UniqueName: \"kubernetes.io/projected/4b694c81-3b07-45a1-9ca1-1e47e7430f1f-kube-api-access-vsvdw\") pod \"4b694c81-3b07-45a1-9ca1-1e47e7430f1f\" (UID: \"4b694c81-3b07-45a1-9ca1-1e47e7430f1f\") " Mar 12 13:32:11 crc kubenswrapper[4778]: I0312 13:32:11.333544 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnm5n\" (UniqueName: \"kubernetes.io/projected/79ff3988-1976-4049-8277-0acb36da44c5-kube-api-access-rnm5n\") pod \"79ff3988-1976-4049-8277-0acb36da44c5\" (UID: \"79ff3988-1976-4049-8277-0acb36da44c5\") " Mar 12 13:32:11 crc kubenswrapper[4778]: I0312 13:32:11.333570 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79ff3988-1976-4049-8277-0acb36da44c5-operator-scripts\") pod \"79ff3988-1976-4049-8277-0acb36da44c5\" (UID: \"79ff3988-1976-4049-8277-0acb36da44c5\") " Mar 12 13:32:11 crc kubenswrapper[4778]: I0312 13:32:11.333618 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b694c81-3b07-45a1-9ca1-1e47e7430f1f-operator-scripts\") pod \"4b694c81-3b07-45a1-9ca1-1e47e7430f1f\" (UID: \"4b694c81-3b07-45a1-9ca1-1e47e7430f1f\") " Mar 12 13:32:11 crc kubenswrapper[4778]: I0312 13:32:11.334157 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdrjg\" (UniqueName: \"kubernetes.io/projected/3800be73-3a09-42b6-8d01-592ccbc6aaa3-kube-api-access-tdrjg\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:11 crc kubenswrapper[4778]: I0312 13:32:11.334173 4778 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/3800be73-3a09-42b6-8d01-592ccbc6aaa3-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:11 crc kubenswrapper[4778]: I0312 13:32:11.334619 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b694c81-3b07-45a1-9ca1-1e47e7430f1f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4b694c81-3b07-45a1-9ca1-1e47e7430f1f" (UID: "4b694c81-3b07-45a1-9ca1-1e47e7430f1f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:32:11 crc kubenswrapper[4778]: I0312 13:32:11.335003 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79ff3988-1976-4049-8277-0acb36da44c5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "79ff3988-1976-4049-8277-0acb36da44c5" (UID: "79ff3988-1976-4049-8277-0acb36da44c5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:32:11 crc kubenswrapper[4778]: I0312 13:32:11.338603 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b694c81-3b07-45a1-9ca1-1e47e7430f1f-kube-api-access-vsvdw" (OuterVolumeSpecName: "kube-api-access-vsvdw") pod "4b694c81-3b07-45a1-9ca1-1e47e7430f1f" (UID: "4b694c81-3b07-45a1-9ca1-1e47e7430f1f"). InnerVolumeSpecName "kube-api-access-vsvdw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:32:11 crc kubenswrapper[4778]: I0312 13:32:11.339122 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79ff3988-1976-4049-8277-0acb36da44c5-kube-api-access-rnm5n" (OuterVolumeSpecName: "kube-api-access-rnm5n") pod "79ff3988-1976-4049-8277-0acb36da44c5" (UID: "79ff3988-1976-4049-8277-0acb36da44c5"). InnerVolumeSpecName "kube-api-access-rnm5n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:32:11 crc kubenswrapper[4778]: I0312 13:32:11.436084 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsvdw\" (UniqueName: \"kubernetes.io/projected/4b694c81-3b07-45a1-9ca1-1e47e7430f1f-kube-api-access-vsvdw\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:11 crc kubenswrapper[4778]: I0312 13:32:11.436118 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnm5n\" (UniqueName: \"kubernetes.io/projected/79ff3988-1976-4049-8277-0acb36da44c5-kube-api-access-rnm5n\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:11 crc kubenswrapper[4778]: I0312 13:32:11.436127 4778 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79ff3988-1976-4049-8277-0acb36da44c5-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:11 crc kubenswrapper[4778]: I0312 13:32:11.436136 4778 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b694c81-3b07-45a1-9ca1-1e47e7430f1f-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:11 crc kubenswrapper[4778]: I0312 13:32:11.776501 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-nh9xs" event={"ID":"3800be73-3a09-42b6-8d01-592ccbc6aaa3","Type":"ContainerDied","Data":"95d2ed0897cf2bbe34932a1f54f1dbbea9a78e7b987d5a18fb616fc0888d29bf"} Mar 12 13:32:11 crc kubenswrapper[4778]: I0312 13:32:11.776749 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95d2ed0897cf2bbe34932a1f54f1dbbea9a78e7b987d5a18fb616fc0888d29bf" Mar 12 13:32:11 crc kubenswrapper[4778]: I0312 13:32:11.776805 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-nh9xs" Mar 12 13:32:11 crc kubenswrapper[4778]: I0312 13:32:11.791872 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-31ed-account-create-update-h8bhm" event={"ID":"4b694c81-3b07-45a1-9ca1-1e47e7430f1f","Type":"ContainerDied","Data":"f7445a7703568cb1d2b72115b07ec7b8761dc64a43181a517180c2f025e9efc9"} Mar 12 13:32:11 crc kubenswrapper[4778]: I0312 13:32:11.791900 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7445a7703568cb1d2b72115b07ec7b8761dc64a43181a517180c2f025e9efc9" Mar 12 13:32:11 crc kubenswrapper[4778]: I0312 13:32:11.791960 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-31ed-account-create-update-h8bhm" Mar 12 13:32:11 crc kubenswrapper[4778]: I0312 13:32:11.795626 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-gxsm6" event={"ID":"79ff3988-1976-4049-8277-0acb36da44c5","Type":"ContainerDied","Data":"8a4949a97a8077a1a4d6a1981effa75bacbbaf8684ec8ee1397aba5987fcdd14"} Mar 12 13:32:11 crc kubenswrapper[4778]: I0312 13:32:11.795649 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a4949a97a8077a1a4d6a1981effa75bacbbaf8684ec8ee1397aba5987fcdd14" Mar 12 13:32:11 crc kubenswrapper[4778]: I0312 13:32:11.795680 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-gxsm6" Mar 12 13:32:13 crc kubenswrapper[4778]: I0312 13:32:13.812531 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-thsh7" event={"ID":"9793dfb5-c2a5-4dc1-993d-9e024a810ce8","Type":"ContainerDied","Data":"f4df7f9c0ba0bb575b3f8143a0a2a4366ed16f5aa6229ee4e07f76139f3bdb21"} Mar 12 13:32:13 crc kubenswrapper[4778]: I0312 13:32:13.812871 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4df7f9c0ba0bb575b3f8143a0a2a4366ed16f5aa6229ee4e07f76139f3bdb21" Mar 12 13:32:13 crc kubenswrapper[4778]: I0312 13:32:13.814084 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-e25a-account-create-update-vs6zm" event={"ID":"31f8bb53-a8a8-448f-8f42-349232e383ec","Type":"ContainerDied","Data":"7af48946578bd61f228568d72f3eb420af6ca8e366fa1043f54b2e44d5cf2462"} Mar 12 13:32:13 crc kubenswrapper[4778]: I0312 13:32:13.814122 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7af48946578bd61f228568d72f3eb420af6ca8e366fa1043f54b2e44d5cf2462" Mar 12 13:32:13 crc kubenswrapper[4778]: I0312 13:32:13.819912 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2abd-account-create-update-chtfz" event={"ID":"729468a8-fded-4564-96c8-471d3cf48825","Type":"ContainerDied","Data":"9dc728b0f4cc0eafb30fa27920b49b62f951fc64e849dafb2097b41e077244e3"} Mar 12 13:32:13 crc kubenswrapper[4778]: I0312 13:32:13.819960 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9dc728b0f4cc0eafb30fa27920b49b62f951fc64e849dafb2097b41e077244e3" Mar 12 13:32:13 crc kubenswrapper[4778]: I0312 13:32:13.978537 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-thsh7" Mar 12 13:32:14 crc kubenswrapper[4778]: I0312 13:32:14.004143 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-2abd-account-create-update-chtfz" Mar 12 13:32:14 crc kubenswrapper[4778]: I0312 13:32:14.016372 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-e25a-account-create-update-vs6zm" Mar 12 13:32:14 crc kubenswrapper[4778]: I0312 13:32:14.074556 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfc2k\" (UniqueName: \"kubernetes.io/projected/729468a8-fded-4564-96c8-471d3cf48825-kube-api-access-tfc2k\") pod \"729468a8-fded-4564-96c8-471d3cf48825\" (UID: \"729468a8-fded-4564-96c8-471d3cf48825\") " Mar 12 13:32:14 crc kubenswrapper[4778]: I0312 13:32:14.074659 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9793dfb5-c2a5-4dc1-993d-9e024a810ce8-operator-scripts\") pod \"9793dfb5-c2a5-4dc1-993d-9e024a810ce8\" (UID: \"9793dfb5-c2a5-4dc1-993d-9e024a810ce8\") " Mar 12 13:32:14 crc kubenswrapper[4778]: I0312 13:32:14.074685 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/729468a8-fded-4564-96c8-471d3cf48825-operator-scripts\") pod \"729468a8-fded-4564-96c8-471d3cf48825\" (UID: \"729468a8-fded-4564-96c8-471d3cf48825\") " Mar 12 13:32:14 crc kubenswrapper[4778]: I0312 13:32:14.074699 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtxwj\" (UniqueName: \"kubernetes.io/projected/9793dfb5-c2a5-4dc1-993d-9e024a810ce8-kube-api-access-dtxwj\") pod \"9793dfb5-c2a5-4dc1-993d-9e024a810ce8\" (UID: \"9793dfb5-c2a5-4dc1-993d-9e024a810ce8\") " Mar 12 13:32:14 crc kubenswrapper[4778]: I0312 13:32:14.076153 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9793dfb5-c2a5-4dc1-993d-9e024a810ce8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") 
pod "9793dfb5-c2a5-4dc1-993d-9e024a810ce8" (UID: "9793dfb5-c2a5-4dc1-993d-9e024a810ce8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:32:14 crc kubenswrapper[4778]: I0312 13:32:14.076648 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/729468a8-fded-4564-96c8-471d3cf48825-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "729468a8-fded-4564-96c8-471d3cf48825" (UID: "729468a8-fded-4564-96c8-471d3cf48825"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:32:14 crc kubenswrapper[4778]: I0312 13:32:14.079336 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/729468a8-fded-4564-96c8-471d3cf48825-kube-api-access-tfc2k" (OuterVolumeSpecName: "kube-api-access-tfc2k") pod "729468a8-fded-4564-96c8-471d3cf48825" (UID: "729468a8-fded-4564-96c8-471d3cf48825"). InnerVolumeSpecName "kube-api-access-tfc2k". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:32:14 crc kubenswrapper[4778]: I0312 13:32:14.079476 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9793dfb5-c2a5-4dc1-993d-9e024a810ce8-kube-api-access-dtxwj" (OuterVolumeSpecName: "kube-api-access-dtxwj") pod "9793dfb5-c2a5-4dc1-993d-9e024a810ce8" (UID: "9793dfb5-c2a5-4dc1-993d-9e024a810ce8"). InnerVolumeSpecName "kube-api-access-dtxwj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:32:14 crc kubenswrapper[4778]: I0312 13:32:14.176346 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbpjb\" (UniqueName: \"kubernetes.io/projected/31f8bb53-a8a8-448f-8f42-349232e383ec-kube-api-access-wbpjb\") pod \"31f8bb53-a8a8-448f-8f42-349232e383ec\" (UID: \"31f8bb53-a8a8-448f-8f42-349232e383ec\") " Mar 12 13:32:14 crc kubenswrapper[4778]: I0312 13:32:14.176432 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31f8bb53-a8a8-448f-8f42-349232e383ec-operator-scripts\") pod \"31f8bb53-a8a8-448f-8f42-349232e383ec\" (UID: \"31f8bb53-a8a8-448f-8f42-349232e383ec\") " Mar 12 13:32:14 crc kubenswrapper[4778]: I0312 13:32:14.176746 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfc2k\" (UniqueName: \"kubernetes.io/projected/729468a8-fded-4564-96c8-471d3cf48825-kube-api-access-tfc2k\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:14 crc kubenswrapper[4778]: I0312 13:32:14.176763 4778 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9793dfb5-c2a5-4dc1-993d-9e024a810ce8-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:14 crc kubenswrapper[4778]: I0312 13:32:14.176772 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtxwj\" (UniqueName: \"kubernetes.io/projected/9793dfb5-c2a5-4dc1-993d-9e024a810ce8-kube-api-access-dtxwj\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:14 crc kubenswrapper[4778]: I0312 13:32:14.176782 4778 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/729468a8-fded-4564-96c8-471d3cf48825-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:14 crc kubenswrapper[4778]: I0312 13:32:14.177015 4778 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31f8bb53-a8a8-448f-8f42-349232e383ec-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "31f8bb53-a8a8-448f-8f42-349232e383ec" (UID: "31f8bb53-a8a8-448f-8f42-349232e383ec"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:32:14 crc kubenswrapper[4778]: I0312 13:32:14.186394 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31f8bb53-a8a8-448f-8f42-349232e383ec-kube-api-access-wbpjb" (OuterVolumeSpecName: "kube-api-access-wbpjb") pod "31f8bb53-a8a8-448f-8f42-349232e383ec" (UID: "31f8bb53-a8a8-448f-8f42-349232e383ec"). InnerVolumeSpecName "kube-api-access-wbpjb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:32:14 crc kubenswrapper[4778]: I0312 13:32:14.280460 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbpjb\" (UniqueName: \"kubernetes.io/projected/31f8bb53-a8a8-448f-8f42-349232e383ec-kube-api-access-wbpjb\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:14 crc kubenswrapper[4778]: I0312 13:32:14.280491 4778 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31f8bb53-a8a8-448f-8f42-349232e383ec-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:14 crc kubenswrapper[4778]: I0312 13:32:14.829793 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-57cfm" event={"ID":"ec77eae6-4dac-4535-b0d3-98bd3422e4de","Type":"ContainerStarted","Data":"fc1fdc3b0586065e85920687a0b5a3f3a3005e79a719fda2a25493dca50c853e"} Mar 12 13:32:14 crc kubenswrapper[4778]: I0312 13:32:14.831414 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-2abd-account-create-update-chtfz" Mar 12 13:32:14 crc kubenswrapper[4778]: I0312 13:32:14.831494 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-thsh7" Mar 12 13:32:14 crc kubenswrapper[4778]: I0312 13:32:14.831417 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-xg6z4" event={"ID":"befeb973-a1de-48f9-8de0-5559f75472dc","Type":"ContainerStarted","Data":"58438369e99b6009fb9ed545548de66fcc857634b3821d960d6e5735646c9d5c"} Mar 12 13:32:14 crc kubenswrapper[4778]: I0312 13:32:14.831792 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-e25a-account-create-update-vs6zm" Mar 12 13:32:14 crc kubenswrapper[4778]: I0312 13:32:14.849076 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-57cfm" podStartSLOduration=2.447329164 podStartE2EDuration="6.8490285s" podCreationTimestamp="2026-03-12 13:32:08 +0000 UTC" firstStartedPulling="2026-03-12 13:32:09.419702296 +0000 UTC m=+1347.868397692" lastFinishedPulling="2026-03-12 13:32:13.821401622 +0000 UTC m=+1352.270097028" observedRunningTime="2026-03-12 13:32:14.847540358 +0000 UTC m=+1353.296235744" watchObservedRunningTime="2026-03-12 13:32:14.8490285 +0000 UTC m=+1353.297723916" Mar 12 13:32:14 crc kubenswrapper[4778]: I0312 13:32:14.863625 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-xg6z4" podStartSLOduration=2.7914096710000003 podStartE2EDuration="35.863602524s" podCreationTimestamp="2026-03-12 13:31:39 +0000 UTC" firstStartedPulling="2026-03-12 13:31:40.739714159 +0000 UTC m=+1319.188409555" lastFinishedPulling="2026-03-12 13:32:13.811907012 +0000 UTC m=+1352.260602408" observedRunningTime="2026-03-12 13:32:14.861220327 +0000 UTC m=+1353.309915753" watchObservedRunningTime="2026-03-12 13:32:14.863602524 +0000 UTC m=+1353.312297950" Mar 12 13:32:15 crc kubenswrapper[4778]: I0312 13:32:15.306175 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-764c5664d7-56bl9" 
Mar 12 13:32:15 crc kubenswrapper[4778]: I0312 13:32:15.371199 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-8rkss"] Mar 12 13:32:15 crc kubenswrapper[4778]: I0312 13:32:15.371446 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-8rkss" podUID="5dd405d8-c82b-49d0-a871-1c7c847638df" containerName="dnsmasq-dns" containerID="cri-o://cea929c8344637e6d5422f933285cd8d16eba93f79935ed8a5b3e6067be52dcb" gracePeriod=10 Mar 12 13:32:15 crc kubenswrapper[4778]: I0312 13:32:15.809581 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-8rkss" Mar 12 13:32:15 crc kubenswrapper[4778]: I0312 13:32:15.840807 4778 generic.go:334] "Generic (PLEG): container finished" podID="5dd405d8-c82b-49d0-a871-1c7c847638df" containerID="cea929c8344637e6d5422f933285cd8d16eba93f79935ed8a5b3e6067be52dcb" exitCode=0 Mar 12 13:32:15 crc kubenswrapper[4778]: I0312 13:32:15.841606 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-8rkss" Mar 12 13:32:15 crc kubenswrapper[4778]: I0312 13:32:15.842020 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-8rkss" event={"ID":"5dd405d8-c82b-49d0-a871-1c7c847638df","Type":"ContainerDied","Data":"cea929c8344637e6d5422f933285cd8d16eba93f79935ed8a5b3e6067be52dcb"} Mar 12 13:32:15 crc kubenswrapper[4778]: I0312 13:32:15.842043 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-8rkss" event={"ID":"5dd405d8-c82b-49d0-a871-1c7c847638df","Type":"ContainerDied","Data":"0cf55f4c77e0e83cbfd4fa4c9d04d1940beb400c64b78ffec689c21b7bd18ebf"} Mar 12 13:32:15 crc kubenswrapper[4778]: I0312 13:32:15.842057 4778 scope.go:117] "RemoveContainer" containerID="cea929c8344637e6d5422f933285cd8d16eba93f79935ed8a5b3e6067be52dcb" Mar 12 13:32:15 crc kubenswrapper[4778]: I0312 13:32:15.873345 4778 scope.go:117] "RemoveContainer" containerID="94ff3282c8f419818bdb2d0b93c2c285da1b67c9dafa1b0134a3349197ba9d96" Mar 12 13:32:15 crc kubenswrapper[4778]: I0312 13:32:15.896768 4778 scope.go:117] "RemoveContainer" containerID="cea929c8344637e6d5422f933285cd8d16eba93f79935ed8a5b3e6067be52dcb" Mar 12 13:32:15 crc kubenswrapper[4778]: E0312 13:32:15.897433 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cea929c8344637e6d5422f933285cd8d16eba93f79935ed8a5b3e6067be52dcb\": container with ID starting with cea929c8344637e6d5422f933285cd8d16eba93f79935ed8a5b3e6067be52dcb not found: ID does not exist" containerID="cea929c8344637e6d5422f933285cd8d16eba93f79935ed8a5b3e6067be52dcb" Mar 12 13:32:15 crc kubenswrapper[4778]: I0312 13:32:15.897486 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cea929c8344637e6d5422f933285cd8d16eba93f79935ed8a5b3e6067be52dcb"} err="failed to get container status 
\"cea929c8344637e6d5422f933285cd8d16eba93f79935ed8a5b3e6067be52dcb\": rpc error: code = NotFound desc = could not find container \"cea929c8344637e6d5422f933285cd8d16eba93f79935ed8a5b3e6067be52dcb\": container with ID starting with cea929c8344637e6d5422f933285cd8d16eba93f79935ed8a5b3e6067be52dcb not found: ID does not exist" Mar 12 13:32:15 crc kubenswrapper[4778]: I0312 13:32:15.897512 4778 scope.go:117] "RemoveContainer" containerID="94ff3282c8f419818bdb2d0b93c2c285da1b67c9dafa1b0134a3349197ba9d96" Mar 12 13:32:15 crc kubenswrapper[4778]: E0312 13:32:15.897790 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94ff3282c8f419818bdb2d0b93c2c285da1b67c9dafa1b0134a3349197ba9d96\": container with ID starting with 94ff3282c8f419818bdb2d0b93c2c285da1b67c9dafa1b0134a3349197ba9d96 not found: ID does not exist" containerID="94ff3282c8f419818bdb2d0b93c2c285da1b67c9dafa1b0134a3349197ba9d96" Mar 12 13:32:15 crc kubenswrapper[4778]: I0312 13:32:15.897835 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94ff3282c8f419818bdb2d0b93c2c285da1b67c9dafa1b0134a3349197ba9d96"} err="failed to get container status \"94ff3282c8f419818bdb2d0b93c2c285da1b67c9dafa1b0134a3349197ba9d96\": rpc error: code = NotFound desc = could not find container \"94ff3282c8f419818bdb2d0b93c2c285da1b67c9dafa1b0134a3349197ba9d96\": container with ID starting with 94ff3282c8f419818bdb2d0b93c2c285da1b67c9dafa1b0134a3349197ba9d96 not found: ID does not exist" Mar 12 13:32:15 crc kubenswrapper[4778]: I0312 13:32:15.907424 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5dd405d8-c82b-49d0-a871-1c7c847638df-config\") pod \"5dd405d8-c82b-49d0-a871-1c7c847638df\" (UID: \"5dd405d8-c82b-49d0-a871-1c7c847638df\") " Mar 12 13:32:15 crc kubenswrapper[4778]: I0312 13:32:15.907465 4778 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5dd405d8-c82b-49d0-a871-1c7c847638df-dns-svc\") pod \"5dd405d8-c82b-49d0-a871-1c7c847638df\" (UID: \"5dd405d8-c82b-49d0-a871-1c7c847638df\") " Mar 12 13:32:15 crc kubenswrapper[4778]: I0312 13:32:15.907583 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5dd405d8-c82b-49d0-a871-1c7c847638df-ovsdbserver-nb\") pod \"5dd405d8-c82b-49d0-a871-1c7c847638df\" (UID: \"5dd405d8-c82b-49d0-a871-1c7c847638df\") " Mar 12 13:32:15 crc kubenswrapper[4778]: I0312 13:32:15.907852 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5dd405d8-c82b-49d0-a871-1c7c847638df-ovsdbserver-sb\") pod \"5dd405d8-c82b-49d0-a871-1c7c847638df\" (UID: \"5dd405d8-c82b-49d0-a871-1c7c847638df\") " Mar 12 13:32:15 crc kubenswrapper[4778]: I0312 13:32:15.907918 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r26t5\" (UniqueName: \"kubernetes.io/projected/5dd405d8-c82b-49d0-a871-1c7c847638df-kube-api-access-r26t5\") pod \"5dd405d8-c82b-49d0-a871-1c7c847638df\" (UID: \"5dd405d8-c82b-49d0-a871-1c7c847638df\") " Mar 12 13:32:15 crc kubenswrapper[4778]: I0312 13:32:15.913614 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dd405d8-c82b-49d0-a871-1c7c847638df-kube-api-access-r26t5" (OuterVolumeSpecName: "kube-api-access-r26t5") pod "5dd405d8-c82b-49d0-a871-1c7c847638df" (UID: "5dd405d8-c82b-49d0-a871-1c7c847638df"). InnerVolumeSpecName "kube-api-access-r26t5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:32:15 crc kubenswrapper[4778]: I0312 13:32:15.954896 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5dd405d8-c82b-49d0-a871-1c7c847638df-config" (OuterVolumeSpecName: "config") pod "5dd405d8-c82b-49d0-a871-1c7c847638df" (UID: "5dd405d8-c82b-49d0-a871-1c7c847638df"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:32:15 crc kubenswrapper[4778]: I0312 13:32:15.956970 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5dd405d8-c82b-49d0-a871-1c7c847638df-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5dd405d8-c82b-49d0-a871-1c7c847638df" (UID: "5dd405d8-c82b-49d0-a871-1c7c847638df"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:32:15 crc kubenswrapper[4778]: I0312 13:32:15.960096 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5dd405d8-c82b-49d0-a871-1c7c847638df-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5dd405d8-c82b-49d0-a871-1c7c847638df" (UID: "5dd405d8-c82b-49d0-a871-1c7c847638df"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:32:15 crc kubenswrapper[4778]: I0312 13:32:15.971783 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5dd405d8-c82b-49d0-a871-1c7c847638df-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5dd405d8-c82b-49d0-a871-1c7c847638df" (UID: "5dd405d8-c82b-49d0-a871-1c7c847638df"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:32:16 crc kubenswrapper[4778]: I0312 13:32:16.011016 4778 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5dd405d8-c82b-49d0-a871-1c7c847638df-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:16 crc kubenswrapper[4778]: I0312 13:32:16.011058 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r26t5\" (UniqueName: \"kubernetes.io/projected/5dd405d8-c82b-49d0-a871-1c7c847638df-kube-api-access-r26t5\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:16 crc kubenswrapper[4778]: I0312 13:32:16.011071 4778 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5dd405d8-c82b-49d0-a871-1c7c847638df-config\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:16 crc kubenswrapper[4778]: I0312 13:32:16.011080 4778 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5dd405d8-c82b-49d0-a871-1c7c847638df-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:16 crc kubenswrapper[4778]: I0312 13:32:16.011088 4778 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5dd405d8-c82b-49d0-a871-1c7c847638df-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:16 crc kubenswrapper[4778]: I0312 13:32:16.175205 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-8rkss"] Mar 12 13:32:16 crc kubenswrapper[4778]: I0312 13:32:16.182997 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-8rkss"] Mar 12 13:32:16 crc kubenswrapper[4778]: I0312 13:32:16.267493 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5dd405d8-c82b-49d0-a871-1c7c847638df" path="/var/lib/kubelet/pods/5dd405d8-c82b-49d0-a871-1c7c847638df/volumes" Mar 12 13:32:17 crc kubenswrapper[4778]: 
I0312 13:32:17.860945 4778 generic.go:334] "Generic (PLEG): container finished" podID="ec77eae6-4dac-4535-b0d3-98bd3422e4de" containerID="fc1fdc3b0586065e85920687a0b5a3f3a3005e79a719fda2a25493dca50c853e" exitCode=0 Mar 12 13:32:17 crc kubenswrapper[4778]: I0312 13:32:17.860992 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-57cfm" event={"ID":"ec77eae6-4dac-4535-b0d3-98bd3422e4de","Type":"ContainerDied","Data":"fc1fdc3b0586065e85920687a0b5a3f3a3005e79a719fda2a25493dca50c853e"} Mar 12 13:32:19 crc kubenswrapper[4778]: I0312 13:32:19.163197 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-57cfm" Mar 12 13:32:19 crc kubenswrapper[4778]: I0312 13:32:19.261810 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec77eae6-4dac-4535-b0d3-98bd3422e4de-combined-ca-bundle\") pod \"ec77eae6-4dac-4535-b0d3-98bd3422e4de\" (UID: \"ec77eae6-4dac-4535-b0d3-98bd3422e4de\") " Mar 12 13:32:19 crc kubenswrapper[4778]: I0312 13:32:19.262236 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec77eae6-4dac-4535-b0d3-98bd3422e4de-config-data\") pod \"ec77eae6-4dac-4535-b0d3-98bd3422e4de\" (UID: \"ec77eae6-4dac-4535-b0d3-98bd3422e4de\") " Mar 12 13:32:19 crc kubenswrapper[4778]: I0312 13:32:19.262317 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlp2n\" (UniqueName: \"kubernetes.io/projected/ec77eae6-4dac-4535-b0d3-98bd3422e4de-kube-api-access-xlp2n\") pod \"ec77eae6-4dac-4535-b0d3-98bd3422e4de\" (UID: \"ec77eae6-4dac-4535-b0d3-98bd3422e4de\") " Mar 12 13:32:19 crc kubenswrapper[4778]: I0312 13:32:19.268006 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/ec77eae6-4dac-4535-b0d3-98bd3422e4de-kube-api-access-xlp2n" (OuterVolumeSpecName: "kube-api-access-xlp2n") pod "ec77eae6-4dac-4535-b0d3-98bd3422e4de" (UID: "ec77eae6-4dac-4535-b0d3-98bd3422e4de"). InnerVolumeSpecName "kube-api-access-xlp2n". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:32:19 crc kubenswrapper[4778]: I0312 13:32:19.286516 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec77eae6-4dac-4535-b0d3-98bd3422e4de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ec77eae6-4dac-4535-b0d3-98bd3422e4de" (UID: "ec77eae6-4dac-4535-b0d3-98bd3422e4de"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:32:19 crc kubenswrapper[4778]: I0312 13:32:19.310728 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec77eae6-4dac-4535-b0d3-98bd3422e4de-config-data" (OuterVolumeSpecName: "config-data") pod "ec77eae6-4dac-4535-b0d3-98bd3422e4de" (UID: "ec77eae6-4dac-4535-b0d3-98bd3422e4de"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:32:19 crc kubenswrapper[4778]: I0312 13:32:19.363686 4778 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec77eae6-4dac-4535-b0d3-98bd3422e4de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:19 crc kubenswrapper[4778]: I0312 13:32:19.363727 4778 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec77eae6-4dac-4535-b0d3-98bd3422e4de-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:19 crc kubenswrapper[4778]: I0312 13:32:19.363738 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlp2n\" (UniqueName: \"kubernetes.io/projected/ec77eae6-4dac-4535-b0d3-98bd3422e4de-kube-api-access-xlp2n\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:19 crc kubenswrapper[4778]: I0312 13:32:19.880980 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-57cfm" event={"ID":"ec77eae6-4dac-4535-b0d3-98bd3422e4de","Type":"ContainerDied","Data":"e416653bf42b2d593f800644adfdb9dd57501c2cbfadea4a9a6bd3bc9f20d011"} Mar 12 13:32:19 crc kubenswrapper[4778]: I0312 13:32:19.881024 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e416653bf42b2d593f800644adfdb9dd57501c2cbfadea4a9a6bd3bc9f20d011" Mar 12 13:32:19 crc kubenswrapper[4778]: I0312 13:32:19.881059 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-57cfm" Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.118114 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-f4n2l"] Mar 12 13:32:20 crc kubenswrapper[4778]: E0312 13:32:20.118706 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dd405d8-c82b-49d0-a871-1c7c847638df" containerName="init" Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.118718 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dd405d8-c82b-49d0-a871-1c7c847638df" containerName="init" Mar 12 13:32:20 crc kubenswrapper[4778]: E0312 13:32:20.118733 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec77eae6-4dac-4535-b0d3-98bd3422e4de" containerName="keystone-db-sync" Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.118739 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec77eae6-4dac-4535-b0d3-98bd3422e4de" containerName="keystone-db-sync" Mar 12 13:32:20 crc kubenswrapper[4778]: E0312 13:32:20.118747 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9793dfb5-c2a5-4dc1-993d-9e024a810ce8" containerName="mariadb-database-create" Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.118753 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="9793dfb5-c2a5-4dc1-993d-9e024a810ce8" containerName="mariadb-database-create" Mar 12 13:32:20 crc kubenswrapper[4778]: E0312 13:32:20.118764 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b694c81-3b07-45a1-9ca1-1e47e7430f1f" containerName="mariadb-account-create-update" Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.118769 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b694c81-3b07-45a1-9ca1-1e47e7430f1f" containerName="mariadb-account-create-update" Mar 12 13:32:20 crc kubenswrapper[4778]: E0312 13:32:20.118783 4778 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="729468a8-fded-4564-96c8-471d3cf48825" containerName="mariadb-account-create-update" Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.118791 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="729468a8-fded-4564-96c8-471d3cf48825" containerName="mariadb-account-create-update" Mar 12 13:32:20 crc kubenswrapper[4778]: E0312 13:32:20.118806 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79ff3988-1976-4049-8277-0acb36da44c5" containerName="mariadb-database-create" Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.118811 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="79ff3988-1976-4049-8277-0acb36da44c5" containerName="mariadb-database-create" Mar 12 13:32:20 crc kubenswrapper[4778]: E0312 13:32:20.118822 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3800be73-3a09-42b6-8d01-592ccbc6aaa3" containerName="mariadb-database-create" Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.118828 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="3800be73-3a09-42b6-8d01-592ccbc6aaa3" containerName="mariadb-database-create" Mar 12 13:32:20 crc kubenswrapper[4778]: E0312 13:32:20.118841 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dd405d8-c82b-49d0-a871-1c7c847638df" containerName="dnsmasq-dns" Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.118846 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dd405d8-c82b-49d0-a871-1c7c847638df" containerName="dnsmasq-dns" Mar 12 13:32:20 crc kubenswrapper[4778]: E0312 13:32:20.118859 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31f8bb53-a8a8-448f-8f42-349232e383ec" containerName="mariadb-account-create-update" Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.118866 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="31f8bb53-a8a8-448f-8f42-349232e383ec" containerName="mariadb-account-create-update" Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 
13:32:20.119029 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b694c81-3b07-45a1-9ca1-1e47e7430f1f" containerName="mariadb-account-create-update" Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.119041 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="31f8bb53-a8a8-448f-8f42-349232e383ec" containerName="mariadb-account-create-update" Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.119052 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="79ff3988-1976-4049-8277-0acb36da44c5" containerName="mariadb-database-create" Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.119059 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dd405d8-c82b-49d0-a871-1c7c847638df" containerName="dnsmasq-dns" Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.119069 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="9793dfb5-c2a5-4dc1-993d-9e024a810ce8" containerName="mariadb-database-create" Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.119081 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="3800be73-3a09-42b6-8d01-592ccbc6aaa3" containerName="mariadb-database-create" Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.119089 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="729468a8-fded-4564-96c8-471d3cf48825" containerName="mariadb-account-create-update" Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.119098 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec77eae6-4dac-4535-b0d3-98bd3422e4de" containerName="keystone-db-sync" Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.120251 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-f4n2l" Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.142933 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-f4n2l"] Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.191686 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-vhhp2"] Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.192973 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vhhp2" Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.195768 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.195957 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.196073 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-rjpsk" Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.196273 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.200484 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.222482 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vhhp2"] Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.276697 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljj6s\" (UniqueName: \"kubernetes.io/projected/57227510-d79a-4924-941f-fdc35bda5d41-kube-api-access-ljj6s\") pod \"keystone-bootstrap-vhhp2\" (UID: \"57227510-d79a-4924-941f-fdc35bda5d41\") " pod="openstack/keystone-bootstrap-vhhp2" Mar 12 13:32:20 crc kubenswrapper[4778]: 
I0312 13:32:20.276742 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5b157abf-4269-4449-8522-ac31cfbafd7e-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-f4n2l\" (UID: \"5b157abf-4269-4449-8522-ac31cfbafd7e\") " pod="openstack/dnsmasq-dns-5959f8865f-f4n2l" Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.276777 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5b157abf-4269-4449-8522-ac31cfbafd7e-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-f4n2l\" (UID: \"5b157abf-4269-4449-8522-ac31cfbafd7e\") " pod="openstack/dnsmasq-dns-5959f8865f-f4n2l" Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.276803 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57227510-d79a-4924-941f-fdc35bda5d41-scripts\") pod \"keystone-bootstrap-vhhp2\" (UID: \"57227510-d79a-4924-941f-fdc35bda5d41\") " pod="openstack/keystone-bootstrap-vhhp2" Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.276837 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/57227510-d79a-4924-941f-fdc35bda5d41-credential-keys\") pod \"keystone-bootstrap-vhhp2\" (UID: \"57227510-d79a-4924-941f-fdc35bda5d41\") " pod="openstack/keystone-bootstrap-vhhp2" Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.276885 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5b157abf-4269-4449-8522-ac31cfbafd7e-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-f4n2l\" (UID: \"5b157abf-4269-4449-8522-ac31cfbafd7e\") " pod="openstack/dnsmasq-dns-5959f8865f-f4n2l" Mar 12 13:32:20 crc 
kubenswrapper[4778]: I0312 13:32:20.276911 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57227510-d79a-4924-941f-fdc35bda5d41-combined-ca-bundle\") pod \"keystone-bootstrap-vhhp2\" (UID: \"57227510-d79a-4924-941f-fdc35bda5d41\") " pod="openstack/keystone-bootstrap-vhhp2" Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.276944 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b157abf-4269-4449-8522-ac31cfbafd7e-dns-svc\") pod \"dnsmasq-dns-5959f8865f-f4n2l\" (UID: \"5b157abf-4269-4449-8522-ac31cfbafd7e\") " pod="openstack/dnsmasq-dns-5959f8865f-f4n2l" Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.276971 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b157abf-4269-4449-8522-ac31cfbafd7e-config\") pod \"dnsmasq-dns-5959f8865f-f4n2l\" (UID: \"5b157abf-4269-4449-8522-ac31cfbafd7e\") " pod="openstack/dnsmasq-dns-5959f8865f-f4n2l" Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.276996 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/57227510-d79a-4924-941f-fdc35bda5d41-fernet-keys\") pod \"keystone-bootstrap-vhhp2\" (UID: \"57227510-d79a-4924-941f-fdc35bda5d41\") " pod="openstack/keystone-bootstrap-vhhp2" Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.277019 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57227510-d79a-4924-941f-fdc35bda5d41-config-data\") pod \"keystone-bootstrap-vhhp2\" (UID: \"57227510-d79a-4924-941f-fdc35bda5d41\") " pod="openstack/keystone-bootstrap-vhhp2" Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 
13:32:20.277051 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxhsf\" (UniqueName: \"kubernetes.io/projected/5b157abf-4269-4449-8522-ac31cfbafd7e-kube-api-access-dxhsf\") pod \"dnsmasq-dns-5959f8865f-f4n2l\" (UID: \"5b157abf-4269-4449-8522-ac31cfbafd7e\") " pod="openstack/dnsmasq-dns-5959f8865f-f4n2l"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.370277 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-d5pl9"]
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.371503 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-d5pl9"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.377842 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.377971 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-5pxn8"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.378068 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.379503 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljj6s\" (UniqueName: \"kubernetes.io/projected/57227510-d79a-4924-941f-fdc35bda5d41-kube-api-access-ljj6s\") pod \"keystone-bootstrap-vhhp2\" (UID: \"57227510-d79a-4924-941f-fdc35bda5d41\") " pod="openstack/keystone-bootstrap-vhhp2"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.379539 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5b157abf-4269-4449-8522-ac31cfbafd7e-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-f4n2l\" (UID: \"5b157abf-4269-4449-8522-ac31cfbafd7e\") " pod="openstack/dnsmasq-dns-5959f8865f-f4n2l"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.379564 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5b157abf-4269-4449-8522-ac31cfbafd7e-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-f4n2l\" (UID: \"5b157abf-4269-4449-8522-ac31cfbafd7e\") " pod="openstack/dnsmasq-dns-5959f8865f-f4n2l"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.379586 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57227510-d79a-4924-941f-fdc35bda5d41-scripts\") pod \"keystone-bootstrap-vhhp2\" (UID: \"57227510-d79a-4924-941f-fdc35bda5d41\") " pod="openstack/keystone-bootstrap-vhhp2"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.379610 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/57227510-d79a-4924-941f-fdc35bda5d41-credential-keys\") pod \"keystone-bootstrap-vhhp2\" (UID: \"57227510-d79a-4924-941f-fdc35bda5d41\") " pod="openstack/keystone-bootstrap-vhhp2"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.379659 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5b157abf-4269-4449-8522-ac31cfbafd7e-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-f4n2l\" (UID: \"5b157abf-4269-4449-8522-ac31cfbafd7e\") " pod="openstack/dnsmasq-dns-5959f8865f-f4n2l"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.379689 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57227510-d79a-4924-941f-fdc35bda5d41-combined-ca-bundle\") pod \"keystone-bootstrap-vhhp2\" (UID: \"57227510-d79a-4924-941f-fdc35bda5d41\") " pod="openstack/keystone-bootstrap-vhhp2"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.379716 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b157abf-4269-4449-8522-ac31cfbafd7e-dns-svc\") pod \"dnsmasq-dns-5959f8865f-f4n2l\" (UID: \"5b157abf-4269-4449-8522-ac31cfbafd7e\") " pod="openstack/dnsmasq-dns-5959f8865f-f4n2l"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.379738 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b157abf-4269-4449-8522-ac31cfbafd7e-config\") pod \"dnsmasq-dns-5959f8865f-f4n2l\" (UID: \"5b157abf-4269-4449-8522-ac31cfbafd7e\") " pod="openstack/dnsmasq-dns-5959f8865f-f4n2l"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.379759 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/57227510-d79a-4924-941f-fdc35bda5d41-fernet-keys\") pod \"keystone-bootstrap-vhhp2\" (UID: \"57227510-d79a-4924-941f-fdc35bda5d41\") " pod="openstack/keystone-bootstrap-vhhp2"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.379777 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57227510-d79a-4924-941f-fdc35bda5d41-config-data\") pod \"keystone-bootstrap-vhhp2\" (UID: \"57227510-d79a-4924-941f-fdc35bda5d41\") " pod="openstack/keystone-bootstrap-vhhp2"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.379814 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxhsf\" (UniqueName: \"kubernetes.io/projected/5b157abf-4269-4449-8522-ac31cfbafd7e-kube-api-access-dxhsf\") pod \"dnsmasq-dns-5959f8865f-f4n2l\" (UID: \"5b157abf-4269-4449-8522-ac31cfbafd7e\") " pod="openstack/dnsmasq-dns-5959f8865f-f4n2l"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.380964 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5b157abf-4269-4449-8522-ac31cfbafd7e-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-f4n2l\" (UID: \"5b157abf-4269-4449-8522-ac31cfbafd7e\") " pod="openstack/dnsmasq-dns-5959f8865f-f4n2l"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.381343 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5b157abf-4269-4449-8522-ac31cfbafd7e-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-f4n2l\" (UID: \"5b157abf-4269-4449-8522-ac31cfbafd7e\") " pod="openstack/dnsmasq-dns-5959f8865f-f4n2l"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.381994 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5b157abf-4269-4449-8522-ac31cfbafd7e-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-f4n2l\" (UID: \"5b157abf-4269-4449-8522-ac31cfbafd7e\") " pod="openstack/dnsmasq-dns-5959f8865f-f4n2l"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.386119 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b157abf-4269-4449-8522-ac31cfbafd7e-dns-svc\") pod \"dnsmasq-dns-5959f8865f-f4n2l\" (UID: \"5b157abf-4269-4449-8522-ac31cfbafd7e\") " pod="openstack/dnsmasq-dns-5959f8865f-f4n2l"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.387457 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b157abf-4269-4449-8522-ac31cfbafd7e-config\") pod \"dnsmasq-dns-5959f8865f-f4n2l\" (UID: \"5b157abf-4269-4449-8522-ac31cfbafd7e\") " pod="openstack/dnsmasq-dns-5959f8865f-f4n2l"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.393024 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57227510-d79a-4924-941f-fdc35bda5d41-combined-ca-bundle\") pod \"keystone-bootstrap-vhhp2\" (UID: \"57227510-d79a-4924-941f-fdc35bda5d41\") " pod="openstack/keystone-bootstrap-vhhp2"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.405444 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57227510-d79a-4924-941f-fdc35bda5d41-scripts\") pod \"keystone-bootstrap-vhhp2\" (UID: \"57227510-d79a-4924-941f-fdc35bda5d41\") " pod="openstack/keystone-bootstrap-vhhp2"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.409504 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/57227510-d79a-4924-941f-fdc35bda5d41-fernet-keys\") pod \"keystone-bootstrap-vhhp2\" (UID: \"57227510-d79a-4924-941f-fdc35bda5d41\") " pod="openstack/keystone-bootstrap-vhhp2"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.410011 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57227510-d79a-4924-941f-fdc35bda5d41-config-data\") pod \"keystone-bootstrap-vhhp2\" (UID: \"57227510-d79a-4924-941f-fdc35bda5d41\") " pod="openstack/keystone-bootstrap-vhhp2"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.410052 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-d5pl9"]
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.426732 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/57227510-d79a-4924-941f-fdc35bda5d41-credential-keys\") pod \"keystone-bootstrap-vhhp2\" (UID: \"57227510-d79a-4924-941f-fdc35bda5d41\") " pod="openstack/keystone-bootstrap-vhhp2"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.468649 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.482779 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxhsf\" (UniqueName: \"kubernetes.io/projected/5b157abf-4269-4449-8522-ac31cfbafd7e-kube-api-access-dxhsf\") pod \"dnsmasq-dns-5959f8865f-f4n2l\" (UID: \"5b157abf-4269-4449-8522-ac31cfbafd7e\") " pod="openstack/dnsmasq-dns-5959f8865f-f4n2l"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.499320 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb110a1e-6281-437d-b857-eb79c4953e1a-config-data\") pod \"cinder-db-sync-d5pl9\" (UID: \"bb110a1e-6281-437d-b857-eb79c4953e1a\") " pod="openstack/cinder-db-sync-d5pl9"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.499688 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb110a1e-6281-437d-b857-eb79c4953e1a-combined-ca-bundle\") pod \"cinder-db-sync-d5pl9\" (UID: \"bb110a1e-6281-437d-b857-eb79c4953e1a\") " pod="openstack/cinder-db-sync-d5pl9"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.499729 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bb110a1e-6281-437d-b857-eb79c4953e1a-db-sync-config-data\") pod \"cinder-db-sync-d5pl9\" (UID: \"bb110a1e-6281-437d-b857-eb79c4953e1a\") " pod="openstack/cinder-db-sync-d5pl9"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.499816 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb110a1e-6281-437d-b857-eb79c4953e1a-scripts\") pod \"cinder-db-sync-d5pl9\" (UID: \"bb110a1e-6281-437d-b857-eb79c4953e1a\") " pod="openstack/cinder-db-sync-d5pl9"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.499897 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpfhh\" (UniqueName: \"kubernetes.io/projected/bb110a1e-6281-437d-b857-eb79c4953e1a-kube-api-access-jpfhh\") pod \"cinder-db-sync-d5pl9\" (UID: \"bb110a1e-6281-437d-b857-eb79c4953e1a\") " pod="openstack/cinder-db-sync-d5pl9"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.499958 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bb110a1e-6281-437d-b857-eb79c4953e1a-etc-machine-id\") pod \"cinder-db-sync-d5pl9\" (UID: \"bb110a1e-6281-437d-b857-eb79c4953e1a\") " pod="openstack/cinder-db-sync-d5pl9"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.526901 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljj6s\" (UniqueName: \"kubernetes.io/projected/57227510-d79a-4924-941f-fdc35bda5d41-kube-api-access-ljj6s\") pod \"keystone-bootstrap-vhhp2\" (UID: \"57227510-d79a-4924-941f-fdc35bda5d41\") " pod="openstack/keystone-bootstrap-vhhp2"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.533753 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vhhp2"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.554692 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.589306 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.597610 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.622142 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpfhh\" (UniqueName: \"kubernetes.io/projected/bb110a1e-6281-437d-b857-eb79c4953e1a-kube-api-access-jpfhh\") pod \"cinder-db-sync-d5pl9\" (UID: \"bb110a1e-6281-437d-b857-eb79c4953e1a\") " pod="openstack/cinder-db-sync-d5pl9"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.622232 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bb110a1e-6281-437d-b857-eb79c4953e1a-etc-machine-id\") pod \"cinder-db-sync-d5pl9\" (UID: \"bb110a1e-6281-437d-b857-eb79c4953e1a\") " pod="openstack/cinder-db-sync-d5pl9"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.622305 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb110a1e-6281-437d-b857-eb79c4953e1a-config-data\") pod \"cinder-db-sync-d5pl9\" (UID: \"bb110a1e-6281-437d-b857-eb79c4953e1a\") " pod="openstack/cinder-db-sync-d5pl9"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.622343 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb110a1e-6281-437d-b857-eb79c4953e1a-combined-ca-bundle\") pod \"cinder-db-sync-d5pl9\" (UID: \"bb110a1e-6281-437d-b857-eb79c4953e1a\") " pod="openstack/cinder-db-sync-d5pl9"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.622363 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bb110a1e-6281-437d-b857-eb79c4953e1a-db-sync-config-data\") pod \"cinder-db-sync-d5pl9\" (UID: \"bb110a1e-6281-437d-b857-eb79c4953e1a\") " pod="openstack/cinder-db-sync-d5pl9"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.622397 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb110a1e-6281-437d-b857-eb79c4953e1a-scripts\") pod \"cinder-db-sync-d5pl9\" (UID: \"bb110a1e-6281-437d-b857-eb79c4953e1a\") " pod="openstack/cinder-db-sync-d5pl9"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.628827 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb110a1e-6281-437d-b857-eb79c4953e1a-combined-ca-bundle\") pod \"cinder-db-sync-d5pl9\" (UID: \"bb110a1e-6281-437d-b857-eb79c4953e1a\") " pod="openstack/cinder-db-sync-d5pl9"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.629902 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bb110a1e-6281-437d-b857-eb79c4953e1a-etc-machine-id\") pod \"cinder-db-sync-d5pl9\" (UID: \"bb110a1e-6281-437d-b857-eb79c4953e1a\") " pod="openstack/cinder-db-sync-d5pl9"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.630840 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bb110a1e-6281-437d-b857-eb79c4953e1a-db-sync-config-data\") pod \"cinder-db-sync-d5pl9\" (UID: \"bb110a1e-6281-437d-b857-eb79c4953e1a\") " pod="openstack/cinder-db-sync-d5pl9"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.639203 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb110a1e-6281-437d-b857-eb79c4953e1a-scripts\") pod \"cinder-db-sync-d5pl9\" (UID: \"bb110a1e-6281-437d-b857-eb79c4953e1a\") " pod="openstack/cinder-db-sync-d5pl9"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.644572 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb110a1e-6281-437d-b857-eb79c4953e1a-config-data\") pod \"cinder-db-sync-d5pl9\" (UID: \"bb110a1e-6281-437d-b857-eb79c4953e1a\") " pod="openstack/cinder-db-sync-d5pl9"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.647348 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-6cvgs"]
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.648478 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-6cvgs"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.655732 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.656448 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.661462 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-d7pv5"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.669593 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.682343 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-6cvgs"]
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.687167 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpfhh\" (UniqueName: \"kubernetes.io/projected/bb110a1e-6281-437d-b857-eb79c4953e1a-kube-api-access-jpfhh\") pod \"cinder-db-sync-d5pl9\" (UID: \"bb110a1e-6281-437d-b857-eb79c4953e1a\") " pod="openstack/cinder-db-sync-d5pl9"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.692081 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-f4n2l"]
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.692885 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-f4n2l"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.700100 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-c75fp"]
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.703397 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-c75fp"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.714080 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-p59s9"]
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.721264 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-p59s9"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.723712 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-c75fp"]
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.725481 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-85xbx"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.725890 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.728974 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b4cb6d6d-bc05-4809-83a7-5aacda62cc10-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b4cb6d6d-bc05-4809-83a7-5aacda62cc10\") " pod="openstack/ceilometer-0"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.729069 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4cb6d6d-bc05-4809-83a7-5aacda62cc10-log-httpd\") pod \"ceilometer-0\" (UID: \"b4cb6d6d-bc05-4809-83a7-5aacda62cc10\") " pod="openstack/ceilometer-0"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.729111 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62wfc\" (UniqueName: \"kubernetes.io/projected/b4cb6d6d-bc05-4809-83a7-5aacda62cc10-kube-api-access-62wfc\") pod \"ceilometer-0\" (UID: \"b4cb6d6d-bc05-4809-83a7-5aacda62cc10\") " pod="openstack/ceilometer-0"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.729143 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4cb6d6d-bc05-4809-83a7-5aacda62cc10-config-data\") pod \"ceilometer-0\" (UID: \"b4cb6d6d-bc05-4809-83a7-5aacda62cc10\") " pod="openstack/ceilometer-0"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.729260 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4cb6d6d-bc05-4809-83a7-5aacda62cc10-scripts\") pod \"ceilometer-0\" (UID: \"b4cb6d6d-bc05-4809-83a7-5aacda62cc10\") " pod="openstack/ceilometer-0"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.729350 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4cb6d6d-bc05-4809-83a7-5aacda62cc10-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b4cb6d6d-bc05-4809-83a7-5aacda62cc10\") " pod="openstack/ceilometer-0"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.729420 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4cb6d6d-bc05-4809-83a7-5aacda62cc10-run-httpd\") pod \"ceilometer-0\" (UID: \"b4cb6d6d-bc05-4809-83a7-5aacda62cc10\") " pod="openstack/ceilometer-0"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.752202 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-p59s9"]
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.769471 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-zr86r"]
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.770997 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-zr86r"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.774005 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.774070 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-72bvj"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.774196 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.780912 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-zr86r"]
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.831048 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76f8f940-670d-47a0-a90a-afd3aa37a726-combined-ca-bundle\") pod \"neutron-db-sync-6cvgs\" (UID: \"76f8f940-670d-47a0-a90a-afd3aa37a726\") " pod="openstack/neutron-db-sync-6cvgs"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.831102 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed693bb0-f387-42e2-ae31-9ce01aee1cf9-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-c75fp\" (UID: \"ed693bb0-f387-42e2-ae31-9ce01aee1cf9\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-c75fp"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.831145 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz588\" (UniqueName: \"kubernetes.io/projected/ed693bb0-f387-42e2-ae31-9ce01aee1cf9-kube-api-access-sz588\") pod \"dnsmasq-dns-58dd9ff6bc-c75fp\" (UID: \"ed693bb0-f387-42e2-ae31-9ce01aee1cf9\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-c75fp"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.831170 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4cb6d6d-bc05-4809-83a7-5aacda62cc10-scripts\") pod \"ceilometer-0\" (UID: \"b4cb6d6d-bc05-4809-83a7-5aacda62cc10\") " pod="openstack/ceilometer-0"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.831225 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a682334f-73c0-4e38-8f95-e5de661319bb-combined-ca-bundle\") pod \"barbican-db-sync-p59s9\" (UID: \"a682334f-73c0-4e38-8f95-e5de661319bb\") " pod="openstack/barbican-db-sync-p59s9"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.831250 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4cb6d6d-bc05-4809-83a7-5aacda62cc10-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b4cb6d6d-bc05-4809-83a7-5aacda62cc10\") " pod="openstack/ceilometer-0"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.831266 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a682334f-73c0-4e38-8f95-e5de661319bb-db-sync-config-data\") pod \"barbican-db-sync-p59s9\" (UID: \"a682334f-73c0-4e38-8f95-e5de661319bb\") " pod="openstack/barbican-db-sync-p59s9"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.831286 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed693bb0-f387-42e2-ae31-9ce01aee1cf9-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-c75fp\" (UID: \"ed693bb0-f387-42e2-ae31-9ce01aee1cf9\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-c75fp"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.831317 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4cb6d6d-bc05-4809-83a7-5aacda62cc10-run-httpd\") pod \"ceilometer-0\" (UID: \"b4cb6d6d-bc05-4809-83a7-5aacda62cc10\") " pod="openstack/ceilometer-0"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.831335 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed693bb0-f387-42e2-ae31-9ce01aee1cf9-config\") pod \"dnsmasq-dns-58dd9ff6bc-c75fp\" (UID: \"ed693bb0-f387-42e2-ae31-9ce01aee1cf9\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-c75fp"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.831357 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b4cb6d6d-bc05-4809-83a7-5aacda62cc10-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b4cb6d6d-bc05-4809-83a7-5aacda62cc10\") " pod="openstack/ceilometer-0"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.831373 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ed693bb0-f387-42e2-ae31-9ce01aee1cf9-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-c75fp\" (UID: \"ed693bb0-f387-42e2-ae31-9ce01aee1cf9\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-c75fp"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.831401 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/76f8f940-670d-47a0-a90a-afd3aa37a726-config\") pod \"neutron-db-sync-6cvgs\" (UID: \"76f8f940-670d-47a0-a90a-afd3aa37a726\") " pod="openstack/neutron-db-sync-6cvgs"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.831424 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4cb6d6d-bc05-4809-83a7-5aacda62cc10-log-httpd\") pod \"ceilometer-0\" (UID: \"b4cb6d6d-bc05-4809-83a7-5aacda62cc10\") " pod="openstack/ceilometer-0"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.831443 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdgk6\" (UniqueName: \"kubernetes.io/projected/76f8f940-670d-47a0-a90a-afd3aa37a726-kube-api-access-qdgk6\") pod \"neutron-db-sync-6cvgs\" (UID: \"76f8f940-670d-47a0-a90a-afd3aa37a726\") " pod="openstack/neutron-db-sync-6cvgs"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.831463 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62wfc\" (UniqueName: \"kubernetes.io/projected/b4cb6d6d-bc05-4809-83a7-5aacda62cc10-kube-api-access-62wfc\") pod \"ceilometer-0\" (UID: \"b4cb6d6d-bc05-4809-83a7-5aacda62cc10\") " pod="openstack/ceilometer-0"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.831481 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5hmn\" (UniqueName: \"kubernetes.io/projected/a682334f-73c0-4e38-8f95-e5de661319bb-kube-api-access-r5hmn\") pod \"barbican-db-sync-p59s9\" (UID: \"a682334f-73c0-4e38-8f95-e5de661319bb\") " pod="openstack/barbican-db-sync-p59s9"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.831500 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4cb6d6d-bc05-4809-83a7-5aacda62cc10-config-data\") pod \"ceilometer-0\" (UID: \"b4cb6d6d-bc05-4809-83a7-5aacda62cc10\") " pod="openstack/ceilometer-0"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.831519 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed693bb0-f387-42e2-ae31-9ce01aee1cf9-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-c75fp\" (UID: \"ed693bb0-f387-42e2-ae31-9ce01aee1cf9\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-c75fp"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.835635 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-d5pl9"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.837934 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4cb6d6d-bc05-4809-83a7-5aacda62cc10-run-httpd\") pod \"ceilometer-0\" (UID: \"b4cb6d6d-bc05-4809-83a7-5aacda62cc10\") " pod="openstack/ceilometer-0"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.838231 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4cb6d6d-bc05-4809-83a7-5aacda62cc10-log-httpd\") pod \"ceilometer-0\" (UID: \"b4cb6d6d-bc05-4809-83a7-5aacda62cc10\") " pod="openstack/ceilometer-0"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.842039 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b4cb6d6d-bc05-4809-83a7-5aacda62cc10-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b4cb6d6d-bc05-4809-83a7-5aacda62cc10\") " pod="openstack/ceilometer-0"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.842797 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4cb6d6d-bc05-4809-83a7-5aacda62cc10-config-data\") pod \"ceilometer-0\" (UID: \"b4cb6d6d-bc05-4809-83a7-5aacda62cc10\") " pod="openstack/ceilometer-0"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.850286 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4cb6d6d-bc05-4809-83a7-5aacda62cc10-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b4cb6d6d-bc05-4809-83a7-5aacda62cc10\") " pod="openstack/ceilometer-0"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.854020 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4cb6d6d-bc05-4809-83a7-5aacda62cc10-scripts\") pod \"ceilometer-0\" (UID: \"b4cb6d6d-bc05-4809-83a7-5aacda62cc10\") " pod="openstack/ceilometer-0"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.862119 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62wfc\" (UniqueName: \"kubernetes.io/projected/b4cb6d6d-bc05-4809-83a7-5aacda62cc10-kube-api-access-62wfc\") pod \"ceilometer-0\" (UID: \"b4cb6d6d-bc05-4809-83a7-5aacda62cc10\") " pod="openstack/ceilometer-0"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.935289 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed693bb0-f387-42e2-ae31-9ce01aee1cf9-config\") pod \"dnsmasq-dns-58dd9ff6bc-c75fp\" (UID: \"ed693bb0-f387-42e2-ae31-9ce01aee1cf9\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-c75fp"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.935329 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ed693bb0-f387-42e2-ae31-9ce01aee1cf9-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-c75fp\" (UID: \"ed693bb0-f387-42e2-ae31-9ce01aee1cf9\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-c75fp"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.935359 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/76f8f940-670d-47a0-a90a-afd3aa37a726-config\") pod \"neutron-db-sync-6cvgs\" (UID: \"76f8f940-670d-47a0-a90a-afd3aa37a726\") " pod="openstack/neutron-db-sync-6cvgs"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.935389 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdgk6\" (UniqueName: \"kubernetes.io/projected/76f8f940-670d-47a0-a90a-afd3aa37a726-kube-api-access-qdgk6\") pod \"neutron-db-sync-6cvgs\" (UID: \"76f8f940-670d-47a0-a90a-afd3aa37a726\") " pod="openstack/neutron-db-sync-6cvgs"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.935419 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5hmn\" (UniqueName: \"kubernetes.io/projected/a682334f-73c0-4e38-8f95-e5de661319bb-kube-api-access-r5hmn\") pod \"barbican-db-sync-p59s9\" (UID: \"a682334f-73c0-4e38-8f95-e5de661319bb\") " pod="openstack/barbican-db-sync-p59s9"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.935440 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed693bb0-f387-42e2-ae31-9ce01aee1cf9-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-c75fp\" (UID: \"ed693bb0-f387-42e2-ae31-9ce01aee1cf9\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-c75fp"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.935459 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76f8f940-670d-47a0-a90a-afd3aa37a726-combined-ca-bundle\") pod \"neutron-db-sync-6cvgs\" (UID: \"76f8f940-670d-47a0-a90a-afd3aa37a726\") " pod="openstack/neutron-db-sync-6cvgs"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.935488 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txchs\" (UniqueName: \"kubernetes.io/projected/faeb9cb3-46ae-428f-8c0e-538a2e552072-kube-api-access-txchs\") pod \"placement-db-sync-zr86r\" (UID: \"faeb9cb3-46ae-428f-8c0e-538a2e552072\") " pod="openstack/placement-db-sync-zr86r"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.935525 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed693bb0-f387-42e2-ae31-9ce01aee1cf9-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-c75fp\" (UID: \"ed693bb0-f387-42e2-ae31-9ce01aee1cf9\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-c75fp"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.935540 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faeb9cb3-46ae-428f-8c0e-538a2e552072-config-data\") pod \"placement-db-sync-zr86r\" (UID: \"faeb9cb3-46ae-428f-8c0e-538a2e552072\") " pod="openstack/placement-db-sync-zr86r"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.935567 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faeb9cb3-46ae-428f-8c0e-538a2e552072-combined-ca-bundle\") pod \"placement-db-sync-zr86r\" (UID: \"faeb9cb3-46ae-428f-8c0e-538a2e552072\") " pod="openstack/placement-db-sync-zr86r"
Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.935592 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz588\" (UniqueName: \"kubernetes.io/projected/ed693bb0-f387-42e2-ae31-9ce01aee1cf9-kube-api-access-sz588\") pod 
\"dnsmasq-dns-58dd9ff6bc-c75fp\" (UID: \"ed693bb0-f387-42e2-ae31-9ce01aee1cf9\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-c75fp" Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.935620 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/faeb9cb3-46ae-428f-8c0e-538a2e552072-logs\") pod \"placement-db-sync-zr86r\" (UID: \"faeb9cb3-46ae-428f-8c0e-538a2e552072\") " pod="openstack/placement-db-sync-zr86r" Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.935638 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/faeb9cb3-46ae-428f-8c0e-538a2e552072-scripts\") pod \"placement-db-sync-zr86r\" (UID: \"faeb9cb3-46ae-428f-8c0e-538a2e552072\") " pod="openstack/placement-db-sync-zr86r" Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.935661 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a682334f-73c0-4e38-8f95-e5de661319bb-combined-ca-bundle\") pod \"barbican-db-sync-p59s9\" (UID: \"a682334f-73c0-4e38-8f95-e5de661319bb\") " pod="openstack/barbican-db-sync-p59s9" Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.935684 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a682334f-73c0-4e38-8f95-e5de661319bb-db-sync-config-data\") pod \"barbican-db-sync-p59s9\" (UID: \"a682334f-73c0-4e38-8f95-e5de661319bb\") " pod="openstack/barbican-db-sync-p59s9" Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.935704 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed693bb0-f387-42e2-ae31-9ce01aee1cf9-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-c75fp\" (UID: \"ed693bb0-f387-42e2-ae31-9ce01aee1cf9\") " 
pod="openstack/dnsmasq-dns-58dd9ff6bc-c75fp" Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.936650 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed693bb0-f387-42e2-ae31-9ce01aee1cf9-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-c75fp\" (UID: \"ed693bb0-f387-42e2-ae31-9ce01aee1cf9\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-c75fp" Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.937246 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed693bb0-f387-42e2-ae31-9ce01aee1cf9-config\") pod \"dnsmasq-dns-58dd9ff6bc-c75fp\" (UID: \"ed693bb0-f387-42e2-ae31-9ce01aee1cf9\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-c75fp" Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.937731 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ed693bb0-f387-42e2-ae31-9ce01aee1cf9-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-c75fp\" (UID: \"ed693bb0-f387-42e2-ae31-9ce01aee1cf9\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-c75fp" Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.940786 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/76f8f940-670d-47a0-a90a-afd3aa37a726-config\") pod \"neutron-db-sync-6cvgs\" (UID: \"76f8f940-670d-47a0-a90a-afd3aa37a726\") " pod="openstack/neutron-db-sync-6cvgs" Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.941546 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed693bb0-f387-42e2-ae31-9ce01aee1cf9-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-c75fp\" (UID: \"ed693bb0-f387-42e2-ae31-9ce01aee1cf9\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-c75fp" Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.941558 4778 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed693bb0-f387-42e2-ae31-9ce01aee1cf9-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-c75fp\" (UID: \"ed693bb0-f387-42e2-ae31-9ce01aee1cf9\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-c75fp" Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.946219 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a682334f-73c0-4e38-8f95-e5de661319bb-db-sync-config-data\") pod \"barbican-db-sync-p59s9\" (UID: \"a682334f-73c0-4e38-8f95-e5de661319bb\") " pod="openstack/barbican-db-sync-p59s9" Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.954126 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a682334f-73c0-4e38-8f95-e5de661319bb-combined-ca-bundle\") pod \"barbican-db-sync-p59s9\" (UID: \"a682334f-73c0-4e38-8f95-e5de661319bb\") " pod="openstack/barbican-db-sync-p59s9" Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.957564 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz588\" (UniqueName: \"kubernetes.io/projected/ed693bb0-f387-42e2-ae31-9ce01aee1cf9-kube-api-access-sz588\") pod \"dnsmasq-dns-58dd9ff6bc-c75fp\" (UID: \"ed693bb0-f387-42e2-ae31-9ce01aee1cf9\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-c75fp" Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.960425 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdgk6\" (UniqueName: \"kubernetes.io/projected/76f8f940-670d-47a0-a90a-afd3aa37a726-kube-api-access-qdgk6\") pod \"neutron-db-sync-6cvgs\" (UID: \"76f8f940-670d-47a0-a90a-afd3aa37a726\") " pod="openstack/neutron-db-sync-6cvgs" Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.960765 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5hmn\" (UniqueName: 
\"kubernetes.io/projected/a682334f-73c0-4e38-8f95-e5de661319bb-kube-api-access-r5hmn\") pod \"barbican-db-sync-p59s9\" (UID: \"a682334f-73c0-4e38-8f95-e5de661319bb\") " pod="openstack/barbican-db-sync-p59s9" Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.961227 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76f8f940-670d-47a0-a90a-afd3aa37a726-combined-ca-bundle\") pod \"neutron-db-sync-6cvgs\" (UID: \"76f8f940-670d-47a0-a90a-afd3aa37a726\") " pod="openstack/neutron-db-sync-6cvgs" Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.980473 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 13:32:20 crc kubenswrapper[4778]: I0312 13:32:20.989970 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-6cvgs" Mar 12 13:32:21 crc kubenswrapper[4778]: I0312 13:32:21.037697 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txchs\" (UniqueName: \"kubernetes.io/projected/faeb9cb3-46ae-428f-8c0e-538a2e552072-kube-api-access-txchs\") pod \"placement-db-sync-zr86r\" (UID: \"faeb9cb3-46ae-428f-8c0e-538a2e552072\") " pod="openstack/placement-db-sync-zr86r" Mar 12 13:32:21 crc kubenswrapper[4778]: I0312 13:32:21.037740 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faeb9cb3-46ae-428f-8c0e-538a2e552072-config-data\") pod \"placement-db-sync-zr86r\" (UID: \"faeb9cb3-46ae-428f-8c0e-538a2e552072\") " pod="openstack/placement-db-sync-zr86r" Mar 12 13:32:21 crc kubenswrapper[4778]: I0312 13:32:21.037772 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faeb9cb3-46ae-428f-8c0e-538a2e552072-combined-ca-bundle\") pod \"placement-db-sync-zr86r\" (UID: 
\"faeb9cb3-46ae-428f-8c0e-538a2e552072\") " pod="openstack/placement-db-sync-zr86r" Mar 12 13:32:21 crc kubenswrapper[4778]: I0312 13:32:21.037808 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/faeb9cb3-46ae-428f-8c0e-538a2e552072-logs\") pod \"placement-db-sync-zr86r\" (UID: \"faeb9cb3-46ae-428f-8c0e-538a2e552072\") " pod="openstack/placement-db-sync-zr86r" Mar 12 13:32:21 crc kubenswrapper[4778]: I0312 13:32:21.037825 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/faeb9cb3-46ae-428f-8c0e-538a2e552072-scripts\") pod \"placement-db-sync-zr86r\" (UID: \"faeb9cb3-46ae-428f-8c0e-538a2e552072\") " pod="openstack/placement-db-sync-zr86r" Mar 12 13:32:21 crc kubenswrapper[4778]: I0312 13:32:21.039256 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/faeb9cb3-46ae-428f-8c0e-538a2e552072-logs\") pod \"placement-db-sync-zr86r\" (UID: \"faeb9cb3-46ae-428f-8c0e-538a2e552072\") " pod="openstack/placement-db-sync-zr86r" Mar 12 13:32:21 crc kubenswrapper[4778]: I0312 13:32:21.042415 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faeb9cb3-46ae-428f-8c0e-538a2e552072-combined-ca-bundle\") pod \"placement-db-sync-zr86r\" (UID: \"faeb9cb3-46ae-428f-8c0e-538a2e552072\") " pod="openstack/placement-db-sync-zr86r" Mar 12 13:32:21 crc kubenswrapper[4778]: I0312 13:32:21.042458 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/faeb9cb3-46ae-428f-8c0e-538a2e552072-scripts\") pod \"placement-db-sync-zr86r\" (UID: \"faeb9cb3-46ae-428f-8c0e-538a2e552072\") " pod="openstack/placement-db-sync-zr86r" Mar 12 13:32:21 crc kubenswrapper[4778]: I0312 13:32:21.047268 4778 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faeb9cb3-46ae-428f-8c0e-538a2e552072-config-data\") pod \"placement-db-sync-zr86r\" (UID: \"faeb9cb3-46ae-428f-8c0e-538a2e552072\") " pod="openstack/placement-db-sync-zr86r" Mar 12 13:32:21 crc kubenswrapper[4778]: I0312 13:32:21.055224 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txchs\" (UniqueName: \"kubernetes.io/projected/faeb9cb3-46ae-428f-8c0e-538a2e552072-kube-api-access-txchs\") pod \"placement-db-sync-zr86r\" (UID: \"faeb9cb3-46ae-428f-8c0e-538a2e552072\") " pod="openstack/placement-db-sync-zr86r" Mar 12 13:32:21 crc kubenswrapper[4778]: I0312 13:32:21.064318 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-c75fp" Mar 12 13:32:21 crc kubenswrapper[4778]: I0312 13:32:21.075563 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-p59s9" Mar 12 13:32:21 crc kubenswrapper[4778]: I0312 13:32:21.088366 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-zr86r" Mar 12 13:32:21 crc kubenswrapper[4778]: I0312 13:32:21.132614 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vhhp2"] Mar 12 13:32:21 crc kubenswrapper[4778]: W0312 13:32:21.187544 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57227510_d79a_4924_941f_fdc35bda5d41.slice/crio-b47ac6e700f6b26e79b8d33a0cd1c034fedfe7bd0e82190cbdb182f9948a680c WatchSource:0}: Error finding container b47ac6e700f6b26e79b8d33a0cd1c034fedfe7bd0e82190cbdb182f9948a680c: Status 404 returned error can't find the container with id b47ac6e700f6b26e79b8d33a0cd1c034fedfe7bd0e82190cbdb182f9948a680c Mar 12 13:32:21 crc kubenswrapper[4778]: I0312 13:32:21.231799 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-f4n2l"] Mar 12 13:32:21 crc kubenswrapper[4778]: I0312 13:32:21.562230 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-d5pl9"] Mar 12 13:32:21 crc kubenswrapper[4778]: I0312 13:32:21.598450 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-6cvgs"] Mar 12 13:32:21 crc kubenswrapper[4778]: I0312 13:32:21.647400 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 13:32:21 crc kubenswrapper[4778]: I0312 13:32:21.871765 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-c75fp"] Mar 12 13:32:21 crc kubenswrapper[4778]: I0312 13:32:21.902757 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-zr86r"] Mar 12 13:32:21 crc kubenswrapper[4778]: I0312 13:32:21.908092 4778 generic.go:334] "Generic (PLEG): container finished" podID="5b157abf-4269-4449-8522-ac31cfbafd7e" containerID="57d636adb9d7ba52ab49822ca13b194593b944e05c770194c95b1d65f93a9998" exitCode=0 Mar 12 13:32:21 crc 
kubenswrapper[4778]: W0312 13:32:21.908297 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfaeb9cb3_46ae_428f_8c0e_538a2e552072.slice/crio-d453594d6992bec0b731b36d1124f474724ec877404950823baad33e6f3bbe34 WatchSource:0}: Error finding container d453594d6992bec0b731b36d1124f474724ec877404950823baad33e6f3bbe34: Status 404 returned error can't find the container with id d453594d6992bec0b731b36d1124f474724ec877404950823baad33e6f3bbe34 Mar 12 13:32:21 crc kubenswrapper[4778]: I0312 13:32:21.908323 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-f4n2l" event={"ID":"5b157abf-4269-4449-8522-ac31cfbafd7e","Type":"ContainerDied","Data":"57d636adb9d7ba52ab49822ca13b194593b944e05c770194c95b1d65f93a9998"} Mar 12 13:32:21 crc kubenswrapper[4778]: I0312 13:32:21.908356 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-f4n2l" event={"ID":"5b157abf-4269-4449-8522-ac31cfbafd7e","Type":"ContainerStarted","Data":"2c8d5e8aae459cdcec524b4a37114dc4cbca1d6f2aa00764bfd89e2df611f32c"} Mar 12 13:32:21 crc kubenswrapper[4778]: I0312 13:32:21.909504 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-d5pl9" event={"ID":"bb110a1e-6281-437d-b857-eb79c4953e1a","Type":"ContainerStarted","Data":"8d37cd44357eb35c5c4917c8593f7e9902991ee071e5d92e025804bd35c2f76e"} Mar 12 13:32:21 crc kubenswrapper[4778]: I0312 13:32:21.919671 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-p59s9"] Mar 12 13:32:21 crc kubenswrapper[4778]: I0312 13:32:21.921536 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vhhp2" event={"ID":"57227510-d79a-4924-941f-fdc35bda5d41","Type":"ContainerStarted","Data":"59b401343563918013d35a2531aae9f420a7a4077e0d31999372fd3e7e21e169"} Mar 12 13:32:21 crc kubenswrapper[4778]: I0312 13:32:21.921591 4778 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vhhp2" event={"ID":"57227510-d79a-4924-941f-fdc35bda5d41","Type":"ContainerStarted","Data":"b47ac6e700f6b26e79b8d33a0cd1c034fedfe7bd0e82190cbdb182f9948a680c"} Mar 12 13:32:21 crc kubenswrapper[4778]: I0312 13:32:21.923915 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6cvgs" event={"ID":"76f8f940-670d-47a0-a90a-afd3aa37a726","Type":"ContainerStarted","Data":"856cfa1709bfc70905fa0560b8bcd9ee96d30c9ac3ff33d52f1608bcf34cd2fc"} Mar 12 13:32:21 crc kubenswrapper[4778]: I0312 13:32:21.931438 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b4cb6d6d-bc05-4809-83a7-5aacda62cc10","Type":"ContainerStarted","Data":"a7e5a5f0fc47985a7306f104e3261cd746e20017382e7ac550b97742b3f6f6e4"} Mar 12 13:32:21 crc kubenswrapper[4778]: I0312 13:32:21.987866 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-vhhp2" podStartSLOduration=1.987843971 podStartE2EDuration="1.987843971s" podCreationTimestamp="2026-03-12 13:32:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:32:21.964950892 +0000 UTC m=+1360.413646308" watchObservedRunningTime="2026-03-12 13:32:21.987843971 +0000 UTC m=+1360.436539367" Mar 12 13:32:22 crc kubenswrapper[4778]: I0312 13:32:22.243858 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-f4n2l" Mar 12 13:32:22 crc kubenswrapper[4778]: I0312 13:32:22.303757 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 13:32:22 crc kubenswrapper[4778]: I0312 13:32:22.371574 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5b157abf-4269-4449-8522-ac31cfbafd7e-dns-swift-storage-0\") pod \"5b157abf-4269-4449-8522-ac31cfbafd7e\" (UID: \"5b157abf-4269-4449-8522-ac31cfbafd7e\") " Mar 12 13:32:22 crc kubenswrapper[4778]: I0312 13:32:22.371654 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxhsf\" (UniqueName: \"kubernetes.io/projected/5b157abf-4269-4449-8522-ac31cfbafd7e-kube-api-access-dxhsf\") pod \"5b157abf-4269-4449-8522-ac31cfbafd7e\" (UID: \"5b157abf-4269-4449-8522-ac31cfbafd7e\") " Mar 12 13:32:22 crc kubenswrapper[4778]: I0312 13:32:22.371701 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5b157abf-4269-4449-8522-ac31cfbafd7e-ovsdbserver-sb\") pod \"5b157abf-4269-4449-8522-ac31cfbafd7e\" (UID: \"5b157abf-4269-4449-8522-ac31cfbafd7e\") " Mar 12 13:32:22 crc kubenswrapper[4778]: I0312 13:32:22.371728 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5b157abf-4269-4449-8522-ac31cfbafd7e-ovsdbserver-nb\") pod \"5b157abf-4269-4449-8522-ac31cfbafd7e\" (UID: \"5b157abf-4269-4449-8522-ac31cfbafd7e\") " Mar 12 13:32:22 crc kubenswrapper[4778]: I0312 13:32:22.371841 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b157abf-4269-4449-8522-ac31cfbafd7e-config\") pod \"5b157abf-4269-4449-8522-ac31cfbafd7e\" (UID: \"5b157abf-4269-4449-8522-ac31cfbafd7e\") " Mar 
12 13:32:22 crc kubenswrapper[4778]: I0312 13:32:22.371879 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b157abf-4269-4449-8522-ac31cfbafd7e-dns-svc\") pod \"5b157abf-4269-4449-8522-ac31cfbafd7e\" (UID: \"5b157abf-4269-4449-8522-ac31cfbafd7e\") " Mar 12 13:32:22 crc kubenswrapper[4778]: I0312 13:32:22.397454 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b157abf-4269-4449-8522-ac31cfbafd7e-kube-api-access-dxhsf" (OuterVolumeSpecName: "kube-api-access-dxhsf") pod "5b157abf-4269-4449-8522-ac31cfbafd7e" (UID: "5b157abf-4269-4449-8522-ac31cfbafd7e"). InnerVolumeSpecName "kube-api-access-dxhsf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:32:22 crc kubenswrapper[4778]: I0312 13:32:22.399060 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b157abf-4269-4449-8522-ac31cfbafd7e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5b157abf-4269-4449-8522-ac31cfbafd7e" (UID: "5b157abf-4269-4449-8522-ac31cfbafd7e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:32:22 crc kubenswrapper[4778]: I0312 13:32:22.421867 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b157abf-4269-4449-8522-ac31cfbafd7e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5b157abf-4269-4449-8522-ac31cfbafd7e" (UID: "5b157abf-4269-4449-8522-ac31cfbafd7e"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:32:22 crc kubenswrapper[4778]: I0312 13:32:22.422206 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b157abf-4269-4449-8522-ac31cfbafd7e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5b157abf-4269-4449-8522-ac31cfbafd7e" (UID: "5b157abf-4269-4449-8522-ac31cfbafd7e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:32:22 crc kubenswrapper[4778]: I0312 13:32:22.454018 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b157abf-4269-4449-8522-ac31cfbafd7e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5b157abf-4269-4449-8522-ac31cfbafd7e" (UID: "5b157abf-4269-4449-8522-ac31cfbafd7e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:32:22 crc kubenswrapper[4778]: I0312 13:32:22.456818 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b157abf-4269-4449-8522-ac31cfbafd7e-config" (OuterVolumeSpecName: "config") pod "5b157abf-4269-4449-8522-ac31cfbafd7e" (UID: "5b157abf-4269-4449-8522-ac31cfbafd7e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:32:22 crc kubenswrapper[4778]: I0312 13:32:22.474424 4778 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b157abf-4269-4449-8522-ac31cfbafd7e-config\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:22 crc kubenswrapper[4778]: I0312 13:32:22.474464 4778 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b157abf-4269-4449-8522-ac31cfbafd7e-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:22 crc kubenswrapper[4778]: I0312 13:32:22.474475 4778 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5b157abf-4269-4449-8522-ac31cfbafd7e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:22 crc kubenswrapper[4778]: I0312 13:32:22.474488 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxhsf\" (UniqueName: \"kubernetes.io/projected/5b157abf-4269-4449-8522-ac31cfbafd7e-kube-api-access-dxhsf\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:22 crc kubenswrapper[4778]: I0312 13:32:22.474497 4778 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5b157abf-4269-4449-8522-ac31cfbafd7e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:22 crc kubenswrapper[4778]: I0312 13:32:22.474505 4778 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5b157abf-4269-4449-8522-ac31cfbafd7e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:22 crc kubenswrapper[4778]: I0312 13:32:22.945877 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-zr86r" event={"ID":"faeb9cb3-46ae-428f-8c0e-538a2e552072","Type":"ContainerStarted","Data":"d453594d6992bec0b731b36d1124f474724ec877404950823baad33e6f3bbe34"} Mar 12 13:32:22 crc 
kubenswrapper[4778]: I0312 13:32:22.948944 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-f4n2l" event={"ID":"5b157abf-4269-4449-8522-ac31cfbafd7e","Type":"ContainerDied","Data":"2c8d5e8aae459cdcec524b4a37114dc4cbca1d6f2aa00764bfd89e2df611f32c"} Mar 12 13:32:22 crc kubenswrapper[4778]: I0312 13:32:22.948986 4778 scope.go:117] "RemoveContainer" containerID="57d636adb9d7ba52ab49822ca13b194593b944e05c770194c95b1d65f93a9998" Mar 12 13:32:22 crc kubenswrapper[4778]: I0312 13:32:22.949121 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-f4n2l" Mar 12 13:32:22 crc kubenswrapper[4778]: I0312 13:32:22.953945 4778 generic.go:334] "Generic (PLEG): container finished" podID="ed693bb0-f387-42e2-ae31-9ce01aee1cf9" containerID="deae0dcafd4096182fbb59df47a0f37084a6a4dd40f9ceb191771d7cc1d9e536" exitCode=0 Mar 12 13:32:22 crc kubenswrapper[4778]: I0312 13:32:22.954008 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-c75fp" event={"ID":"ed693bb0-f387-42e2-ae31-9ce01aee1cf9","Type":"ContainerDied","Data":"deae0dcafd4096182fbb59df47a0f37084a6a4dd40f9ceb191771d7cc1d9e536"} Mar 12 13:32:22 crc kubenswrapper[4778]: I0312 13:32:22.954029 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-c75fp" event={"ID":"ed693bb0-f387-42e2-ae31-9ce01aee1cf9","Type":"ContainerStarted","Data":"b878edd8e2f4ab4fa1fd5db083761fef998e508c23a74d8e3cd95838ec67e23c"} Mar 12 13:32:22 crc kubenswrapper[4778]: I0312 13:32:22.975786 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6cvgs" event={"ID":"76f8f940-670d-47a0-a90a-afd3aa37a726","Type":"ContainerStarted","Data":"86b41f2ea1c3794ed3e1fc975ecb18420f64bbd7611743de1aa319532e575758"} Mar 12 13:32:23 crc kubenswrapper[4778]: I0312 13:32:23.005929 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/neutron-db-sync-6cvgs" podStartSLOduration=3.00591102 podStartE2EDuration="3.00591102s" podCreationTimestamp="2026-03-12 13:32:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:32:23.002585945 +0000 UTC m=+1361.451281341" watchObservedRunningTime="2026-03-12 13:32:23.00591102 +0000 UTC m=+1361.454606416" Mar 12 13:32:23 crc kubenswrapper[4778]: I0312 13:32:23.010432 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-p59s9" event={"ID":"a682334f-73c0-4e38-8f95-e5de661319bb","Type":"ContainerStarted","Data":"9752a8239a23597303e4c0af125d25d5be143749ecb830c3912a0cbc8277763f"} Mar 12 13:32:23 crc kubenswrapper[4778]: I0312 13:32:23.148397 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-f4n2l"] Mar 12 13:32:23 crc kubenswrapper[4778]: I0312 13:32:23.157422 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-f4n2l"] Mar 12 13:32:24 crc kubenswrapper[4778]: I0312 13:32:24.024529 4778 generic.go:334] "Generic (PLEG): container finished" podID="befeb973-a1de-48f9-8de0-5559f75472dc" containerID="58438369e99b6009fb9ed545548de66fcc857634b3821d960d6e5735646c9d5c" exitCode=0 Mar 12 13:32:24 crc kubenswrapper[4778]: I0312 13:32:24.024877 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-xg6z4" event={"ID":"befeb973-a1de-48f9-8de0-5559f75472dc","Type":"ContainerDied","Data":"58438369e99b6009fb9ed545548de66fcc857634b3821d960d6e5735646c9d5c"} Mar 12 13:32:24 crc kubenswrapper[4778]: I0312 13:32:24.034634 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-c75fp" event={"ID":"ed693bb0-f387-42e2-ae31-9ce01aee1cf9","Type":"ContainerStarted","Data":"a86f0b8f75025d6f637f5995ea2db5120ec912d396c9c98099631c4e389118ac"} Mar 12 13:32:24 crc kubenswrapper[4778]: I0312 13:32:24.089094 
4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58dd9ff6bc-c75fp" podStartSLOduration=4.089071375 podStartE2EDuration="4.089071375s" podCreationTimestamp="2026-03-12 13:32:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:32:24.067873873 +0000 UTC m=+1362.516569279" watchObservedRunningTime="2026-03-12 13:32:24.089071375 +0000 UTC m=+1362.537766771" Mar 12 13:32:24 crc kubenswrapper[4778]: I0312 13:32:24.275087 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b157abf-4269-4449-8522-ac31cfbafd7e" path="/var/lib/kubelet/pods/5b157abf-4269-4449-8522-ac31cfbafd7e/volumes" Mar 12 13:32:25 crc kubenswrapper[4778]: I0312 13:32:25.042529 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58dd9ff6bc-c75fp" Mar 12 13:32:26 crc kubenswrapper[4778]: I0312 13:32:26.051531 4778 generic.go:334] "Generic (PLEG): container finished" podID="57227510-d79a-4924-941f-fdc35bda5d41" containerID="59b401343563918013d35a2531aae9f420a7a4077e0d31999372fd3e7e21e169" exitCode=0 Mar 12 13:32:26 crc kubenswrapper[4778]: I0312 13:32:26.051623 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vhhp2" event={"ID":"57227510-d79a-4924-941f-fdc35bda5d41","Type":"ContainerDied","Data":"59b401343563918013d35a2531aae9f420a7a4077e0d31999372fd3e7e21e169"} Mar 12 13:32:27 crc kubenswrapper[4778]: I0312 13:32:27.701976 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-xg6z4" Mar 12 13:32:27 crc kubenswrapper[4778]: I0312 13:32:27.833707 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crgvn\" (UniqueName: \"kubernetes.io/projected/befeb973-a1de-48f9-8de0-5559f75472dc-kube-api-access-crgvn\") pod \"befeb973-a1de-48f9-8de0-5559f75472dc\" (UID: \"befeb973-a1de-48f9-8de0-5559f75472dc\") " Mar 12 13:32:27 crc kubenswrapper[4778]: I0312 13:32:27.833809 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/befeb973-a1de-48f9-8de0-5559f75472dc-config-data\") pod \"befeb973-a1de-48f9-8de0-5559f75472dc\" (UID: \"befeb973-a1de-48f9-8de0-5559f75472dc\") " Mar 12 13:32:27 crc kubenswrapper[4778]: I0312 13:32:27.833950 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/befeb973-a1de-48f9-8de0-5559f75472dc-db-sync-config-data\") pod \"befeb973-a1de-48f9-8de0-5559f75472dc\" (UID: \"befeb973-a1de-48f9-8de0-5559f75472dc\") " Mar 12 13:32:27 crc kubenswrapper[4778]: I0312 13:32:27.834052 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/befeb973-a1de-48f9-8de0-5559f75472dc-combined-ca-bundle\") pod \"befeb973-a1de-48f9-8de0-5559f75472dc\" (UID: \"befeb973-a1de-48f9-8de0-5559f75472dc\") " Mar 12 13:32:27 crc kubenswrapper[4778]: I0312 13:32:27.840260 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/befeb973-a1de-48f9-8de0-5559f75472dc-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "befeb973-a1de-48f9-8de0-5559f75472dc" (UID: "befeb973-a1de-48f9-8de0-5559f75472dc"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:32:27 crc kubenswrapper[4778]: I0312 13:32:27.847433 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/befeb973-a1de-48f9-8de0-5559f75472dc-kube-api-access-crgvn" (OuterVolumeSpecName: "kube-api-access-crgvn") pod "befeb973-a1de-48f9-8de0-5559f75472dc" (UID: "befeb973-a1de-48f9-8de0-5559f75472dc"). InnerVolumeSpecName "kube-api-access-crgvn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:32:27 crc kubenswrapper[4778]: I0312 13:32:27.859691 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/befeb973-a1de-48f9-8de0-5559f75472dc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "befeb973-a1de-48f9-8de0-5559f75472dc" (UID: "befeb973-a1de-48f9-8de0-5559f75472dc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:32:27 crc kubenswrapper[4778]: I0312 13:32:27.880843 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/befeb973-a1de-48f9-8de0-5559f75472dc-config-data" (OuterVolumeSpecName: "config-data") pod "befeb973-a1de-48f9-8de0-5559f75472dc" (UID: "befeb973-a1de-48f9-8de0-5559f75472dc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:32:27 crc kubenswrapper[4778]: I0312 13:32:27.936351 4778 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/befeb973-a1de-48f9-8de0-5559f75472dc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:27 crc kubenswrapper[4778]: I0312 13:32:27.936383 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crgvn\" (UniqueName: \"kubernetes.io/projected/befeb973-a1de-48f9-8de0-5559f75472dc-kube-api-access-crgvn\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:27 crc kubenswrapper[4778]: I0312 13:32:27.936397 4778 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/befeb973-a1de-48f9-8de0-5559f75472dc-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:27 crc kubenswrapper[4778]: I0312 13:32:27.936406 4778 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/befeb973-a1de-48f9-8de0-5559f75472dc-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:28 crc kubenswrapper[4778]: I0312 13:32:28.067027 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-xg6z4" event={"ID":"befeb973-a1de-48f9-8de0-5559f75472dc","Type":"ContainerDied","Data":"f4635ea2bc5d2d0cce58645ef33f0143795167ef564ca2829fbc3740cec61b52"} Mar 12 13:32:28 crc kubenswrapper[4778]: I0312 13:32:28.067073 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4635ea2bc5d2d0cce58645ef33f0143795167ef564ca2829fbc3740cec61b52" Mar 12 13:32:28 crc kubenswrapper[4778]: I0312 13:32:28.067142 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-xg6z4" Mar 12 13:32:28 crc kubenswrapper[4778]: I0312 13:32:28.557234 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 13:32:28 crc kubenswrapper[4778]: I0312 13:32:28.557592 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 13:32:29 crc kubenswrapper[4778]: I0312 13:32:29.676597 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-c75fp"] Mar 12 13:32:29 crc kubenswrapper[4778]: I0312 13:32:29.676897 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58dd9ff6bc-c75fp" podUID="ed693bb0-f387-42e2-ae31-9ce01aee1cf9" containerName="dnsmasq-dns" containerID="cri-o://a86f0b8f75025d6f637f5995ea2db5120ec912d396c9c98099631c4e389118ac" gracePeriod=10 Mar 12 13:32:29 crc kubenswrapper[4778]: I0312 13:32:29.684769 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58dd9ff6bc-c75fp" Mar 12 13:32:29 crc kubenswrapper[4778]: I0312 13:32:29.703481 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-v2vtk"] Mar 12 13:32:29 crc kubenswrapper[4778]: E0312 13:32:29.703840 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="befeb973-a1de-48f9-8de0-5559f75472dc" containerName="glance-db-sync" Mar 12 13:32:29 crc kubenswrapper[4778]: I0312 13:32:29.703853 4778 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="befeb973-a1de-48f9-8de0-5559f75472dc" containerName="glance-db-sync" Mar 12 13:32:29 crc kubenswrapper[4778]: E0312 13:32:29.703867 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b157abf-4269-4449-8522-ac31cfbafd7e" containerName="init" Mar 12 13:32:29 crc kubenswrapper[4778]: I0312 13:32:29.703874 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b157abf-4269-4449-8522-ac31cfbafd7e" containerName="init" Mar 12 13:32:29 crc kubenswrapper[4778]: I0312 13:32:29.704028 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b157abf-4269-4449-8522-ac31cfbafd7e" containerName="init" Mar 12 13:32:29 crc kubenswrapper[4778]: I0312 13:32:29.704041 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="befeb973-a1de-48f9-8de0-5559f75472dc" containerName="glance-db-sync" Mar 12 13:32:29 crc kubenswrapper[4778]: I0312 13:32:29.704866 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-v2vtk" Mar 12 13:32:29 crc kubenswrapper[4778]: I0312 13:32:29.721139 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-v2vtk"] Mar 12 13:32:29 crc kubenswrapper[4778]: I0312 13:32:29.744991 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/39bd75fd-958e-4b3b-abd5-860adf376fd7-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-v2vtk\" (UID: \"39bd75fd-958e-4b3b-abd5-860adf376fd7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-v2vtk" Mar 12 13:32:29 crc kubenswrapper[4778]: I0312 13:32:29.745068 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/39bd75fd-958e-4b3b-abd5-860adf376fd7-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-v2vtk\" (UID: \"39bd75fd-958e-4b3b-abd5-860adf376fd7\") " 
pod="openstack/dnsmasq-dns-785d8bcb8c-v2vtk" Mar 12 13:32:29 crc kubenswrapper[4778]: I0312 13:32:29.745157 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39bd75fd-958e-4b3b-abd5-860adf376fd7-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-v2vtk\" (UID: \"39bd75fd-958e-4b3b-abd5-860adf376fd7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-v2vtk" Mar 12 13:32:29 crc kubenswrapper[4778]: I0312 13:32:29.745256 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/39bd75fd-958e-4b3b-abd5-860adf376fd7-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-v2vtk\" (UID: \"39bd75fd-958e-4b3b-abd5-860adf376fd7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-v2vtk" Mar 12 13:32:29 crc kubenswrapper[4778]: I0312 13:32:29.745278 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7vfh\" (UniqueName: \"kubernetes.io/projected/39bd75fd-958e-4b3b-abd5-860adf376fd7-kube-api-access-p7vfh\") pod \"dnsmasq-dns-785d8bcb8c-v2vtk\" (UID: \"39bd75fd-958e-4b3b-abd5-860adf376fd7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-v2vtk" Mar 12 13:32:29 crc kubenswrapper[4778]: I0312 13:32:29.745326 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39bd75fd-958e-4b3b-abd5-860adf376fd7-config\") pod \"dnsmasq-dns-785d8bcb8c-v2vtk\" (UID: \"39bd75fd-958e-4b3b-abd5-860adf376fd7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-v2vtk" Mar 12 13:32:29 crc kubenswrapper[4778]: I0312 13:32:29.847025 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39bd75fd-958e-4b3b-abd5-860adf376fd7-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-v2vtk\" (UID: \"39bd75fd-958e-4b3b-abd5-860adf376fd7\") " 
pod="openstack/dnsmasq-dns-785d8bcb8c-v2vtk" Mar 12 13:32:29 crc kubenswrapper[4778]: I0312 13:32:29.847110 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/39bd75fd-958e-4b3b-abd5-860adf376fd7-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-v2vtk\" (UID: \"39bd75fd-958e-4b3b-abd5-860adf376fd7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-v2vtk" Mar 12 13:32:29 crc kubenswrapper[4778]: I0312 13:32:29.847136 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7vfh\" (UniqueName: \"kubernetes.io/projected/39bd75fd-958e-4b3b-abd5-860adf376fd7-kube-api-access-p7vfh\") pod \"dnsmasq-dns-785d8bcb8c-v2vtk\" (UID: \"39bd75fd-958e-4b3b-abd5-860adf376fd7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-v2vtk" Mar 12 13:32:29 crc kubenswrapper[4778]: I0312 13:32:29.847184 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39bd75fd-958e-4b3b-abd5-860adf376fd7-config\") pod \"dnsmasq-dns-785d8bcb8c-v2vtk\" (UID: \"39bd75fd-958e-4b3b-abd5-860adf376fd7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-v2vtk" Mar 12 13:32:29 crc kubenswrapper[4778]: I0312 13:32:29.847264 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/39bd75fd-958e-4b3b-abd5-860adf376fd7-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-v2vtk\" (UID: \"39bd75fd-958e-4b3b-abd5-860adf376fd7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-v2vtk" Mar 12 13:32:29 crc kubenswrapper[4778]: I0312 13:32:29.847308 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/39bd75fd-958e-4b3b-abd5-860adf376fd7-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-v2vtk\" (UID: \"39bd75fd-958e-4b3b-abd5-860adf376fd7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-v2vtk" 
Mar 12 13:32:29 crc kubenswrapper[4778]: I0312 13:32:29.848185 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/39bd75fd-958e-4b3b-abd5-860adf376fd7-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-v2vtk\" (UID: \"39bd75fd-958e-4b3b-abd5-860adf376fd7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-v2vtk" Mar 12 13:32:29 crc kubenswrapper[4778]: I0312 13:32:29.848691 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39bd75fd-958e-4b3b-abd5-860adf376fd7-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-v2vtk\" (UID: \"39bd75fd-958e-4b3b-abd5-860adf376fd7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-v2vtk" Mar 12 13:32:29 crc kubenswrapper[4778]: I0312 13:32:29.849266 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/39bd75fd-958e-4b3b-abd5-860adf376fd7-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-v2vtk\" (UID: \"39bd75fd-958e-4b3b-abd5-860adf376fd7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-v2vtk" Mar 12 13:32:29 crc kubenswrapper[4778]: I0312 13:32:29.850075 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39bd75fd-958e-4b3b-abd5-860adf376fd7-config\") pod \"dnsmasq-dns-785d8bcb8c-v2vtk\" (UID: \"39bd75fd-958e-4b3b-abd5-860adf376fd7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-v2vtk" Mar 12 13:32:29 crc kubenswrapper[4778]: I0312 13:32:29.854827 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/39bd75fd-958e-4b3b-abd5-860adf376fd7-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-v2vtk\" (UID: \"39bd75fd-958e-4b3b-abd5-860adf376fd7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-v2vtk" Mar 12 13:32:29 crc kubenswrapper[4778]: I0312 13:32:29.882467 4778 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-p7vfh\" (UniqueName: \"kubernetes.io/projected/39bd75fd-958e-4b3b-abd5-860adf376fd7-kube-api-access-p7vfh\") pod \"dnsmasq-dns-785d8bcb8c-v2vtk\" (UID: \"39bd75fd-958e-4b3b-abd5-860adf376fd7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-v2vtk" Mar 12 13:32:30 crc kubenswrapper[4778]: I0312 13:32:30.025594 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-v2vtk" Mar 12 13:32:30 crc kubenswrapper[4778]: I0312 13:32:30.582019 4778 generic.go:334] "Generic (PLEG): container finished" podID="ed693bb0-f387-42e2-ae31-9ce01aee1cf9" containerID="a86f0b8f75025d6f637f5995ea2db5120ec912d396c9c98099631c4e389118ac" exitCode=0 Mar 12 13:32:30 crc kubenswrapper[4778]: I0312 13:32:30.582060 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-c75fp" event={"ID":"ed693bb0-f387-42e2-ae31-9ce01aee1cf9","Type":"ContainerDied","Data":"a86f0b8f75025d6f637f5995ea2db5120ec912d396c9c98099631c4e389118ac"} Mar 12 13:32:30 crc kubenswrapper[4778]: I0312 13:32:30.621881 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 13:32:30 crc kubenswrapper[4778]: I0312 13:32:30.623552 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 12 13:32:30 crc kubenswrapper[4778]: I0312 13:32:30.626034 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-l7l5j" Mar 12 13:32:30 crc kubenswrapper[4778]: I0312 13:32:30.626156 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Mar 12 13:32:30 crc kubenswrapper[4778]: I0312 13:32:30.626463 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 12 13:32:30 crc kubenswrapper[4778]: I0312 13:32:30.631107 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 13:32:30 crc kubenswrapper[4778]: I0312 13:32:30.764892 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5xwn\" (UniqueName: \"kubernetes.io/projected/b533a505-eb7b-43a3-b95d-60cdc7198066-kube-api-access-f5xwn\") pod \"glance-default-external-api-0\" (UID: \"b533a505-eb7b-43a3-b95d-60cdc7198066\") " pod="openstack/glance-default-external-api-0" Mar 12 13:32:30 crc kubenswrapper[4778]: I0312 13:32:30.765278 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b533a505-eb7b-43a3-b95d-60cdc7198066-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b533a505-eb7b-43a3-b95d-60cdc7198066\") " pod="openstack/glance-default-external-api-0" Mar 12 13:32:30 crc kubenswrapper[4778]: I0312 13:32:30.765304 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b533a505-eb7b-43a3-b95d-60cdc7198066-scripts\") pod \"glance-default-external-api-0\" (UID: \"b533a505-eb7b-43a3-b95d-60cdc7198066\") " pod="openstack/glance-default-external-api-0" Mar 12 13:32:30 
crc kubenswrapper[4778]: I0312 13:32:30.765342 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b533a505-eb7b-43a3-b95d-60cdc7198066-config-data\") pod \"glance-default-external-api-0\" (UID: \"b533a505-eb7b-43a3-b95d-60cdc7198066\") " pod="openstack/glance-default-external-api-0" Mar 12 13:32:30 crc kubenswrapper[4778]: I0312 13:32:30.765394 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b533a505-eb7b-43a3-b95d-60cdc7198066-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b533a505-eb7b-43a3-b95d-60cdc7198066\") " pod="openstack/glance-default-external-api-0" Mar 12 13:32:30 crc kubenswrapper[4778]: I0312 13:32:30.765417 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b533a505-eb7b-43a3-b95d-60cdc7198066-logs\") pod \"glance-default-external-api-0\" (UID: \"b533a505-eb7b-43a3-b95d-60cdc7198066\") " pod="openstack/glance-default-external-api-0" Mar 12 13:32:30 crc kubenswrapper[4778]: I0312 13:32:30.765447 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"glance-default-external-api-0\" (UID: \"b533a505-eb7b-43a3-b95d-60cdc7198066\") " pod="openstack/glance-default-external-api-0" Mar 12 13:32:31 crc kubenswrapper[4778]: I0312 13:32:31.127533 4778 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-58dd9ff6bc-c75fp" podUID="ed693bb0-f387-42e2-ae31-9ce01aee1cf9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.145:5353: connect: connection refused" Mar 12 13:32:31 crc kubenswrapper[4778]: I0312 13:32:31.134046 4778 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b533a505-eb7b-43a3-b95d-60cdc7198066-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b533a505-eb7b-43a3-b95d-60cdc7198066\") " pod="openstack/glance-default-external-api-0" Mar 12 13:32:31 crc kubenswrapper[4778]: I0312 13:32:31.134117 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b533a505-eb7b-43a3-b95d-60cdc7198066-scripts\") pod \"glance-default-external-api-0\" (UID: \"b533a505-eb7b-43a3-b95d-60cdc7198066\") " pod="openstack/glance-default-external-api-0" Mar 12 13:32:31 crc kubenswrapper[4778]: I0312 13:32:31.136890 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b533a505-eb7b-43a3-b95d-60cdc7198066-config-data\") pod \"glance-default-external-api-0\" (UID: \"b533a505-eb7b-43a3-b95d-60cdc7198066\") " pod="openstack/glance-default-external-api-0" Mar 12 13:32:31 crc kubenswrapper[4778]: I0312 13:32:31.137035 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b533a505-eb7b-43a3-b95d-60cdc7198066-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b533a505-eb7b-43a3-b95d-60cdc7198066\") " pod="openstack/glance-default-external-api-0" Mar 12 13:32:31 crc kubenswrapper[4778]: I0312 13:32:31.137092 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b533a505-eb7b-43a3-b95d-60cdc7198066-logs\") pod \"glance-default-external-api-0\" (UID: \"b533a505-eb7b-43a3-b95d-60cdc7198066\") " pod="openstack/glance-default-external-api-0" Mar 12 13:32:31 crc kubenswrapper[4778]: I0312 13:32:31.137146 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage17-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage17-crc\") pod \"glance-default-external-api-0\" (UID: \"b533a505-eb7b-43a3-b95d-60cdc7198066\") " pod="openstack/glance-default-external-api-0" Mar 12 13:32:31 crc kubenswrapper[4778]: I0312 13:32:31.137226 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5xwn\" (UniqueName: \"kubernetes.io/projected/b533a505-eb7b-43a3-b95d-60cdc7198066-kube-api-access-f5xwn\") pod \"glance-default-external-api-0\" (UID: \"b533a505-eb7b-43a3-b95d-60cdc7198066\") " pod="openstack/glance-default-external-api-0" Mar 12 13:32:31 crc kubenswrapper[4778]: I0312 13:32:31.139284 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b533a505-eb7b-43a3-b95d-60cdc7198066-logs\") pod \"glance-default-external-api-0\" (UID: \"b533a505-eb7b-43a3-b95d-60cdc7198066\") " pod="openstack/glance-default-external-api-0" Mar 12 13:32:31 crc kubenswrapper[4778]: I0312 13:32:31.139775 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b533a505-eb7b-43a3-b95d-60cdc7198066-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b533a505-eb7b-43a3-b95d-60cdc7198066\") " pod="openstack/glance-default-external-api-0" Mar 12 13:32:31 crc kubenswrapper[4778]: I0312 13:32:31.140656 4778 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"glance-default-external-api-0\" (UID: \"b533a505-eb7b-43a3-b95d-60cdc7198066\") device mount path \"/mnt/openstack/pv17\"" pod="openstack/glance-default-external-api-0" Mar 12 13:32:31 crc kubenswrapper[4778]: I0312 13:32:31.142043 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b533a505-eb7b-43a3-b95d-60cdc7198066-scripts\") pod 
\"glance-default-external-api-0\" (UID: \"b533a505-eb7b-43a3-b95d-60cdc7198066\") " pod="openstack/glance-default-external-api-0" Mar 12 13:32:31 crc kubenswrapper[4778]: I0312 13:32:31.144092 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b533a505-eb7b-43a3-b95d-60cdc7198066-config-data\") pod \"glance-default-external-api-0\" (UID: \"b533a505-eb7b-43a3-b95d-60cdc7198066\") " pod="openstack/glance-default-external-api-0" Mar 12 13:32:31 crc kubenswrapper[4778]: I0312 13:32:31.156673 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 12 13:32:31 crc kubenswrapper[4778]: I0312 13:32:31.160654 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5xwn\" (UniqueName: \"kubernetes.io/projected/b533a505-eb7b-43a3-b95d-60cdc7198066-kube-api-access-f5xwn\") pod \"glance-default-external-api-0\" (UID: \"b533a505-eb7b-43a3-b95d-60cdc7198066\") " pod="openstack/glance-default-external-api-0" Mar 12 13:32:31 crc kubenswrapper[4778]: I0312 13:32:31.162148 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 12 13:32:31 crc kubenswrapper[4778]: I0312 13:32:31.172767 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 12 13:32:31 crc kubenswrapper[4778]: I0312 13:32:31.179749 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Mar 12 13:32:31 crc kubenswrapper[4778]: I0312 13:32:31.180071 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b533a505-eb7b-43a3-b95d-60cdc7198066-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b533a505-eb7b-43a3-b95d-60cdc7198066\") " pod="openstack/glance-default-external-api-0" Mar 12 13:32:31 crc kubenswrapper[4778]: I0312 13:32:31.211724 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"glance-default-external-api-0\" (UID: \"b533a505-eb7b-43a3-b95d-60cdc7198066\") " pod="openstack/glance-default-external-api-0" Mar 12 13:32:31 crc kubenswrapper[4778]: I0312 13:32:31.244413 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 12 13:32:31 crc kubenswrapper[4778]: I0312 13:32:31.340391 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa998ea4-f50d-4441-b6ad-b160a19ea4a9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"aa998ea4-f50d-4441-b6ad-b160a19ea4a9\") " pod="openstack/glance-default-internal-api-0" Mar 12 13:32:31 crc kubenswrapper[4778]: I0312 13:32:31.340464 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa998ea4-f50d-4441-b6ad-b160a19ea4a9-logs\") pod \"glance-default-internal-api-0\" (UID: \"aa998ea4-f50d-4441-b6ad-b160a19ea4a9\") " pod="openstack/glance-default-internal-api-0" Mar 12 13:32:31 crc kubenswrapper[4778]: I0312 13:32:31.340493 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa998ea4-f50d-4441-b6ad-b160a19ea4a9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"aa998ea4-f50d-4441-b6ad-b160a19ea4a9\") " pod="openstack/glance-default-internal-api-0" Mar 12 13:32:31 crc kubenswrapper[4778]: I0312 13:32:31.340514 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aa998ea4-f50d-4441-b6ad-b160a19ea4a9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"aa998ea4-f50d-4441-b6ad-b160a19ea4a9\") " pod="openstack/glance-default-internal-api-0" Mar 12 13:32:31 crc kubenswrapper[4778]: I0312 13:32:31.340530 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa998ea4-f50d-4441-b6ad-b160a19ea4a9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: 
\"aa998ea4-f50d-4441-b6ad-b160a19ea4a9\") " pod="openstack/glance-default-internal-api-0" Mar 12 13:32:31 crc kubenswrapper[4778]: I0312 13:32:31.340575 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"aa998ea4-f50d-4441-b6ad-b160a19ea4a9\") " pod="openstack/glance-default-internal-api-0" Mar 12 13:32:31 crc kubenswrapper[4778]: I0312 13:32:31.341366 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbwgx\" (UniqueName: \"kubernetes.io/projected/aa998ea4-f50d-4441-b6ad-b160a19ea4a9-kube-api-access-zbwgx\") pod \"glance-default-internal-api-0\" (UID: \"aa998ea4-f50d-4441-b6ad-b160a19ea4a9\") " pod="openstack/glance-default-internal-api-0" Mar 12 13:32:31 crc kubenswrapper[4778]: I0312 13:32:31.443148 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"aa998ea4-f50d-4441-b6ad-b160a19ea4a9\") " pod="openstack/glance-default-internal-api-0" Mar 12 13:32:31 crc kubenswrapper[4778]: I0312 13:32:31.443241 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbwgx\" (UniqueName: \"kubernetes.io/projected/aa998ea4-f50d-4441-b6ad-b160a19ea4a9-kube-api-access-zbwgx\") pod \"glance-default-internal-api-0\" (UID: \"aa998ea4-f50d-4441-b6ad-b160a19ea4a9\") " pod="openstack/glance-default-internal-api-0" Mar 12 13:32:31 crc kubenswrapper[4778]: I0312 13:32:31.443310 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa998ea4-f50d-4441-b6ad-b160a19ea4a9-config-data\") pod \"glance-default-internal-api-0\" (UID: 
\"aa998ea4-f50d-4441-b6ad-b160a19ea4a9\") " pod="openstack/glance-default-internal-api-0" Mar 12 13:32:31 crc kubenswrapper[4778]: I0312 13:32:31.443351 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa998ea4-f50d-4441-b6ad-b160a19ea4a9-logs\") pod \"glance-default-internal-api-0\" (UID: \"aa998ea4-f50d-4441-b6ad-b160a19ea4a9\") " pod="openstack/glance-default-internal-api-0" Mar 12 13:32:31 crc kubenswrapper[4778]: I0312 13:32:31.443376 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa998ea4-f50d-4441-b6ad-b160a19ea4a9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"aa998ea4-f50d-4441-b6ad-b160a19ea4a9\") " pod="openstack/glance-default-internal-api-0" Mar 12 13:32:31 crc kubenswrapper[4778]: I0312 13:32:31.443397 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aa998ea4-f50d-4441-b6ad-b160a19ea4a9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"aa998ea4-f50d-4441-b6ad-b160a19ea4a9\") " pod="openstack/glance-default-internal-api-0" Mar 12 13:32:31 crc kubenswrapper[4778]: I0312 13:32:31.443416 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa998ea4-f50d-4441-b6ad-b160a19ea4a9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"aa998ea4-f50d-4441-b6ad-b160a19ea4a9\") " pod="openstack/glance-default-internal-api-0" Mar 12 13:32:31 crc kubenswrapper[4778]: I0312 13:32:31.443511 4778 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"aa998ea4-f50d-4441-b6ad-b160a19ea4a9\") device mount path \"/mnt/openstack/pv03\"" 
pod="openstack/glance-default-internal-api-0" Mar 12 13:32:31 crc kubenswrapper[4778]: I0312 13:32:31.444243 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aa998ea4-f50d-4441-b6ad-b160a19ea4a9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"aa998ea4-f50d-4441-b6ad-b160a19ea4a9\") " pod="openstack/glance-default-internal-api-0" Mar 12 13:32:31 crc kubenswrapper[4778]: I0312 13:32:31.444329 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa998ea4-f50d-4441-b6ad-b160a19ea4a9-logs\") pod \"glance-default-internal-api-0\" (UID: \"aa998ea4-f50d-4441-b6ad-b160a19ea4a9\") " pod="openstack/glance-default-internal-api-0" Mar 12 13:32:31 crc kubenswrapper[4778]: I0312 13:32:31.450385 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa998ea4-f50d-4441-b6ad-b160a19ea4a9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"aa998ea4-f50d-4441-b6ad-b160a19ea4a9\") " pod="openstack/glance-default-internal-api-0" Mar 12 13:32:31 crc kubenswrapper[4778]: I0312 13:32:31.450989 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa998ea4-f50d-4441-b6ad-b160a19ea4a9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"aa998ea4-f50d-4441-b6ad-b160a19ea4a9\") " pod="openstack/glance-default-internal-api-0" Mar 12 13:32:31 crc kubenswrapper[4778]: I0312 13:32:31.452503 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa998ea4-f50d-4441-b6ad-b160a19ea4a9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"aa998ea4-f50d-4441-b6ad-b160a19ea4a9\") " pod="openstack/glance-default-internal-api-0" Mar 12 13:32:31 crc kubenswrapper[4778]: I0312 13:32:31.467898 4778 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbwgx\" (UniqueName: \"kubernetes.io/projected/aa998ea4-f50d-4441-b6ad-b160a19ea4a9-kube-api-access-zbwgx\") pod \"glance-default-internal-api-0\" (UID: \"aa998ea4-f50d-4441-b6ad-b160a19ea4a9\") " pod="openstack/glance-default-internal-api-0" Mar 12 13:32:31 crc kubenswrapper[4778]: I0312 13:32:31.479933 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"aa998ea4-f50d-4441-b6ad-b160a19ea4a9\") " pod="openstack/glance-default-internal-api-0" Mar 12 13:32:31 crc kubenswrapper[4778]: I0312 13:32:31.591291 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 12 13:32:33 crc kubenswrapper[4778]: I0312 13:32:33.305132 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 12 13:32:33 crc kubenswrapper[4778]: I0312 13:32:33.424631 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 13:32:36 crc kubenswrapper[4778]: I0312 13:32:36.065310 4778 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-58dd9ff6bc-c75fp" podUID="ed693bb0-f387-42e2-ae31-9ce01aee1cf9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.145:5353: connect: connection refused" Mar 12 13:32:37 crc kubenswrapper[4778]: E0312 13:32:37.251618 4778 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Mar 12 13:32:37 crc kubenswrapper[4778]: E0312 13:32:37.252128 4778 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r5hmn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-p59s9_openstack(a682334f-73c0-4e38-8f95-e5de661319bb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 13:32:37 crc kubenswrapper[4778]: E0312 13:32:37.253425 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-p59s9" podUID="a682334f-73c0-4e38-8f95-e5de661319bb" Mar 12 13:32:37 crc kubenswrapper[4778]: I0312 13:32:37.369419 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vhhp2" Mar 12 13:32:37 crc kubenswrapper[4778]: I0312 13:32:37.470308 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/57227510-d79a-4924-941f-fdc35bda5d41-fernet-keys\") pod \"57227510-d79a-4924-941f-fdc35bda5d41\" (UID: \"57227510-d79a-4924-941f-fdc35bda5d41\") " Mar 12 13:32:37 crc kubenswrapper[4778]: I0312 13:32:37.470668 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57227510-d79a-4924-941f-fdc35bda5d41-scripts\") pod \"57227510-d79a-4924-941f-fdc35bda5d41\" (UID: \"57227510-d79a-4924-941f-fdc35bda5d41\") " Mar 12 13:32:37 crc kubenswrapper[4778]: I0312 13:32:37.470707 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljj6s\" (UniqueName: \"kubernetes.io/projected/57227510-d79a-4924-941f-fdc35bda5d41-kube-api-access-ljj6s\") pod \"57227510-d79a-4924-941f-fdc35bda5d41\" (UID: \"57227510-d79a-4924-941f-fdc35bda5d41\") " Mar 12 13:32:37 crc kubenswrapper[4778]: I0312 13:32:37.470795 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57227510-d79a-4924-941f-fdc35bda5d41-combined-ca-bundle\") pod \"57227510-d79a-4924-941f-fdc35bda5d41\" (UID: \"57227510-d79a-4924-941f-fdc35bda5d41\") " Mar 12 13:32:37 crc kubenswrapper[4778]: I0312 13:32:37.470858 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/57227510-d79a-4924-941f-fdc35bda5d41-config-data\") pod \"57227510-d79a-4924-941f-fdc35bda5d41\" (UID: \"57227510-d79a-4924-941f-fdc35bda5d41\") " Mar 12 13:32:37 crc kubenswrapper[4778]: I0312 13:32:37.470899 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/57227510-d79a-4924-941f-fdc35bda5d41-credential-keys\") pod \"57227510-d79a-4924-941f-fdc35bda5d41\" (UID: \"57227510-d79a-4924-941f-fdc35bda5d41\") " Mar 12 13:32:37 crc kubenswrapper[4778]: I0312 13:32:37.476439 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57227510-d79a-4924-941f-fdc35bda5d41-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "57227510-d79a-4924-941f-fdc35bda5d41" (UID: "57227510-d79a-4924-941f-fdc35bda5d41"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:32:37 crc kubenswrapper[4778]: I0312 13:32:37.477454 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57227510-d79a-4924-941f-fdc35bda5d41-scripts" (OuterVolumeSpecName: "scripts") pod "57227510-d79a-4924-941f-fdc35bda5d41" (UID: "57227510-d79a-4924-941f-fdc35bda5d41"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:32:37 crc kubenswrapper[4778]: I0312 13:32:37.478616 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57227510-d79a-4924-941f-fdc35bda5d41-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "57227510-d79a-4924-941f-fdc35bda5d41" (UID: "57227510-d79a-4924-941f-fdc35bda5d41"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:32:37 crc kubenswrapper[4778]: I0312 13:32:37.499053 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57227510-d79a-4924-941f-fdc35bda5d41-kube-api-access-ljj6s" (OuterVolumeSpecName: "kube-api-access-ljj6s") pod "57227510-d79a-4924-941f-fdc35bda5d41" (UID: "57227510-d79a-4924-941f-fdc35bda5d41"). InnerVolumeSpecName "kube-api-access-ljj6s". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:32:37 crc kubenswrapper[4778]: I0312 13:32:37.506977 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57227510-d79a-4924-941f-fdc35bda5d41-config-data" (OuterVolumeSpecName: "config-data") pod "57227510-d79a-4924-941f-fdc35bda5d41" (UID: "57227510-d79a-4924-941f-fdc35bda5d41"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:32:37 crc kubenswrapper[4778]: I0312 13:32:37.515336 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57227510-d79a-4924-941f-fdc35bda5d41-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "57227510-d79a-4924-941f-fdc35bda5d41" (UID: "57227510-d79a-4924-941f-fdc35bda5d41"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:32:37 crc kubenswrapper[4778]: I0312 13:32:37.573160 4778 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/57227510-d79a-4924-941f-fdc35bda5d41-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:37 crc kubenswrapper[4778]: I0312 13:32:37.573214 4778 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57227510-d79a-4924-941f-fdc35bda5d41-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:37 crc kubenswrapper[4778]: I0312 13:32:37.573225 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljj6s\" (UniqueName: \"kubernetes.io/projected/57227510-d79a-4924-941f-fdc35bda5d41-kube-api-access-ljj6s\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:37 crc kubenswrapper[4778]: I0312 13:32:37.573236 4778 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57227510-d79a-4924-941f-fdc35bda5d41-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:37 crc kubenswrapper[4778]: I0312 13:32:37.573245 4778 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57227510-d79a-4924-941f-fdc35bda5d41-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:37 crc kubenswrapper[4778]: I0312 13:32:37.573253 4778 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/57227510-d79a-4924-941f-fdc35bda5d41-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:37 crc kubenswrapper[4778]: I0312 13:32:37.641283 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-vhhp2" Mar 12 13:32:37 crc kubenswrapper[4778]: I0312 13:32:37.641279 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vhhp2" event={"ID":"57227510-d79a-4924-941f-fdc35bda5d41","Type":"ContainerDied","Data":"b47ac6e700f6b26e79b8d33a0cd1c034fedfe7bd0e82190cbdb182f9948a680c"} Mar 12 13:32:37 crc kubenswrapper[4778]: I0312 13:32:37.641329 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b47ac6e700f6b26e79b8d33a0cd1c034fedfe7bd0e82190cbdb182f9948a680c" Mar 12 13:32:37 crc kubenswrapper[4778]: E0312 13:32:37.646538 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-p59s9" podUID="a682334f-73c0-4e38-8f95-e5de661319bb" Mar 12 13:32:38 crc kubenswrapper[4778]: I0312 13:32:38.460461 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-vhhp2"] Mar 12 13:32:38 crc kubenswrapper[4778]: I0312 13:32:38.468414 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-vhhp2"] Mar 12 13:32:38 crc kubenswrapper[4778]: I0312 13:32:38.565017 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-56sfj"] Mar 12 13:32:38 crc kubenswrapper[4778]: E0312 13:32:38.565416 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57227510-d79a-4924-941f-fdc35bda5d41" containerName="keystone-bootstrap" Mar 12 13:32:38 crc kubenswrapper[4778]: I0312 13:32:38.565436 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="57227510-d79a-4924-941f-fdc35bda5d41" containerName="keystone-bootstrap" Mar 12 13:32:38 crc kubenswrapper[4778]: I0312 13:32:38.565636 4778 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="57227510-d79a-4924-941f-fdc35bda5d41" containerName="keystone-bootstrap" Mar 12 13:32:38 crc kubenswrapper[4778]: I0312 13:32:38.566130 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-56sfj" Mar 12 13:32:38 crc kubenswrapper[4778]: I0312 13:32:38.568000 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 12 13:32:38 crc kubenswrapper[4778]: I0312 13:32:38.568204 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 12 13:32:38 crc kubenswrapper[4778]: I0312 13:32:38.568540 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-rjpsk" Mar 12 13:32:38 crc kubenswrapper[4778]: I0312 13:32:38.568696 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 12 13:32:38 crc kubenswrapper[4778]: I0312 13:32:38.572654 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 12 13:32:38 crc kubenswrapper[4778]: I0312 13:32:38.583033 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-56sfj"] Mar 12 13:32:38 crc kubenswrapper[4778]: I0312 13:32:38.694958 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1af573ef-51c3-4bfc-8de6-eb1be8b75c76-credential-keys\") pod \"keystone-bootstrap-56sfj\" (UID: \"1af573ef-51c3-4bfc-8de6-eb1be8b75c76\") " pod="openstack/keystone-bootstrap-56sfj" Mar 12 13:32:38 crc kubenswrapper[4778]: I0312 13:32:38.695098 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1af573ef-51c3-4bfc-8de6-eb1be8b75c76-fernet-keys\") pod \"keystone-bootstrap-56sfj\" (UID: \"1af573ef-51c3-4bfc-8de6-eb1be8b75c76\") " 
pod="openstack/keystone-bootstrap-56sfj" Mar 12 13:32:38 crc kubenswrapper[4778]: I0312 13:32:38.695209 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94tqw\" (UniqueName: \"kubernetes.io/projected/1af573ef-51c3-4bfc-8de6-eb1be8b75c76-kube-api-access-94tqw\") pod \"keystone-bootstrap-56sfj\" (UID: \"1af573ef-51c3-4bfc-8de6-eb1be8b75c76\") " pod="openstack/keystone-bootstrap-56sfj" Mar 12 13:32:38 crc kubenswrapper[4778]: I0312 13:32:38.695254 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1af573ef-51c3-4bfc-8de6-eb1be8b75c76-combined-ca-bundle\") pod \"keystone-bootstrap-56sfj\" (UID: \"1af573ef-51c3-4bfc-8de6-eb1be8b75c76\") " pod="openstack/keystone-bootstrap-56sfj" Mar 12 13:32:38 crc kubenswrapper[4778]: I0312 13:32:38.695311 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1af573ef-51c3-4bfc-8de6-eb1be8b75c76-config-data\") pod \"keystone-bootstrap-56sfj\" (UID: \"1af573ef-51c3-4bfc-8de6-eb1be8b75c76\") " pod="openstack/keystone-bootstrap-56sfj" Mar 12 13:32:38 crc kubenswrapper[4778]: I0312 13:32:38.695358 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1af573ef-51c3-4bfc-8de6-eb1be8b75c76-scripts\") pod \"keystone-bootstrap-56sfj\" (UID: \"1af573ef-51c3-4bfc-8de6-eb1be8b75c76\") " pod="openstack/keystone-bootstrap-56sfj" Mar 12 13:32:38 crc kubenswrapper[4778]: I0312 13:32:38.797021 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1af573ef-51c3-4bfc-8de6-eb1be8b75c76-fernet-keys\") pod \"keystone-bootstrap-56sfj\" (UID: \"1af573ef-51c3-4bfc-8de6-eb1be8b75c76\") " 
pod="openstack/keystone-bootstrap-56sfj" Mar 12 13:32:38 crc kubenswrapper[4778]: I0312 13:32:38.797102 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94tqw\" (UniqueName: \"kubernetes.io/projected/1af573ef-51c3-4bfc-8de6-eb1be8b75c76-kube-api-access-94tqw\") pod \"keystone-bootstrap-56sfj\" (UID: \"1af573ef-51c3-4bfc-8de6-eb1be8b75c76\") " pod="openstack/keystone-bootstrap-56sfj" Mar 12 13:32:38 crc kubenswrapper[4778]: I0312 13:32:38.797137 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1af573ef-51c3-4bfc-8de6-eb1be8b75c76-combined-ca-bundle\") pod \"keystone-bootstrap-56sfj\" (UID: \"1af573ef-51c3-4bfc-8de6-eb1be8b75c76\") " pod="openstack/keystone-bootstrap-56sfj" Mar 12 13:32:38 crc kubenswrapper[4778]: I0312 13:32:38.797171 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1af573ef-51c3-4bfc-8de6-eb1be8b75c76-config-data\") pod \"keystone-bootstrap-56sfj\" (UID: \"1af573ef-51c3-4bfc-8de6-eb1be8b75c76\") " pod="openstack/keystone-bootstrap-56sfj" Mar 12 13:32:38 crc kubenswrapper[4778]: I0312 13:32:38.797259 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1af573ef-51c3-4bfc-8de6-eb1be8b75c76-scripts\") pod \"keystone-bootstrap-56sfj\" (UID: \"1af573ef-51c3-4bfc-8de6-eb1be8b75c76\") " pod="openstack/keystone-bootstrap-56sfj" Mar 12 13:32:38 crc kubenswrapper[4778]: I0312 13:32:38.797293 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1af573ef-51c3-4bfc-8de6-eb1be8b75c76-credential-keys\") pod \"keystone-bootstrap-56sfj\" (UID: \"1af573ef-51c3-4bfc-8de6-eb1be8b75c76\") " pod="openstack/keystone-bootstrap-56sfj" Mar 12 13:32:38 crc kubenswrapper[4778]: I0312 
13:32:38.802756 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1af573ef-51c3-4bfc-8de6-eb1be8b75c76-credential-keys\") pod \"keystone-bootstrap-56sfj\" (UID: \"1af573ef-51c3-4bfc-8de6-eb1be8b75c76\") " pod="openstack/keystone-bootstrap-56sfj" Mar 12 13:32:38 crc kubenswrapper[4778]: I0312 13:32:38.802797 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1af573ef-51c3-4bfc-8de6-eb1be8b75c76-fernet-keys\") pod \"keystone-bootstrap-56sfj\" (UID: \"1af573ef-51c3-4bfc-8de6-eb1be8b75c76\") " pod="openstack/keystone-bootstrap-56sfj" Mar 12 13:32:38 crc kubenswrapper[4778]: I0312 13:32:38.803757 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1af573ef-51c3-4bfc-8de6-eb1be8b75c76-combined-ca-bundle\") pod \"keystone-bootstrap-56sfj\" (UID: \"1af573ef-51c3-4bfc-8de6-eb1be8b75c76\") " pod="openstack/keystone-bootstrap-56sfj" Mar 12 13:32:38 crc kubenswrapper[4778]: I0312 13:32:38.811829 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1af573ef-51c3-4bfc-8de6-eb1be8b75c76-config-data\") pod \"keystone-bootstrap-56sfj\" (UID: \"1af573ef-51c3-4bfc-8de6-eb1be8b75c76\") " pod="openstack/keystone-bootstrap-56sfj" Mar 12 13:32:38 crc kubenswrapper[4778]: I0312 13:32:38.812422 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1af573ef-51c3-4bfc-8de6-eb1be8b75c76-scripts\") pod \"keystone-bootstrap-56sfj\" (UID: \"1af573ef-51c3-4bfc-8de6-eb1be8b75c76\") " pod="openstack/keystone-bootstrap-56sfj" Mar 12 13:32:38 crc kubenswrapper[4778]: I0312 13:32:38.815902 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94tqw\" (UniqueName: 
\"kubernetes.io/projected/1af573ef-51c3-4bfc-8de6-eb1be8b75c76-kube-api-access-94tqw\") pod \"keystone-bootstrap-56sfj\" (UID: \"1af573ef-51c3-4bfc-8de6-eb1be8b75c76\") " pod="openstack/keystone-bootstrap-56sfj" Mar 12 13:32:38 crc kubenswrapper[4778]: I0312 13:32:38.894880 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-56sfj" Mar 12 13:32:40 crc kubenswrapper[4778]: I0312 13:32:40.276633 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57227510-d79a-4924-941f-fdc35bda5d41" path="/var/lib/kubelet/pods/57227510-d79a-4924-941f-fdc35bda5d41/volumes" Mar 12 13:32:45 crc kubenswrapper[4778]: I0312 13:32:45.120179 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-c75fp" Mar 12 13:32:45 crc kubenswrapper[4778]: I0312 13:32:45.227976 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed693bb0-f387-42e2-ae31-9ce01aee1cf9-ovsdbserver-nb\") pod \"ed693bb0-f387-42e2-ae31-9ce01aee1cf9\" (UID: \"ed693bb0-f387-42e2-ae31-9ce01aee1cf9\") " Mar 12 13:32:45 crc kubenswrapper[4778]: I0312 13:32:45.228072 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sz588\" (UniqueName: \"kubernetes.io/projected/ed693bb0-f387-42e2-ae31-9ce01aee1cf9-kube-api-access-sz588\") pod \"ed693bb0-f387-42e2-ae31-9ce01aee1cf9\" (UID: \"ed693bb0-f387-42e2-ae31-9ce01aee1cf9\") " Mar 12 13:32:45 crc kubenswrapper[4778]: I0312 13:32:45.228145 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ed693bb0-f387-42e2-ae31-9ce01aee1cf9-dns-swift-storage-0\") pod \"ed693bb0-f387-42e2-ae31-9ce01aee1cf9\" (UID: \"ed693bb0-f387-42e2-ae31-9ce01aee1cf9\") " Mar 12 13:32:45 crc kubenswrapper[4778]: I0312 13:32:45.228241 4778 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed693bb0-f387-42e2-ae31-9ce01aee1cf9-config\") pod \"ed693bb0-f387-42e2-ae31-9ce01aee1cf9\" (UID: \"ed693bb0-f387-42e2-ae31-9ce01aee1cf9\") " Mar 12 13:32:45 crc kubenswrapper[4778]: I0312 13:32:45.228337 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed693bb0-f387-42e2-ae31-9ce01aee1cf9-dns-svc\") pod \"ed693bb0-f387-42e2-ae31-9ce01aee1cf9\" (UID: \"ed693bb0-f387-42e2-ae31-9ce01aee1cf9\") " Mar 12 13:32:45 crc kubenswrapper[4778]: I0312 13:32:45.228379 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed693bb0-f387-42e2-ae31-9ce01aee1cf9-ovsdbserver-sb\") pod \"ed693bb0-f387-42e2-ae31-9ce01aee1cf9\" (UID: \"ed693bb0-f387-42e2-ae31-9ce01aee1cf9\") " Mar 12 13:32:45 crc kubenswrapper[4778]: I0312 13:32:45.233924 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed693bb0-f387-42e2-ae31-9ce01aee1cf9-kube-api-access-sz588" (OuterVolumeSpecName: "kube-api-access-sz588") pod "ed693bb0-f387-42e2-ae31-9ce01aee1cf9" (UID: "ed693bb0-f387-42e2-ae31-9ce01aee1cf9"). InnerVolumeSpecName "kube-api-access-sz588". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:32:45 crc kubenswrapper[4778]: I0312 13:32:45.280032 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed693bb0-f387-42e2-ae31-9ce01aee1cf9-config" (OuterVolumeSpecName: "config") pod "ed693bb0-f387-42e2-ae31-9ce01aee1cf9" (UID: "ed693bb0-f387-42e2-ae31-9ce01aee1cf9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:32:45 crc kubenswrapper[4778]: I0312 13:32:45.282889 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed693bb0-f387-42e2-ae31-9ce01aee1cf9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ed693bb0-f387-42e2-ae31-9ce01aee1cf9" (UID: "ed693bb0-f387-42e2-ae31-9ce01aee1cf9"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:32:45 crc kubenswrapper[4778]: I0312 13:32:45.283034 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed693bb0-f387-42e2-ae31-9ce01aee1cf9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ed693bb0-f387-42e2-ae31-9ce01aee1cf9" (UID: "ed693bb0-f387-42e2-ae31-9ce01aee1cf9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:32:45 crc kubenswrapper[4778]: I0312 13:32:45.290096 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed693bb0-f387-42e2-ae31-9ce01aee1cf9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ed693bb0-f387-42e2-ae31-9ce01aee1cf9" (UID: "ed693bb0-f387-42e2-ae31-9ce01aee1cf9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:32:45 crc kubenswrapper[4778]: I0312 13:32:45.293908 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed693bb0-f387-42e2-ae31-9ce01aee1cf9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ed693bb0-f387-42e2-ae31-9ce01aee1cf9" (UID: "ed693bb0-f387-42e2-ae31-9ce01aee1cf9"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:32:45 crc kubenswrapper[4778]: I0312 13:32:45.331707 4778 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed693bb0-f387-42e2-ae31-9ce01aee1cf9-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:45 crc kubenswrapper[4778]: I0312 13:32:45.332194 4778 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed693bb0-f387-42e2-ae31-9ce01aee1cf9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:45 crc kubenswrapper[4778]: I0312 13:32:45.332225 4778 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed693bb0-f387-42e2-ae31-9ce01aee1cf9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:45 crc kubenswrapper[4778]: I0312 13:32:45.332324 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sz588\" (UniqueName: \"kubernetes.io/projected/ed693bb0-f387-42e2-ae31-9ce01aee1cf9-kube-api-access-sz588\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:45 crc kubenswrapper[4778]: I0312 13:32:45.332344 4778 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ed693bb0-f387-42e2-ae31-9ce01aee1cf9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:45 crc kubenswrapper[4778]: I0312 13:32:45.332354 4778 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed693bb0-f387-42e2-ae31-9ce01aee1cf9-config\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:45 crc kubenswrapper[4778]: I0312 13:32:45.727207 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-c75fp" event={"ID":"ed693bb0-f387-42e2-ae31-9ce01aee1cf9","Type":"ContainerDied","Data":"b878edd8e2f4ab4fa1fd5db083761fef998e508c23a74d8e3cd95838ec67e23c"} Mar 12 13:32:45 crc 
kubenswrapper[4778]: I0312 13:32:45.727285 4778 scope.go:117] "RemoveContainer" containerID="a86f0b8f75025d6f637f5995ea2db5120ec912d396c9c98099631c4e389118ac" Mar 12 13:32:45 crc kubenswrapper[4778]: I0312 13:32:45.727519 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-c75fp" Mar 12 13:32:45 crc kubenswrapper[4778]: I0312 13:32:45.775597 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-c75fp"] Mar 12 13:32:45 crc kubenswrapper[4778]: I0312 13:32:45.785253 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-c75fp"] Mar 12 13:32:46 crc kubenswrapper[4778]: I0312 13:32:46.066176 4778 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-58dd9ff6bc-c75fp" podUID="ed693bb0-f387-42e2-ae31-9ce01aee1cf9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.145:5353: i/o timeout" Mar 12 13:32:46 crc kubenswrapper[4778]: I0312 13:32:46.266851 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed693bb0-f387-42e2-ae31-9ce01aee1cf9" path="/var/lib/kubelet/pods/ed693bb0-f387-42e2-ae31-9ce01aee1cf9/volumes" Mar 12 13:32:46 crc kubenswrapper[4778]: E0312 13:32:46.558343 4778 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Mar 12 13:32:46 crc kubenswrapper[4778]: E0312 13:32:46.558816 4778 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jpfhh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-d5pl9_openstack(bb110a1e-6281-437d-b857-eb79c4953e1a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 13:32:46 crc kubenswrapper[4778]: E0312 13:32:46.559957 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-d5pl9" podUID="bb110a1e-6281-437d-b857-eb79c4953e1a" Mar 12 13:32:46 crc kubenswrapper[4778]: I0312 13:32:46.584432 4778 scope.go:117] "RemoveContainer" containerID="deae0dcafd4096182fbb59df47a0f37084a6a4dd40f9ceb191771d7cc1d9e536" Mar 12 13:32:46 crc kubenswrapper[4778]: E0312 13:32:46.758337 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-d5pl9" podUID="bb110a1e-6281-437d-b857-eb79c4953e1a" Mar 12 13:32:47 crc kubenswrapper[4778]: I0312 13:32:47.070401 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-56sfj"] Mar 12 13:32:47 crc kubenswrapper[4778]: W0312 13:32:47.087146 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1af573ef_51c3_4bfc_8de6_eb1be8b75c76.slice/crio-ebc59f76f06ba10050abce1212b94660f06fec69cccf436b21f8ae1838a2520b WatchSource:0}: Error finding container ebc59f76f06ba10050abce1212b94660f06fec69cccf436b21f8ae1838a2520b: Status 404 returned error can't find the container with id 
ebc59f76f06ba10050abce1212b94660f06fec69cccf436b21f8ae1838a2520b Mar 12 13:32:47 crc kubenswrapper[4778]: I0312 13:32:47.092940 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 12 13:32:47 crc kubenswrapper[4778]: I0312 13:32:47.152279 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-v2vtk"] Mar 12 13:32:47 crc kubenswrapper[4778]: W0312 13:32:47.155963 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod39bd75fd_958e_4b3b_abd5_860adf376fd7.slice/crio-415522e7cc2372bb11dfe09957497d4a3efac5b28086b59aebe2586918e3f99d WatchSource:0}: Error finding container 415522e7cc2372bb11dfe09957497d4a3efac5b28086b59aebe2586918e3f99d: Status 404 returned error can't find the container with id 415522e7cc2372bb11dfe09957497d4a3efac5b28086b59aebe2586918e3f99d Mar 12 13:32:47 crc kubenswrapper[4778]: I0312 13:32:47.177661 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 12 13:32:47 crc kubenswrapper[4778]: W0312 13:32:47.179471 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa998ea4_f50d_4441_b6ad_b160a19ea4a9.slice/crio-3c8eaac29f690ea90c0a8b68198ea45e6be61ad552e67a0c3c11ec5342477745 WatchSource:0}: Error finding container 3c8eaac29f690ea90c0a8b68198ea45e6be61ad552e67a0c3c11ec5342477745: Status 404 returned error can't find the container with id 3c8eaac29f690ea90c0a8b68198ea45e6be61ad552e67a0c3c11ec5342477745 Mar 12 13:32:47 crc kubenswrapper[4778]: I0312 13:32:47.298607 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 13:32:47 crc kubenswrapper[4778]: I0312 13:32:47.802612 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-56sfj" 
event={"ID":"1af573ef-51c3-4bfc-8de6-eb1be8b75c76","Type":"ContainerStarted","Data":"710035f2fd1c6ce07427dd61579057ea7d418eb1c9532e9c2ad2d414dc76cbb9"} Mar 12 13:32:47 crc kubenswrapper[4778]: I0312 13:32:47.802845 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-56sfj" event={"ID":"1af573ef-51c3-4bfc-8de6-eb1be8b75c76","Type":"ContainerStarted","Data":"ebc59f76f06ba10050abce1212b94660f06fec69cccf436b21f8ae1838a2520b"} Mar 12 13:32:47 crc kubenswrapper[4778]: I0312 13:32:47.805212 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aa998ea4-f50d-4441-b6ad-b160a19ea4a9","Type":"ContainerStarted","Data":"3c8eaac29f690ea90c0a8b68198ea45e6be61ad552e67a0c3c11ec5342477745"} Mar 12 13:32:47 crc kubenswrapper[4778]: I0312 13:32:47.820643 4778 generic.go:334] "Generic (PLEG): container finished" podID="39bd75fd-958e-4b3b-abd5-860adf376fd7" containerID="7c88372c4eebf35fa3a0e19eba355c02e9d34ad468328fc457e997e453d917f3" exitCode=0 Mar 12 13:32:47 crc kubenswrapper[4778]: I0312 13:32:47.820730 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-v2vtk" event={"ID":"39bd75fd-958e-4b3b-abd5-860adf376fd7","Type":"ContainerDied","Data":"7c88372c4eebf35fa3a0e19eba355c02e9d34ad468328fc457e997e453d917f3"} Mar 12 13:32:47 crc kubenswrapper[4778]: I0312 13:32:47.820757 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-v2vtk" event={"ID":"39bd75fd-958e-4b3b-abd5-860adf376fd7","Type":"ContainerStarted","Data":"415522e7cc2372bb11dfe09957497d4a3efac5b28086b59aebe2586918e3f99d"} Mar 12 13:32:47 crc kubenswrapper[4778]: I0312 13:32:47.823615 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-zr86r" event={"ID":"faeb9cb3-46ae-428f-8c0e-538a2e552072","Type":"ContainerStarted","Data":"434f9dbc426c8bc5145f54de2b34c16cd91006660bd978fe7ad9311fc8579e69"} Mar 12 13:32:47 crc 
kubenswrapper[4778]: I0312 13:32:47.827946 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b4cb6d6d-bc05-4809-83a7-5aacda62cc10","Type":"ContainerStarted","Data":"a2afa1efaa5e813d9e93bd765e7abf6c5129c2365e3e4d71622e5bbd682b89f8"} Mar 12 13:32:47 crc kubenswrapper[4778]: I0312 13:32:47.839631 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b533a505-eb7b-43a3-b95d-60cdc7198066","Type":"ContainerStarted","Data":"bc3d154f21afd30a55fef211dca4fe535b5cb4020d2ea7a7c87c369860d2b039"} Mar 12 13:32:47 crc kubenswrapper[4778]: I0312 13:32:47.872842 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-56sfj" podStartSLOduration=9.87282081 podStartE2EDuration="9.87282081s" podCreationTimestamp="2026-03-12 13:32:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:32:47.838256159 +0000 UTC m=+1386.286951555" watchObservedRunningTime="2026-03-12 13:32:47.87282081 +0000 UTC m=+1386.321516206" Mar 12 13:32:47 crc kubenswrapper[4778]: I0312 13:32:47.888404 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-zr86r" podStartSLOduration=3.276943037 podStartE2EDuration="27.888389831s" podCreationTimestamp="2026-03-12 13:32:20 +0000 UTC" firstStartedPulling="2026-03-12 13:32:21.910840787 +0000 UTC m=+1360.359536183" lastFinishedPulling="2026-03-12 13:32:46.522287541 +0000 UTC m=+1384.970982977" observedRunningTime="2026-03-12 13:32:47.887942189 +0000 UTC m=+1386.336637585" watchObservedRunningTime="2026-03-12 13:32:47.888389831 +0000 UTC m=+1386.337085227" Mar 12 13:32:48 crc kubenswrapper[4778]: I0312 13:32:48.852764 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"aa998ea4-f50d-4441-b6ad-b160a19ea4a9","Type":"ContainerStarted","Data":"9f2e7f69bdd6233212da78ad84aba66a12c2daa431d1cfce5136f9137ccffb2a"} Mar 12 13:32:48 crc kubenswrapper[4778]: I0312 13:32:48.853356 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aa998ea4-f50d-4441-b6ad-b160a19ea4a9","Type":"ContainerStarted","Data":"34daa0a884944841a846bd99aab0d1c3b6985cc49de2820be6feb9d362176513"} Mar 12 13:32:48 crc kubenswrapper[4778]: I0312 13:32:48.853147 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="aa998ea4-f50d-4441-b6ad-b160a19ea4a9" containerName="glance-httpd" containerID="cri-o://9f2e7f69bdd6233212da78ad84aba66a12c2daa431d1cfce5136f9137ccffb2a" gracePeriod=30 Mar 12 13:32:48 crc kubenswrapper[4778]: I0312 13:32:48.852949 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="aa998ea4-f50d-4441-b6ad-b160a19ea4a9" containerName="glance-log" containerID="cri-o://34daa0a884944841a846bd99aab0d1c3b6985cc49de2820be6feb9d362176513" gracePeriod=30 Mar 12 13:32:48 crc kubenswrapper[4778]: I0312 13:32:48.858700 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-v2vtk" event={"ID":"39bd75fd-958e-4b3b-abd5-860adf376fd7","Type":"ContainerStarted","Data":"cc6fc61a82e88c3140b3629f45196f98ee08d5f2fdb0df9b40fe66806a0ccbfd"} Mar 12 13:32:48 crc kubenswrapper[4778]: I0312 13:32:48.859997 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-v2vtk" Mar 12 13:32:48 crc kubenswrapper[4778]: I0312 13:32:48.862207 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b533a505-eb7b-43a3-b95d-60cdc7198066","Type":"ContainerStarted","Data":"d0d48a7ad8664426390a968f02d1600a2fa102d2c72f9c248494c7e0624b1b34"} Mar 
12 13:32:48 crc kubenswrapper[4778]: I0312 13:32:48.862243 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b533a505-eb7b-43a3-b95d-60cdc7198066","Type":"ContainerStarted","Data":"d4d7c6b9984a293eb0854447833f3f9cb59361882500cfb456fd7b159e0fba3d"} Mar 12 13:32:48 crc kubenswrapper[4778]: I0312 13:32:48.862490 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="b533a505-eb7b-43a3-b95d-60cdc7198066" containerName="glance-log" containerID="cri-o://d4d7c6b9984a293eb0854447833f3f9cb59361882500cfb456fd7b159e0fba3d" gracePeriod=30 Mar 12 13:32:48 crc kubenswrapper[4778]: I0312 13:32:48.862512 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="b533a505-eb7b-43a3-b95d-60cdc7198066" containerName="glance-httpd" containerID="cri-o://d0d48a7ad8664426390a968f02d1600a2fa102d2c72f9c248494c7e0624b1b34" gracePeriod=30 Mar 12 13:32:48 crc kubenswrapper[4778]: I0312 13:32:48.892681 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=19.892654858 podStartE2EDuration="19.892654858s" podCreationTimestamp="2026-03-12 13:32:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:32:48.880462753 +0000 UTC m=+1387.329158179" watchObservedRunningTime="2026-03-12 13:32:48.892654858 +0000 UTC m=+1387.341350264" Mar 12 13:32:48 crc kubenswrapper[4778]: I0312 13:32:48.924125 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=19.92410444 podStartE2EDuration="19.92410444s" podCreationTimestamp="2026-03-12 13:32:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-03-12 13:32:48.921893238 +0000 UTC m=+1387.370588644" watchObservedRunningTime="2026-03-12 13:32:48.92410444 +0000 UTC m=+1387.372799836" Mar 12 13:32:48 crc kubenswrapper[4778]: I0312 13:32:48.950516 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-v2vtk" podStartSLOduration=19.950494119 podStartE2EDuration="19.950494119s" podCreationTimestamp="2026-03-12 13:32:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:32:48.941801932 +0000 UTC m=+1387.390497328" watchObservedRunningTime="2026-03-12 13:32:48.950494119 +0000 UTC m=+1387.399189515" Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.615884 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.722500 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"b533a505-eb7b-43a3-b95d-60cdc7198066\" (UID: \"b533a505-eb7b-43a3-b95d-60cdc7198066\") " Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.722668 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b533a505-eb7b-43a3-b95d-60cdc7198066-httpd-run\") pod \"b533a505-eb7b-43a3-b95d-60cdc7198066\" (UID: \"b533a505-eb7b-43a3-b95d-60cdc7198066\") " Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.722726 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5xwn\" (UniqueName: \"kubernetes.io/projected/b533a505-eb7b-43a3-b95d-60cdc7198066-kube-api-access-f5xwn\") pod \"b533a505-eb7b-43a3-b95d-60cdc7198066\" (UID: \"b533a505-eb7b-43a3-b95d-60cdc7198066\") " Mar 12 13:32:49 crc 
kubenswrapper[4778]: I0312 13:32:49.723259 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b533a505-eb7b-43a3-b95d-60cdc7198066-config-data\") pod \"b533a505-eb7b-43a3-b95d-60cdc7198066\" (UID: \"b533a505-eb7b-43a3-b95d-60cdc7198066\") " Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.723307 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b533a505-eb7b-43a3-b95d-60cdc7198066-logs\") pod \"b533a505-eb7b-43a3-b95d-60cdc7198066\" (UID: \"b533a505-eb7b-43a3-b95d-60cdc7198066\") " Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.723304 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b533a505-eb7b-43a3-b95d-60cdc7198066-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b533a505-eb7b-43a3-b95d-60cdc7198066" (UID: "b533a505-eb7b-43a3-b95d-60cdc7198066"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.723333 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b533a505-eb7b-43a3-b95d-60cdc7198066-scripts\") pod \"b533a505-eb7b-43a3-b95d-60cdc7198066\" (UID: \"b533a505-eb7b-43a3-b95d-60cdc7198066\") " Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.723373 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b533a505-eb7b-43a3-b95d-60cdc7198066-combined-ca-bundle\") pod \"b533a505-eb7b-43a3-b95d-60cdc7198066\" (UID: \"b533a505-eb7b-43a3-b95d-60cdc7198066\") " Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.723502 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b533a505-eb7b-43a3-b95d-60cdc7198066-logs" (OuterVolumeSpecName: "logs") pod "b533a505-eb7b-43a3-b95d-60cdc7198066" (UID: "b533a505-eb7b-43a3-b95d-60cdc7198066"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.723919 4778 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b533a505-eb7b-43a3-b95d-60cdc7198066-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.723945 4778 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b533a505-eb7b-43a3-b95d-60cdc7198066-logs\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.732358 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage17-crc" (OuterVolumeSpecName: "glance") pod "b533a505-eb7b-43a3-b95d-60cdc7198066" (UID: "b533a505-eb7b-43a3-b95d-60cdc7198066"). 
InnerVolumeSpecName "local-storage17-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.732362 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b533a505-eb7b-43a3-b95d-60cdc7198066-scripts" (OuterVolumeSpecName: "scripts") pod "b533a505-eb7b-43a3-b95d-60cdc7198066" (UID: "b533a505-eb7b-43a3-b95d-60cdc7198066"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.734264 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b533a505-eb7b-43a3-b95d-60cdc7198066-kube-api-access-f5xwn" (OuterVolumeSpecName: "kube-api-access-f5xwn") pod "b533a505-eb7b-43a3-b95d-60cdc7198066" (UID: "b533a505-eb7b-43a3-b95d-60cdc7198066"). InnerVolumeSpecName "kube-api-access-f5xwn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.751532 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b533a505-eb7b-43a3-b95d-60cdc7198066-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b533a505-eb7b-43a3-b95d-60cdc7198066" (UID: "b533a505-eb7b-43a3-b95d-60cdc7198066"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.789854 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b533a505-eb7b-43a3-b95d-60cdc7198066-config-data" (OuterVolumeSpecName: "config-data") pod "b533a505-eb7b-43a3-b95d-60cdc7198066" (UID: "b533a505-eb7b-43a3-b95d-60cdc7198066"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.794451 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.825511 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5xwn\" (UniqueName: \"kubernetes.io/projected/b533a505-eb7b-43a3-b95d-60cdc7198066-kube-api-access-f5xwn\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.825546 4778 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b533a505-eb7b-43a3-b95d-60cdc7198066-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.825555 4778 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b533a505-eb7b-43a3-b95d-60cdc7198066-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.825565 4778 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b533a505-eb7b-43a3-b95d-60cdc7198066-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.825590 4778 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") on node \"crc\" " Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.841537 4778 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage17-crc" (UniqueName: "kubernetes.io/local-volume/local-storage17-crc") on node "crc" Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.871374 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b4cb6d6d-bc05-4809-83a7-5aacda62cc10","Type":"ContainerStarted","Data":"f73331cf93a94b368140f81472e855149bad846c050d72495e3f1fdfaa6cf4d0"} Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.873650 4778 
generic.go:334] "Generic (PLEG): container finished" podID="b533a505-eb7b-43a3-b95d-60cdc7198066" containerID="d0d48a7ad8664426390a968f02d1600a2fa102d2c72f9c248494c7e0624b1b34" exitCode=143 Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.873678 4778 generic.go:334] "Generic (PLEG): container finished" podID="b533a505-eb7b-43a3-b95d-60cdc7198066" containerID="d4d7c6b9984a293eb0854447833f3f9cb59361882500cfb456fd7b159e0fba3d" exitCode=143 Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.873719 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b533a505-eb7b-43a3-b95d-60cdc7198066","Type":"ContainerDied","Data":"d0d48a7ad8664426390a968f02d1600a2fa102d2c72f9c248494c7e0624b1b34"} Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.873734 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.873753 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b533a505-eb7b-43a3-b95d-60cdc7198066","Type":"ContainerDied","Data":"d4d7c6b9984a293eb0854447833f3f9cb59361882500cfb456fd7b159e0fba3d"} Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.873766 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b533a505-eb7b-43a3-b95d-60cdc7198066","Type":"ContainerDied","Data":"bc3d154f21afd30a55fef211dca4fe535b5cb4020d2ea7a7c87c369860d2b039"} Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.873772 4778 scope.go:117] "RemoveContainer" containerID="d0d48a7ad8664426390a968f02d1600a2fa102d2c72f9c248494c7e0624b1b34" Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.875411 4778 generic.go:334] "Generic (PLEG): container finished" podID="76f8f940-670d-47a0-a90a-afd3aa37a726" 
containerID="86b41f2ea1c3794ed3e1fc975ecb18420f64bbd7611743de1aa319532e575758" exitCode=0 Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.875459 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6cvgs" event={"ID":"76f8f940-670d-47a0-a90a-afd3aa37a726","Type":"ContainerDied","Data":"86b41f2ea1c3794ed3e1fc975ecb18420f64bbd7611743de1aa319532e575758"} Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.879502 4778 generic.go:334] "Generic (PLEG): container finished" podID="aa998ea4-f50d-4441-b6ad-b160a19ea4a9" containerID="9f2e7f69bdd6233212da78ad84aba66a12c2daa431d1cfce5136f9137ccffb2a" exitCode=0 Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.879652 4778 generic.go:334] "Generic (PLEG): container finished" podID="aa998ea4-f50d-4441-b6ad-b160a19ea4a9" containerID="34daa0a884944841a846bd99aab0d1c3b6985cc49de2820be6feb9d362176513" exitCode=143 Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.879569 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aa998ea4-f50d-4441-b6ad-b160a19ea4a9","Type":"ContainerDied","Data":"9f2e7f69bdd6233212da78ad84aba66a12c2daa431d1cfce5136f9137ccffb2a"} Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.879625 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.879711 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aa998ea4-f50d-4441-b6ad-b160a19ea4a9","Type":"ContainerDied","Data":"34daa0a884944841a846bd99aab0d1c3b6985cc49de2820be6feb9d362176513"} Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.879923 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aa998ea4-f50d-4441-b6ad-b160a19ea4a9","Type":"ContainerDied","Data":"3c8eaac29f690ea90c0a8b68198ea45e6be61ad552e67a0c3c11ec5342477745"} Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.881505 4778 generic.go:334] "Generic (PLEG): container finished" podID="faeb9cb3-46ae-428f-8c0e-538a2e552072" containerID="434f9dbc426c8bc5145f54de2b34c16cd91006660bd978fe7ad9311fc8579e69" exitCode=0 Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.882693 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-zr86r" event={"ID":"faeb9cb3-46ae-428f-8c0e-538a2e552072","Type":"ContainerDied","Data":"434f9dbc426c8bc5145f54de2b34c16cd91006660bd978fe7ad9311fc8579e69"} Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.906944 4778 scope.go:117] "RemoveContainer" containerID="d4d7c6b9984a293eb0854447833f3f9cb59361882500cfb456fd7b159e0fba3d" Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.926150 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa998ea4-f50d-4441-b6ad-b160a19ea4a9-config-data\") pod \"aa998ea4-f50d-4441-b6ad-b160a19ea4a9\" (UID: \"aa998ea4-f50d-4441-b6ad-b160a19ea4a9\") " Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.926256 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/aa998ea4-f50d-4441-b6ad-b160a19ea4a9-httpd-run\") pod \"aa998ea4-f50d-4441-b6ad-b160a19ea4a9\" (UID: \"aa998ea4-f50d-4441-b6ad-b160a19ea4a9\") " Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.926302 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa998ea4-f50d-4441-b6ad-b160a19ea4a9-logs\") pod \"aa998ea4-f50d-4441-b6ad-b160a19ea4a9\" (UID: \"aa998ea4-f50d-4441-b6ad-b160a19ea4a9\") " Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.926346 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbwgx\" (UniqueName: \"kubernetes.io/projected/aa998ea4-f50d-4441-b6ad-b160a19ea4a9-kube-api-access-zbwgx\") pod \"aa998ea4-f50d-4441-b6ad-b160a19ea4a9\" (UID: \"aa998ea4-f50d-4441-b6ad-b160a19ea4a9\") " Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.926373 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa998ea4-f50d-4441-b6ad-b160a19ea4a9-scripts\") pod \"aa998ea4-f50d-4441-b6ad-b160a19ea4a9\" (UID: \"aa998ea4-f50d-4441-b6ad-b160a19ea4a9\") " Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.926405 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"aa998ea4-f50d-4441-b6ad-b160a19ea4a9\" (UID: \"aa998ea4-f50d-4441-b6ad-b160a19ea4a9\") " Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.926453 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa998ea4-f50d-4441-b6ad-b160a19ea4a9-combined-ca-bundle\") pod \"aa998ea4-f50d-4441-b6ad-b160a19ea4a9\" (UID: \"aa998ea4-f50d-4441-b6ad-b160a19ea4a9\") " Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.927486 4778 reconciler_common.go:293] "Volume detached 
for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.929047 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa998ea4-f50d-4441-b6ad-b160a19ea4a9-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "aa998ea4-f50d-4441-b6ad-b160a19ea4a9" (UID: "aa998ea4-f50d-4441-b6ad-b160a19ea4a9"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.929270 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa998ea4-f50d-4441-b6ad-b160a19ea4a9-logs" (OuterVolumeSpecName: "logs") pod "aa998ea4-f50d-4441-b6ad-b160a19ea4a9" (UID: "aa998ea4-f50d-4441-b6ad-b160a19ea4a9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.932507 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa998ea4-f50d-4441-b6ad-b160a19ea4a9-kube-api-access-zbwgx" (OuterVolumeSpecName: "kube-api-access-zbwgx") pod "aa998ea4-f50d-4441-b6ad-b160a19ea4a9" (UID: "aa998ea4-f50d-4441-b6ad-b160a19ea4a9"). InnerVolumeSpecName "kube-api-access-zbwgx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.932518 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa998ea4-f50d-4441-b6ad-b160a19ea4a9-scripts" (OuterVolumeSpecName: "scripts") pod "aa998ea4-f50d-4441-b6ad-b160a19ea4a9" (UID: "aa998ea4-f50d-4441-b6ad-b160a19ea4a9"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.933463 4778 scope.go:117] "RemoveContainer" containerID="d0d48a7ad8664426390a968f02d1600a2fa102d2c72f9c248494c7e0624b1b34" Mar 12 13:32:49 crc kubenswrapper[4778]: E0312 13:32:49.934007 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0d48a7ad8664426390a968f02d1600a2fa102d2c72f9c248494c7e0624b1b34\": container with ID starting with d0d48a7ad8664426390a968f02d1600a2fa102d2c72f9c248494c7e0624b1b34 not found: ID does not exist" containerID="d0d48a7ad8664426390a968f02d1600a2fa102d2c72f9c248494c7e0624b1b34" Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.934042 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0d48a7ad8664426390a968f02d1600a2fa102d2c72f9c248494c7e0624b1b34"} err="failed to get container status \"d0d48a7ad8664426390a968f02d1600a2fa102d2c72f9c248494c7e0624b1b34\": rpc error: code = NotFound desc = could not find container \"d0d48a7ad8664426390a968f02d1600a2fa102d2c72f9c248494c7e0624b1b34\": container with ID starting with d0d48a7ad8664426390a968f02d1600a2fa102d2c72f9c248494c7e0624b1b34 not found: ID does not exist" Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.934069 4778 scope.go:117] "RemoveContainer" containerID="d4d7c6b9984a293eb0854447833f3f9cb59361882500cfb456fd7b159e0fba3d" Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.934413 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 13:32:49 crc kubenswrapper[4778]: E0312 13:32:49.934485 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4d7c6b9984a293eb0854447833f3f9cb59361882500cfb456fd7b159e0fba3d\": container with ID starting with d4d7c6b9984a293eb0854447833f3f9cb59361882500cfb456fd7b159e0fba3d not 
found: ID does not exist" containerID="d4d7c6b9984a293eb0854447833f3f9cb59361882500cfb456fd7b159e0fba3d" Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.934525 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4d7c6b9984a293eb0854447833f3f9cb59361882500cfb456fd7b159e0fba3d"} err="failed to get container status \"d4d7c6b9984a293eb0854447833f3f9cb59361882500cfb456fd7b159e0fba3d\": rpc error: code = NotFound desc = could not find container \"d4d7c6b9984a293eb0854447833f3f9cb59361882500cfb456fd7b159e0fba3d\": container with ID starting with d4d7c6b9984a293eb0854447833f3f9cb59361882500cfb456fd7b159e0fba3d not found: ID does not exist" Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.934542 4778 scope.go:117] "RemoveContainer" containerID="d0d48a7ad8664426390a968f02d1600a2fa102d2c72f9c248494c7e0624b1b34" Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.934817 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0d48a7ad8664426390a968f02d1600a2fa102d2c72f9c248494c7e0624b1b34"} err="failed to get container status \"d0d48a7ad8664426390a968f02d1600a2fa102d2c72f9c248494c7e0624b1b34\": rpc error: code = NotFound desc = could not find container \"d0d48a7ad8664426390a968f02d1600a2fa102d2c72f9c248494c7e0624b1b34\": container with ID starting with d0d48a7ad8664426390a968f02d1600a2fa102d2c72f9c248494c7e0624b1b34 not found: ID does not exist" Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.934841 4778 scope.go:117] "RemoveContainer" containerID="d4d7c6b9984a293eb0854447833f3f9cb59361882500cfb456fd7b159e0fba3d" Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.935118 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4d7c6b9984a293eb0854447833f3f9cb59361882500cfb456fd7b159e0fba3d"} err="failed to get container status \"d4d7c6b9984a293eb0854447833f3f9cb59361882500cfb456fd7b159e0fba3d\": rpc error: 
code = NotFound desc = could not find container \"d4d7c6b9984a293eb0854447833f3f9cb59361882500cfb456fd7b159e0fba3d\": container with ID starting with d4d7c6b9984a293eb0854447833f3f9cb59361882500cfb456fd7b159e0fba3d not found: ID does not exist" Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.935138 4778 scope.go:117] "RemoveContainer" containerID="9f2e7f69bdd6233212da78ad84aba66a12c2daa431d1cfce5136f9137ccffb2a" Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.941744 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "aa998ea4-f50d-4441-b6ad-b160a19ea4a9" (UID: "aa998ea4-f50d-4441-b6ad-b160a19ea4a9"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.949510 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.967248 4778 scope.go:117] "RemoveContainer" containerID="34daa0a884944841a846bd99aab0d1c3b6985cc49de2820be6feb9d362176513" Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.968354 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 13:32:49 crc kubenswrapper[4778]: E0312 13:32:49.968806 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b533a505-eb7b-43a3-b95d-60cdc7198066" containerName="glance-httpd" Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.968826 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="b533a505-eb7b-43a3-b95d-60cdc7198066" containerName="glance-httpd" Mar 12 13:32:49 crc kubenswrapper[4778]: E0312 13:32:49.969365 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed693bb0-f387-42e2-ae31-9ce01aee1cf9" containerName="init" Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.969378 4778 
state_mem.go:107] "Deleted CPUSet assignment" podUID="ed693bb0-f387-42e2-ae31-9ce01aee1cf9" containerName="init" Mar 12 13:32:49 crc kubenswrapper[4778]: E0312 13:32:49.969403 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b533a505-eb7b-43a3-b95d-60cdc7198066" containerName="glance-log" Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.969410 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="b533a505-eb7b-43a3-b95d-60cdc7198066" containerName="glance-log" Mar 12 13:32:49 crc kubenswrapper[4778]: E0312 13:32:49.969421 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa998ea4-f50d-4441-b6ad-b160a19ea4a9" containerName="glance-log" Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.969428 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa998ea4-f50d-4441-b6ad-b160a19ea4a9" containerName="glance-log" Mar 12 13:32:49 crc kubenswrapper[4778]: E0312 13:32:49.969463 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed693bb0-f387-42e2-ae31-9ce01aee1cf9" containerName="dnsmasq-dns" Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.969471 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed693bb0-f387-42e2-ae31-9ce01aee1cf9" containerName="dnsmasq-dns" Mar 12 13:32:49 crc kubenswrapper[4778]: E0312 13:32:49.969478 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa998ea4-f50d-4441-b6ad-b160a19ea4a9" containerName="glance-httpd" Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.969484 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa998ea4-f50d-4441-b6ad-b160a19ea4a9" containerName="glance-httpd" Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.969799 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa998ea4-f50d-4441-b6ad-b160a19ea4a9" containerName="glance-log" Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.969829 4778 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ed693bb0-f387-42e2-ae31-9ce01aee1cf9" containerName="dnsmasq-dns" Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.969847 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="b533a505-eb7b-43a3-b95d-60cdc7198066" containerName="glance-log" Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.969857 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="b533a505-eb7b-43a3-b95d-60cdc7198066" containerName="glance-httpd" Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.969866 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa998ea4-f50d-4441-b6ad-b160a19ea4a9" containerName="glance-httpd" Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.970808 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.975885 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa998ea4-f50d-4441-b6ad-b160a19ea4a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aa998ea4-f50d-4441-b6ad-b160a19ea4a9" (UID: "aa998ea4-f50d-4441-b6ad-b160a19ea4a9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.979377 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.984351 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 13:32:49 crc kubenswrapper[4778]: I0312 13:32:49.986913 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.002650 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa998ea4-f50d-4441-b6ad-b160a19ea4a9-config-data" (OuterVolumeSpecName: "config-data") pod "aa998ea4-f50d-4441-b6ad-b160a19ea4a9" (UID: "aa998ea4-f50d-4441-b6ad-b160a19ea4a9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.016966 4778 scope.go:117] "RemoveContainer" containerID="9f2e7f69bdd6233212da78ad84aba66a12c2daa431d1cfce5136f9137ccffb2a" Mar 12 13:32:50 crc kubenswrapper[4778]: E0312 13:32:50.017474 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f2e7f69bdd6233212da78ad84aba66a12c2daa431d1cfce5136f9137ccffb2a\": container with ID starting with 9f2e7f69bdd6233212da78ad84aba66a12c2daa431d1cfce5136f9137ccffb2a not found: ID does not exist" containerID="9f2e7f69bdd6233212da78ad84aba66a12c2daa431d1cfce5136f9137ccffb2a" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.017553 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f2e7f69bdd6233212da78ad84aba66a12c2daa431d1cfce5136f9137ccffb2a"} err="failed to get container status \"9f2e7f69bdd6233212da78ad84aba66a12c2daa431d1cfce5136f9137ccffb2a\": rpc error: code = 
NotFound desc = could not find container \"9f2e7f69bdd6233212da78ad84aba66a12c2daa431d1cfce5136f9137ccffb2a\": container with ID starting with 9f2e7f69bdd6233212da78ad84aba66a12c2daa431d1cfce5136f9137ccffb2a not found: ID does not exist" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.017586 4778 scope.go:117] "RemoveContainer" containerID="34daa0a884944841a846bd99aab0d1c3b6985cc49de2820be6feb9d362176513" Mar 12 13:32:50 crc kubenswrapper[4778]: E0312 13:32:50.017900 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34daa0a884944841a846bd99aab0d1c3b6985cc49de2820be6feb9d362176513\": container with ID starting with 34daa0a884944841a846bd99aab0d1c3b6985cc49de2820be6feb9d362176513 not found: ID does not exist" containerID="34daa0a884944841a846bd99aab0d1c3b6985cc49de2820be6feb9d362176513" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.017924 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34daa0a884944841a846bd99aab0d1c3b6985cc49de2820be6feb9d362176513"} err="failed to get container status \"34daa0a884944841a846bd99aab0d1c3b6985cc49de2820be6feb9d362176513\": rpc error: code = NotFound desc = could not find container \"34daa0a884944841a846bd99aab0d1c3b6985cc49de2820be6feb9d362176513\": container with ID starting with 34daa0a884944841a846bd99aab0d1c3b6985cc49de2820be6feb9d362176513 not found: ID does not exist" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.017938 4778 scope.go:117] "RemoveContainer" containerID="9f2e7f69bdd6233212da78ad84aba66a12c2daa431d1cfce5136f9137ccffb2a" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.018278 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f2e7f69bdd6233212da78ad84aba66a12c2daa431d1cfce5136f9137ccffb2a"} err="failed to get container status \"9f2e7f69bdd6233212da78ad84aba66a12c2daa431d1cfce5136f9137ccffb2a\": rpc 
error: code = NotFound desc = could not find container \"9f2e7f69bdd6233212da78ad84aba66a12c2daa431d1cfce5136f9137ccffb2a\": container with ID starting with 9f2e7f69bdd6233212da78ad84aba66a12c2daa431d1cfce5136f9137ccffb2a not found: ID does not exist" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.018296 4778 scope.go:117] "RemoveContainer" containerID="34daa0a884944841a846bd99aab0d1c3b6985cc49de2820be6feb9d362176513" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.018506 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34daa0a884944841a846bd99aab0d1c3b6985cc49de2820be6feb9d362176513"} err="failed to get container status \"34daa0a884944841a846bd99aab0d1c3b6985cc49de2820be6feb9d362176513\": rpc error: code = NotFound desc = could not find container \"34daa0a884944841a846bd99aab0d1c3b6985cc49de2820be6feb9d362176513\": container with ID starting with 34daa0a884944841a846bd99aab0d1c3b6985cc49de2820be6feb9d362176513 not found: ID does not exist" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.029064 4778 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa998ea4-f50d-4441-b6ad-b160a19ea4a9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.029090 4778 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa998ea4-f50d-4441-b6ad-b160a19ea4a9-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.029099 4778 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aa998ea4-f50d-4441-b6ad-b160a19ea4a9-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.029106 4778 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/aa998ea4-f50d-4441-b6ad-b160a19ea4a9-logs\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.029115 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbwgx\" (UniqueName: \"kubernetes.io/projected/aa998ea4-f50d-4441-b6ad-b160a19ea4a9-kube-api-access-zbwgx\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.029123 4778 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa998ea4-f50d-4441-b6ad-b160a19ea4a9-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.029144 4778 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.052209 4778 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.130807 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"glance-default-external-api-0\" (UID: \"ac92f5c5-e457-4915-a919-0dbe3df23ce8\") " pod="openstack/glance-default-external-api-0" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.130887 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ac92f5c5-e457-4915-a919-0dbe3df23ce8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ac92f5c5-e457-4915-a919-0dbe3df23ce8\") " pod="openstack/glance-default-external-api-0" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.130917 4778 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac92f5c5-e457-4915-a919-0dbe3df23ce8-scripts\") pod \"glance-default-external-api-0\" (UID: \"ac92f5c5-e457-4915-a919-0dbe3df23ce8\") " pod="openstack/glance-default-external-api-0" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.130937 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pklz\" (UniqueName: \"kubernetes.io/projected/ac92f5c5-e457-4915-a919-0dbe3df23ce8-kube-api-access-5pklz\") pod \"glance-default-external-api-0\" (UID: \"ac92f5c5-e457-4915-a919-0dbe3df23ce8\") " pod="openstack/glance-default-external-api-0" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.130973 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac92f5c5-e457-4915-a919-0dbe3df23ce8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ac92f5c5-e457-4915-a919-0dbe3df23ce8\") " pod="openstack/glance-default-external-api-0" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.131028 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac92f5c5-e457-4915-a919-0dbe3df23ce8-config-data\") pod \"glance-default-external-api-0\" (UID: \"ac92f5c5-e457-4915-a919-0dbe3df23ce8\") " pod="openstack/glance-default-external-api-0" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.131107 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac92f5c5-e457-4915-a919-0dbe3df23ce8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ac92f5c5-e457-4915-a919-0dbe3df23ce8\") " pod="openstack/glance-default-external-api-0" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 
13:32:50.131204 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac92f5c5-e457-4915-a919-0dbe3df23ce8-logs\") pod \"glance-default-external-api-0\" (UID: \"ac92f5c5-e457-4915-a919-0dbe3df23ce8\") " pod="openstack/glance-default-external-api-0" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.131321 4778 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.229493 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.234876 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"glance-default-external-api-0\" (UID: \"ac92f5c5-e457-4915-a919-0dbe3df23ce8\") " pod="openstack/glance-default-external-api-0" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.234939 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ac92f5c5-e457-4915-a919-0dbe3df23ce8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ac92f5c5-e457-4915-a919-0dbe3df23ce8\") " pod="openstack/glance-default-external-api-0" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.234964 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac92f5c5-e457-4915-a919-0dbe3df23ce8-scripts\") pod \"glance-default-external-api-0\" (UID: \"ac92f5c5-e457-4915-a919-0dbe3df23ce8\") " pod="openstack/glance-default-external-api-0" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.234985 4778 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-5pklz\" (UniqueName: \"kubernetes.io/projected/ac92f5c5-e457-4915-a919-0dbe3df23ce8-kube-api-access-5pklz\") pod \"glance-default-external-api-0\" (UID: \"ac92f5c5-e457-4915-a919-0dbe3df23ce8\") " pod="openstack/glance-default-external-api-0" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.235008 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac92f5c5-e457-4915-a919-0dbe3df23ce8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ac92f5c5-e457-4915-a919-0dbe3df23ce8\") " pod="openstack/glance-default-external-api-0" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.235031 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac92f5c5-e457-4915-a919-0dbe3df23ce8-config-data\") pod \"glance-default-external-api-0\" (UID: \"ac92f5c5-e457-4915-a919-0dbe3df23ce8\") " pod="openstack/glance-default-external-api-0" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.235052 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac92f5c5-e457-4915-a919-0dbe3df23ce8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ac92f5c5-e457-4915-a919-0dbe3df23ce8\") " pod="openstack/glance-default-external-api-0" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.235099 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac92f5c5-e457-4915-a919-0dbe3df23ce8-logs\") pod \"glance-default-external-api-0\" (UID: \"ac92f5c5-e457-4915-a919-0dbe3df23ce8\") " pod="openstack/glance-default-external-api-0" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.235655 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/ac92f5c5-e457-4915-a919-0dbe3df23ce8-logs\") pod \"glance-default-external-api-0\" (UID: \"ac92f5c5-e457-4915-a919-0dbe3df23ce8\") " pod="openstack/glance-default-external-api-0" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.236424 4778 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"glance-default-external-api-0\" (UID: \"ac92f5c5-e457-4915-a919-0dbe3df23ce8\") device mount path \"/mnt/openstack/pv17\"" pod="openstack/glance-default-external-api-0" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.237163 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ac92f5c5-e457-4915-a919-0dbe3df23ce8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ac92f5c5-e457-4915-a919-0dbe3df23ce8\") " pod="openstack/glance-default-external-api-0" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.240986 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac92f5c5-e457-4915-a919-0dbe3df23ce8-scripts\") pod \"glance-default-external-api-0\" (UID: \"ac92f5c5-e457-4915-a919-0dbe3df23ce8\") " pod="openstack/glance-default-external-api-0" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.249282 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac92f5c5-e457-4915-a919-0dbe3df23ce8-config-data\") pod \"glance-default-external-api-0\" (UID: \"ac92f5c5-e457-4915-a919-0dbe3df23ce8\") " pod="openstack/glance-default-external-api-0" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.251923 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac92f5c5-e457-4915-a919-0dbe3df23ce8-combined-ca-bundle\") pod 
\"glance-default-external-api-0\" (UID: \"ac92f5c5-e457-4915-a919-0dbe3df23ce8\") " pod="openstack/glance-default-external-api-0" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.252854 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac92f5c5-e457-4915-a919-0dbe3df23ce8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ac92f5c5-e457-4915-a919-0dbe3df23ce8\") " pod="openstack/glance-default-external-api-0" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.256690 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pklz\" (UniqueName: \"kubernetes.io/projected/ac92f5c5-e457-4915-a919-0dbe3df23ce8-kube-api-access-5pklz\") pod \"glance-default-external-api-0\" (UID: \"ac92f5c5-e457-4915-a919-0dbe3df23ce8\") " pod="openstack/glance-default-external-api-0" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.290556 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b533a505-eb7b-43a3-b95d-60cdc7198066" path="/var/lib/kubelet/pods/b533a505-eb7b-43a3-b95d-60cdc7198066/volumes" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.304319 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.307496 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"glance-default-external-api-0\" (UID: \"ac92f5c5-e457-4915-a919-0dbe3df23ce8\") " pod="openstack/glance-default-external-api-0" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.311661 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.317533 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.320180 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.320409 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.326432 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.440906 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfwzz\" (UniqueName: \"kubernetes.io/projected/c2b13038-d271-48f5-bd28-a38e2b9dff02-kube-api-access-xfwzz\") pod \"glance-default-internal-api-0\" (UID: \"c2b13038-d271-48f5-bd28-a38e2b9dff02\") " pod="openstack/glance-default-internal-api-0" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.441066 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2b13038-d271-48f5-bd28-a38e2b9dff02-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c2b13038-d271-48f5-bd28-a38e2b9dff02\") " pod="openstack/glance-default-internal-api-0" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.441152 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2b13038-d271-48f5-bd28-a38e2b9dff02-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c2b13038-d271-48f5-bd28-a38e2b9dff02\") " pod="openstack/glance-default-internal-api-0" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.441224 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c2b13038-d271-48f5-bd28-a38e2b9dff02-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c2b13038-d271-48f5-bd28-a38e2b9dff02\") " pod="openstack/glance-default-internal-api-0" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.441457 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"c2b13038-d271-48f5-bd28-a38e2b9dff02\") " pod="openstack/glance-default-internal-api-0" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.441498 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2b13038-d271-48f5-bd28-a38e2b9dff02-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c2b13038-d271-48f5-bd28-a38e2b9dff02\") " pod="openstack/glance-default-internal-api-0" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.441543 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2b13038-d271-48f5-bd28-a38e2b9dff02-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c2b13038-d271-48f5-bd28-a38e2b9dff02\") " pod="openstack/glance-default-internal-api-0" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.441596 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2b13038-d271-48f5-bd28-a38e2b9dff02-logs\") pod \"glance-default-internal-api-0\" (UID: \"c2b13038-d271-48f5-bd28-a38e2b9dff02\") " pod="openstack/glance-default-internal-api-0" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.543604 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfwzz\" 
(UniqueName: \"kubernetes.io/projected/c2b13038-d271-48f5-bd28-a38e2b9dff02-kube-api-access-xfwzz\") pod \"glance-default-internal-api-0\" (UID: \"c2b13038-d271-48f5-bd28-a38e2b9dff02\") " pod="openstack/glance-default-internal-api-0" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.543676 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2b13038-d271-48f5-bd28-a38e2b9dff02-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c2b13038-d271-48f5-bd28-a38e2b9dff02\") " pod="openstack/glance-default-internal-api-0" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.543695 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2b13038-d271-48f5-bd28-a38e2b9dff02-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c2b13038-d271-48f5-bd28-a38e2b9dff02\") " pod="openstack/glance-default-internal-api-0" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.543715 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c2b13038-d271-48f5-bd28-a38e2b9dff02-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c2b13038-d271-48f5-bd28-a38e2b9dff02\") " pod="openstack/glance-default-internal-api-0" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.543759 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"c2b13038-d271-48f5-bd28-a38e2b9dff02\") " pod="openstack/glance-default-internal-api-0" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.543782 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/c2b13038-d271-48f5-bd28-a38e2b9dff02-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c2b13038-d271-48f5-bd28-a38e2b9dff02\") " pod="openstack/glance-default-internal-api-0" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.543805 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2b13038-d271-48f5-bd28-a38e2b9dff02-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c2b13038-d271-48f5-bd28-a38e2b9dff02\") " pod="openstack/glance-default-internal-api-0" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.543828 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2b13038-d271-48f5-bd28-a38e2b9dff02-logs\") pod \"glance-default-internal-api-0\" (UID: \"c2b13038-d271-48f5-bd28-a38e2b9dff02\") " pod="openstack/glance-default-internal-api-0" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.544329 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2b13038-d271-48f5-bd28-a38e2b9dff02-logs\") pod \"glance-default-internal-api-0\" (UID: \"c2b13038-d271-48f5-bd28-a38e2b9dff02\") " pod="openstack/glance-default-internal-api-0" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.545579 4778 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"c2b13038-d271-48f5-bd28-a38e2b9dff02\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.552860 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2b13038-d271-48f5-bd28-a38e2b9dff02-internal-tls-certs\") pod 
\"glance-default-internal-api-0\" (UID: \"c2b13038-d271-48f5-bd28-a38e2b9dff02\") " pod="openstack/glance-default-internal-api-0" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.554561 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2b13038-d271-48f5-bd28-a38e2b9dff02-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c2b13038-d271-48f5-bd28-a38e2b9dff02\") " pod="openstack/glance-default-internal-api-0" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.554943 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c2b13038-d271-48f5-bd28-a38e2b9dff02-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c2b13038-d271-48f5-bd28-a38e2b9dff02\") " pod="openstack/glance-default-internal-api-0" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.557082 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2b13038-d271-48f5-bd28-a38e2b9dff02-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c2b13038-d271-48f5-bd28-a38e2b9dff02\") " pod="openstack/glance-default-internal-api-0" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.557801 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2b13038-d271-48f5-bd28-a38e2b9dff02-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c2b13038-d271-48f5-bd28-a38e2b9dff02\") " pod="openstack/glance-default-internal-api-0" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.561628 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfwzz\" (UniqueName: \"kubernetes.io/projected/c2b13038-d271-48f5-bd28-a38e2b9dff02-kube-api-access-xfwzz\") pod \"glance-default-internal-api-0\" (UID: \"c2b13038-d271-48f5-bd28-a38e2b9dff02\") " 
pod="openstack/glance-default-internal-api-0" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.575226 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"c2b13038-d271-48f5-bd28-a38e2b9dff02\") " pod="openstack/glance-default-internal-api-0" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.601618 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.640232 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.899707 4778 generic.go:334] "Generic (PLEG): container finished" podID="1af573ef-51c3-4bfc-8de6-eb1be8b75c76" containerID="710035f2fd1c6ce07427dd61579057ea7d418eb1c9532e9c2ad2d414dc76cbb9" exitCode=0 Mar 12 13:32:50 crc kubenswrapper[4778]: I0312 13:32:50.900118 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-56sfj" event={"ID":"1af573ef-51c3-4bfc-8de6-eb1be8b75c76","Type":"ContainerDied","Data":"710035f2fd1c6ce07427dd61579057ea7d418eb1c9532e9c2ad2d414dc76cbb9"} Mar 12 13:32:51 crc kubenswrapper[4778]: I0312 13:32:51.130038 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 13:32:51 crc kubenswrapper[4778]: W0312 13:32:51.153247 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac92f5c5_e457_4915_a919_0dbe3df23ce8.slice/crio-de7bb235534c3c0c1a6530e35fd6d03d222f02129ca88b49fda3a8c136ab05b7 WatchSource:0}: Error finding container de7bb235534c3c0c1a6530e35fd6d03d222f02129ca88b49fda3a8c136ab05b7: Status 404 returned error can't find the container with id 
de7bb235534c3c0c1a6530e35fd6d03d222f02129ca88b49fda3a8c136ab05b7 Mar 12 13:32:51 crc kubenswrapper[4778]: I0312 13:32:51.239983 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-zr86r" Mar 12 13:32:51 crc kubenswrapper[4778]: I0312 13:32:51.277933 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 12 13:32:51 crc kubenswrapper[4778]: I0312 13:32:51.348745 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-6cvgs" Mar 12 13:32:51 crc kubenswrapper[4778]: I0312 13:32:51.374264 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76f8f940-670d-47a0-a90a-afd3aa37a726-combined-ca-bundle\") pod \"76f8f940-670d-47a0-a90a-afd3aa37a726\" (UID: \"76f8f940-670d-47a0-a90a-afd3aa37a726\") " Mar 12 13:32:51 crc kubenswrapper[4778]: I0312 13:32:51.374348 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/76f8f940-670d-47a0-a90a-afd3aa37a726-config\") pod \"76f8f940-670d-47a0-a90a-afd3aa37a726\" (UID: \"76f8f940-670d-47a0-a90a-afd3aa37a726\") " Mar 12 13:32:51 crc kubenswrapper[4778]: I0312 13:32:51.374445 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faeb9cb3-46ae-428f-8c0e-538a2e552072-combined-ca-bundle\") pod \"faeb9cb3-46ae-428f-8c0e-538a2e552072\" (UID: \"faeb9cb3-46ae-428f-8c0e-538a2e552072\") " Mar 12 13:32:51 crc kubenswrapper[4778]: I0312 13:32:51.374500 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/faeb9cb3-46ae-428f-8c0e-538a2e552072-logs\") pod \"faeb9cb3-46ae-428f-8c0e-538a2e552072\" (UID: \"faeb9cb3-46ae-428f-8c0e-538a2e552072\") " Mar 12 
13:32:51 crc kubenswrapper[4778]: I0312 13:32:51.374579 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/faeb9cb3-46ae-428f-8c0e-538a2e552072-scripts\") pod \"faeb9cb3-46ae-428f-8c0e-538a2e552072\" (UID: \"faeb9cb3-46ae-428f-8c0e-538a2e552072\") " Mar 12 13:32:51 crc kubenswrapper[4778]: I0312 13:32:51.374627 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdgk6\" (UniqueName: \"kubernetes.io/projected/76f8f940-670d-47a0-a90a-afd3aa37a726-kube-api-access-qdgk6\") pod \"76f8f940-670d-47a0-a90a-afd3aa37a726\" (UID: \"76f8f940-670d-47a0-a90a-afd3aa37a726\") " Mar 12 13:32:51 crc kubenswrapper[4778]: I0312 13:32:51.374654 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txchs\" (UniqueName: \"kubernetes.io/projected/faeb9cb3-46ae-428f-8c0e-538a2e552072-kube-api-access-txchs\") pod \"faeb9cb3-46ae-428f-8c0e-538a2e552072\" (UID: \"faeb9cb3-46ae-428f-8c0e-538a2e552072\") " Mar 12 13:32:51 crc kubenswrapper[4778]: I0312 13:32:51.374731 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faeb9cb3-46ae-428f-8c0e-538a2e552072-config-data\") pod \"faeb9cb3-46ae-428f-8c0e-538a2e552072\" (UID: \"faeb9cb3-46ae-428f-8c0e-538a2e552072\") " Mar 12 13:32:51 crc kubenswrapper[4778]: I0312 13:32:51.377242 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/faeb9cb3-46ae-428f-8c0e-538a2e552072-logs" (OuterVolumeSpecName: "logs") pod "faeb9cb3-46ae-428f-8c0e-538a2e552072" (UID: "faeb9cb3-46ae-428f-8c0e-538a2e552072"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:32:51 crc kubenswrapper[4778]: I0312 13:32:51.381465 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76f8f940-670d-47a0-a90a-afd3aa37a726-kube-api-access-qdgk6" (OuterVolumeSpecName: "kube-api-access-qdgk6") pod "76f8f940-670d-47a0-a90a-afd3aa37a726" (UID: "76f8f940-670d-47a0-a90a-afd3aa37a726"). InnerVolumeSpecName "kube-api-access-qdgk6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:32:51 crc kubenswrapper[4778]: I0312 13:32:51.387483 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faeb9cb3-46ae-428f-8c0e-538a2e552072-kube-api-access-txchs" (OuterVolumeSpecName: "kube-api-access-txchs") pod "faeb9cb3-46ae-428f-8c0e-538a2e552072" (UID: "faeb9cb3-46ae-428f-8c0e-538a2e552072"). InnerVolumeSpecName "kube-api-access-txchs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:32:51 crc kubenswrapper[4778]: I0312 13:32:51.389541 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faeb9cb3-46ae-428f-8c0e-538a2e552072-scripts" (OuterVolumeSpecName: "scripts") pod "faeb9cb3-46ae-428f-8c0e-538a2e552072" (UID: "faeb9cb3-46ae-428f-8c0e-538a2e552072"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:32:51 crc kubenswrapper[4778]: I0312 13:32:51.404277 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faeb9cb3-46ae-428f-8c0e-538a2e552072-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "faeb9cb3-46ae-428f-8c0e-538a2e552072" (UID: "faeb9cb3-46ae-428f-8c0e-538a2e552072"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:32:51 crc kubenswrapper[4778]: I0312 13:32:51.405682 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76f8f940-670d-47a0-a90a-afd3aa37a726-config" (OuterVolumeSpecName: "config") pod "76f8f940-670d-47a0-a90a-afd3aa37a726" (UID: "76f8f940-670d-47a0-a90a-afd3aa37a726"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:32:51 crc kubenswrapper[4778]: I0312 13:32:51.412929 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76f8f940-670d-47a0-a90a-afd3aa37a726-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "76f8f940-670d-47a0-a90a-afd3aa37a726" (UID: "76f8f940-670d-47a0-a90a-afd3aa37a726"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:32:51 crc kubenswrapper[4778]: I0312 13:32:51.416781 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faeb9cb3-46ae-428f-8c0e-538a2e552072-config-data" (OuterVolumeSpecName: "config-data") pod "faeb9cb3-46ae-428f-8c0e-538a2e552072" (UID: "faeb9cb3-46ae-428f-8c0e-538a2e552072"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:32:51 crc kubenswrapper[4778]: I0312 13:32:51.476807 4778 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76f8f940-670d-47a0-a90a-afd3aa37a726-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:51 crc kubenswrapper[4778]: I0312 13:32:51.476836 4778 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/76f8f940-670d-47a0-a90a-afd3aa37a726-config\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:51 crc kubenswrapper[4778]: I0312 13:32:51.476846 4778 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faeb9cb3-46ae-428f-8c0e-538a2e552072-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:51 crc kubenswrapper[4778]: I0312 13:32:51.476855 4778 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/faeb9cb3-46ae-428f-8c0e-538a2e552072-logs\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:51 crc kubenswrapper[4778]: I0312 13:32:51.476864 4778 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/faeb9cb3-46ae-428f-8c0e-538a2e552072-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:51 crc kubenswrapper[4778]: I0312 13:32:51.476872 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdgk6\" (UniqueName: \"kubernetes.io/projected/76f8f940-670d-47a0-a90a-afd3aa37a726-kube-api-access-qdgk6\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:51 crc kubenswrapper[4778]: I0312 13:32:51.476884 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txchs\" (UniqueName: \"kubernetes.io/projected/faeb9cb3-46ae-428f-8c0e-538a2e552072-kube-api-access-txchs\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:51 crc kubenswrapper[4778]: I0312 13:32:51.476893 4778 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faeb9cb3-46ae-428f-8c0e-538a2e552072-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:51 crc kubenswrapper[4778]: I0312 13:32:51.950199 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6cvgs" event={"ID":"76f8f940-670d-47a0-a90a-afd3aa37a726","Type":"ContainerDied","Data":"856cfa1709bfc70905fa0560b8bcd9ee96d30c9ac3ff33d52f1608bcf34cd2fc"} Mar 12 13:32:51 crc kubenswrapper[4778]: I0312 13:32:51.950603 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="856cfa1709bfc70905fa0560b8bcd9ee96d30c9ac3ff33d52f1608bcf34cd2fc" Mar 12 13:32:51 crc kubenswrapper[4778]: I0312 13:32:51.950237 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-6cvgs" Mar 12 13:32:51 crc kubenswrapper[4778]: I0312 13:32:51.952295 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c2b13038-d271-48f5-bd28-a38e2b9dff02","Type":"ContainerStarted","Data":"15eaae81b5ec94e32bcb75db667617fbe51c32c5f0cac153a8a191ff89576b97"} Mar 12 13:32:51 crc kubenswrapper[4778]: I0312 13:32:51.958948 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ac92f5c5-e457-4915-a919-0dbe3df23ce8","Type":"ContainerStarted","Data":"0aad8b26d122f41726307150aa529e2cdb5f197081e19ece7255f5b8aa07d260"} Mar 12 13:32:51 crc kubenswrapper[4778]: I0312 13:32:51.958987 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ac92f5c5-e457-4915-a919-0dbe3df23ce8","Type":"ContainerStarted","Data":"de7bb235534c3c0c1a6530e35fd6d03d222f02129ca88b49fda3a8c136ab05b7"} Mar 12 13:32:51 crc kubenswrapper[4778]: I0312 13:32:51.960836 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-db-sync-zr86r" event={"ID":"faeb9cb3-46ae-428f-8c0e-538a2e552072","Type":"ContainerDied","Data":"d453594d6992bec0b731b36d1124f474724ec877404950823baad33e6f3bbe34"} Mar 12 13:32:51 crc kubenswrapper[4778]: I0312 13:32:51.960868 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d453594d6992bec0b731b36d1124f474724ec877404950823baad33e6f3bbe34" Mar 12 13:32:51 crc kubenswrapper[4778]: I0312 13:32:51.960973 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-zr86r" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.075660 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-79ccdbbbbd-gl27l"] Mar 12 13:32:52 crc kubenswrapper[4778]: E0312 13:32:52.077141 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76f8f940-670d-47a0-a90a-afd3aa37a726" containerName="neutron-db-sync" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.077157 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="76f8f940-670d-47a0-a90a-afd3aa37a726" containerName="neutron-db-sync" Mar 12 13:32:52 crc kubenswrapper[4778]: E0312 13:32:52.077199 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faeb9cb3-46ae-428f-8c0e-538a2e552072" containerName="placement-db-sync" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.077206 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="faeb9cb3-46ae-428f-8c0e-538a2e552072" containerName="placement-db-sync" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.077354 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="76f8f940-670d-47a0-a90a-afd3aa37a726" containerName="neutron-db-sync" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.077365 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="faeb9cb3-46ae-428f-8c0e-538a2e552072" containerName="placement-db-sync" Mar 12 13:32:52 crc kubenswrapper[4778]: 
I0312 13:32:52.078143 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-79ccdbbbbd-gl27l" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.082238 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.082472 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.083706 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.083901 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-72bvj" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.084048 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.155223 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-v2vtk"] Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.155543 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-v2vtk" podUID="39bd75fd-958e-4b3b-abd5-860adf376fd7" containerName="dnsmasq-dns" containerID="cri-o://cc6fc61a82e88c3140b3629f45196f98ee08d5f2fdb0df9b40fe66806a0ccbfd" gracePeriod=10 Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.197015 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/68092e68-04e5-4530-8d94-859789faeb94-public-tls-certs\") pod \"placement-79ccdbbbbd-gl27l\" (UID: \"68092e68-04e5-4530-8d94-859789faeb94\") " pod="openstack/placement-79ccdbbbbd-gl27l" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.197118 4778 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68092e68-04e5-4530-8d94-859789faeb94-logs\") pod \"placement-79ccdbbbbd-gl27l\" (UID: \"68092e68-04e5-4530-8d94-859789faeb94\") " pod="openstack/placement-79ccdbbbbd-gl27l" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.197142 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68092e68-04e5-4530-8d94-859789faeb94-config-data\") pod \"placement-79ccdbbbbd-gl27l\" (UID: \"68092e68-04e5-4530-8d94-859789faeb94\") " pod="openstack/placement-79ccdbbbbd-gl27l" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.197162 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnrxw\" (UniqueName: \"kubernetes.io/projected/68092e68-04e5-4530-8d94-859789faeb94-kube-api-access-jnrxw\") pod \"placement-79ccdbbbbd-gl27l\" (UID: \"68092e68-04e5-4530-8d94-859789faeb94\") " pod="openstack/placement-79ccdbbbbd-gl27l" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.197203 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/68092e68-04e5-4530-8d94-859789faeb94-internal-tls-certs\") pod \"placement-79ccdbbbbd-gl27l\" (UID: \"68092e68-04e5-4530-8d94-859789faeb94\") " pod="openstack/placement-79ccdbbbbd-gl27l" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.197238 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68092e68-04e5-4530-8d94-859789faeb94-scripts\") pod \"placement-79ccdbbbbd-gl27l\" (UID: \"68092e68-04e5-4530-8d94-859789faeb94\") " pod="openstack/placement-79ccdbbbbd-gl27l" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.197515 4778 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68092e68-04e5-4530-8d94-859789faeb94-combined-ca-bundle\") pod \"placement-79ccdbbbbd-gl27l\" (UID: \"68092e68-04e5-4530-8d94-859789faeb94\") " pod="openstack/placement-79ccdbbbbd-gl27l" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.303009 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/68092e68-04e5-4530-8d94-859789faeb94-public-tls-certs\") pod \"placement-79ccdbbbbd-gl27l\" (UID: \"68092e68-04e5-4530-8d94-859789faeb94\") " pod="openstack/placement-79ccdbbbbd-gl27l" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.303170 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68092e68-04e5-4530-8d94-859789faeb94-logs\") pod \"placement-79ccdbbbbd-gl27l\" (UID: \"68092e68-04e5-4530-8d94-859789faeb94\") " pod="openstack/placement-79ccdbbbbd-gl27l" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.303213 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68092e68-04e5-4530-8d94-859789faeb94-config-data\") pod \"placement-79ccdbbbbd-gl27l\" (UID: \"68092e68-04e5-4530-8d94-859789faeb94\") " pod="openstack/placement-79ccdbbbbd-gl27l" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.303241 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnrxw\" (UniqueName: \"kubernetes.io/projected/68092e68-04e5-4530-8d94-859789faeb94-kube-api-access-jnrxw\") pod \"placement-79ccdbbbbd-gl27l\" (UID: \"68092e68-04e5-4530-8d94-859789faeb94\") " pod="openstack/placement-79ccdbbbbd-gl27l" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.303277 4778 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/68092e68-04e5-4530-8d94-859789faeb94-internal-tls-certs\") pod \"placement-79ccdbbbbd-gl27l\" (UID: \"68092e68-04e5-4530-8d94-859789faeb94\") " pod="openstack/placement-79ccdbbbbd-gl27l" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.303308 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68092e68-04e5-4530-8d94-859789faeb94-scripts\") pod \"placement-79ccdbbbbd-gl27l\" (UID: \"68092e68-04e5-4530-8d94-859789faeb94\") " pod="openstack/placement-79ccdbbbbd-gl27l" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.303356 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68092e68-04e5-4530-8d94-859789faeb94-combined-ca-bundle\") pod \"placement-79ccdbbbbd-gl27l\" (UID: \"68092e68-04e5-4530-8d94-859789faeb94\") " pod="openstack/placement-79ccdbbbbd-gl27l" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.304785 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68092e68-04e5-4530-8d94-859789faeb94-logs\") pod \"placement-79ccdbbbbd-gl27l\" (UID: \"68092e68-04e5-4530-8d94-859789faeb94\") " pod="openstack/placement-79ccdbbbbd-gl27l" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.314079 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa998ea4-f50d-4441-b6ad-b160a19ea4a9" path="/var/lib/kubelet/pods/aa998ea4-f50d-4441-b6ad-b160a19ea4a9/volumes" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.314902 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-79ccdbbbbd-gl27l"] Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.314927 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-r6j6b"] Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 
13:32:52.316343 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-r6j6b" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.328677 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68092e68-04e5-4530-8d94-859789faeb94-combined-ca-bundle\") pod \"placement-79ccdbbbbd-gl27l\" (UID: \"68092e68-04e5-4530-8d94-859789faeb94\") " pod="openstack/placement-79ccdbbbbd-gl27l" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.347889 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/68092e68-04e5-4530-8d94-859789faeb94-public-tls-certs\") pod \"placement-79ccdbbbbd-gl27l\" (UID: \"68092e68-04e5-4530-8d94-859789faeb94\") " pod="openstack/placement-79ccdbbbbd-gl27l" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.348070 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/68092e68-04e5-4530-8d94-859789faeb94-internal-tls-certs\") pod \"placement-79ccdbbbbd-gl27l\" (UID: \"68092e68-04e5-4530-8d94-859789faeb94\") " pod="openstack/placement-79ccdbbbbd-gl27l" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.348139 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68092e68-04e5-4530-8d94-859789faeb94-scripts\") pod \"placement-79ccdbbbbd-gl27l\" (UID: \"68092e68-04e5-4530-8d94-859789faeb94\") " pod="openstack/placement-79ccdbbbbd-gl27l" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.349926 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68092e68-04e5-4530-8d94-859789faeb94-config-data\") pod \"placement-79ccdbbbbd-gl27l\" (UID: \"68092e68-04e5-4530-8d94-859789faeb94\") " 
pod="openstack/placement-79ccdbbbbd-gl27l" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.356353 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-r6j6b"] Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.360769 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnrxw\" (UniqueName: \"kubernetes.io/projected/68092e68-04e5-4530-8d94-859789faeb94-kube-api-access-jnrxw\") pod \"placement-79ccdbbbbd-gl27l\" (UID: \"68092e68-04e5-4530-8d94-859789faeb94\") " pod="openstack/placement-79ccdbbbbd-gl27l" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.430716 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-678c76989b-8x56d"] Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.432163 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-678c76989b-8x56d" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.435166 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-79ccdbbbbd-gl27l" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.454624 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.454815 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.454931 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-d7pv5" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.455034 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.515335 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e34be903-da25-4cdb-9298-2d53fdce0276-httpd-config\") pod \"neutron-678c76989b-8x56d\" (UID: \"e34be903-da25-4cdb-9298-2d53fdce0276\") " pod="openstack/neutron-678c76989b-8x56d" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.515376 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f26a6d05-e0ac-4f17-bcd9-fc011996b052-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-r6j6b\" (UID: \"f26a6d05-e0ac-4f17-bcd9-fc011996b052\") " pod="openstack/dnsmasq-dns-55f844cf75-r6j6b" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.515430 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e34be903-da25-4cdb-9298-2d53fdce0276-combined-ca-bundle\") pod \"neutron-678c76989b-8x56d\" (UID: \"e34be903-da25-4cdb-9298-2d53fdce0276\") " pod="openstack/neutron-678c76989b-8x56d" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 
13:32:52.515452 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgmt8\" (UniqueName: \"kubernetes.io/projected/e34be903-da25-4cdb-9298-2d53fdce0276-kube-api-access-cgmt8\") pod \"neutron-678c76989b-8x56d\" (UID: \"e34be903-da25-4cdb-9298-2d53fdce0276\") " pod="openstack/neutron-678c76989b-8x56d" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.515472 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f26a6d05-e0ac-4f17-bcd9-fc011996b052-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-r6j6b\" (UID: \"f26a6d05-e0ac-4f17-bcd9-fc011996b052\") " pod="openstack/dnsmasq-dns-55f844cf75-r6j6b" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.515520 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e34be903-da25-4cdb-9298-2d53fdce0276-config\") pod \"neutron-678c76989b-8x56d\" (UID: \"e34be903-da25-4cdb-9298-2d53fdce0276\") " pod="openstack/neutron-678c76989b-8x56d" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.515541 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f26a6d05-e0ac-4f17-bcd9-fc011996b052-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-r6j6b\" (UID: \"f26a6d05-e0ac-4f17-bcd9-fc011996b052\") " pod="openstack/dnsmasq-dns-55f844cf75-r6j6b" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.515607 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czvqb\" (UniqueName: \"kubernetes.io/projected/f26a6d05-e0ac-4f17-bcd9-fc011996b052-kube-api-access-czvqb\") pod \"dnsmasq-dns-55f844cf75-r6j6b\" (UID: \"f26a6d05-e0ac-4f17-bcd9-fc011996b052\") " pod="openstack/dnsmasq-dns-55f844cf75-r6j6b" Mar 12 
13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.515631 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e34be903-da25-4cdb-9298-2d53fdce0276-ovndb-tls-certs\") pod \"neutron-678c76989b-8x56d\" (UID: \"e34be903-da25-4cdb-9298-2d53fdce0276\") " pod="openstack/neutron-678c76989b-8x56d" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.515650 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f26a6d05-e0ac-4f17-bcd9-fc011996b052-dns-svc\") pod \"dnsmasq-dns-55f844cf75-r6j6b\" (UID: \"f26a6d05-e0ac-4f17-bcd9-fc011996b052\") " pod="openstack/dnsmasq-dns-55f844cf75-r6j6b" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.515697 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f26a6d05-e0ac-4f17-bcd9-fc011996b052-config\") pod \"dnsmasq-dns-55f844cf75-r6j6b\" (UID: \"f26a6d05-e0ac-4f17-bcd9-fc011996b052\") " pod="openstack/dnsmasq-dns-55f844cf75-r6j6b" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.539292 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-678c76989b-8x56d"] Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.616801 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f26a6d05-e0ac-4f17-bcd9-fc011996b052-config\") pod \"dnsmasq-dns-55f844cf75-r6j6b\" (UID: \"f26a6d05-e0ac-4f17-bcd9-fc011996b052\") " pod="openstack/dnsmasq-dns-55f844cf75-r6j6b" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.616881 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e34be903-da25-4cdb-9298-2d53fdce0276-httpd-config\") pod \"neutron-678c76989b-8x56d\" (UID: 
\"e34be903-da25-4cdb-9298-2d53fdce0276\") " pod="openstack/neutron-678c76989b-8x56d" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.616904 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f26a6d05-e0ac-4f17-bcd9-fc011996b052-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-r6j6b\" (UID: \"f26a6d05-e0ac-4f17-bcd9-fc011996b052\") " pod="openstack/dnsmasq-dns-55f844cf75-r6j6b" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.616945 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e34be903-da25-4cdb-9298-2d53fdce0276-combined-ca-bundle\") pod \"neutron-678c76989b-8x56d\" (UID: \"e34be903-da25-4cdb-9298-2d53fdce0276\") " pod="openstack/neutron-678c76989b-8x56d" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.616965 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgmt8\" (UniqueName: \"kubernetes.io/projected/e34be903-da25-4cdb-9298-2d53fdce0276-kube-api-access-cgmt8\") pod \"neutron-678c76989b-8x56d\" (UID: \"e34be903-da25-4cdb-9298-2d53fdce0276\") " pod="openstack/neutron-678c76989b-8x56d" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.616982 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f26a6d05-e0ac-4f17-bcd9-fc011996b052-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-r6j6b\" (UID: \"f26a6d05-e0ac-4f17-bcd9-fc011996b052\") " pod="openstack/dnsmasq-dns-55f844cf75-r6j6b" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.617160 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e34be903-da25-4cdb-9298-2d53fdce0276-config\") pod \"neutron-678c76989b-8x56d\" (UID: \"e34be903-da25-4cdb-9298-2d53fdce0276\") " 
pod="openstack/neutron-678c76989b-8x56d" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.617220 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f26a6d05-e0ac-4f17-bcd9-fc011996b052-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-r6j6b\" (UID: \"f26a6d05-e0ac-4f17-bcd9-fc011996b052\") " pod="openstack/dnsmasq-dns-55f844cf75-r6j6b" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.617368 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czvqb\" (UniqueName: \"kubernetes.io/projected/f26a6d05-e0ac-4f17-bcd9-fc011996b052-kube-api-access-czvqb\") pod \"dnsmasq-dns-55f844cf75-r6j6b\" (UID: \"f26a6d05-e0ac-4f17-bcd9-fc011996b052\") " pod="openstack/dnsmasq-dns-55f844cf75-r6j6b" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.617396 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e34be903-da25-4cdb-9298-2d53fdce0276-ovndb-tls-certs\") pod \"neutron-678c76989b-8x56d\" (UID: \"e34be903-da25-4cdb-9298-2d53fdce0276\") " pod="openstack/neutron-678c76989b-8x56d" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.617411 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f26a6d05-e0ac-4f17-bcd9-fc011996b052-dns-svc\") pod \"dnsmasq-dns-55f844cf75-r6j6b\" (UID: \"f26a6d05-e0ac-4f17-bcd9-fc011996b052\") " pod="openstack/dnsmasq-dns-55f844cf75-r6j6b" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.618535 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f26a6d05-e0ac-4f17-bcd9-fc011996b052-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-r6j6b\" (UID: \"f26a6d05-e0ac-4f17-bcd9-fc011996b052\") " pod="openstack/dnsmasq-dns-55f844cf75-r6j6b" Mar 12 13:32:52 crc 
kubenswrapper[4778]: I0312 13:32:52.618877 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f26a6d05-e0ac-4f17-bcd9-fc011996b052-config\") pod \"dnsmasq-dns-55f844cf75-r6j6b\" (UID: \"f26a6d05-e0ac-4f17-bcd9-fc011996b052\") " pod="openstack/dnsmasq-dns-55f844cf75-r6j6b" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.618911 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f26a6d05-e0ac-4f17-bcd9-fc011996b052-dns-svc\") pod \"dnsmasq-dns-55f844cf75-r6j6b\" (UID: \"f26a6d05-e0ac-4f17-bcd9-fc011996b052\") " pod="openstack/dnsmasq-dns-55f844cf75-r6j6b" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.619129 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f26a6d05-e0ac-4f17-bcd9-fc011996b052-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-r6j6b\" (UID: \"f26a6d05-e0ac-4f17-bcd9-fc011996b052\") " pod="openstack/dnsmasq-dns-55f844cf75-r6j6b" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.619489 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f26a6d05-e0ac-4f17-bcd9-fc011996b052-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-r6j6b\" (UID: \"f26a6d05-e0ac-4f17-bcd9-fc011996b052\") " pod="openstack/dnsmasq-dns-55f844cf75-r6j6b" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.625482 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e34be903-da25-4cdb-9298-2d53fdce0276-config\") pod \"neutron-678c76989b-8x56d\" (UID: \"e34be903-da25-4cdb-9298-2d53fdce0276\") " pod="openstack/neutron-678c76989b-8x56d" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.628142 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/e34be903-da25-4cdb-9298-2d53fdce0276-combined-ca-bundle\") pod \"neutron-678c76989b-8x56d\" (UID: \"e34be903-da25-4cdb-9298-2d53fdce0276\") " pod="openstack/neutron-678c76989b-8x56d" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.628674 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e34be903-da25-4cdb-9298-2d53fdce0276-ovndb-tls-certs\") pod \"neutron-678c76989b-8x56d\" (UID: \"e34be903-da25-4cdb-9298-2d53fdce0276\") " pod="openstack/neutron-678c76989b-8x56d" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.628767 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e34be903-da25-4cdb-9298-2d53fdce0276-httpd-config\") pod \"neutron-678c76989b-8x56d\" (UID: \"e34be903-da25-4cdb-9298-2d53fdce0276\") " pod="openstack/neutron-678c76989b-8x56d" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.642274 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgmt8\" (UniqueName: \"kubernetes.io/projected/e34be903-da25-4cdb-9298-2d53fdce0276-kube-api-access-cgmt8\") pod \"neutron-678c76989b-8x56d\" (UID: \"e34be903-da25-4cdb-9298-2d53fdce0276\") " pod="openstack/neutron-678c76989b-8x56d" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.643141 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czvqb\" (UniqueName: \"kubernetes.io/projected/f26a6d05-e0ac-4f17-bcd9-fc011996b052-kube-api-access-czvqb\") pod \"dnsmasq-dns-55f844cf75-r6j6b\" (UID: \"f26a6d05-e0ac-4f17-bcd9-fc011996b052\") " pod="openstack/dnsmasq-dns-55f844cf75-r6j6b" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.828900 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-r6j6b" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.848622 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-678c76989b-8x56d" Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.973292 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c2b13038-d271-48f5-bd28-a38e2b9dff02","Type":"ContainerStarted","Data":"cad2d2b9a9ac73ae35a814e1cadf9d57066e520b238036be878f7dfdb34aabb4"} Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.979833 4778 generic.go:334] "Generic (PLEG): container finished" podID="39bd75fd-958e-4b3b-abd5-860adf376fd7" containerID="cc6fc61a82e88c3140b3629f45196f98ee08d5f2fdb0df9b40fe66806a0ccbfd" exitCode=0 Mar 12 13:32:52 crc kubenswrapper[4778]: I0312 13:32:52.979876 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-v2vtk" event={"ID":"39bd75fd-958e-4b3b-abd5-860adf376fd7","Type":"ContainerDied","Data":"cc6fc61a82e88c3140b3629f45196f98ee08d5f2fdb0df9b40fe66806a0ccbfd"} Mar 12 13:32:54 crc kubenswrapper[4778]: I0312 13:32:54.558602 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7955c84d65-qfgcn"] Mar 12 13:32:54 crc kubenswrapper[4778]: I0312 13:32:54.560670 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7955c84d65-qfgcn" Mar 12 13:32:54 crc kubenswrapper[4778]: I0312 13:32:54.564998 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Mar 12 13:32:54 crc kubenswrapper[4778]: I0312 13:32:54.565443 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Mar 12 13:32:54 crc kubenswrapper[4778]: I0312 13:32:54.599446 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7955c84d65-qfgcn"] Mar 12 13:32:54 crc kubenswrapper[4778]: I0312 13:32:54.662920 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d582b80a-57bd-4cd4-9e72-8a963cae187d-public-tls-certs\") pod \"neutron-7955c84d65-qfgcn\" (UID: \"d582b80a-57bd-4cd4-9e72-8a963cae187d\") " pod="openstack/neutron-7955c84d65-qfgcn" Mar 12 13:32:54 crc kubenswrapper[4778]: I0312 13:32:54.662983 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d582b80a-57bd-4cd4-9e72-8a963cae187d-config\") pod \"neutron-7955c84d65-qfgcn\" (UID: \"d582b80a-57bd-4cd4-9e72-8a963cae187d\") " pod="openstack/neutron-7955c84d65-qfgcn" Mar 12 13:32:54 crc kubenswrapper[4778]: I0312 13:32:54.663011 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d582b80a-57bd-4cd4-9e72-8a963cae187d-httpd-config\") pod \"neutron-7955c84d65-qfgcn\" (UID: \"d582b80a-57bd-4cd4-9e72-8a963cae187d\") " pod="openstack/neutron-7955c84d65-qfgcn" Mar 12 13:32:54 crc kubenswrapper[4778]: I0312 13:32:54.663103 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h89hz\" (UniqueName: 
\"kubernetes.io/projected/d582b80a-57bd-4cd4-9e72-8a963cae187d-kube-api-access-h89hz\") pod \"neutron-7955c84d65-qfgcn\" (UID: \"d582b80a-57bd-4cd4-9e72-8a963cae187d\") " pod="openstack/neutron-7955c84d65-qfgcn" Mar 12 13:32:54 crc kubenswrapper[4778]: I0312 13:32:54.663167 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d582b80a-57bd-4cd4-9e72-8a963cae187d-combined-ca-bundle\") pod \"neutron-7955c84d65-qfgcn\" (UID: \"d582b80a-57bd-4cd4-9e72-8a963cae187d\") " pod="openstack/neutron-7955c84d65-qfgcn" Mar 12 13:32:54 crc kubenswrapper[4778]: I0312 13:32:54.663223 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d582b80a-57bd-4cd4-9e72-8a963cae187d-ovndb-tls-certs\") pod \"neutron-7955c84d65-qfgcn\" (UID: \"d582b80a-57bd-4cd4-9e72-8a963cae187d\") " pod="openstack/neutron-7955c84d65-qfgcn" Mar 12 13:32:54 crc kubenswrapper[4778]: I0312 13:32:54.663256 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d582b80a-57bd-4cd4-9e72-8a963cae187d-internal-tls-certs\") pod \"neutron-7955c84d65-qfgcn\" (UID: \"d582b80a-57bd-4cd4-9e72-8a963cae187d\") " pod="openstack/neutron-7955c84d65-qfgcn" Mar 12 13:32:54 crc kubenswrapper[4778]: I0312 13:32:54.764955 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d582b80a-57bd-4cd4-9e72-8a963cae187d-config\") pod \"neutron-7955c84d65-qfgcn\" (UID: \"d582b80a-57bd-4cd4-9e72-8a963cae187d\") " pod="openstack/neutron-7955c84d65-qfgcn" Mar 12 13:32:54 crc kubenswrapper[4778]: I0312 13:32:54.765012 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/d582b80a-57bd-4cd4-9e72-8a963cae187d-httpd-config\") pod \"neutron-7955c84d65-qfgcn\" (UID: \"d582b80a-57bd-4cd4-9e72-8a963cae187d\") " pod="openstack/neutron-7955c84d65-qfgcn" Mar 12 13:32:54 crc kubenswrapper[4778]: I0312 13:32:54.765063 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h89hz\" (UniqueName: \"kubernetes.io/projected/d582b80a-57bd-4cd4-9e72-8a963cae187d-kube-api-access-h89hz\") pod \"neutron-7955c84d65-qfgcn\" (UID: \"d582b80a-57bd-4cd4-9e72-8a963cae187d\") " pod="openstack/neutron-7955c84d65-qfgcn" Mar 12 13:32:54 crc kubenswrapper[4778]: I0312 13:32:54.765099 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d582b80a-57bd-4cd4-9e72-8a963cae187d-combined-ca-bundle\") pod \"neutron-7955c84d65-qfgcn\" (UID: \"d582b80a-57bd-4cd4-9e72-8a963cae187d\") " pod="openstack/neutron-7955c84d65-qfgcn" Mar 12 13:32:54 crc kubenswrapper[4778]: I0312 13:32:54.765133 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d582b80a-57bd-4cd4-9e72-8a963cae187d-ovndb-tls-certs\") pod \"neutron-7955c84d65-qfgcn\" (UID: \"d582b80a-57bd-4cd4-9e72-8a963cae187d\") " pod="openstack/neutron-7955c84d65-qfgcn" Mar 12 13:32:54 crc kubenswrapper[4778]: I0312 13:32:54.765149 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d582b80a-57bd-4cd4-9e72-8a963cae187d-internal-tls-certs\") pod \"neutron-7955c84d65-qfgcn\" (UID: \"d582b80a-57bd-4cd4-9e72-8a963cae187d\") " pod="openstack/neutron-7955c84d65-qfgcn" Mar 12 13:32:54 crc kubenswrapper[4778]: I0312 13:32:54.765214 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/d582b80a-57bd-4cd4-9e72-8a963cae187d-public-tls-certs\") pod \"neutron-7955c84d65-qfgcn\" (UID: \"d582b80a-57bd-4cd4-9e72-8a963cae187d\") " pod="openstack/neutron-7955c84d65-qfgcn" Mar 12 13:32:54 crc kubenswrapper[4778]: I0312 13:32:54.771767 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d582b80a-57bd-4cd4-9e72-8a963cae187d-public-tls-certs\") pod \"neutron-7955c84d65-qfgcn\" (UID: \"d582b80a-57bd-4cd4-9e72-8a963cae187d\") " pod="openstack/neutron-7955c84d65-qfgcn" Mar 12 13:32:54 crc kubenswrapper[4778]: I0312 13:32:54.772557 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d582b80a-57bd-4cd4-9e72-8a963cae187d-combined-ca-bundle\") pod \"neutron-7955c84d65-qfgcn\" (UID: \"d582b80a-57bd-4cd4-9e72-8a963cae187d\") " pod="openstack/neutron-7955c84d65-qfgcn" Mar 12 13:32:54 crc kubenswrapper[4778]: I0312 13:32:54.776990 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d582b80a-57bd-4cd4-9e72-8a963cae187d-internal-tls-certs\") pod \"neutron-7955c84d65-qfgcn\" (UID: \"d582b80a-57bd-4cd4-9e72-8a963cae187d\") " pod="openstack/neutron-7955c84d65-qfgcn" Mar 12 13:32:54 crc kubenswrapper[4778]: I0312 13:32:54.777054 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d582b80a-57bd-4cd4-9e72-8a963cae187d-config\") pod \"neutron-7955c84d65-qfgcn\" (UID: \"d582b80a-57bd-4cd4-9e72-8a963cae187d\") " pod="openstack/neutron-7955c84d65-qfgcn" Mar 12 13:32:54 crc kubenswrapper[4778]: I0312 13:32:54.778250 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d582b80a-57bd-4cd4-9e72-8a963cae187d-ovndb-tls-certs\") pod \"neutron-7955c84d65-qfgcn\" (UID: 
\"d582b80a-57bd-4cd4-9e72-8a963cae187d\") " pod="openstack/neutron-7955c84d65-qfgcn" Mar 12 13:32:54 crc kubenswrapper[4778]: I0312 13:32:54.782817 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d582b80a-57bd-4cd4-9e72-8a963cae187d-httpd-config\") pod \"neutron-7955c84d65-qfgcn\" (UID: \"d582b80a-57bd-4cd4-9e72-8a963cae187d\") " pod="openstack/neutron-7955c84d65-qfgcn" Mar 12 13:32:54 crc kubenswrapper[4778]: I0312 13:32:54.785444 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h89hz\" (UniqueName: \"kubernetes.io/projected/d582b80a-57bd-4cd4-9e72-8a963cae187d-kube-api-access-h89hz\") pod \"neutron-7955c84d65-qfgcn\" (UID: \"d582b80a-57bd-4cd4-9e72-8a963cae187d\") " pod="openstack/neutron-7955c84d65-qfgcn" Mar 12 13:32:54 crc kubenswrapper[4778]: I0312 13:32:54.902972 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7955c84d65-qfgcn" Mar 12 13:32:58 crc kubenswrapper[4778]: I0312 13:32:58.038544 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-56sfj" event={"ID":"1af573ef-51c3-4bfc-8de6-eb1be8b75c76","Type":"ContainerDied","Data":"ebc59f76f06ba10050abce1212b94660f06fec69cccf436b21f8ae1838a2520b"} Mar 12 13:32:58 crc kubenswrapper[4778]: I0312 13:32:58.038822 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ebc59f76f06ba10050abce1212b94660f06fec69cccf436b21f8ae1838a2520b" Mar 12 13:32:58 crc kubenswrapper[4778]: I0312 13:32:58.041490 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-v2vtk" event={"ID":"39bd75fd-958e-4b3b-abd5-860adf376fd7","Type":"ContainerDied","Data":"415522e7cc2372bb11dfe09957497d4a3efac5b28086b59aebe2586918e3f99d"} Mar 12 13:32:58 crc kubenswrapper[4778]: I0312 13:32:58.041511 4778 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="415522e7cc2372bb11dfe09957497d4a3efac5b28086b59aebe2586918e3f99d" Mar 12 13:32:58 crc kubenswrapper[4778]: I0312 13:32:58.045590 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-56sfj" Mar 12 13:32:58 crc kubenswrapper[4778]: I0312 13:32:58.052272 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-v2vtk" Mar 12 13:32:58 crc kubenswrapper[4778]: I0312 13:32:58.130256 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1af573ef-51c3-4bfc-8de6-eb1be8b75c76-scripts\") pod \"1af573ef-51c3-4bfc-8de6-eb1be8b75c76\" (UID: \"1af573ef-51c3-4bfc-8de6-eb1be8b75c76\") " Mar 12 13:32:58 crc kubenswrapper[4778]: I0312 13:32:58.130312 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/39bd75fd-958e-4b3b-abd5-860adf376fd7-ovsdbserver-sb\") pod \"39bd75fd-958e-4b3b-abd5-860adf376fd7\" (UID: \"39bd75fd-958e-4b3b-abd5-860adf376fd7\") " Mar 12 13:32:58 crc kubenswrapper[4778]: I0312 13:32:58.130347 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/39bd75fd-958e-4b3b-abd5-860adf376fd7-dns-swift-storage-0\") pod \"39bd75fd-958e-4b3b-abd5-860adf376fd7\" (UID: \"39bd75fd-958e-4b3b-abd5-860adf376fd7\") " Mar 12 13:32:58 crc kubenswrapper[4778]: I0312 13:32:58.132428 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39bd75fd-958e-4b3b-abd5-860adf376fd7-dns-svc\") pod \"39bd75fd-958e-4b3b-abd5-860adf376fd7\" (UID: \"39bd75fd-958e-4b3b-abd5-860adf376fd7\") " Mar 12 13:32:58 crc kubenswrapper[4778]: I0312 13:32:58.132510 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/1af573ef-51c3-4bfc-8de6-eb1be8b75c76-config-data\") pod \"1af573ef-51c3-4bfc-8de6-eb1be8b75c76\" (UID: \"1af573ef-51c3-4bfc-8de6-eb1be8b75c76\") " Mar 12 13:32:58 crc kubenswrapper[4778]: I0312 13:32:58.132561 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1af573ef-51c3-4bfc-8de6-eb1be8b75c76-credential-keys\") pod \"1af573ef-51c3-4bfc-8de6-eb1be8b75c76\" (UID: \"1af573ef-51c3-4bfc-8de6-eb1be8b75c76\") " Mar 12 13:32:58 crc kubenswrapper[4778]: I0312 13:32:58.132591 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7vfh\" (UniqueName: \"kubernetes.io/projected/39bd75fd-958e-4b3b-abd5-860adf376fd7-kube-api-access-p7vfh\") pod \"39bd75fd-958e-4b3b-abd5-860adf376fd7\" (UID: \"39bd75fd-958e-4b3b-abd5-860adf376fd7\") " Mar 12 13:32:58 crc kubenswrapper[4778]: I0312 13:32:58.132659 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1af573ef-51c3-4bfc-8de6-eb1be8b75c76-fernet-keys\") pod \"1af573ef-51c3-4bfc-8de6-eb1be8b75c76\" (UID: \"1af573ef-51c3-4bfc-8de6-eb1be8b75c76\") " Mar 12 13:32:58 crc kubenswrapper[4778]: I0312 13:32:58.132711 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39bd75fd-958e-4b3b-abd5-860adf376fd7-config\") pod \"39bd75fd-958e-4b3b-abd5-860adf376fd7\" (UID: \"39bd75fd-958e-4b3b-abd5-860adf376fd7\") " Mar 12 13:32:58 crc kubenswrapper[4778]: I0312 13:32:58.132755 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/39bd75fd-958e-4b3b-abd5-860adf376fd7-ovsdbserver-nb\") pod \"39bd75fd-958e-4b3b-abd5-860adf376fd7\" (UID: \"39bd75fd-958e-4b3b-abd5-860adf376fd7\") " Mar 12 13:32:58 crc kubenswrapper[4778]: 
I0312 13:32:58.132789 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94tqw\" (UniqueName: \"kubernetes.io/projected/1af573ef-51c3-4bfc-8de6-eb1be8b75c76-kube-api-access-94tqw\") pod \"1af573ef-51c3-4bfc-8de6-eb1be8b75c76\" (UID: \"1af573ef-51c3-4bfc-8de6-eb1be8b75c76\") " Mar 12 13:32:58 crc kubenswrapper[4778]: I0312 13:32:58.132815 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1af573ef-51c3-4bfc-8de6-eb1be8b75c76-combined-ca-bundle\") pod \"1af573ef-51c3-4bfc-8de6-eb1be8b75c76\" (UID: \"1af573ef-51c3-4bfc-8de6-eb1be8b75c76\") " Mar 12 13:32:58 crc kubenswrapper[4778]: I0312 13:32:58.155514 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1af573ef-51c3-4bfc-8de6-eb1be8b75c76-kube-api-access-94tqw" (OuterVolumeSpecName: "kube-api-access-94tqw") pod "1af573ef-51c3-4bfc-8de6-eb1be8b75c76" (UID: "1af573ef-51c3-4bfc-8de6-eb1be8b75c76"). InnerVolumeSpecName "kube-api-access-94tqw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:32:58 crc kubenswrapper[4778]: I0312 13:32:58.156321 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1af573ef-51c3-4bfc-8de6-eb1be8b75c76-scripts" (OuterVolumeSpecName: "scripts") pod "1af573ef-51c3-4bfc-8de6-eb1be8b75c76" (UID: "1af573ef-51c3-4bfc-8de6-eb1be8b75c76"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:32:58 crc kubenswrapper[4778]: I0312 13:32:58.161373 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1af573ef-51c3-4bfc-8de6-eb1be8b75c76-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "1af573ef-51c3-4bfc-8de6-eb1be8b75c76" (UID: "1af573ef-51c3-4bfc-8de6-eb1be8b75c76"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:32:58 crc kubenswrapper[4778]: I0312 13:32:58.165696 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1af573ef-51c3-4bfc-8de6-eb1be8b75c76-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "1af573ef-51c3-4bfc-8de6-eb1be8b75c76" (UID: "1af573ef-51c3-4bfc-8de6-eb1be8b75c76"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:32:58 crc kubenswrapper[4778]: I0312 13:32:58.171892 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39bd75fd-958e-4b3b-abd5-860adf376fd7-kube-api-access-p7vfh" (OuterVolumeSpecName: "kube-api-access-p7vfh") pod "39bd75fd-958e-4b3b-abd5-860adf376fd7" (UID: "39bd75fd-958e-4b3b-abd5-860adf376fd7"). InnerVolumeSpecName "kube-api-access-p7vfh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:32:58 crc kubenswrapper[4778]: I0312 13:32:58.212334 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1af573ef-51c3-4bfc-8de6-eb1be8b75c76-config-data" (OuterVolumeSpecName: "config-data") pod "1af573ef-51c3-4bfc-8de6-eb1be8b75c76" (UID: "1af573ef-51c3-4bfc-8de6-eb1be8b75c76"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:32:58 crc kubenswrapper[4778]: I0312 13:32:58.219397 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1af573ef-51c3-4bfc-8de6-eb1be8b75c76-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1af573ef-51c3-4bfc-8de6-eb1be8b75c76" (UID: "1af573ef-51c3-4bfc-8de6-eb1be8b75c76"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:32:58 crc kubenswrapper[4778]: I0312 13:32:58.221907 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39bd75fd-958e-4b3b-abd5-860adf376fd7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "39bd75fd-958e-4b3b-abd5-860adf376fd7" (UID: "39bd75fd-958e-4b3b-abd5-860adf376fd7"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:32:58 crc kubenswrapper[4778]: I0312 13:32:58.222138 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39bd75fd-958e-4b3b-abd5-860adf376fd7-config" (OuterVolumeSpecName: "config") pod "39bd75fd-958e-4b3b-abd5-860adf376fd7" (UID: "39bd75fd-958e-4b3b-abd5-860adf376fd7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:32:58 crc kubenswrapper[4778]: I0312 13:32:58.231576 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39bd75fd-958e-4b3b-abd5-860adf376fd7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "39bd75fd-958e-4b3b-abd5-860adf376fd7" (UID: "39bd75fd-958e-4b3b-abd5-860adf376fd7"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:32:58 crc kubenswrapper[4778]: I0312 13:32:58.235615 4778 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1af573ef-51c3-4bfc-8de6-eb1be8b75c76-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:58 crc kubenswrapper[4778]: I0312 13:32:58.235779 4778 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/39bd75fd-958e-4b3b-abd5-860adf376fd7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:58 crc kubenswrapper[4778]: I0312 13:32:58.235863 4778 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/39bd75fd-958e-4b3b-abd5-860adf376fd7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:58 crc kubenswrapper[4778]: I0312 13:32:58.235942 4778 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1af573ef-51c3-4bfc-8de6-eb1be8b75c76-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:58 crc kubenswrapper[4778]: I0312 13:32:58.236026 4778 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1af573ef-51c3-4bfc-8de6-eb1be8b75c76-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:58 crc kubenswrapper[4778]: I0312 13:32:58.236108 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7vfh\" (UniqueName: \"kubernetes.io/projected/39bd75fd-958e-4b3b-abd5-860adf376fd7-kube-api-access-p7vfh\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:58 crc kubenswrapper[4778]: I0312 13:32:58.236199 4778 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1af573ef-51c3-4bfc-8de6-eb1be8b75c76-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:58 crc kubenswrapper[4778]: I0312 13:32:58.236279 4778 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39bd75fd-958e-4b3b-abd5-860adf376fd7-config\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:58 crc kubenswrapper[4778]: I0312 13:32:58.236371 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94tqw\" (UniqueName: \"kubernetes.io/projected/1af573ef-51c3-4bfc-8de6-eb1be8b75c76-kube-api-access-94tqw\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:58 crc kubenswrapper[4778]: I0312 13:32:58.236447 4778 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1af573ef-51c3-4bfc-8de6-eb1be8b75c76-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:58 crc kubenswrapper[4778]: I0312 13:32:58.238569 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39bd75fd-958e-4b3b-abd5-860adf376fd7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "39bd75fd-958e-4b3b-abd5-860adf376fd7" (UID: "39bd75fd-958e-4b3b-abd5-860adf376fd7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:32:58 crc kubenswrapper[4778]: I0312 13:32:58.242494 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39bd75fd-958e-4b3b-abd5-860adf376fd7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "39bd75fd-958e-4b3b-abd5-860adf376fd7" (UID: "39bd75fd-958e-4b3b-abd5-860adf376fd7"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:32:58 crc kubenswrapper[4778]: I0312 13:32:58.338823 4778 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39bd75fd-958e-4b3b-abd5-860adf376fd7-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:58 crc kubenswrapper[4778]: I0312 13:32:58.338858 4778 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/39bd75fd-958e-4b3b-abd5-860adf376fd7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 12 13:32:58 crc kubenswrapper[4778]: I0312 13:32:58.366477 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-r6j6b"] Mar 12 13:32:58 crc kubenswrapper[4778]: I0312 13:32:58.439722 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-79ccdbbbbd-gl27l"] Mar 12 13:32:58 crc kubenswrapper[4778]: W0312 13:32:58.446893 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68092e68_04e5_4530_8d94_859789faeb94.slice/crio-6225b0b7ab31929807b7000d1c797565cb38b8453f9487cc91d0a8fcf517ace6 WatchSource:0}: Error finding container 6225b0b7ab31929807b7000d1c797565cb38b8453f9487cc91d0a8fcf517ace6: Status 404 returned error can't find the container with id 6225b0b7ab31929807b7000d1c797565cb38b8453f9487cc91d0a8fcf517ace6 Mar 12 13:32:58 crc kubenswrapper[4778]: I0312 13:32:58.534991 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-678c76989b-8x56d"] Mar 12 13:32:58 crc kubenswrapper[4778]: I0312 13:32:58.557597 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 13:32:58 crc kubenswrapper[4778]: 
I0312 13:32:58.557658 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 13:32:58 crc kubenswrapper[4778]: I0312 13:32:58.805536 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7955c84d65-qfgcn"] Mar 12 13:32:59 crc kubenswrapper[4778]: I0312 13:32:59.103171 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ac92f5c5-e457-4915-a919-0dbe3df23ce8","Type":"ContainerStarted","Data":"7ffa53ad28bc0a9b2bc486f510ba4d00037b24ab1288b231f7af2e96baabc19f"} Mar 12 13:32:59 crc kubenswrapper[4778]: I0312 13:32:59.105898 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7955c84d65-qfgcn" event={"ID":"d582b80a-57bd-4cd4-9e72-8a963cae187d","Type":"ContainerStarted","Data":"284482a4b85498fbfd683802fcf5305643f5a4cf33d63effbb2a1f2fd1071a11"} Mar 12 13:32:59 crc kubenswrapper[4778]: I0312 13:32:59.112348 4778 generic.go:334] "Generic (PLEG): container finished" podID="f26a6d05-e0ac-4f17-bcd9-fc011996b052" containerID="663c434423b37b8d735c566ad324f30f6c179866c4697ae6a88fd9aeb0c4709a" exitCode=0 Mar 12 13:32:59 crc kubenswrapper[4778]: I0312 13:32:59.112405 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-r6j6b" event={"ID":"f26a6d05-e0ac-4f17-bcd9-fc011996b052","Type":"ContainerDied","Data":"663c434423b37b8d735c566ad324f30f6c179866c4697ae6a88fd9aeb0c4709a"} Mar 12 13:32:59 crc kubenswrapper[4778]: I0312 13:32:59.112424 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-r6j6b" 
event={"ID":"f26a6d05-e0ac-4f17-bcd9-fc011996b052","Type":"ContainerStarted","Data":"eedcc18be187ca3b0fbc761493f2664ac917f21e94152db48e6204214d9b050b"} Mar 12 13:32:59 crc kubenswrapper[4778]: I0312 13:32:59.130640 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=10.130618976 podStartE2EDuration="10.130618976s" podCreationTimestamp="2026-03-12 13:32:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:32:59.127205399 +0000 UTC m=+1397.575900795" watchObservedRunningTime="2026-03-12 13:32:59.130618976 +0000 UTC m=+1397.579314372" Mar 12 13:32:59 crc kubenswrapper[4778]: I0312 13:32:59.133049 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-678c76989b-8x56d" event={"ID":"e34be903-da25-4cdb-9298-2d53fdce0276","Type":"ContainerStarted","Data":"73ff3b874391ffdc31812d5d85f13741c2920b13dddb21f9bdace835187b0822"} Mar 12 13:32:59 crc kubenswrapper[4778]: I0312 13:32:59.157494 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-79ccdbbbbd-gl27l" event={"ID":"68092e68-04e5-4530-8d94-859789faeb94","Type":"ContainerStarted","Data":"be846a255557e511860dc7bc1b884d65bc6e48bfb1b98ae1316cb74617623c2b"} Mar 12 13:32:59 crc kubenswrapper[4778]: I0312 13:32:59.157547 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-79ccdbbbbd-gl27l" event={"ID":"68092e68-04e5-4530-8d94-859789faeb94","Type":"ContainerStarted","Data":"6225b0b7ab31929807b7000d1c797565cb38b8453f9487cc91d0a8fcf517ace6"} Mar 12 13:32:59 crc kubenswrapper[4778]: I0312 13:32:59.169150 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-69b6dc4885-6lrlq"] Mar 12 13:32:59 crc kubenswrapper[4778]: E0312 13:32:59.169898 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1af573ef-51c3-4bfc-8de6-eb1be8b75c76" 
containerName="keystone-bootstrap" Mar 12 13:32:59 crc kubenswrapper[4778]: I0312 13:32:59.169924 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="1af573ef-51c3-4bfc-8de6-eb1be8b75c76" containerName="keystone-bootstrap" Mar 12 13:32:59 crc kubenswrapper[4778]: E0312 13:32:59.169960 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39bd75fd-958e-4b3b-abd5-860adf376fd7" containerName="init" Mar 12 13:32:59 crc kubenswrapper[4778]: I0312 13:32:59.169969 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="39bd75fd-958e-4b3b-abd5-860adf376fd7" containerName="init" Mar 12 13:32:59 crc kubenswrapper[4778]: E0312 13:32:59.169992 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39bd75fd-958e-4b3b-abd5-860adf376fd7" containerName="dnsmasq-dns" Mar 12 13:32:59 crc kubenswrapper[4778]: I0312 13:32:59.170000 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="39bd75fd-958e-4b3b-abd5-860adf376fd7" containerName="dnsmasq-dns" Mar 12 13:32:59 crc kubenswrapper[4778]: I0312 13:32:59.170248 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="39bd75fd-958e-4b3b-abd5-860adf376fd7" containerName="dnsmasq-dns" Mar 12 13:32:59 crc kubenswrapper[4778]: I0312 13:32:59.170271 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="1af573ef-51c3-4bfc-8de6-eb1be8b75c76" containerName="keystone-bootstrap" Mar 12 13:32:59 crc kubenswrapper[4778]: I0312 13:32:59.171163 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-69b6dc4885-6lrlq" Mar 12 13:32:59 crc kubenswrapper[4778]: I0312 13:32:59.178922 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Mar 12 13:32:59 crc kubenswrapper[4778]: I0312 13:32:59.179247 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Mar 12 13:32:59 crc kubenswrapper[4778]: I0312 13:32:59.184870 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-56sfj" Mar 12 13:32:59 crc kubenswrapper[4778]: I0312 13:32:59.185141 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c2b13038-d271-48f5-bd28-a38e2b9dff02","Type":"ContainerStarted","Data":"d321738b43c55df790b0a01418c177d18aaa7772e4cf7fca03bdeedb1c32e127"} Mar 12 13:32:59 crc kubenswrapper[4778]: I0312 13:32:59.185299 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-v2vtk" Mar 12 13:32:59 crc kubenswrapper[4778]: I0312 13:32:59.188472 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-69b6dc4885-6lrlq"] Mar 12 13:32:59 crc kubenswrapper[4778]: I0312 13:32:59.267217 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=9.26717709 podStartE2EDuration="9.26717709s" podCreationTimestamp="2026-03-12 13:32:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:32:59.251951418 +0000 UTC m=+1397.700646824" watchObservedRunningTime="2026-03-12 13:32:59.26717709 +0000 UTC m=+1397.715872486" Mar 12 13:32:59 crc kubenswrapper[4778]: I0312 13:32:59.285253 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a56bb599-f10d-4564-b6bf-48128dc2c7f1-credential-keys\") pod \"keystone-69b6dc4885-6lrlq\" (UID: \"a56bb599-f10d-4564-b6bf-48128dc2c7f1\") " pod="openstack/keystone-69b6dc4885-6lrlq" Mar 12 13:32:59 crc kubenswrapper[4778]: I0312 13:32:59.285327 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a56bb599-f10d-4564-b6bf-48128dc2c7f1-config-data\") pod \"keystone-69b6dc4885-6lrlq\" (UID: \"a56bb599-f10d-4564-b6bf-48128dc2c7f1\") " pod="openstack/keystone-69b6dc4885-6lrlq" Mar 12 13:32:59 crc kubenswrapper[4778]: I0312 13:32:59.285354 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a56bb599-f10d-4564-b6bf-48128dc2c7f1-public-tls-certs\") pod \"keystone-69b6dc4885-6lrlq\" (UID: \"a56bb599-f10d-4564-b6bf-48128dc2c7f1\") " pod="openstack/keystone-69b6dc4885-6lrlq" Mar 
12 13:32:59 crc kubenswrapper[4778]: I0312 13:32:59.285377 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a56bb599-f10d-4564-b6bf-48128dc2c7f1-scripts\") pod \"keystone-69b6dc4885-6lrlq\" (UID: \"a56bb599-f10d-4564-b6bf-48128dc2c7f1\") " pod="openstack/keystone-69b6dc4885-6lrlq" Mar 12 13:32:59 crc kubenswrapper[4778]: I0312 13:32:59.285439 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a56bb599-f10d-4564-b6bf-48128dc2c7f1-internal-tls-certs\") pod \"keystone-69b6dc4885-6lrlq\" (UID: \"a56bb599-f10d-4564-b6bf-48128dc2c7f1\") " pod="openstack/keystone-69b6dc4885-6lrlq" Mar 12 13:32:59 crc kubenswrapper[4778]: I0312 13:32:59.285475 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbnjm\" (UniqueName: \"kubernetes.io/projected/a56bb599-f10d-4564-b6bf-48128dc2c7f1-kube-api-access-bbnjm\") pod \"keystone-69b6dc4885-6lrlq\" (UID: \"a56bb599-f10d-4564-b6bf-48128dc2c7f1\") " pod="openstack/keystone-69b6dc4885-6lrlq" Mar 12 13:32:59 crc kubenswrapper[4778]: I0312 13:32:59.285583 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a56bb599-f10d-4564-b6bf-48128dc2c7f1-fernet-keys\") pod \"keystone-69b6dc4885-6lrlq\" (UID: \"a56bb599-f10d-4564-b6bf-48128dc2c7f1\") " pod="openstack/keystone-69b6dc4885-6lrlq" Mar 12 13:32:59 crc kubenswrapper[4778]: I0312 13:32:59.285680 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a56bb599-f10d-4564-b6bf-48128dc2c7f1-combined-ca-bundle\") pod \"keystone-69b6dc4885-6lrlq\" (UID: \"a56bb599-f10d-4564-b6bf-48128dc2c7f1\") " 
pod="openstack/keystone-69b6dc4885-6lrlq" Mar 12 13:32:59 crc kubenswrapper[4778]: I0312 13:32:59.313747 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-v2vtk"] Mar 12 13:32:59 crc kubenswrapper[4778]: I0312 13:32:59.330317 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-v2vtk"] Mar 12 13:32:59 crc kubenswrapper[4778]: I0312 13:32:59.387372 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a56bb599-f10d-4564-b6bf-48128dc2c7f1-combined-ca-bundle\") pod \"keystone-69b6dc4885-6lrlq\" (UID: \"a56bb599-f10d-4564-b6bf-48128dc2c7f1\") " pod="openstack/keystone-69b6dc4885-6lrlq" Mar 12 13:32:59 crc kubenswrapper[4778]: I0312 13:32:59.387447 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a56bb599-f10d-4564-b6bf-48128dc2c7f1-credential-keys\") pod \"keystone-69b6dc4885-6lrlq\" (UID: \"a56bb599-f10d-4564-b6bf-48128dc2c7f1\") " pod="openstack/keystone-69b6dc4885-6lrlq" Mar 12 13:32:59 crc kubenswrapper[4778]: I0312 13:32:59.387478 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a56bb599-f10d-4564-b6bf-48128dc2c7f1-config-data\") pod \"keystone-69b6dc4885-6lrlq\" (UID: \"a56bb599-f10d-4564-b6bf-48128dc2c7f1\") " pod="openstack/keystone-69b6dc4885-6lrlq" Mar 12 13:32:59 crc kubenswrapper[4778]: I0312 13:32:59.387497 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a56bb599-f10d-4564-b6bf-48128dc2c7f1-public-tls-certs\") pod \"keystone-69b6dc4885-6lrlq\" (UID: \"a56bb599-f10d-4564-b6bf-48128dc2c7f1\") " pod="openstack/keystone-69b6dc4885-6lrlq" Mar 12 13:32:59 crc kubenswrapper[4778]: I0312 13:32:59.387513 4778 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a56bb599-f10d-4564-b6bf-48128dc2c7f1-scripts\") pod \"keystone-69b6dc4885-6lrlq\" (UID: \"a56bb599-f10d-4564-b6bf-48128dc2c7f1\") " pod="openstack/keystone-69b6dc4885-6lrlq" Mar 12 13:32:59 crc kubenswrapper[4778]: I0312 13:32:59.387541 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a56bb599-f10d-4564-b6bf-48128dc2c7f1-internal-tls-certs\") pod \"keystone-69b6dc4885-6lrlq\" (UID: \"a56bb599-f10d-4564-b6bf-48128dc2c7f1\") " pod="openstack/keystone-69b6dc4885-6lrlq" Mar 12 13:32:59 crc kubenswrapper[4778]: I0312 13:32:59.387574 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbnjm\" (UniqueName: \"kubernetes.io/projected/a56bb599-f10d-4564-b6bf-48128dc2c7f1-kube-api-access-bbnjm\") pod \"keystone-69b6dc4885-6lrlq\" (UID: \"a56bb599-f10d-4564-b6bf-48128dc2c7f1\") " pod="openstack/keystone-69b6dc4885-6lrlq" Mar 12 13:32:59 crc kubenswrapper[4778]: I0312 13:32:59.387625 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a56bb599-f10d-4564-b6bf-48128dc2c7f1-fernet-keys\") pod \"keystone-69b6dc4885-6lrlq\" (UID: \"a56bb599-f10d-4564-b6bf-48128dc2c7f1\") " pod="openstack/keystone-69b6dc4885-6lrlq" Mar 12 13:32:59 crc kubenswrapper[4778]: I0312 13:32:59.392212 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a56bb599-f10d-4564-b6bf-48128dc2c7f1-fernet-keys\") pod \"keystone-69b6dc4885-6lrlq\" (UID: \"a56bb599-f10d-4564-b6bf-48128dc2c7f1\") " pod="openstack/keystone-69b6dc4885-6lrlq" Mar 12 13:32:59 crc kubenswrapper[4778]: I0312 13:32:59.403632 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a56bb599-f10d-4564-b6bf-48128dc2c7f1-internal-tls-certs\") pod \"keystone-69b6dc4885-6lrlq\" (UID: \"a56bb599-f10d-4564-b6bf-48128dc2c7f1\") " pod="openstack/keystone-69b6dc4885-6lrlq" Mar 12 13:32:59 crc kubenswrapper[4778]: I0312 13:32:59.404497 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a56bb599-f10d-4564-b6bf-48128dc2c7f1-public-tls-certs\") pod \"keystone-69b6dc4885-6lrlq\" (UID: \"a56bb599-f10d-4564-b6bf-48128dc2c7f1\") " pod="openstack/keystone-69b6dc4885-6lrlq" Mar 12 13:32:59 crc kubenswrapper[4778]: I0312 13:32:59.404633 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a56bb599-f10d-4564-b6bf-48128dc2c7f1-credential-keys\") pod \"keystone-69b6dc4885-6lrlq\" (UID: \"a56bb599-f10d-4564-b6bf-48128dc2c7f1\") " pod="openstack/keystone-69b6dc4885-6lrlq" Mar 12 13:32:59 crc kubenswrapper[4778]: I0312 13:32:59.409726 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a56bb599-f10d-4564-b6bf-48128dc2c7f1-scripts\") pod \"keystone-69b6dc4885-6lrlq\" (UID: \"a56bb599-f10d-4564-b6bf-48128dc2c7f1\") " pod="openstack/keystone-69b6dc4885-6lrlq" Mar 12 13:32:59 crc kubenswrapper[4778]: I0312 13:32:59.411777 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a56bb599-f10d-4564-b6bf-48128dc2c7f1-combined-ca-bundle\") pod \"keystone-69b6dc4885-6lrlq\" (UID: \"a56bb599-f10d-4564-b6bf-48128dc2c7f1\") " pod="openstack/keystone-69b6dc4885-6lrlq" Mar 12 13:32:59 crc kubenswrapper[4778]: I0312 13:32:59.416594 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a56bb599-f10d-4564-b6bf-48128dc2c7f1-config-data\") pod \"keystone-69b6dc4885-6lrlq\" (UID: 
\"a56bb599-f10d-4564-b6bf-48128dc2c7f1\") " pod="openstack/keystone-69b6dc4885-6lrlq" Mar 12 13:32:59 crc kubenswrapper[4778]: I0312 13:32:59.424205 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbnjm\" (UniqueName: \"kubernetes.io/projected/a56bb599-f10d-4564-b6bf-48128dc2c7f1-kube-api-access-bbnjm\") pod \"keystone-69b6dc4885-6lrlq\" (UID: \"a56bb599-f10d-4564-b6bf-48128dc2c7f1\") " pod="openstack/keystone-69b6dc4885-6lrlq" Mar 12 13:32:59 crc kubenswrapper[4778]: I0312 13:32:59.551646 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-69b6dc4885-6lrlq" Mar 12 13:33:00 crc kubenswrapper[4778]: I0312 13:33:00.027170 4778 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-785d8bcb8c-v2vtk" podUID="39bd75fd-958e-4b3b-abd5-860adf376fd7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.148:5353: i/o timeout" Mar 12 13:33:00 crc kubenswrapper[4778]: I0312 13:33:00.061013 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-69b6dc4885-6lrlq"] Mar 12 13:33:00 crc kubenswrapper[4778]: I0312 13:33:00.204208 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-79ccdbbbbd-gl27l" event={"ID":"68092e68-04e5-4530-8d94-859789faeb94","Type":"ContainerStarted","Data":"8cdda802eadd8c68b3ba4b5b69b6a0fd021902af043f1083daaae42e4e3ba4bc"} Mar 12 13:33:00 crc kubenswrapper[4778]: I0312 13:33:00.204595 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-79ccdbbbbd-gl27l" Mar 12 13:33:00 crc kubenswrapper[4778]: I0312 13:33:00.204609 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-79ccdbbbbd-gl27l" Mar 12 13:33:00 crc kubenswrapper[4778]: I0312 13:33:00.216480 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"b4cb6d6d-bc05-4809-83a7-5aacda62cc10","Type":"ContainerStarted","Data":"78dbc9cf48d678718d746451597636002d39908c130023e24550012d03edab70"} Mar 12 13:33:00 crc kubenswrapper[4778]: I0312 13:33:00.218562 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-69b6dc4885-6lrlq" event={"ID":"a56bb599-f10d-4564-b6bf-48128dc2c7f1","Type":"ContainerStarted","Data":"f2bbb1c1d63926d49e0698557adc54b702c70a83918c69d3788645fc858ad68a"} Mar 12 13:33:00 crc kubenswrapper[4778]: I0312 13:33:00.220345 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-p59s9" event={"ID":"a682334f-73c0-4e38-8f95-e5de661319bb","Type":"ContainerStarted","Data":"5a74043e2f16e3024a4f2ed6f0c9502985ad493a8f1362a42f34265b2e50d313"} Mar 12 13:33:00 crc kubenswrapper[4778]: I0312 13:33:00.222854 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7955c84d65-qfgcn" event={"ID":"d582b80a-57bd-4cd4-9e72-8a963cae187d","Type":"ContainerStarted","Data":"71d475b828218d4b5f04543cac9306418884b36e07b75eda675a3ad92ddced09"} Mar 12 13:33:00 crc kubenswrapper[4778]: I0312 13:33:00.222897 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7955c84d65-qfgcn" event={"ID":"d582b80a-57bd-4cd4-9e72-8a963cae187d","Type":"ContainerStarted","Data":"938c7e0b5c63a6fd5773476e5ae689de9d1155fb4dbd3f7bca4dc6764bc762cd"} Mar 12 13:33:00 crc kubenswrapper[4778]: I0312 13:33:00.223081 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7955c84d65-qfgcn" Mar 12 13:33:00 crc kubenswrapper[4778]: I0312 13:33:00.227281 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-79ccdbbbbd-gl27l" podStartSLOduration=8.227270824 podStartE2EDuration="8.227270824s" podCreationTimestamp="2026-03-12 13:32:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-12 13:33:00.222667823 +0000 UTC m=+1398.671363249" watchObservedRunningTime="2026-03-12 13:33:00.227270824 +0000 UTC m=+1398.675966220" Mar 12 13:33:00 crc kubenswrapper[4778]: I0312 13:33:00.230163 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-r6j6b" event={"ID":"f26a6d05-e0ac-4f17-bcd9-fc011996b052","Type":"ContainerStarted","Data":"601d1f3845ef933f076859b186d3267da0a7df161ebfa472c4f14f7e9cbd4ec0"} Mar 12 13:33:00 crc kubenswrapper[4778]: I0312 13:33:00.230403 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-r6j6b" Mar 12 13:33:00 crc kubenswrapper[4778]: I0312 13:33:00.235668 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-678c76989b-8x56d" event={"ID":"e34be903-da25-4cdb-9298-2d53fdce0276","Type":"ContainerStarted","Data":"7423051fcfb7c12e56b049e90be94c641f82520ceab5181c7fcca6713588c77f"} Mar 12 13:33:00 crc kubenswrapper[4778]: I0312 13:33:00.235717 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-678c76989b-8x56d" event={"ID":"e34be903-da25-4cdb-9298-2d53fdce0276","Type":"ContainerStarted","Data":"76d710be6da7b239e82f6228977b9799ccd95f2824b23913a0585897e926dd74"} Mar 12 13:33:00 crc kubenswrapper[4778]: I0312 13:33:00.237574 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-678c76989b-8x56d" Mar 12 13:33:00 crc kubenswrapper[4778]: I0312 13:33:00.238385 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-p59s9" podStartSLOduration=3.216684908 podStartE2EDuration="40.238373489s" podCreationTimestamp="2026-03-12 13:32:20 +0000 UTC" firstStartedPulling="2026-03-12 13:32:21.924254977 +0000 UTC m=+1360.372950373" lastFinishedPulling="2026-03-12 13:32:58.945943558 +0000 UTC m=+1397.394638954" observedRunningTime="2026-03-12 13:33:00.236793364 +0000 UTC m=+1398.685488760" 
watchObservedRunningTime="2026-03-12 13:33:00.238373489 +0000 UTC m=+1398.687068885" Mar 12 13:33:00 crc kubenswrapper[4778]: I0312 13:33:00.266949 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-r6j6b" podStartSLOduration=8.266930899 podStartE2EDuration="8.266930899s" podCreationTimestamp="2026-03-12 13:32:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:33:00.260064924 +0000 UTC m=+1398.708760330" watchObservedRunningTime="2026-03-12 13:33:00.266930899 +0000 UTC m=+1398.715626295" Mar 12 13:33:00 crc kubenswrapper[4778]: I0312 13:33:00.275921 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39bd75fd-958e-4b3b-abd5-860adf376fd7" path="/var/lib/kubelet/pods/39bd75fd-958e-4b3b-abd5-860adf376fd7/volumes" Mar 12 13:33:00 crc kubenswrapper[4778]: I0312 13:33:00.300014 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-678c76989b-8x56d" podStartSLOduration=8.299998197 podStartE2EDuration="8.299998197s" podCreationTimestamp="2026-03-12 13:32:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:33:00.299207855 +0000 UTC m=+1398.747903251" watchObservedRunningTime="2026-03-12 13:33:00.299998197 +0000 UTC m=+1398.748693593" Mar 12 13:33:00 crc kubenswrapper[4778]: I0312 13:33:00.322629 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7955c84d65-qfgcn" podStartSLOduration=6.322604088 podStartE2EDuration="6.322604088s" podCreationTimestamp="2026-03-12 13:32:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:33:00.318874842 +0000 UTC m=+1398.767570238" watchObservedRunningTime="2026-03-12 
13:33:00.322604088 +0000 UTC m=+1398.771299474" Mar 12 13:33:00 crc kubenswrapper[4778]: I0312 13:33:00.602338 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 12 13:33:00 crc kubenswrapper[4778]: I0312 13:33:00.602396 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 12 13:33:00 crc kubenswrapper[4778]: I0312 13:33:00.640064 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 12 13:33:00 crc kubenswrapper[4778]: I0312 13:33:00.644315 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 12 13:33:00 crc kubenswrapper[4778]: I0312 13:33:00.644396 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 12 13:33:00 crc kubenswrapper[4778]: I0312 13:33:00.661610 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 12 13:33:00 crc kubenswrapper[4778]: I0312 13:33:00.706509 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 12 13:33:00 crc kubenswrapper[4778]: I0312 13:33:00.710485 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 12 13:33:01 crc kubenswrapper[4778]: I0312 13:33:01.251448 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-69b6dc4885-6lrlq" event={"ID":"a56bb599-f10d-4564-b6bf-48128dc2c7f1","Type":"ContainerStarted","Data":"c264a6d2fe3e794592b5aa0308d5c7aff717ee4b7a8d492b2d37e4ff699f5b25"} Mar 12 13:33:01 crc kubenswrapper[4778]: I0312 13:33:01.252853 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/glance-default-external-api-0" Mar 12 13:33:01 crc kubenswrapper[4778]: I0312 13:33:01.252898 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 12 13:33:01 crc kubenswrapper[4778]: I0312 13:33:01.252918 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-69b6dc4885-6lrlq" Mar 12 13:33:01 crc kubenswrapper[4778]: I0312 13:33:01.252935 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 12 13:33:01 crc kubenswrapper[4778]: I0312 13:33:01.252950 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 12 13:33:01 crc kubenswrapper[4778]: I0312 13:33:01.289352 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-69b6dc4885-6lrlq" podStartSLOduration=2.289329429 podStartE2EDuration="2.289329429s" podCreationTimestamp="2026-03-12 13:32:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:33:01.280916831 +0000 UTC m=+1399.729612237" watchObservedRunningTime="2026-03-12 13:33:01.289329429 +0000 UTC m=+1399.738024815" Mar 12 13:33:02 crc kubenswrapper[4778]: I0312 13:33:02.264786 4778 generic.go:334] "Generic (PLEG): container finished" podID="a682334f-73c0-4e38-8f95-e5de661319bb" containerID="5a74043e2f16e3024a4f2ed6f0c9502985ad493a8f1362a42f34265b2e50d313" exitCode=0 Mar 12 13:33:02 crc kubenswrapper[4778]: I0312 13:33:02.271333 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-p59s9" event={"ID":"a682334f-73c0-4e38-8f95-e5de661319bb","Type":"ContainerDied","Data":"5a74043e2f16e3024a4f2ed6f0c9502985ad493a8f1362a42f34265b2e50d313"} Mar 12 13:33:03 crc kubenswrapper[4778]: I0312 13:33:03.274518 4778 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/cinder-db-sync-d5pl9" event={"ID":"bb110a1e-6281-437d-b857-eb79c4953e1a","Type":"ContainerStarted","Data":"4711a6f852c8bf6a8fa62e985008d918b7971ec55784fb38d2f086199f1f3aee"} Mar 12 13:33:03 crc kubenswrapper[4778]: I0312 13:33:03.274556 4778 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 13:33:03 crc kubenswrapper[4778]: I0312 13:33:03.274545 4778 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 13:33:03 crc kubenswrapper[4778]: I0312 13:33:03.298966 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-d5pl9" podStartSLOduration=2.795487831 podStartE2EDuration="43.298948744s" podCreationTimestamp="2026-03-12 13:32:20 +0000 UTC" firstStartedPulling="2026-03-12 13:32:21.516338667 +0000 UTC m=+1359.965034053" lastFinishedPulling="2026-03-12 13:33:02.01979957 +0000 UTC m=+1400.468494966" observedRunningTime="2026-03-12 13:33:03.292950454 +0000 UTC m=+1401.741645870" watchObservedRunningTime="2026-03-12 13:33:03.298948744 +0000 UTC m=+1401.747644160" Mar 12 13:33:03 crc kubenswrapper[4778]: I0312 13:33:03.335309 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 12 13:33:03 crc kubenswrapper[4778]: I0312 13:33:03.459024 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 12 13:33:03 crc kubenswrapper[4778]: I0312 13:33:03.718227 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-p59s9" Mar 12 13:33:03 crc kubenswrapper[4778]: I0312 13:33:03.807853 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a682334f-73c0-4e38-8f95-e5de661319bb-db-sync-config-data\") pod \"a682334f-73c0-4e38-8f95-e5de661319bb\" (UID: \"a682334f-73c0-4e38-8f95-e5de661319bb\") " Mar 12 13:33:03 crc kubenswrapper[4778]: I0312 13:33:03.807988 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a682334f-73c0-4e38-8f95-e5de661319bb-combined-ca-bundle\") pod \"a682334f-73c0-4e38-8f95-e5de661319bb\" (UID: \"a682334f-73c0-4e38-8f95-e5de661319bb\") " Mar 12 13:33:03 crc kubenswrapper[4778]: I0312 13:33:03.808017 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5hmn\" (UniqueName: \"kubernetes.io/projected/a682334f-73c0-4e38-8f95-e5de661319bb-kube-api-access-r5hmn\") pod \"a682334f-73c0-4e38-8f95-e5de661319bb\" (UID: \"a682334f-73c0-4e38-8f95-e5de661319bb\") " Mar 12 13:33:03 crc kubenswrapper[4778]: I0312 13:33:03.834430 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a682334f-73c0-4e38-8f95-e5de661319bb-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a682334f-73c0-4e38-8f95-e5de661319bb" (UID: "a682334f-73c0-4e38-8f95-e5de661319bb"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:33:03 crc kubenswrapper[4778]: I0312 13:33:03.834678 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a682334f-73c0-4e38-8f95-e5de661319bb-kube-api-access-r5hmn" (OuterVolumeSpecName: "kube-api-access-r5hmn") pod "a682334f-73c0-4e38-8f95-e5de661319bb" (UID: "a682334f-73c0-4e38-8f95-e5de661319bb"). 
InnerVolumeSpecName "kube-api-access-r5hmn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:33:03 crc kubenswrapper[4778]: I0312 13:33:03.840168 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a682334f-73c0-4e38-8f95-e5de661319bb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a682334f-73c0-4e38-8f95-e5de661319bb" (UID: "a682334f-73c0-4e38-8f95-e5de661319bb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:33:03 crc kubenswrapper[4778]: I0312 13:33:03.918245 4778 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a682334f-73c0-4e38-8f95-e5de661319bb-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 13:33:03 crc kubenswrapper[4778]: I0312 13:33:03.918284 4778 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a682334f-73c0-4e38-8f95-e5de661319bb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:33:03 crc kubenswrapper[4778]: I0312 13:33:03.918295 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5hmn\" (UniqueName: \"kubernetes.io/projected/a682334f-73c0-4e38-8f95-e5de661319bb-kube-api-access-r5hmn\") on node \"crc\" DevicePath \"\"" Mar 12 13:33:04 crc kubenswrapper[4778]: I0312 13:33:04.288788 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-p59s9" Mar 12 13:33:04 crc kubenswrapper[4778]: I0312 13:33:04.288905 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-p59s9" event={"ID":"a682334f-73c0-4e38-8f95-e5de661319bb","Type":"ContainerDied","Data":"9752a8239a23597303e4c0af125d25d5be143749ecb830c3912a0cbc8277763f"} Mar 12 13:33:04 crc kubenswrapper[4778]: I0312 13:33:04.288954 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9752a8239a23597303e4c0af125d25d5be143749ecb830c3912a0cbc8277763f" Mar 12 13:33:04 crc kubenswrapper[4778]: I0312 13:33:04.522308 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-7dcf9787-ngc87"] Mar 12 13:33:04 crc kubenswrapper[4778]: E0312 13:33:04.522665 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a682334f-73c0-4e38-8f95-e5de661319bb" containerName="barbican-db-sync" Mar 12 13:33:04 crc kubenswrapper[4778]: I0312 13:33:04.522681 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="a682334f-73c0-4e38-8f95-e5de661319bb" containerName="barbican-db-sync" Mar 12 13:33:04 crc kubenswrapper[4778]: I0312 13:33:04.522875 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="a682334f-73c0-4e38-8f95-e5de661319bb" containerName="barbican-db-sync" Mar 12 13:33:04 crc kubenswrapper[4778]: I0312 13:33:04.523728 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-7dcf9787-ngc87" Mar 12 13:33:04 crc kubenswrapper[4778]: I0312 13:33:04.526526 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Mar 12 13:33:04 crc kubenswrapper[4778]: I0312 13:33:04.526607 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Mar 12 13:33:04 crc kubenswrapper[4778]: I0312 13:33:04.527279 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-85xbx" Mar 12 13:33:04 crc kubenswrapper[4778]: I0312 13:33:04.604248 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7dcf9787-ngc87"] Mar 12 13:33:04 crc kubenswrapper[4778]: I0312 13:33:04.616957 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-65c9994dfd-xznqh"] Mar 12 13:33:04 crc kubenswrapper[4778]: I0312 13:33:04.626281 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-65c9994dfd-xznqh" Mar 12 13:33:04 crc kubenswrapper[4778]: I0312 13:33:04.628673 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Mar 12 13:33:04 crc kubenswrapper[4778]: I0312 13:33:04.637241 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d505bb59-3c9e-4cfa-891c-c8e0068e2567-config-data-custom\") pod \"barbican-worker-7dcf9787-ngc87\" (UID: \"d505bb59-3c9e-4cfa-891c-c8e0068e2567\") " pod="openstack/barbican-worker-7dcf9787-ngc87" Mar 12 13:33:04 crc kubenswrapper[4778]: I0312 13:33:04.637483 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d505bb59-3c9e-4cfa-891c-c8e0068e2567-logs\") pod \"barbican-worker-7dcf9787-ngc87\" (UID: \"d505bb59-3c9e-4cfa-891c-c8e0068e2567\") " pod="openstack/barbican-worker-7dcf9787-ngc87" Mar 12 13:33:04 crc kubenswrapper[4778]: I0312 13:33:04.637666 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d505bb59-3c9e-4cfa-891c-c8e0068e2567-combined-ca-bundle\") pod \"barbican-worker-7dcf9787-ngc87\" (UID: \"d505bb59-3c9e-4cfa-891c-c8e0068e2567\") " pod="openstack/barbican-worker-7dcf9787-ngc87" Mar 12 13:33:04 crc kubenswrapper[4778]: I0312 13:33:04.637805 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d505bb59-3c9e-4cfa-891c-c8e0068e2567-config-data\") pod \"barbican-worker-7dcf9787-ngc87\" (UID: \"d505bb59-3c9e-4cfa-891c-c8e0068e2567\") " pod="openstack/barbican-worker-7dcf9787-ngc87" Mar 12 13:33:04 crc kubenswrapper[4778]: I0312 13:33:04.637914 4778 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxbbw\" (UniqueName: \"kubernetes.io/projected/d505bb59-3c9e-4cfa-891c-c8e0068e2567-kube-api-access-zxbbw\") pod \"barbican-worker-7dcf9787-ngc87\" (UID: \"d505bb59-3c9e-4cfa-891c-c8e0068e2567\") " pod="openstack/barbican-worker-7dcf9787-ngc87" Mar 12 13:33:04 crc kubenswrapper[4778]: I0312 13:33:04.652114 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-65c9994dfd-xznqh"] Mar 12 13:33:04 crc kubenswrapper[4778]: I0312 13:33:04.740330 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ee1f546-8428-4b23-93e4-b8370fd4224b-combined-ca-bundle\") pod \"barbican-keystone-listener-65c9994dfd-xznqh\" (UID: \"8ee1f546-8428-4b23-93e4-b8370fd4224b\") " pod="openstack/barbican-keystone-listener-65c9994dfd-xznqh" Mar 12 13:33:04 crc kubenswrapper[4778]: I0312 13:33:04.740406 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ee1f546-8428-4b23-93e4-b8370fd4224b-config-data-custom\") pod \"barbican-keystone-listener-65c9994dfd-xznqh\" (UID: \"8ee1f546-8428-4b23-93e4-b8370fd4224b\") " pod="openstack/barbican-keystone-listener-65c9994dfd-xznqh" Mar 12 13:33:04 crc kubenswrapper[4778]: I0312 13:33:04.740448 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d505bb59-3c9e-4cfa-891c-c8e0068e2567-combined-ca-bundle\") pod \"barbican-worker-7dcf9787-ngc87\" (UID: \"d505bb59-3c9e-4cfa-891c-c8e0068e2567\") " pod="openstack/barbican-worker-7dcf9787-ngc87" Mar 12 13:33:04 crc kubenswrapper[4778]: I0312 13:33:04.740484 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d505bb59-3c9e-4cfa-891c-c8e0068e2567-config-data\") pod \"barbican-worker-7dcf9787-ngc87\" (UID: \"d505bb59-3c9e-4cfa-891c-c8e0068e2567\") " pod="openstack/barbican-worker-7dcf9787-ngc87" Mar 12 13:33:04 crc kubenswrapper[4778]: I0312 13:33:04.740530 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ee1f546-8428-4b23-93e4-b8370fd4224b-logs\") pod \"barbican-keystone-listener-65c9994dfd-xznqh\" (UID: \"8ee1f546-8428-4b23-93e4-b8370fd4224b\") " pod="openstack/barbican-keystone-listener-65c9994dfd-xznqh" Mar 12 13:33:04 crc kubenswrapper[4778]: I0312 13:33:04.740557 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ee1f546-8428-4b23-93e4-b8370fd4224b-config-data\") pod \"barbican-keystone-listener-65c9994dfd-xznqh\" (UID: \"8ee1f546-8428-4b23-93e4-b8370fd4224b\") " pod="openstack/barbican-keystone-listener-65c9994dfd-xznqh" Mar 12 13:33:04 crc kubenswrapper[4778]: I0312 13:33:04.740581 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxbbw\" (UniqueName: \"kubernetes.io/projected/d505bb59-3c9e-4cfa-891c-c8e0068e2567-kube-api-access-zxbbw\") pod \"barbican-worker-7dcf9787-ngc87\" (UID: \"d505bb59-3c9e-4cfa-891c-c8e0068e2567\") " pod="openstack/barbican-worker-7dcf9787-ngc87" Mar 12 13:33:04 crc kubenswrapper[4778]: I0312 13:33:04.740632 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d505bb59-3c9e-4cfa-891c-c8e0068e2567-config-data-custom\") pod \"barbican-worker-7dcf9787-ngc87\" (UID: \"d505bb59-3c9e-4cfa-891c-c8e0068e2567\") " pod="openstack/barbican-worker-7dcf9787-ngc87" Mar 12 13:33:04 crc kubenswrapper[4778]: I0312 13:33:04.740669 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d505bb59-3c9e-4cfa-891c-c8e0068e2567-logs\") pod \"barbican-worker-7dcf9787-ngc87\" (UID: \"d505bb59-3c9e-4cfa-891c-c8e0068e2567\") " pod="openstack/barbican-worker-7dcf9787-ngc87" Mar 12 13:33:04 crc kubenswrapper[4778]: I0312 13:33:04.740716 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqrsf\" (UniqueName: \"kubernetes.io/projected/8ee1f546-8428-4b23-93e4-b8370fd4224b-kube-api-access-qqrsf\") pod \"barbican-keystone-listener-65c9994dfd-xznqh\" (UID: \"8ee1f546-8428-4b23-93e4-b8370fd4224b\") " pod="openstack/barbican-keystone-listener-65c9994dfd-xznqh" Mar 12 13:33:04 crc kubenswrapper[4778]: I0312 13:33:04.742293 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d505bb59-3c9e-4cfa-891c-c8e0068e2567-logs\") pod \"barbican-worker-7dcf9787-ngc87\" (UID: \"d505bb59-3c9e-4cfa-891c-c8e0068e2567\") " pod="openstack/barbican-worker-7dcf9787-ngc87" Mar 12 13:33:04 crc kubenswrapper[4778]: I0312 13:33:04.746207 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d505bb59-3c9e-4cfa-891c-c8e0068e2567-config-data-custom\") pod \"barbican-worker-7dcf9787-ngc87\" (UID: \"d505bb59-3c9e-4cfa-891c-c8e0068e2567\") " pod="openstack/barbican-worker-7dcf9787-ngc87" Mar 12 13:33:04 crc kubenswrapper[4778]: I0312 13:33:04.751358 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d505bb59-3c9e-4cfa-891c-c8e0068e2567-config-data\") pod \"barbican-worker-7dcf9787-ngc87\" (UID: \"d505bb59-3c9e-4cfa-891c-c8e0068e2567\") " pod="openstack/barbican-worker-7dcf9787-ngc87" Mar 12 13:33:04 crc kubenswrapper[4778]: I0312 13:33:04.756381 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-r6j6b"] Mar 12 13:33:04 
crc kubenswrapper[4778]: I0312 13:33:04.756618 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-r6j6b" podUID="f26a6d05-e0ac-4f17-bcd9-fc011996b052" containerName="dnsmasq-dns" containerID="cri-o://601d1f3845ef933f076859b186d3267da0a7df161ebfa472c4f14f7e9cbd4ec0" gracePeriod=10 Mar 12 13:33:04 crc kubenswrapper[4778]: I0312 13:33:04.762133 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-r6j6b" Mar 12 13:33:04 crc kubenswrapper[4778]: I0312 13:33:04.780602 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d505bb59-3c9e-4cfa-891c-c8e0068e2567-combined-ca-bundle\") pod \"barbican-worker-7dcf9787-ngc87\" (UID: \"d505bb59-3c9e-4cfa-891c-c8e0068e2567\") " pod="openstack/barbican-worker-7dcf9787-ngc87" Mar 12 13:33:04 crc kubenswrapper[4778]: I0312 13:33:04.782383 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-pckv7"] Mar 12 13:33:04 crc kubenswrapper[4778]: I0312 13:33:04.786959 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-pckv7" Mar 12 13:33:04 crc kubenswrapper[4778]: I0312 13:33:04.805499 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxbbw\" (UniqueName: \"kubernetes.io/projected/d505bb59-3c9e-4cfa-891c-c8e0068e2567-kube-api-access-zxbbw\") pod \"barbican-worker-7dcf9787-ngc87\" (UID: \"d505bb59-3c9e-4cfa-891c-c8e0068e2567\") " pod="openstack/barbican-worker-7dcf9787-ngc87" Mar 12 13:33:04 crc kubenswrapper[4778]: I0312 13:33:04.832248 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-pckv7"] Mar 12 13:33:04 crc kubenswrapper[4778]: I0312 13:33:04.842434 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ee1f546-8428-4b23-93e4-b8370fd4224b-logs\") pod \"barbican-keystone-listener-65c9994dfd-xznqh\" (UID: \"8ee1f546-8428-4b23-93e4-b8370fd4224b\") " pod="openstack/barbican-keystone-listener-65c9994dfd-xznqh" Mar 12 13:33:04 crc kubenswrapper[4778]: I0312 13:33:04.842496 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ee1f546-8428-4b23-93e4-b8370fd4224b-config-data\") pod \"barbican-keystone-listener-65c9994dfd-xznqh\" (UID: \"8ee1f546-8428-4b23-93e4-b8370fd4224b\") " pod="openstack/barbican-keystone-listener-65c9994dfd-xznqh" Mar 12 13:33:04 crc kubenswrapper[4778]: I0312 13:33:04.842564 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqrsf\" (UniqueName: \"kubernetes.io/projected/8ee1f546-8428-4b23-93e4-b8370fd4224b-kube-api-access-qqrsf\") pod \"barbican-keystone-listener-65c9994dfd-xznqh\" (UID: \"8ee1f546-8428-4b23-93e4-b8370fd4224b\") " pod="openstack/barbican-keystone-listener-65c9994dfd-xznqh" Mar 12 13:33:04 crc kubenswrapper[4778]: I0312 13:33:04.842621 4778 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ee1f546-8428-4b23-93e4-b8370fd4224b-combined-ca-bundle\") pod \"barbican-keystone-listener-65c9994dfd-xznqh\" (UID: \"8ee1f546-8428-4b23-93e4-b8370fd4224b\") " pod="openstack/barbican-keystone-listener-65c9994dfd-xznqh" Mar 12 13:33:04 crc kubenswrapper[4778]: I0312 13:33:04.842645 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ee1f546-8428-4b23-93e4-b8370fd4224b-config-data-custom\") pod \"barbican-keystone-listener-65c9994dfd-xznqh\" (UID: \"8ee1f546-8428-4b23-93e4-b8370fd4224b\") " pod="openstack/barbican-keystone-listener-65c9994dfd-xznqh" Mar 12 13:33:04 crc kubenswrapper[4778]: I0312 13:33:04.860615 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ee1f546-8428-4b23-93e4-b8370fd4224b-config-data-custom\") pod \"barbican-keystone-listener-65c9994dfd-xznqh\" (UID: \"8ee1f546-8428-4b23-93e4-b8370fd4224b\") " pod="openstack/barbican-keystone-listener-65c9994dfd-xznqh" Mar 12 13:33:04 crc kubenswrapper[4778]: I0312 13:33:04.861505 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ee1f546-8428-4b23-93e4-b8370fd4224b-logs\") pod \"barbican-keystone-listener-65c9994dfd-xznqh\" (UID: \"8ee1f546-8428-4b23-93e4-b8370fd4224b\") " pod="openstack/barbican-keystone-listener-65c9994dfd-xznqh" Mar 12 13:33:04 crc kubenswrapper[4778]: I0312 13:33:04.861532 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-7dcf9787-ngc87" Mar 12 13:33:04 crc kubenswrapper[4778]: I0312 13:33:04.862343 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ee1f546-8428-4b23-93e4-b8370fd4224b-combined-ca-bundle\") pod \"barbican-keystone-listener-65c9994dfd-xznqh\" (UID: \"8ee1f546-8428-4b23-93e4-b8370fd4224b\") " pod="openstack/barbican-keystone-listener-65c9994dfd-xznqh" Mar 12 13:33:04 crc kubenswrapper[4778]: I0312 13:33:04.886019 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ee1f546-8428-4b23-93e4-b8370fd4224b-config-data\") pod \"barbican-keystone-listener-65c9994dfd-xznqh\" (UID: \"8ee1f546-8428-4b23-93e4-b8370fd4224b\") " pod="openstack/barbican-keystone-listener-65c9994dfd-xznqh" Mar 12 13:33:04 crc kubenswrapper[4778]: I0312 13:33:04.915990 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqrsf\" (UniqueName: \"kubernetes.io/projected/8ee1f546-8428-4b23-93e4-b8370fd4224b-kube-api-access-qqrsf\") pod \"barbican-keystone-listener-65c9994dfd-xznqh\" (UID: \"8ee1f546-8428-4b23-93e4-b8370fd4224b\") " pod="openstack/barbican-keystone-listener-65c9994dfd-xznqh" Mar 12 13:33:04 crc kubenswrapper[4778]: I0312 13:33:04.937659 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5f884f5564-dxzpv"] Mar 12 13:33:04 crc kubenswrapper[4778]: I0312 13:33:04.939743 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5f884f5564-dxzpv" Mar 12 13:33:04 crc kubenswrapper[4778]: I0312 13:33:04.947827 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0c667b0e-f02d-4e71-959f-5d24b702bd73-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-pckv7\" (UID: \"0c667b0e-f02d-4e71-959f-5d24b702bd73\") " pod="openstack/dnsmasq-dns-85ff748b95-pckv7" Mar 12 13:33:04 crc kubenswrapper[4778]: I0312 13:33:04.947872 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0c667b0e-f02d-4e71-959f-5d24b702bd73-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-pckv7\" (UID: \"0c667b0e-f02d-4e71-959f-5d24b702bd73\") " pod="openstack/dnsmasq-dns-85ff748b95-pckv7" Mar 12 13:33:04 crc kubenswrapper[4778]: I0312 13:33:04.947905 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8tvn\" (UniqueName: \"kubernetes.io/projected/0c667b0e-f02d-4e71-959f-5d24b702bd73-kube-api-access-j8tvn\") pod \"dnsmasq-dns-85ff748b95-pckv7\" (UID: \"0c667b0e-f02d-4e71-959f-5d24b702bd73\") " pod="openstack/dnsmasq-dns-85ff748b95-pckv7" Mar 12 13:33:04 crc kubenswrapper[4778]: I0312 13:33:04.947965 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c667b0e-f02d-4e71-959f-5d24b702bd73-dns-svc\") pod \"dnsmasq-dns-85ff748b95-pckv7\" (UID: \"0c667b0e-f02d-4e71-959f-5d24b702bd73\") " pod="openstack/dnsmasq-dns-85ff748b95-pckv7" Mar 12 13:33:04 crc kubenswrapper[4778]: I0312 13:33:04.948002 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c667b0e-f02d-4e71-959f-5d24b702bd73-config\") pod 
\"dnsmasq-dns-85ff748b95-pckv7\" (UID: \"0c667b0e-f02d-4e71-959f-5d24b702bd73\") " pod="openstack/dnsmasq-dns-85ff748b95-pckv7" Mar 12 13:33:04 crc kubenswrapper[4778]: I0312 13:33:04.948022 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c667b0e-f02d-4e71-959f-5d24b702bd73-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-pckv7\" (UID: \"0c667b0e-f02d-4e71-959f-5d24b702bd73\") " pod="openstack/dnsmasq-dns-85ff748b95-pckv7" Mar 12 13:33:04 crc kubenswrapper[4778]: I0312 13:33:04.949428 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Mar 12 13:33:04 crc kubenswrapper[4778]: I0312 13:33:04.961616 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-65c9994dfd-xznqh" Mar 12 13:33:04 crc kubenswrapper[4778]: I0312 13:33:04.971904 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5f884f5564-dxzpv"] Mar 12 13:33:05 crc kubenswrapper[4778]: I0312 13:33:05.050104 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef2e3c21-ccc6-4dcc-a476-7393bb481441-combined-ca-bundle\") pod \"barbican-api-5f884f5564-dxzpv\" (UID: \"ef2e3c21-ccc6-4dcc-a476-7393bb481441\") " pod="openstack/barbican-api-5f884f5564-dxzpv" Mar 12 13:33:05 crc kubenswrapper[4778]: I0312 13:33:05.050893 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c667b0e-f02d-4e71-959f-5d24b702bd73-dns-svc\") pod \"dnsmasq-dns-85ff748b95-pckv7\" (UID: \"0c667b0e-f02d-4e71-959f-5d24b702bd73\") " pod="openstack/dnsmasq-dns-85ff748b95-pckv7" Mar 12 13:33:05 crc kubenswrapper[4778]: I0312 13:33:05.050929 4778 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czsp7\" (UniqueName: \"kubernetes.io/projected/ef2e3c21-ccc6-4dcc-a476-7393bb481441-kube-api-access-czsp7\") pod \"barbican-api-5f884f5564-dxzpv\" (UID: \"ef2e3c21-ccc6-4dcc-a476-7393bb481441\") " pod="openstack/barbican-api-5f884f5564-dxzpv" Mar 12 13:33:05 crc kubenswrapper[4778]: I0312 13:33:05.050968 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef2e3c21-ccc6-4dcc-a476-7393bb481441-config-data\") pod \"barbican-api-5f884f5564-dxzpv\" (UID: \"ef2e3c21-ccc6-4dcc-a476-7393bb481441\") " pod="openstack/barbican-api-5f884f5564-dxzpv" Mar 12 13:33:05 crc kubenswrapper[4778]: I0312 13:33:05.050994 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c667b0e-f02d-4e71-959f-5d24b702bd73-config\") pod \"dnsmasq-dns-85ff748b95-pckv7\" (UID: \"0c667b0e-f02d-4e71-959f-5d24b702bd73\") " pod="openstack/dnsmasq-dns-85ff748b95-pckv7" Mar 12 13:33:05 crc kubenswrapper[4778]: I0312 13:33:05.051021 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c667b0e-f02d-4e71-959f-5d24b702bd73-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-pckv7\" (UID: \"0c667b0e-f02d-4e71-959f-5d24b702bd73\") " pod="openstack/dnsmasq-dns-85ff748b95-pckv7" Mar 12 13:33:05 crc kubenswrapper[4778]: I0312 13:33:05.051076 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ef2e3c21-ccc6-4dcc-a476-7393bb481441-config-data-custom\") pod \"barbican-api-5f884f5564-dxzpv\" (UID: \"ef2e3c21-ccc6-4dcc-a476-7393bb481441\") " pod="openstack/barbican-api-5f884f5564-dxzpv" Mar 12 13:33:05 crc kubenswrapper[4778]: I0312 13:33:05.051122 4778 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef2e3c21-ccc6-4dcc-a476-7393bb481441-logs\") pod \"barbican-api-5f884f5564-dxzpv\" (UID: \"ef2e3c21-ccc6-4dcc-a476-7393bb481441\") " pod="openstack/barbican-api-5f884f5564-dxzpv" Mar 12 13:33:05 crc kubenswrapper[4778]: I0312 13:33:05.051158 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0c667b0e-f02d-4e71-959f-5d24b702bd73-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-pckv7\" (UID: \"0c667b0e-f02d-4e71-959f-5d24b702bd73\") " pod="openstack/dnsmasq-dns-85ff748b95-pckv7" Mar 12 13:33:05 crc kubenswrapper[4778]: I0312 13:33:05.051177 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0c667b0e-f02d-4e71-959f-5d24b702bd73-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-pckv7\" (UID: \"0c667b0e-f02d-4e71-959f-5d24b702bd73\") " pod="openstack/dnsmasq-dns-85ff748b95-pckv7" Mar 12 13:33:05 crc kubenswrapper[4778]: I0312 13:33:05.051231 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8tvn\" (UniqueName: \"kubernetes.io/projected/0c667b0e-f02d-4e71-959f-5d24b702bd73-kube-api-access-j8tvn\") pod \"dnsmasq-dns-85ff748b95-pckv7\" (UID: \"0c667b0e-f02d-4e71-959f-5d24b702bd73\") " pod="openstack/dnsmasq-dns-85ff748b95-pckv7" Mar 12 13:33:05 crc kubenswrapper[4778]: I0312 13:33:05.052424 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c667b0e-f02d-4e71-959f-5d24b702bd73-config\") pod \"dnsmasq-dns-85ff748b95-pckv7\" (UID: \"0c667b0e-f02d-4e71-959f-5d24b702bd73\") " pod="openstack/dnsmasq-dns-85ff748b95-pckv7" Mar 12 13:33:05 crc kubenswrapper[4778]: I0312 13:33:05.052841 4778 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c667b0e-f02d-4e71-959f-5d24b702bd73-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-pckv7\" (UID: \"0c667b0e-f02d-4e71-959f-5d24b702bd73\") " pod="openstack/dnsmasq-dns-85ff748b95-pckv7" Mar 12 13:33:05 crc kubenswrapper[4778]: I0312 13:33:05.052963 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0c667b0e-f02d-4e71-959f-5d24b702bd73-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-pckv7\" (UID: \"0c667b0e-f02d-4e71-959f-5d24b702bd73\") " pod="openstack/dnsmasq-dns-85ff748b95-pckv7" Mar 12 13:33:05 crc kubenswrapper[4778]: I0312 13:33:05.053144 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c667b0e-f02d-4e71-959f-5d24b702bd73-dns-svc\") pod \"dnsmasq-dns-85ff748b95-pckv7\" (UID: \"0c667b0e-f02d-4e71-959f-5d24b702bd73\") " pod="openstack/dnsmasq-dns-85ff748b95-pckv7" Mar 12 13:33:05 crc kubenswrapper[4778]: I0312 13:33:05.053425 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0c667b0e-f02d-4e71-959f-5d24b702bd73-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-pckv7\" (UID: \"0c667b0e-f02d-4e71-959f-5d24b702bd73\") " pod="openstack/dnsmasq-dns-85ff748b95-pckv7" Mar 12 13:33:05 crc kubenswrapper[4778]: I0312 13:33:05.086069 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8tvn\" (UniqueName: \"kubernetes.io/projected/0c667b0e-f02d-4e71-959f-5d24b702bd73-kube-api-access-j8tvn\") pod \"dnsmasq-dns-85ff748b95-pckv7\" (UID: \"0c667b0e-f02d-4e71-959f-5d24b702bd73\") " pod="openstack/dnsmasq-dns-85ff748b95-pckv7" Mar 12 13:33:05 crc kubenswrapper[4778]: I0312 13:33:05.107638 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-pckv7" Mar 12 13:33:05 crc kubenswrapper[4778]: I0312 13:33:05.153089 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ef2e3c21-ccc6-4dcc-a476-7393bb481441-config-data-custom\") pod \"barbican-api-5f884f5564-dxzpv\" (UID: \"ef2e3c21-ccc6-4dcc-a476-7393bb481441\") " pod="openstack/barbican-api-5f884f5564-dxzpv" Mar 12 13:33:05 crc kubenswrapper[4778]: I0312 13:33:05.153150 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef2e3c21-ccc6-4dcc-a476-7393bb481441-logs\") pod \"barbican-api-5f884f5564-dxzpv\" (UID: \"ef2e3c21-ccc6-4dcc-a476-7393bb481441\") " pod="openstack/barbican-api-5f884f5564-dxzpv" Mar 12 13:33:05 crc kubenswrapper[4778]: I0312 13:33:05.153246 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef2e3c21-ccc6-4dcc-a476-7393bb481441-combined-ca-bundle\") pod \"barbican-api-5f884f5564-dxzpv\" (UID: \"ef2e3c21-ccc6-4dcc-a476-7393bb481441\") " pod="openstack/barbican-api-5f884f5564-dxzpv" Mar 12 13:33:05 crc kubenswrapper[4778]: I0312 13:33:05.153277 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czsp7\" (UniqueName: \"kubernetes.io/projected/ef2e3c21-ccc6-4dcc-a476-7393bb481441-kube-api-access-czsp7\") pod \"barbican-api-5f884f5564-dxzpv\" (UID: \"ef2e3c21-ccc6-4dcc-a476-7393bb481441\") " pod="openstack/barbican-api-5f884f5564-dxzpv" Mar 12 13:33:05 crc kubenswrapper[4778]: I0312 13:33:05.153305 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef2e3c21-ccc6-4dcc-a476-7393bb481441-config-data\") pod \"barbican-api-5f884f5564-dxzpv\" (UID: \"ef2e3c21-ccc6-4dcc-a476-7393bb481441\") " 
pod="openstack/barbican-api-5f884f5564-dxzpv" Mar 12 13:33:05 crc kubenswrapper[4778]: I0312 13:33:05.154972 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef2e3c21-ccc6-4dcc-a476-7393bb481441-logs\") pod \"barbican-api-5f884f5564-dxzpv\" (UID: \"ef2e3c21-ccc6-4dcc-a476-7393bb481441\") " pod="openstack/barbican-api-5f884f5564-dxzpv" Mar 12 13:33:05 crc kubenswrapper[4778]: I0312 13:33:05.162321 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ef2e3c21-ccc6-4dcc-a476-7393bb481441-config-data-custom\") pod \"barbican-api-5f884f5564-dxzpv\" (UID: \"ef2e3c21-ccc6-4dcc-a476-7393bb481441\") " pod="openstack/barbican-api-5f884f5564-dxzpv" Mar 12 13:33:05 crc kubenswrapper[4778]: I0312 13:33:05.163244 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef2e3c21-ccc6-4dcc-a476-7393bb481441-config-data\") pod \"barbican-api-5f884f5564-dxzpv\" (UID: \"ef2e3c21-ccc6-4dcc-a476-7393bb481441\") " pod="openstack/barbican-api-5f884f5564-dxzpv" Mar 12 13:33:05 crc kubenswrapper[4778]: I0312 13:33:05.167001 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef2e3c21-ccc6-4dcc-a476-7393bb481441-combined-ca-bundle\") pod \"barbican-api-5f884f5564-dxzpv\" (UID: \"ef2e3c21-ccc6-4dcc-a476-7393bb481441\") " pod="openstack/barbican-api-5f884f5564-dxzpv" Mar 12 13:33:05 crc kubenswrapper[4778]: I0312 13:33:05.176233 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czsp7\" (UniqueName: \"kubernetes.io/projected/ef2e3c21-ccc6-4dcc-a476-7393bb481441-kube-api-access-czsp7\") pod \"barbican-api-5f884f5564-dxzpv\" (UID: \"ef2e3c21-ccc6-4dcc-a476-7393bb481441\") " pod="openstack/barbican-api-5f884f5564-dxzpv" Mar 12 13:33:05 crc kubenswrapper[4778]: 
I0312 13:33:05.421014 4778 generic.go:334] "Generic (PLEG): container finished" podID="f26a6d05-e0ac-4f17-bcd9-fc011996b052" containerID="601d1f3845ef933f076859b186d3267da0a7df161ebfa472c4f14f7e9cbd4ec0" exitCode=0
Mar 12 13:33:05 crc kubenswrapper[4778]: I0312 13:33:05.421065 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-r6j6b" event={"ID":"f26a6d05-e0ac-4f17-bcd9-fc011996b052","Type":"ContainerDied","Data":"601d1f3845ef933f076859b186d3267da0a7df161ebfa472c4f14f7e9cbd4ec0"}
Mar 12 13:33:05 crc kubenswrapper[4778]: I0312 13:33:05.427228 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5f884f5564-dxzpv"
Mar 12 13:33:05 crc kubenswrapper[4778]: I0312 13:33:05.715875 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7dcf9787-ngc87"]
Mar 12 13:33:05 crc kubenswrapper[4778]: I0312 13:33:05.944343 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-r6j6b"
Mar 12 13:33:05 crc kubenswrapper[4778]: I0312 13:33:05.950121 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-65c9994dfd-xznqh"]
Mar 12 13:33:06 crc kubenswrapper[4778]: I0312 13:33:06.047665 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f26a6d05-e0ac-4f17-bcd9-fc011996b052-ovsdbserver-nb\") pod \"f26a6d05-e0ac-4f17-bcd9-fc011996b052\" (UID: \"f26a6d05-e0ac-4f17-bcd9-fc011996b052\") "
Mar 12 13:33:06 crc kubenswrapper[4778]: I0312 13:33:06.048418 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czvqb\" (UniqueName: \"kubernetes.io/projected/f26a6d05-e0ac-4f17-bcd9-fc011996b052-kube-api-access-czvqb\") pod \"f26a6d05-e0ac-4f17-bcd9-fc011996b052\" (UID: \"f26a6d05-e0ac-4f17-bcd9-fc011996b052\") "
Mar 12 13:33:06 crc kubenswrapper[4778]: I0312 13:33:06.048530 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f26a6d05-e0ac-4f17-bcd9-fc011996b052-dns-swift-storage-0\") pod \"f26a6d05-e0ac-4f17-bcd9-fc011996b052\" (UID: \"f26a6d05-e0ac-4f17-bcd9-fc011996b052\") "
Mar 12 13:33:06 crc kubenswrapper[4778]: I0312 13:33:06.048566 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f26a6d05-e0ac-4f17-bcd9-fc011996b052-config\") pod \"f26a6d05-e0ac-4f17-bcd9-fc011996b052\" (UID: \"f26a6d05-e0ac-4f17-bcd9-fc011996b052\") "
Mar 12 13:33:06 crc kubenswrapper[4778]: I0312 13:33:06.048626 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f26a6d05-e0ac-4f17-bcd9-fc011996b052-ovsdbserver-sb\") pod \"f26a6d05-e0ac-4f17-bcd9-fc011996b052\" (UID: \"f26a6d05-e0ac-4f17-bcd9-fc011996b052\") "
Mar 12 13:33:06 crc kubenswrapper[4778]: I0312 13:33:06.048665 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f26a6d05-e0ac-4f17-bcd9-fc011996b052-dns-svc\") pod \"f26a6d05-e0ac-4f17-bcd9-fc011996b052\" (UID: \"f26a6d05-e0ac-4f17-bcd9-fc011996b052\") "
Mar 12 13:33:06 crc kubenswrapper[4778]: I0312 13:33:06.075450 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f26a6d05-e0ac-4f17-bcd9-fc011996b052-kube-api-access-czvqb" (OuterVolumeSpecName: "kube-api-access-czvqb") pod "f26a6d05-e0ac-4f17-bcd9-fc011996b052" (UID: "f26a6d05-e0ac-4f17-bcd9-fc011996b052"). InnerVolumeSpecName "kube-api-access-czvqb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 13:33:06 crc kubenswrapper[4778]: I0312 13:33:06.113167 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f26a6d05-e0ac-4f17-bcd9-fc011996b052-config" (OuterVolumeSpecName: "config") pod "f26a6d05-e0ac-4f17-bcd9-fc011996b052" (UID: "f26a6d05-e0ac-4f17-bcd9-fc011996b052"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 13:33:06 crc kubenswrapper[4778]: I0312 13:33:06.120680 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5f884f5564-dxzpv"]
Mar 12 13:33:06 crc kubenswrapper[4778]: W0312 13:33:06.124597 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef2e3c21_ccc6_4dcc_a476_7393bb481441.slice/crio-91360286d4706715ddbf7b7dd1e71ab18f2b12552f2316ff72136087f9c79c95 WatchSource:0}: Error finding container 91360286d4706715ddbf7b7dd1e71ab18f2b12552f2316ff72136087f9c79c95: Status 404 returned error can't find the container with id 91360286d4706715ddbf7b7dd1e71ab18f2b12552f2316ff72136087f9c79c95
Mar 12 13:33:06 crc kubenswrapper[4778]: I0312 13:33:06.129652 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f26a6d05-e0ac-4f17-bcd9-fc011996b052-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f26a6d05-e0ac-4f17-bcd9-fc011996b052" (UID: "f26a6d05-e0ac-4f17-bcd9-fc011996b052"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 13:33:06 crc kubenswrapper[4778]: I0312 13:33:06.138923 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f26a6d05-e0ac-4f17-bcd9-fc011996b052-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f26a6d05-e0ac-4f17-bcd9-fc011996b052" (UID: "f26a6d05-e0ac-4f17-bcd9-fc011996b052"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 13:33:06 crc kubenswrapper[4778]: I0312 13:33:06.140478 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f26a6d05-e0ac-4f17-bcd9-fc011996b052-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f26a6d05-e0ac-4f17-bcd9-fc011996b052" (UID: "f26a6d05-e0ac-4f17-bcd9-fc011996b052"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 13:33:06 crc kubenswrapper[4778]: I0312 13:33:06.151208 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czvqb\" (UniqueName: \"kubernetes.io/projected/f26a6d05-e0ac-4f17-bcd9-fc011996b052-kube-api-access-czvqb\") on node \"crc\" DevicePath \"\""
Mar 12 13:33:06 crc kubenswrapper[4778]: I0312 13:33:06.151242 4778 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f26a6d05-e0ac-4f17-bcd9-fc011996b052-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Mar 12 13:33:06 crc kubenswrapper[4778]: I0312 13:33:06.151255 4778 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f26a6d05-e0ac-4f17-bcd9-fc011996b052-config\") on node \"crc\" DevicePath \"\""
Mar 12 13:33:06 crc kubenswrapper[4778]: I0312 13:33:06.151264 4778 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f26a6d05-e0ac-4f17-bcd9-fc011996b052-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Mar 12 13:33:06 crc kubenswrapper[4778]: I0312 13:33:06.151272 4778 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f26a6d05-e0ac-4f17-bcd9-fc011996b052-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Mar 12 13:33:06 crc kubenswrapper[4778]: I0312 13:33:06.156365 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f26a6d05-e0ac-4f17-bcd9-fc011996b052-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f26a6d05-e0ac-4f17-bcd9-fc011996b052" (UID: "f26a6d05-e0ac-4f17-bcd9-fc011996b052"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 12 13:33:06 crc kubenswrapper[4778]: I0312 13:33:06.169289 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-pckv7"]
Mar 12 13:33:06 crc kubenswrapper[4778]: I0312 13:33:06.253291 4778 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f26a6d05-e0ac-4f17-bcd9-fc011996b052-dns-svc\") on node \"crc\" DevicePath \"\""
Mar 12 13:33:06 crc kubenswrapper[4778]: I0312 13:33:06.382286 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Mar 12 13:33:06 crc kubenswrapper[4778]: I0312 13:33:06.454144 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Mar 12 13:33:06 crc kubenswrapper[4778]: I0312 13:33:06.454548 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7dcf9787-ngc87" event={"ID":"d505bb59-3c9e-4cfa-891c-c8e0068e2567","Type":"ContainerStarted","Data":"50ae9b33b5ff909778b51cf71217aac63884ec464e0459f118e6c5dcbc107d88"}
Mar 12 13:33:06 crc kubenswrapper[4778]: I0312 13:33:06.488671 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5f884f5564-dxzpv" event={"ID":"ef2e3c21-ccc6-4dcc-a476-7393bb481441","Type":"ContainerStarted","Data":"555085059a0c8494fcbd31c46657e06bdebc21317a675fa20661619d5dc02586"}
Mar 12 13:33:06 crc kubenswrapper[4778]: I0312 13:33:06.488717 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5f884f5564-dxzpv" event={"ID":"ef2e3c21-ccc6-4dcc-a476-7393bb481441","Type":"ContainerStarted","Data":"91360286d4706715ddbf7b7dd1e71ab18f2b12552f2316ff72136087f9c79c95"}
Mar 12 13:33:06 crc kubenswrapper[4778]: I0312 13:33:06.522100 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-r6j6b" event={"ID":"f26a6d05-e0ac-4f17-bcd9-fc011996b052","Type":"ContainerDied","Data":"eedcc18be187ca3b0fbc761493f2664ac917f21e94152db48e6204214d9b050b"}
Mar 12 13:33:06 crc kubenswrapper[4778]: I0312 13:33:06.522151 4778 scope.go:117] "RemoveContainer" containerID="601d1f3845ef933f076859b186d3267da0a7df161ebfa472c4f14f7e9cbd4ec0"
Mar 12 13:33:06 crc kubenswrapper[4778]: I0312 13:33:06.522350 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-r6j6b"
Mar 12 13:33:06 crc kubenswrapper[4778]: I0312 13:33:06.525686 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-pckv7" event={"ID":"0c667b0e-f02d-4e71-959f-5d24b702bd73","Type":"ContainerStarted","Data":"3d48ca4ea9f31c5d066a8775b412a93231788bd25815af466613f335e2f60484"}
Mar 12 13:33:06 crc kubenswrapper[4778]: I0312 13:33:06.525719 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-pckv7" event={"ID":"0c667b0e-f02d-4e71-959f-5d24b702bd73","Type":"ContainerStarted","Data":"155ae76e9da40b17013784a015c926481ecd3a26d03501642e4e09f600be7598"}
Mar 12 13:33:06 crc kubenswrapper[4778]: I0312 13:33:06.532066 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-65c9994dfd-xznqh" event={"ID":"8ee1f546-8428-4b23-93e4-b8370fd4224b","Type":"ContainerStarted","Data":"2255414430f846d703036cabdb509256e41f80ca7e9c35ed9c2c678cad9afbb7"}
Mar 12 13:33:06 crc kubenswrapper[4778]: I0312 13:33:06.606907 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-r6j6b"]
Mar 12 13:33:06 crc kubenswrapper[4778]: I0312 13:33:06.625520 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-r6j6b"]
Mar 12 13:33:07 crc kubenswrapper[4778]: I0312 13:33:07.549288 4778 generic.go:334] "Generic (PLEG): container finished" podID="0c667b0e-f02d-4e71-959f-5d24b702bd73" containerID="3d48ca4ea9f31c5d066a8775b412a93231788bd25815af466613f335e2f60484" exitCode=0
Mar 12 13:33:07 crc kubenswrapper[4778]: I0312 13:33:07.549391 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-pckv7" event={"ID":"0c667b0e-f02d-4e71-959f-5d24b702bd73","Type":"ContainerDied","Data":"3d48ca4ea9f31c5d066a8775b412a93231788bd25815af466613f335e2f60484"}
Mar 12 13:33:07 crc kubenswrapper[4778]: I0312 13:33:07.826226 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-86cb765474-5pq5z"]
Mar 12 13:33:07 crc kubenswrapper[4778]: E0312 13:33:07.826597 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f26a6d05-e0ac-4f17-bcd9-fc011996b052" containerName="dnsmasq-dns"
Mar 12 13:33:07 crc kubenswrapper[4778]: I0312 13:33:07.826611 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="f26a6d05-e0ac-4f17-bcd9-fc011996b052" containerName="dnsmasq-dns"
Mar 12 13:33:07 crc kubenswrapper[4778]: E0312 13:33:07.826621 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f26a6d05-e0ac-4f17-bcd9-fc011996b052" containerName="init"
Mar 12 13:33:07 crc kubenswrapper[4778]: I0312 13:33:07.826627 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="f26a6d05-e0ac-4f17-bcd9-fc011996b052" containerName="init"
Mar 12 13:33:07 crc kubenswrapper[4778]: I0312 13:33:07.826837 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="f26a6d05-e0ac-4f17-bcd9-fc011996b052" containerName="dnsmasq-dns"
Mar 12 13:33:07 crc kubenswrapper[4778]: I0312 13:33:07.828295 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-86cb765474-5pq5z"
Mar 12 13:33:07 crc kubenswrapper[4778]: I0312 13:33:07.830791 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc"
Mar 12 13:33:07 crc kubenswrapper[4778]: I0312 13:33:07.832628 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc"
Mar 12 13:33:07 crc kubenswrapper[4778]: I0312 13:33:07.852312 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-86cb765474-5pq5z"]
Mar 12 13:33:07 crc kubenswrapper[4778]: I0312 13:33:07.899919 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bd172c5-383f-4273-98a5-2c92223dc765-internal-tls-certs\") pod \"barbican-api-86cb765474-5pq5z\" (UID: \"6bd172c5-383f-4273-98a5-2c92223dc765\") " pod="openstack/barbican-api-86cb765474-5pq5z"
Mar 12 13:33:07 crc kubenswrapper[4778]: I0312 13:33:07.899966 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6bd172c5-383f-4273-98a5-2c92223dc765-config-data-custom\") pod \"barbican-api-86cb765474-5pq5z\" (UID: \"6bd172c5-383f-4273-98a5-2c92223dc765\") " pod="openstack/barbican-api-86cb765474-5pq5z"
Mar 12 13:33:07 crc kubenswrapper[4778]: I0312 13:33:07.899988 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bd172c5-383f-4273-98a5-2c92223dc765-public-tls-certs\") pod \"barbican-api-86cb765474-5pq5z\" (UID: \"6bd172c5-383f-4273-98a5-2c92223dc765\") " pod="openstack/barbican-api-86cb765474-5pq5z"
Mar 12 13:33:07 crc kubenswrapper[4778]: I0312 13:33:07.900010 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bd172c5-383f-4273-98a5-2c92223dc765-config-data\") pod \"barbican-api-86cb765474-5pq5z\" (UID: \"6bd172c5-383f-4273-98a5-2c92223dc765\") " pod="openstack/barbican-api-86cb765474-5pq5z"
Mar 12 13:33:07 crc kubenswrapper[4778]: I0312 13:33:07.900076 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75gk9\" (UniqueName: \"kubernetes.io/projected/6bd172c5-383f-4273-98a5-2c92223dc765-kube-api-access-75gk9\") pod \"barbican-api-86cb765474-5pq5z\" (UID: \"6bd172c5-383f-4273-98a5-2c92223dc765\") " pod="openstack/barbican-api-86cb765474-5pq5z"
Mar 12 13:33:07 crc kubenswrapper[4778]: I0312 13:33:07.900141 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bd172c5-383f-4273-98a5-2c92223dc765-combined-ca-bundle\") pod \"barbican-api-86cb765474-5pq5z\" (UID: \"6bd172c5-383f-4273-98a5-2c92223dc765\") " pod="openstack/barbican-api-86cb765474-5pq5z"
Mar 12 13:33:07 crc kubenswrapper[4778]: I0312 13:33:07.900171 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6bd172c5-383f-4273-98a5-2c92223dc765-logs\") pod \"barbican-api-86cb765474-5pq5z\" (UID: \"6bd172c5-383f-4273-98a5-2c92223dc765\") " pod="openstack/barbican-api-86cb765474-5pq5z"
Mar 12 13:33:08 crc kubenswrapper[4778]: I0312 13:33:08.002088 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75gk9\" (UniqueName: \"kubernetes.io/projected/6bd172c5-383f-4273-98a5-2c92223dc765-kube-api-access-75gk9\") pod \"barbican-api-86cb765474-5pq5z\" (UID: \"6bd172c5-383f-4273-98a5-2c92223dc765\") " pod="openstack/barbican-api-86cb765474-5pq5z"
Mar 12 13:33:08 crc kubenswrapper[4778]: I0312 13:33:08.002194 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bd172c5-383f-4273-98a5-2c92223dc765-combined-ca-bundle\") pod \"barbican-api-86cb765474-5pq5z\" (UID: \"6bd172c5-383f-4273-98a5-2c92223dc765\") " pod="openstack/barbican-api-86cb765474-5pq5z"
Mar 12 13:33:08 crc kubenswrapper[4778]: I0312 13:33:08.002230 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6bd172c5-383f-4273-98a5-2c92223dc765-logs\") pod \"barbican-api-86cb765474-5pq5z\" (UID: \"6bd172c5-383f-4273-98a5-2c92223dc765\") " pod="openstack/barbican-api-86cb765474-5pq5z"
Mar 12 13:33:08 crc kubenswrapper[4778]: I0312 13:33:08.002269 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bd172c5-383f-4273-98a5-2c92223dc765-internal-tls-certs\") pod \"barbican-api-86cb765474-5pq5z\" (UID: \"6bd172c5-383f-4273-98a5-2c92223dc765\") " pod="openstack/barbican-api-86cb765474-5pq5z"
Mar 12 13:33:08 crc kubenswrapper[4778]: I0312 13:33:08.002289 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6bd172c5-383f-4273-98a5-2c92223dc765-config-data-custom\") pod \"barbican-api-86cb765474-5pq5z\" (UID: \"6bd172c5-383f-4273-98a5-2c92223dc765\") " pod="openstack/barbican-api-86cb765474-5pq5z"
Mar 12 13:33:08 crc kubenswrapper[4778]: I0312 13:33:08.002305 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bd172c5-383f-4273-98a5-2c92223dc765-public-tls-certs\") pod \"barbican-api-86cb765474-5pq5z\" (UID: \"6bd172c5-383f-4273-98a5-2c92223dc765\") " pod="openstack/barbican-api-86cb765474-5pq5z"
Mar 12 13:33:08 crc kubenswrapper[4778]: I0312 13:33:08.002326 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bd172c5-383f-4273-98a5-2c92223dc765-config-data\") pod \"barbican-api-86cb765474-5pq5z\" (UID: \"6bd172c5-383f-4273-98a5-2c92223dc765\") " pod="openstack/barbican-api-86cb765474-5pq5z"
Mar 12 13:33:08 crc kubenswrapper[4778]: I0312 13:33:08.002793 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6bd172c5-383f-4273-98a5-2c92223dc765-logs\") pod \"barbican-api-86cb765474-5pq5z\" (UID: \"6bd172c5-383f-4273-98a5-2c92223dc765\") " pod="openstack/barbican-api-86cb765474-5pq5z"
Mar 12 13:33:08 crc kubenswrapper[4778]: I0312 13:33:08.010872 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bd172c5-383f-4273-98a5-2c92223dc765-config-data\") pod \"barbican-api-86cb765474-5pq5z\" (UID: \"6bd172c5-383f-4273-98a5-2c92223dc765\") " pod="openstack/barbican-api-86cb765474-5pq5z"
Mar 12 13:33:08 crc kubenswrapper[4778]: I0312 13:33:08.011597 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bd172c5-383f-4273-98a5-2c92223dc765-combined-ca-bundle\") pod \"barbican-api-86cb765474-5pq5z\" (UID: \"6bd172c5-383f-4273-98a5-2c92223dc765\") " pod="openstack/barbican-api-86cb765474-5pq5z"
Mar 12 13:33:08 crc kubenswrapper[4778]: I0312 13:33:08.012347 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6bd172c5-383f-4273-98a5-2c92223dc765-config-data-custom\") pod \"barbican-api-86cb765474-5pq5z\" (UID: \"6bd172c5-383f-4273-98a5-2c92223dc765\") " pod="openstack/barbican-api-86cb765474-5pq5z"
Mar 12 13:33:08 crc kubenswrapper[4778]: I0312 13:33:08.015711 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bd172c5-383f-4273-98a5-2c92223dc765-internal-tls-certs\") pod \"barbican-api-86cb765474-5pq5z\" (UID: \"6bd172c5-383f-4273-98a5-2c92223dc765\") " pod="openstack/barbican-api-86cb765474-5pq5z"
Mar 12 13:33:08 crc kubenswrapper[4778]: I0312 13:33:08.028657 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75gk9\" (UniqueName: \"kubernetes.io/projected/6bd172c5-383f-4273-98a5-2c92223dc765-kube-api-access-75gk9\") pod \"barbican-api-86cb765474-5pq5z\" (UID: \"6bd172c5-383f-4273-98a5-2c92223dc765\") " pod="openstack/barbican-api-86cb765474-5pq5z"
Mar 12 13:33:08 crc kubenswrapper[4778]: I0312 13:33:08.033823 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bd172c5-383f-4273-98a5-2c92223dc765-public-tls-certs\") pod \"barbican-api-86cb765474-5pq5z\" (UID: \"6bd172c5-383f-4273-98a5-2c92223dc765\") " pod="openstack/barbican-api-86cb765474-5pq5z"
Mar 12 13:33:08 crc kubenswrapper[4778]: I0312 13:33:08.146891 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-86cb765474-5pq5z"
Mar 12 13:33:08 crc kubenswrapper[4778]: I0312 13:33:08.286326 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f26a6d05-e0ac-4f17-bcd9-fc011996b052" path="/var/lib/kubelet/pods/f26a6d05-e0ac-4f17-bcd9-fc011996b052/volumes"
Mar 12 13:33:11 crc kubenswrapper[4778]: I0312 13:33:11.588959 4778 generic.go:334] "Generic (PLEG): container finished" podID="bb110a1e-6281-437d-b857-eb79c4953e1a" containerID="4711a6f852c8bf6a8fa62e985008d918b7971ec55784fb38d2f086199f1f3aee" exitCode=0
Mar 12 13:33:11 crc kubenswrapper[4778]: I0312 13:33:11.589059 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-d5pl9" event={"ID":"bb110a1e-6281-437d-b857-eb79c4953e1a","Type":"ContainerDied","Data":"4711a6f852c8bf6a8fa62e985008d918b7971ec55784fb38d2f086199f1f3aee"}
Mar 12 13:33:12 crc kubenswrapper[4778]: I0312 13:33:12.144549 4778 scope.go:117] "RemoveContainer" containerID="663c434423b37b8d735c566ad324f30f6c179866c4697ae6a88fd9aeb0c4709a"
Mar 12 13:33:12 crc kubenswrapper[4778]: I0312 13:33:12.600557 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5f884f5564-dxzpv" event={"ID":"ef2e3c21-ccc6-4dcc-a476-7393bb481441","Type":"ContainerStarted","Data":"c0edf91d21f7ba54f7ae8ead172101f785145fe82241acf1f7236f38396130a9"}
Mar 12 13:33:12 crc kubenswrapper[4778]: I0312 13:33:12.600921 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5f884f5564-dxzpv"
Mar 12 13:33:12 crc kubenswrapper[4778]: I0312 13:33:12.602251 4778 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5f884f5564-dxzpv" podUID="ef2e3c21-ccc6-4dcc-a476-7393bb481441" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": dial tcp 10.217.0.162:9311: connect: connection refused"
Mar 12 13:33:12 crc kubenswrapper[4778]: I0312 13:33:12.606545 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-pckv7" event={"ID":"0c667b0e-f02d-4e71-959f-5d24b702bd73","Type":"ContainerStarted","Data":"12b6f20e5515eed5ea7b7c17096230648e78a0b77fb023b0f4f649bf2c61cb1e"}
Mar 12 13:33:12 crc kubenswrapper[4778]: I0312 13:33:12.606672 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85ff748b95-pckv7"
Mar 12 13:33:12 crc kubenswrapper[4778]: I0312 13:33:12.609803 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7dcf9787-ngc87" event={"ID":"d505bb59-3c9e-4cfa-891c-c8e0068e2567","Type":"ContainerStarted","Data":"c64ec1f96cc1fcd84def68a1a08ab9a56b79de4d573fa1686270714d0b8a5ddc"}
Mar 12 13:33:12 crc kubenswrapper[4778]: I0312 13:33:12.652041 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5f884f5564-dxzpv" podStartSLOduration=8.652019412 podStartE2EDuration="8.652019412s" podCreationTimestamp="2026-03-12 13:33:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:33:12.623300927 +0000 UTC m=+1411.071996343" watchObservedRunningTime="2026-03-12 13:33:12.652019412 +0000 UTC m=+1411.100714808"
Mar 12 13:33:12 crc kubenswrapper[4778]: I0312 13:33:12.653846 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-86cb765474-5pq5z"]
Mar 12 13:33:12 crc kubenswrapper[4778]: I0312 13:33:12.657333 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85ff748b95-pckv7" podStartSLOduration=8.657323222 podStartE2EDuration="8.657323222s" podCreationTimestamp="2026-03-12 13:33:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:33:12.642881492 +0000 UTC m=+1411.091576898" watchObservedRunningTime="2026-03-12 13:33:12.657323222 +0000 UTC m=+1411.106018618"
Mar 12 13:33:12 crc kubenswrapper[4778]: W0312 13:33:12.739927 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6bd172c5_383f_4273_98a5_2c92223dc765.slice/crio-c60e87485b8c433cd0ae87ab204a606d44fb181b69fc47c512c8260657aee02f WatchSource:0}: Error finding container c60e87485b8c433cd0ae87ab204a606d44fb181b69fc47c512c8260657aee02f: Status 404 returned error can't find the container with id c60e87485b8c433cd0ae87ab204a606d44fb181b69fc47c512c8260657aee02f
Mar 12 13:33:12 crc kubenswrapper[4778]: I0312 13:33:12.917101 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-d5pl9"
Mar 12 13:33:13 crc kubenswrapper[4778]: I0312 13:33:13.047263 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb110a1e-6281-437d-b857-eb79c4953e1a-combined-ca-bundle\") pod \"bb110a1e-6281-437d-b857-eb79c4953e1a\" (UID: \"bb110a1e-6281-437d-b857-eb79c4953e1a\") "
Mar 12 13:33:13 crc kubenswrapper[4778]: I0312 13:33:13.047726 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb110a1e-6281-437d-b857-eb79c4953e1a-config-data\") pod \"bb110a1e-6281-437d-b857-eb79c4953e1a\" (UID: \"bb110a1e-6281-437d-b857-eb79c4953e1a\") "
Mar 12 13:33:13 crc kubenswrapper[4778]: I0312 13:33:13.047795 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bb110a1e-6281-437d-b857-eb79c4953e1a-db-sync-config-data\") pod \"bb110a1e-6281-437d-b857-eb79c4953e1a\" (UID: \"bb110a1e-6281-437d-b857-eb79c4953e1a\") "
Mar 12 13:33:13 crc kubenswrapper[4778]: I0312 13:33:13.047849 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb110a1e-6281-437d-b857-eb79c4953e1a-scripts\") pod \"bb110a1e-6281-437d-b857-eb79c4953e1a\" (UID: \"bb110a1e-6281-437d-b857-eb79c4953e1a\") "
Mar 12 13:33:13 crc kubenswrapper[4778]: I0312 13:33:13.047884 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bb110a1e-6281-437d-b857-eb79c4953e1a-etc-machine-id\") pod \"bb110a1e-6281-437d-b857-eb79c4953e1a\" (UID: \"bb110a1e-6281-437d-b857-eb79c4953e1a\") "
Mar 12 13:33:13 crc kubenswrapper[4778]: I0312 13:33:13.047909 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpfhh\" (UniqueName: \"kubernetes.io/projected/bb110a1e-6281-437d-b857-eb79c4953e1a-kube-api-access-jpfhh\") pod \"bb110a1e-6281-437d-b857-eb79c4953e1a\" (UID: \"bb110a1e-6281-437d-b857-eb79c4953e1a\") "
Mar 12 13:33:13 crc kubenswrapper[4778]: I0312 13:33:13.048459 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb110a1e-6281-437d-b857-eb79c4953e1a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "bb110a1e-6281-437d-b857-eb79c4953e1a" (UID: "bb110a1e-6281-437d-b857-eb79c4953e1a"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 12 13:33:13 crc kubenswrapper[4778]: I0312 13:33:13.052565 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb110a1e-6281-437d-b857-eb79c4953e1a-scripts" (OuterVolumeSpecName: "scripts") pod "bb110a1e-6281-437d-b857-eb79c4953e1a" (UID: "bb110a1e-6281-437d-b857-eb79c4953e1a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 13:33:13 crc kubenswrapper[4778]: I0312 13:33:13.052604 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb110a1e-6281-437d-b857-eb79c4953e1a-kube-api-access-jpfhh" (OuterVolumeSpecName: "kube-api-access-jpfhh") pod "bb110a1e-6281-437d-b857-eb79c4953e1a" (UID: "bb110a1e-6281-437d-b857-eb79c4953e1a"). InnerVolumeSpecName "kube-api-access-jpfhh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 13:33:13 crc kubenswrapper[4778]: I0312 13:33:13.053739 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb110a1e-6281-437d-b857-eb79c4953e1a-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "bb110a1e-6281-437d-b857-eb79c4953e1a" (UID: "bb110a1e-6281-437d-b857-eb79c4953e1a"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 13:33:13 crc kubenswrapper[4778]: I0312 13:33:13.078585 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb110a1e-6281-437d-b857-eb79c4953e1a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb110a1e-6281-437d-b857-eb79c4953e1a" (UID: "bb110a1e-6281-437d-b857-eb79c4953e1a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 13:33:13 crc kubenswrapper[4778]: I0312 13:33:13.108474 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb110a1e-6281-437d-b857-eb79c4953e1a-config-data" (OuterVolumeSpecName: "config-data") pod "bb110a1e-6281-437d-b857-eb79c4953e1a" (UID: "bb110a1e-6281-437d-b857-eb79c4953e1a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 13:33:13 crc kubenswrapper[4778]: I0312 13:33:13.150506 4778 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb110a1e-6281-437d-b857-eb79c4953e1a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 12 13:33:13 crc kubenswrapper[4778]: I0312 13:33:13.150547 4778 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb110a1e-6281-437d-b857-eb79c4953e1a-config-data\") on node \"crc\" DevicePath \"\""
Mar 12 13:33:13 crc kubenswrapper[4778]: I0312 13:33:13.150556 4778 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bb110a1e-6281-437d-b857-eb79c4953e1a-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Mar 12 13:33:13 crc kubenswrapper[4778]: I0312 13:33:13.150567 4778 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb110a1e-6281-437d-b857-eb79c4953e1a-scripts\") on node \"crc\" DevicePath \"\""
Mar 12 13:33:13 crc kubenswrapper[4778]: I0312 13:33:13.150577 4778 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bb110a1e-6281-437d-b857-eb79c4953e1a-etc-machine-id\") on node \"crc\" DevicePath \"\""
Mar 12 13:33:13 crc kubenswrapper[4778]: I0312 13:33:13.150585 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpfhh\" (UniqueName: \"kubernetes.io/projected/bb110a1e-6281-437d-b857-eb79c4953e1a-kube-api-access-jpfhh\") on node \"crc\" DevicePath \"\""
Mar 12 13:33:13 crc kubenswrapper[4778]: I0312 13:33:13.623094 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-65c9994dfd-xznqh" event={"ID":"8ee1f546-8428-4b23-93e4-b8370fd4224b","Type":"ContainerStarted","Data":"614728f1eac6b8691120431d6140445a3ccbe27708df146ad3e25503f01c0604"}
Mar 12 13:33:13 crc kubenswrapper[4778]: I0312 13:33:13.623140 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-65c9994dfd-xznqh" event={"ID":"8ee1f546-8428-4b23-93e4-b8370fd4224b","Type":"ContainerStarted","Data":"014d4595f50991485d408457e25c7f3fa72ebe29191214a4e0fed5284e25223e"}
Mar 12 13:33:13 crc kubenswrapper[4778]: I0312 13:33:13.625776 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7dcf9787-ngc87" event={"ID":"d505bb59-3c9e-4cfa-891c-c8e0068e2567","Type":"ContainerStarted","Data":"27318f7e7450cb0dc3ae3f744d5bacf9c41e8a67f279e8cd3f34ad4fb6cb4753"}
Mar 12 13:33:13 crc kubenswrapper[4778]: I0312 13:33:13.628695 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b4cb6d6d-bc05-4809-83a7-5aacda62cc10","Type":"ContainerStarted","Data":"53f309e530cf4b086d2af500ee0b6839a2f6cb85c731eee64b12d5c2ca9527ce"}
Mar 12 13:33:13 crc kubenswrapper[4778]: I0312 13:33:13.628880 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b4cb6d6d-bc05-4809-83a7-5aacda62cc10" containerName="ceilometer-central-agent" containerID="cri-o://a2afa1efaa5e813d9e93bd765e7abf6c5129c2365e3e4d71622e5bbd682b89f8" gracePeriod=30
Mar 12 13:33:13 crc kubenswrapper[4778]: I0312 13:33:13.628896 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Mar 12 13:33:13 crc kubenswrapper[4778]: I0312 13:33:13.628966 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b4cb6d6d-bc05-4809-83a7-5aacda62cc10" containerName="proxy-httpd" containerID="cri-o://53f309e530cf4b086d2af500ee0b6839a2f6cb85c731eee64b12d5c2ca9527ce" gracePeriod=30
Mar 12 13:33:13 crc kubenswrapper[4778]: I0312 13:33:13.628994 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b4cb6d6d-bc05-4809-83a7-5aacda62cc10" containerName="ceilometer-notification-agent" containerID="cri-o://f73331cf93a94b368140f81472e855149bad846c050d72495e3f1fdfaa6cf4d0" gracePeriod=30
Mar 12 13:33:13 crc kubenswrapper[4778]: I0312 13:33:13.629092 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b4cb6d6d-bc05-4809-83a7-5aacda62cc10" containerName="sg-core" containerID="cri-o://78dbc9cf48d678718d746451597636002d39908c130023e24550012d03edab70" gracePeriod=30
Mar 12 13:33:13 crc kubenswrapper[4778]: I0312 13:33:13.637969 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-d5pl9" event={"ID":"bb110a1e-6281-437d-b857-eb79c4953e1a","Type":"ContainerDied","Data":"8d37cd44357eb35c5c4917c8593f7e9902991ee071e5d92e025804bd35c2f76e"}
Mar 12 13:33:13 crc kubenswrapper[4778]: I0312 13:33:13.638011 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d37cd44357eb35c5c4917c8593f7e9902991ee071e5d92e025804bd35c2f76e"
Mar 12 13:33:13 crc kubenswrapper[4778]: I0312 13:33:13.638064 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-d5pl9"
Mar 12 13:33:13 crc kubenswrapper[4778]: I0312 13:33:13.656376 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-65c9994dfd-xznqh" podStartSLOduration=2.801382524 podStartE2EDuration="9.656361911s" podCreationTimestamp="2026-03-12 13:33:04 +0000 UTC" firstStartedPulling="2026-03-12 13:33:05.97173962 +0000 UTC m=+1404.420435016" lastFinishedPulling="2026-03-12 13:33:12.826719007 +0000 UTC m=+1411.275414403" observedRunningTime="2026-03-12 13:33:13.652749218 +0000 UTC m=+1412.101444614" watchObservedRunningTime="2026-03-12 13:33:13.656361911 +0000 UTC m=+1412.105057307"
Mar 12 13:33:13 crc kubenswrapper[4778]: I0312 13:33:13.656757 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-86cb765474-5pq5z" event={"ID":"6bd172c5-383f-4273-98a5-2c92223dc765","Type":"ContainerStarted","Data":"1dd1a3a407b095e68292977490d4973425a0b88c012b1d56585413eb14cc160f"}
Mar 12 13:33:13 crc kubenswrapper[4778]: I0312 13:33:13.656798 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-86cb765474-5pq5z" event={"ID":"6bd172c5-383f-4273-98a5-2c92223dc765","Type":"ContainerStarted","Data":"da3c6322756657010fcf36cc524d53cd418cee529c0e474ed330babf44343e04"}
Mar 12 13:33:13 crc kubenswrapper[4778]: I0312 13:33:13.656808 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-86cb765474-5pq5z" event={"ID":"6bd172c5-383f-4273-98a5-2c92223dc765","Type":"ContainerStarted","Data":"c60e87485b8c433cd0ae87ab204a606d44fb181b69fc47c512c8260657aee02f"}
Mar 12 13:33:13 crc kubenswrapper[4778]: I0312 13:33:13.656934 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5f884f5564-dxzpv"
Mar 12 13:33:13 crc kubenswrapper[4778]: I0312 13:33:13.657633 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status=""
pod="openstack/barbican-api-86cb765474-5pq5z" Mar 12 13:33:13 crc kubenswrapper[4778]: I0312 13:33:13.688277 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.121743837 podStartE2EDuration="53.688259966s" podCreationTimestamp="2026-03-12 13:32:20 +0000 UTC" firstStartedPulling="2026-03-12 13:32:21.663396629 +0000 UTC m=+1360.112092025" lastFinishedPulling="2026-03-12 13:33:12.229912758 +0000 UTC m=+1410.678608154" observedRunningTime="2026-03-12 13:33:13.677217832 +0000 UTC m=+1412.125913228" watchObservedRunningTime="2026-03-12 13:33:13.688259966 +0000 UTC m=+1412.136955362" Mar 12 13:33:13 crc kubenswrapper[4778]: I0312 13:33:13.697871 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-7dcf9787-ngc87" podStartSLOduration=3.265820768 podStartE2EDuration="9.697852718s" podCreationTimestamp="2026-03-12 13:33:04 +0000 UTC" firstStartedPulling="2026-03-12 13:33:05.74083537 +0000 UTC m=+1404.189530766" lastFinishedPulling="2026-03-12 13:33:12.17286732 +0000 UTC m=+1410.621562716" observedRunningTime="2026-03-12 13:33:13.69581368 +0000 UTC m=+1412.144509086" watchObservedRunningTime="2026-03-12 13:33:13.697852718 +0000 UTC m=+1412.146548124" Mar 12 13:33:13 crc kubenswrapper[4778]: I0312 13:33:13.734291 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-86cb765474-5pq5z" podStartSLOduration=6.734272561 podStartE2EDuration="6.734272561s" podCreationTimestamp="2026-03-12 13:33:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:33:13.728093246 +0000 UTC m=+1412.176788652" watchObservedRunningTime="2026-03-12 13:33:13.734272561 +0000 UTC m=+1412.182967957" Mar 12 13:33:13 crc kubenswrapper[4778]: I0312 13:33:13.940385 4778 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/cinder-scheduler-0"] Mar 12 13:33:13 crc kubenswrapper[4778]: E0312 13:33:13.941096 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb110a1e-6281-437d-b857-eb79c4953e1a" containerName="cinder-db-sync" Mar 12 13:33:13 crc kubenswrapper[4778]: I0312 13:33:13.941109 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb110a1e-6281-437d-b857-eb79c4953e1a" containerName="cinder-db-sync" Mar 12 13:33:13 crc kubenswrapper[4778]: I0312 13:33:13.941283 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb110a1e-6281-437d-b857-eb79c4953e1a" containerName="cinder-db-sync" Mar 12 13:33:13 crc kubenswrapper[4778]: I0312 13:33:13.942159 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 12 13:33:13 crc kubenswrapper[4778]: I0312 13:33:13.945109 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-5pxn8" Mar 12 13:33:13 crc kubenswrapper[4778]: I0312 13:33:13.945423 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Mar 12 13:33:13 crc kubenswrapper[4778]: I0312 13:33:13.945455 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Mar 12 13:33:13 crc kubenswrapper[4778]: I0312 13:33:13.946003 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Mar 12 13:33:13 crc kubenswrapper[4778]: I0312 13:33:13.959530 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.071304 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3d67881-ce3f-4abe-b07b-a0b22f1f53d1-config-data\") pod \"cinder-scheduler-0\" (UID: \"a3d67881-ce3f-4abe-b07b-a0b22f1f53d1\") " 
pod="openstack/cinder-scheduler-0" Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.071444 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3d67881-ce3f-4abe-b07b-a0b22f1f53d1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a3d67881-ce3f-4abe-b07b-a0b22f1f53d1\") " pod="openstack/cinder-scheduler-0" Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.071480 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a3d67881-ce3f-4abe-b07b-a0b22f1f53d1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a3d67881-ce3f-4abe-b07b-a0b22f1f53d1\") " pod="openstack/cinder-scheduler-0" Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.071524 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a3d67881-ce3f-4abe-b07b-a0b22f1f53d1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a3d67881-ce3f-4abe-b07b-a0b22f1f53d1\") " pod="openstack/cinder-scheduler-0" Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.071650 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3d67881-ce3f-4abe-b07b-a0b22f1f53d1-scripts\") pod \"cinder-scheduler-0\" (UID: \"a3d67881-ce3f-4abe-b07b-a0b22f1f53d1\") " pod="openstack/cinder-scheduler-0" Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.073673 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r9bc\" (UniqueName: \"kubernetes.io/projected/a3d67881-ce3f-4abe-b07b-a0b22f1f53d1-kube-api-access-6r9bc\") pod \"cinder-scheduler-0\" (UID: \"a3d67881-ce3f-4abe-b07b-a0b22f1f53d1\") " pod="openstack/cinder-scheduler-0" Mar 12 13:33:14 
crc kubenswrapper[4778]: I0312 13:33:14.124667 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-pckv7"] Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.124740 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-246x7"] Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.126407 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-246x7" Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.177104 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3d67881-ce3f-4abe-b07b-a0b22f1f53d1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a3d67881-ce3f-4abe-b07b-a0b22f1f53d1\") " pod="openstack/cinder-scheduler-0" Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.183262 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a3d67881-ce3f-4abe-b07b-a0b22f1f53d1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a3d67881-ce3f-4abe-b07b-a0b22f1f53d1\") " pod="openstack/cinder-scheduler-0" Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.183394 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a3d67881-ce3f-4abe-b07b-a0b22f1f53d1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a3d67881-ce3f-4abe-b07b-a0b22f1f53d1\") " pod="openstack/cinder-scheduler-0" Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.183452 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3d67881-ce3f-4abe-b07b-a0b22f1f53d1-scripts\") pod \"cinder-scheduler-0\" (UID: \"a3d67881-ce3f-4abe-b07b-a0b22f1f53d1\") " pod="openstack/cinder-scheduler-0" Mar 12 13:33:14 crc 
kubenswrapper[4778]: I0312 13:33:14.183618 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6r9bc\" (UniqueName: \"kubernetes.io/projected/a3d67881-ce3f-4abe-b07b-a0b22f1f53d1-kube-api-access-6r9bc\") pod \"cinder-scheduler-0\" (UID: \"a3d67881-ce3f-4abe-b07b-a0b22f1f53d1\") " pod="openstack/cinder-scheduler-0" Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.183748 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3d67881-ce3f-4abe-b07b-a0b22f1f53d1-config-data\") pod \"cinder-scheduler-0\" (UID: \"a3d67881-ce3f-4abe-b07b-a0b22f1f53d1\") " pod="openstack/cinder-scheduler-0" Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.183884 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a3d67881-ce3f-4abe-b07b-a0b22f1f53d1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a3d67881-ce3f-4abe-b07b-a0b22f1f53d1\") " pod="openstack/cinder-scheduler-0" Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.188261 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3d67881-ce3f-4abe-b07b-a0b22f1f53d1-scripts\") pod \"cinder-scheduler-0\" (UID: \"a3d67881-ce3f-4abe-b07b-a0b22f1f53d1\") " pod="openstack/cinder-scheduler-0" Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.188496 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3d67881-ce3f-4abe-b07b-a0b22f1f53d1-config-data\") pod \"cinder-scheduler-0\" (UID: \"a3d67881-ce3f-4abe-b07b-a0b22f1f53d1\") " pod="openstack/cinder-scheduler-0" Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.194589 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a3d67881-ce3f-4abe-b07b-a0b22f1f53d1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a3d67881-ce3f-4abe-b07b-a0b22f1f53d1\") " pod="openstack/cinder-scheduler-0" Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.194904 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a3d67881-ce3f-4abe-b07b-a0b22f1f53d1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a3d67881-ce3f-4abe-b07b-a0b22f1f53d1\") " pod="openstack/cinder-scheduler-0" Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.197558 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-246x7"] Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.209785 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6r9bc\" (UniqueName: \"kubernetes.io/projected/a3d67881-ce3f-4abe-b07b-a0b22f1f53d1-kube-api-access-6r9bc\") pod \"cinder-scheduler-0\" (UID: \"a3d67881-ce3f-4abe-b07b-a0b22f1f53d1\") " pod="openstack/cinder-scheduler-0" Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.212648 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.214593 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.216715 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.247859 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.293166 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a2a8e63-4f63-475d-a03a-3f094c697595-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3a2a8e63-4f63-475d-a03a-3f094c697595\") " pod="openstack/cinder-api-0" Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.293296 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m62md\" (UniqueName: \"kubernetes.io/projected/43eb6e2e-19ca-402f-a4fa-3b567ef9aef3-kube-api-access-m62md\") pod \"dnsmasq-dns-5c9776ccc5-246x7\" (UID: \"43eb6e2e-19ca-402f-a4fa-3b567ef9aef3\") " pod="openstack/dnsmasq-dns-5c9776ccc5-246x7" Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.293338 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a2a8e63-4f63-475d-a03a-3f094c697595-config-data\") pod \"cinder-api-0\" (UID: \"3a2a8e63-4f63-475d-a03a-3f094c697595\") " pod="openstack/cinder-api-0" Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.293374 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a2a8e63-4f63-475d-a03a-3f094c697595-scripts\") pod \"cinder-api-0\" (UID: \"3a2a8e63-4f63-475d-a03a-3f094c697595\") " pod="openstack/cinder-api-0" Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.293408 4778 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3a2a8e63-4f63-475d-a03a-3f094c697595-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3a2a8e63-4f63-475d-a03a-3f094c697595\") " pod="openstack/cinder-api-0" Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.293535 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/43eb6e2e-19ca-402f-a4fa-3b567ef9aef3-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-246x7\" (UID: \"43eb6e2e-19ca-402f-a4fa-3b567ef9aef3\") " pod="openstack/dnsmasq-dns-5c9776ccc5-246x7" Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.293608 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8dzr\" (UniqueName: \"kubernetes.io/projected/3a2a8e63-4f63-475d-a03a-3f094c697595-kube-api-access-g8dzr\") pod \"cinder-api-0\" (UID: \"3a2a8e63-4f63-475d-a03a-3f094c697595\") " pod="openstack/cinder-api-0" Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.293674 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43eb6e2e-19ca-402f-a4fa-3b567ef9aef3-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-246x7\" (UID: \"43eb6e2e-19ca-402f-a4fa-3b567ef9aef3\") " pod="openstack/dnsmasq-dns-5c9776ccc5-246x7" Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.293710 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/43eb6e2e-19ca-402f-a4fa-3b567ef9aef3-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-246x7\" (UID: \"43eb6e2e-19ca-402f-a4fa-3b567ef9aef3\") " pod="openstack/dnsmasq-dns-5c9776ccc5-246x7" Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.293800 4778 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43eb6e2e-19ca-402f-a4fa-3b567ef9aef3-config\") pod \"dnsmasq-dns-5c9776ccc5-246x7\" (UID: \"43eb6e2e-19ca-402f-a4fa-3b567ef9aef3\") " pod="openstack/dnsmasq-dns-5c9776ccc5-246x7" Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.293910 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a2a8e63-4f63-475d-a03a-3f094c697595-config-data-custom\") pod \"cinder-api-0\" (UID: \"3a2a8e63-4f63-475d-a03a-3f094c697595\") " pod="openstack/cinder-api-0" Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.293940 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/43eb6e2e-19ca-402f-a4fa-3b567ef9aef3-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-246x7\" (UID: \"43eb6e2e-19ca-402f-a4fa-3b567ef9aef3\") " pod="openstack/dnsmasq-dns-5c9776ccc5-246x7" Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.294020 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a2a8e63-4f63-475d-a03a-3f094c697595-logs\") pod \"cinder-api-0\" (UID: \"3a2a8e63-4f63-475d-a03a-3f094c697595\") " pod="openstack/cinder-api-0" Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.299697 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.395228 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a2a8e63-4f63-475d-a03a-3f094c697595-logs\") pod \"cinder-api-0\" (UID: \"3a2a8e63-4f63-475d-a03a-3f094c697595\") " pod="openstack/cinder-api-0" Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.395287 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a2a8e63-4f63-475d-a03a-3f094c697595-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3a2a8e63-4f63-475d-a03a-3f094c697595\") " pod="openstack/cinder-api-0" Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.395327 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m62md\" (UniqueName: \"kubernetes.io/projected/43eb6e2e-19ca-402f-a4fa-3b567ef9aef3-kube-api-access-m62md\") pod \"dnsmasq-dns-5c9776ccc5-246x7\" (UID: \"43eb6e2e-19ca-402f-a4fa-3b567ef9aef3\") " pod="openstack/dnsmasq-dns-5c9776ccc5-246x7" Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.395349 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a2a8e63-4f63-475d-a03a-3f094c697595-config-data\") pod \"cinder-api-0\" (UID: \"3a2a8e63-4f63-475d-a03a-3f094c697595\") " pod="openstack/cinder-api-0" Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.395364 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a2a8e63-4f63-475d-a03a-3f094c697595-scripts\") pod \"cinder-api-0\" (UID: \"3a2a8e63-4f63-475d-a03a-3f094c697595\") " pod="openstack/cinder-api-0" Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.395394 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3a2a8e63-4f63-475d-a03a-3f094c697595-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3a2a8e63-4f63-475d-a03a-3f094c697595\") " pod="openstack/cinder-api-0" Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.395413 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/43eb6e2e-19ca-402f-a4fa-3b567ef9aef3-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-246x7\" (UID: \"43eb6e2e-19ca-402f-a4fa-3b567ef9aef3\") " pod="openstack/dnsmasq-dns-5c9776ccc5-246x7" Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.395433 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8dzr\" (UniqueName: \"kubernetes.io/projected/3a2a8e63-4f63-475d-a03a-3f094c697595-kube-api-access-g8dzr\") pod \"cinder-api-0\" (UID: \"3a2a8e63-4f63-475d-a03a-3f094c697595\") " pod="openstack/cinder-api-0" Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.395455 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43eb6e2e-19ca-402f-a4fa-3b567ef9aef3-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-246x7\" (UID: \"43eb6e2e-19ca-402f-a4fa-3b567ef9aef3\") " pod="openstack/dnsmasq-dns-5c9776ccc5-246x7" Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.395473 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/43eb6e2e-19ca-402f-a4fa-3b567ef9aef3-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-246x7\" (UID: \"43eb6e2e-19ca-402f-a4fa-3b567ef9aef3\") " pod="openstack/dnsmasq-dns-5c9776ccc5-246x7" Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.395504 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43eb6e2e-19ca-402f-a4fa-3b567ef9aef3-config\") pod 
\"dnsmasq-dns-5c9776ccc5-246x7\" (UID: \"43eb6e2e-19ca-402f-a4fa-3b567ef9aef3\") " pod="openstack/dnsmasq-dns-5c9776ccc5-246x7" Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.395537 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a2a8e63-4f63-475d-a03a-3f094c697595-config-data-custom\") pod \"cinder-api-0\" (UID: \"3a2a8e63-4f63-475d-a03a-3f094c697595\") " pod="openstack/cinder-api-0" Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.395556 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/43eb6e2e-19ca-402f-a4fa-3b567ef9aef3-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-246x7\" (UID: \"43eb6e2e-19ca-402f-a4fa-3b567ef9aef3\") " pod="openstack/dnsmasq-dns-5c9776ccc5-246x7" Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.395638 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a2a8e63-4f63-475d-a03a-3f094c697595-logs\") pod \"cinder-api-0\" (UID: \"3a2a8e63-4f63-475d-a03a-3f094c697595\") " pod="openstack/cinder-api-0" Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.397318 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3a2a8e63-4f63-475d-a03a-3f094c697595-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3a2a8e63-4f63-475d-a03a-3f094c697595\") " pod="openstack/cinder-api-0" Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.399883 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a2a8e63-4f63-475d-a03a-3f094c697595-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3a2a8e63-4f63-475d-a03a-3f094c697595\") " pod="openstack/cinder-api-0" Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.400861 4778 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a2a8e63-4f63-475d-a03a-3f094c697595-config-data\") pod \"cinder-api-0\" (UID: \"3a2a8e63-4f63-475d-a03a-3f094c697595\") " pod="openstack/cinder-api-0" Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.406666 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/43eb6e2e-19ca-402f-a4fa-3b567ef9aef3-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-246x7\" (UID: \"43eb6e2e-19ca-402f-a4fa-3b567ef9aef3\") " pod="openstack/dnsmasq-dns-5c9776ccc5-246x7" Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.410223 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43eb6e2e-19ca-402f-a4fa-3b567ef9aef3-config\") pod \"dnsmasq-dns-5c9776ccc5-246x7\" (UID: \"43eb6e2e-19ca-402f-a4fa-3b567ef9aef3\") " pod="openstack/dnsmasq-dns-5c9776ccc5-246x7" Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.411107 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/43eb6e2e-19ca-402f-a4fa-3b567ef9aef3-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-246x7\" (UID: \"43eb6e2e-19ca-402f-a4fa-3b567ef9aef3\") " pod="openstack/dnsmasq-dns-5c9776ccc5-246x7" Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.412364 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/43eb6e2e-19ca-402f-a4fa-3b567ef9aef3-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-246x7\" (UID: \"43eb6e2e-19ca-402f-a4fa-3b567ef9aef3\") " pod="openstack/dnsmasq-dns-5c9776ccc5-246x7" Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.413838 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/43eb6e2e-19ca-402f-a4fa-3b567ef9aef3-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-246x7\" (UID: \"43eb6e2e-19ca-402f-a4fa-3b567ef9aef3\") " pod="openstack/dnsmasq-dns-5c9776ccc5-246x7" Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.420967 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a2a8e63-4f63-475d-a03a-3f094c697595-config-data-custom\") pod \"cinder-api-0\" (UID: \"3a2a8e63-4f63-475d-a03a-3f094c697595\") " pod="openstack/cinder-api-0" Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.421451 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a2a8e63-4f63-475d-a03a-3f094c697595-scripts\") pod \"cinder-api-0\" (UID: \"3a2a8e63-4f63-475d-a03a-3f094c697595\") " pod="openstack/cinder-api-0" Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.423833 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m62md\" (UniqueName: \"kubernetes.io/projected/43eb6e2e-19ca-402f-a4fa-3b567ef9aef3-kube-api-access-m62md\") pod \"dnsmasq-dns-5c9776ccc5-246x7\" (UID: \"43eb6e2e-19ca-402f-a4fa-3b567ef9aef3\") " pod="openstack/dnsmasq-dns-5c9776ccc5-246x7" Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.428765 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8dzr\" (UniqueName: \"kubernetes.io/projected/3a2a8e63-4f63-475d-a03a-3f094c697595-kube-api-access-g8dzr\") pod \"cinder-api-0\" (UID: \"3a2a8e63-4f63-475d-a03a-3f094c697595\") " pod="openstack/cinder-api-0" Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.480339 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-246x7" Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.585996 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.686832 4778 generic.go:334] "Generic (PLEG): container finished" podID="b4cb6d6d-bc05-4809-83a7-5aacda62cc10" containerID="53f309e530cf4b086d2af500ee0b6839a2f6cb85c731eee64b12d5c2ca9527ce" exitCode=0 Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.686856 4778 generic.go:334] "Generic (PLEG): container finished" podID="b4cb6d6d-bc05-4809-83a7-5aacda62cc10" containerID="78dbc9cf48d678718d746451597636002d39908c130023e24550012d03edab70" exitCode=2 Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.686863 4778 generic.go:334] "Generic (PLEG): container finished" podID="b4cb6d6d-bc05-4809-83a7-5aacda62cc10" containerID="a2afa1efaa5e813d9e93bd765e7abf6c5129c2365e3e4d71622e5bbd682b89f8" exitCode=0 Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.686987 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b4cb6d6d-bc05-4809-83a7-5aacda62cc10","Type":"ContainerDied","Data":"53f309e530cf4b086d2af500ee0b6839a2f6cb85c731eee64b12d5c2ca9527ce"} Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.687022 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b4cb6d6d-bc05-4809-83a7-5aacda62cc10","Type":"ContainerDied","Data":"78dbc9cf48d678718d746451597636002d39908c130023e24550012d03edab70"} Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.687034 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b4cb6d6d-bc05-4809-83a7-5aacda62cc10","Type":"ContainerDied","Data":"a2afa1efaa5e813d9e93bd765e7abf6c5129c2365e3e4d71622e5bbd682b89f8"} Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.687826 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85ff748b95-pckv7" podUID="0c667b0e-f02d-4e71-959f-5d24b702bd73" containerName="dnsmasq-dns" 
containerID="cri-o://12b6f20e5515eed5ea7b7c17096230648e78a0b77fb023b0f4f649bf2c61cb1e" gracePeriod=10 Mar 12 13:33:14 crc kubenswrapper[4778]: I0312 13:33:14.688125 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-86cb765474-5pq5z" Mar 12 13:33:15 crc kubenswrapper[4778]: I0312 13:33:15.050773 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-246x7"] Mar 12 13:33:15 crc kubenswrapper[4778]: I0312 13:33:15.071586 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 12 13:33:15 crc kubenswrapper[4778]: I0312 13:33:15.230272 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 12 13:33:15 crc kubenswrapper[4778]: W0312 13:33:15.268553 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a2a8e63_4f63_475d_a03a_3f094c697595.slice/crio-70300bcacb58d72f86e4ec213d3c35fe83a0cd2426b113dee9b8a89577ab08fd WatchSource:0}: Error finding container 70300bcacb58d72f86e4ec213d3c35fe83a0cd2426b113dee9b8a89577ab08fd: Status 404 returned error can't find the container with id 70300bcacb58d72f86e4ec213d3c35fe83a0cd2426b113dee9b8a89577ab08fd Mar 12 13:33:15 crc kubenswrapper[4778]: I0312 13:33:15.452988 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-pckv7" Mar 12 13:33:15 crc kubenswrapper[4778]: I0312 13:33:15.514635 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c667b0e-f02d-4e71-959f-5d24b702bd73-ovsdbserver-nb\") pod \"0c667b0e-f02d-4e71-959f-5d24b702bd73\" (UID: \"0c667b0e-f02d-4e71-959f-5d24b702bd73\") " Mar 12 13:33:15 crc kubenswrapper[4778]: I0312 13:33:15.514760 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c667b0e-f02d-4e71-959f-5d24b702bd73-dns-svc\") pod \"0c667b0e-f02d-4e71-959f-5d24b702bd73\" (UID: \"0c667b0e-f02d-4e71-959f-5d24b702bd73\") " Mar 12 13:33:15 crc kubenswrapper[4778]: I0312 13:33:15.514800 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0c667b0e-f02d-4e71-959f-5d24b702bd73-ovsdbserver-sb\") pod \"0c667b0e-f02d-4e71-959f-5d24b702bd73\" (UID: \"0c667b0e-f02d-4e71-959f-5d24b702bd73\") " Mar 12 13:33:15 crc kubenswrapper[4778]: I0312 13:33:15.514844 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8tvn\" (UniqueName: \"kubernetes.io/projected/0c667b0e-f02d-4e71-959f-5d24b702bd73-kube-api-access-j8tvn\") pod \"0c667b0e-f02d-4e71-959f-5d24b702bd73\" (UID: \"0c667b0e-f02d-4e71-959f-5d24b702bd73\") " Mar 12 13:33:15 crc kubenswrapper[4778]: I0312 13:33:15.514902 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c667b0e-f02d-4e71-959f-5d24b702bd73-config\") pod \"0c667b0e-f02d-4e71-959f-5d24b702bd73\" (UID: \"0c667b0e-f02d-4e71-959f-5d24b702bd73\") " Mar 12 13:33:15 crc kubenswrapper[4778]: I0312 13:33:15.514970 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/0c667b0e-f02d-4e71-959f-5d24b702bd73-dns-swift-storage-0\") pod \"0c667b0e-f02d-4e71-959f-5d24b702bd73\" (UID: \"0c667b0e-f02d-4e71-959f-5d24b702bd73\") " Mar 12 13:33:15 crc kubenswrapper[4778]: I0312 13:33:15.532784 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c667b0e-f02d-4e71-959f-5d24b702bd73-kube-api-access-j8tvn" (OuterVolumeSpecName: "kube-api-access-j8tvn") pod "0c667b0e-f02d-4e71-959f-5d24b702bd73" (UID: "0c667b0e-f02d-4e71-959f-5d24b702bd73"). InnerVolumeSpecName "kube-api-access-j8tvn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:33:15 crc kubenswrapper[4778]: I0312 13:33:15.600400 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c667b0e-f02d-4e71-959f-5d24b702bd73-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0c667b0e-f02d-4e71-959f-5d24b702bd73" (UID: "0c667b0e-f02d-4e71-959f-5d24b702bd73"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:33:15 crc kubenswrapper[4778]: I0312 13:33:15.607767 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c667b0e-f02d-4e71-959f-5d24b702bd73-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0c667b0e-f02d-4e71-959f-5d24b702bd73" (UID: "0c667b0e-f02d-4e71-959f-5d24b702bd73"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:33:15 crc kubenswrapper[4778]: I0312 13:33:15.607855 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c667b0e-f02d-4e71-959f-5d24b702bd73-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0c667b0e-f02d-4e71-959f-5d24b702bd73" (UID: "0c667b0e-f02d-4e71-959f-5d24b702bd73"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:33:15 crc kubenswrapper[4778]: I0312 13:33:15.611882 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c667b0e-f02d-4e71-959f-5d24b702bd73-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0c667b0e-f02d-4e71-959f-5d24b702bd73" (UID: "0c667b0e-f02d-4e71-959f-5d24b702bd73"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:33:15 crc kubenswrapper[4778]: I0312 13:33:15.616694 4778 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c667b0e-f02d-4e71-959f-5d24b702bd73-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 12 13:33:15 crc kubenswrapper[4778]: I0312 13:33:15.616733 4778 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c667b0e-f02d-4e71-959f-5d24b702bd73-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 12 13:33:15 crc kubenswrapper[4778]: I0312 13:33:15.616745 4778 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0c667b0e-f02d-4e71-959f-5d24b702bd73-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 12 13:33:15 crc kubenswrapper[4778]: I0312 13:33:15.616757 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8tvn\" (UniqueName: \"kubernetes.io/projected/0c667b0e-f02d-4e71-959f-5d24b702bd73-kube-api-access-j8tvn\") on node \"crc\" DevicePath \"\"" Mar 12 13:33:15 crc kubenswrapper[4778]: I0312 13:33:15.616771 4778 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0c667b0e-f02d-4e71-959f-5d24b702bd73-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 12 13:33:15 crc kubenswrapper[4778]: I0312 13:33:15.620004 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/0c667b0e-f02d-4e71-959f-5d24b702bd73-config" (OuterVolumeSpecName: "config") pod "0c667b0e-f02d-4e71-959f-5d24b702bd73" (UID: "0c667b0e-f02d-4e71-959f-5d24b702bd73"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:33:15 crc kubenswrapper[4778]: I0312 13:33:15.704590 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a3d67881-ce3f-4abe-b07b-a0b22f1f53d1","Type":"ContainerStarted","Data":"64a915f04d0f3e7d3a300bf442920e4fae54d16b184cc83aeb6a0de63549b7fc"} Mar 12 13:33:15 crc kubenswrapper[4778]: I0312 13:33:15.706138 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3a2a8e63-4f63-475d-a03a-3f094c697595","Type":"ContainerStarted","Data":"70300bcacb58d72f86e4ec213d3c35fe83a0cd2426b113dee9b8a89577ab08fd"} Mar 12 13:33:15 crc kubenswrapper[4778]: I0312 13:33:15.713606 4778 generic.go:334] "Generic (PLEG): container finished" podID="43eb6e2e-19ca-402f-a4fa-3b567ef9aef3" containerID="3be056ef8a27b7c5eec8e8d97597ee2f4dfeb1235b2a01b4f17cb1cb7e9cfd31" exitCode=0 Mar 12 13:33:15 crc kubenswrapper[4778]: I0312 13:33:15.713713 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-246x7" event={"ID":"43eb6e2e-19ca-402f-a4fa-3b567ef9aef3","Type":"ContainerDied","Data":"3be056ef8a27b7c5eec8e8d97597ee2f4dfeb1235b2a01b4f17cb1cb7e9cfd31"} Mar 12 13:33:15 crc kubenswrapper[4778]: I0312 13:33:15.713768 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-246x7" event={"ID":"43eb6e2e-19ca-402f-a4fa-3b567ef9aef3","Type":"ContainerStarted","Data":"60f6f77084cfe6904eb9dc78f60c8b66e7fa89e1a236dd4007f1375a76319d3b"} Mar 12 13:33:15 crc kubenswrapper[4778]: I0312 13:33:15.717789 4778 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c667b0e-f02d-4e71-959f-5d24b702bd73-config\") on node \"crc\" 
DevicePath \"\"" Mar 12 13:33:15 crc kubenswrapper[4778]: I0312 13:33:15.721716 4778 generic.go:334] "Generic (PLEG): container finished" podID="0c667b0e-f02d-4e71-959f-5d24b702bd73" containerID="12b6f20e5515eed5ea7b7c17096230648e78a0b77fb023b0f4f649bf2c61cb1e" exitCode=0 Mar 12 13:33:15 crc kubenswrapper[4778]: I0312 13:33:15.721823 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-pckv7" Mar 12 13:33:15 crc kubenswrapper[4778]: I0312 13:33:15.721823 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-pckv7" event={"ID":"0c667b0e-f02d-4e71-959f-5d24b702bd73","Type":"ContainerDied","Data":"12b6f20e5515eed5ea7b7c17096230648e78a0b77fb023b0f4f649bf2c61cb1e"} Mar 12 13:33:15 crc kubenswrapper[4778]: I0312 13:33:15.721878 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-pckv7" event={"ID":"0c667b0e-f02d-4e71-959f-5d24b702bd73","Type":"ContainerDied","Data":"155ae76e9da40b17013784a015c926481ecd3a26d03501642e4e09f600be7598"} Mar 12 13:33:15 crc kubenswrapper[4778]: I0312 13:33:15.721905 4778 scope.go:117] "RemoveContainer" containerID="12b6f20e5515eed5ea7b7c17096230648e78a0b77fb023b0f4f649bf2c61cb1e" Mar 12 13:33:15 crc kubenswrapper[4778]: I0312 13:33:15.776702 4778 scope.go:117] "RemoveContainer" containerID="3d48ca4ea9f31c5d066a8775b412a93231788bd25815af466613f335e2f60484" Mar 12 13:33:15 crc kubenswrapper[4778]: I0312 13:33:15.781360 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-pckv7"] Mar 12 13:33:15 crc kubenswrapper[4778]: I0312 13:33:15.790667 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-pckv7"] Mar 12 13:33:15 crc kubenswrapper[4778]: I0312 13:33:15.882927 4778 scope.go:117] "RemoveContainer" containerID="12b6f20e5515eed5ea7b7c17096230648e78a0b77fb023b0f4f649bf2c61cb1e" Mar 12 13:33:15 crc kubenswrapper[4778]: E0312 
13:33:15.883557 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12b6f20e5515eed5ea7b7c17096230648e78a0b77fb023b0f4f649bf2c61cb1e\": container with ID starting with 12b6f20e5515eed5ea7b7c17096230648e78a0b77fb023b0f4f649bf2c61cb1e not found: ID does not exist" containerID="12b6f20e5515eed5ea7b7c17096230648e78a0b77fb023b0f4f649bf2c61cb1e" Mar 12 13:33:15 crc kubenswrapper[4778]: I0312 13:33:15.883610 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12b6f20e5515eed5ea7b7c17096230648e78a0b77fb023b0f4f649bf2c61cb1e"} err="failed to get container status \"12b6f20e5515eed5ea7b7c17096230648e78a0b77fb023b0f4f649bf2c61cb1e\": rpc error: code = NotFound desc = could not find container \"12b6f20e5515eed5ea7b7c17096230648e78a0b77fb023b0f4f649bf2c61cb1e\": container with ID starting with 12b6f20e5515eed5ea7b7c17096230648e78a0b77fb023b0f4f649bf2c61cb1e not found: ID does not exist" Mar 12 13:33:15 crc kubenswrapper[4778]: I0312 13:33:15.883643 4778 scope.go:117] "RemoveContainer" containerID="3d48ca4ea9f31c5d066a8775b412a93231788bd25815af466613f335e2f60484" Mar 12 13:33:15 crc kubenswrapper[4778]: E0312 13:33:15.884258 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d48ca4ea9f31c5d066a8775b412a93231788bd25815af466613f335e2f60484\": container with ID starting with 3d48ca4ea9f31c5d066a8775b412a93231788bd25815af466613f335e2f60484 not found: ID does not exist" containerID="3d48ca4ea9f31c5d066a8775b412a93231788bd25815af466613f335e2f60484" Mar 12 13:33:15 crc kubenswrapper[4778]: I0312 13:33:15.884514 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d48ca4ea9f31c5d066a8775b412a93231788bd25815af466613f335e2f60484"} err="failed to get container status \"3d48ca4ea9f31c5d066a8775b412a93231788bd25815af466613f335e2f60484\": rpc 
error: code = NotFound desc = could not find container \"3d48ca4ea9f31c5d066a8775b412a93231788bd25815af466613f335e2f60484\": container with ID starting with 3d48ca4ea9f31c5d066a8775b412a93231788bd25815af466613f335e2f60484 not found: ID does not exist" Mar 12 13:33:16 crc kubenswrapper[4778]: I0312 13:33:16.073922 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 12 13:33:16 crc kubenswrapper[4778]: I0312 13:33:16.282318 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c667b0e-f02d-4e71-959f-5d24b702bd73" path="/var/lib/kubelet/pods/0c667b0e-f02d-4e71-959f-5d24b702bd73/volumes" Mar 12 13:33:16 crc kubenswrapper[4778]: I0312 13:33:16.540558 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5f884f5564-dxzpv" Mar 12 13:33:16 crc kubenswrapper[4778]: I0312 13:33:16.737729 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3a2a8e63-4f63-475d-a03a-3f094c697595","Type":"ContainerStarted","Data":"9bd2b44aa17d88e68154cd64ae134f6ebbf2fd2a7b21bc7b54275f7cfd102970"} Mar 12 13:33:16 crc kubenswrapper[4778]: I0312 13:33:16.740078 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-246x7" event={"ID":"43eb6e2e-19ca-402f-a4fa-3b567ef9aef3","Type":"ContainerStarted","Data":"3acaaf246e65843557136744d8e37d230106fc2f8c2711770c3619615eeab549"} Mar 12 13:33:16 crc kubenswrapper[4778]: I0312 13:33:16.740203 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-246x7" Mar 12 13:33:16 crc kubenswrapper[4778]: I0312 13:33:16.760913 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-246x7" podStartSLOduration=2.760894863 podStartE2EDuration="2.760894863s" podCreationTimestamp="2026-03-12 13:33:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:33:16.756014515 +0000 UTC m=+1415.204709921" watchObservedRunningTime="2026-03-12 13:33:16.760894863 +0000 UTC m=+1415.209590259" Mar 12 13:33:17 crc kubenswrapper[4778]: I0312 13:33:17.765390 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a3d67881-ce3f-4abe-b07b-a0b22f1f53d1","Type":"ContainerStarted","Data":"579bc12eaa8aab0c50eb9ede8c49b9d7ccb94f4d26f7a5f51955978076c57a52"} Mar 12 13:33:17 crc kubenswrapper[4778]: I0312 13:33:17.771760 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3a2a8e63-4f63-475d-a03a-3f094c697595","Type":"ContainerStarted","Data":"b7f952975468ddb5173e7d7ed680242b58ea26be1e25587b9f3f0e647fb5dc2b"} Mar 12 13:33:17 crc kubenswrapper[4778]: I0312 13:33:17.772527 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="3a2a8e63-4f63-475d-a03a-3f094c697595" containerName="cinder-api-log" containerID="cri-o://9bd2b44aa17d88e68154cd64ae134f6ebbf2fd2a7b21bc7b54275f7cfd102970" gracePeriod=30 Mar 12 13:33:17 crc kubenswrapper[4778]: I0312 13:33:17.772916 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Mar 12 13:33:17 crc kubenswrapper[4778]: I0312 13:33:17.773103 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="3a2a8e63-4f63-475d-a03a-3f094c697595" containerName="cinder-api" containerID="cri-o://b7f952975468ddb5173e7d7ed680242b58ea26be1e25587b9f3f0e647fb5dc2b" gracePeriod=30 Mar 12 13:33:17 crc kubenswrapper[4778]: I0312 13:33:17.797558 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.797542599 podStartE2EDuration="3.797542599s" podCreationTimestamp="2026-03-12 13:33:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:33:17.791365414 +0000 UTC m=+1416.240060810" watchObservedRunningTime="2026-03-12 13:33:17.797542599 +0000 UTC m=+1416.246237995" Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.445559 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.577439 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3a2a8e63-4f63-475d-a03a-3f094c697595-etc-machine-id\") pod \"3a2a8e63-4f63-475d-a03a-3f094c697595\" (UID: \"3a2a8e63-4f63-475d-a03a-3f094c697595\") " Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.577561 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a2a8e63-4f63-475d-a03a-3f094c697595-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "3a2a8e63-4f63-475d-a03a-3f094c697595" (UID: "3a2a8e63-4f63-475d-a03a-3f094c697595"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.577574 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a2a8e63-4f63-475d-a03a-3f094c697595-logs\") pod \"3a2a8e63-4f63-475d-a03a-3f094c697595\" (UID: \"3a2a8e63-4f63-475d-a03a-3f094c697595\") " Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.577645 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a2a8e63-4f63-475d-a03a-3f094c697595-combined-ca-bundle\") pod \"3a2a8e63-4f63-475d-a03a-3f094c697595\" (UID: \"3a2a8e63-4f63-475d-a03a-3f094c697595\") " Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.577737 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a2a8e63-4f63-475d-a03a-3f094c697595-config-data\") pod \"3a2a8e63-4f63-475d-a03a-3f094c697595\" (UID: \"3a2a8e63-4f63-475d-a03a-3f094c697595\") " Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.577770 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a2a8e63-4f63-475d-a03a-3f094c697595-config-data-custom\") pod \"3a2a8e63-4f63-475d-a03a-3f094c697595\" (UID: \"3a2a8e63-4f63-475d-a03a-3f094c697595\") " Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.577837 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a2a8e63-4f63-475d-a03a-3f094c697595-scripts\") pod \"3a2a8e63-4f63-475d-a03a-3f094c697595\" (UID: \"3a2a8e63-4f63-475d-a03a-3f094c697595\") " Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.577871 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8dzr\" (UniqueName: 
\"kubernetes.io/projected/3a2a8e63-4f63-475d-a03a-3f094c697595-kube-api-access-g8dzr\") pod \"3a2a8e63-4f63-475d-a03a-3f094c697595\" (UID: \"3a2a8e63-4f63-475d-a03a-3f094c697595\") " Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.578440 4778 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3a2a8e63-4f63-475d-a03a-3f094c697595-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.578492 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a2a8e63-4f63-475d-a03a-3f094c697595-logs" (OuterVolumeSpecName: "logs") pod "3a2a8e63-4f63-475d-a03a-3f094c697595" (UID: "3a2a8e63-4f63-475d-a03a-3f094c697595"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.595460 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a2a8e63-4f63-475d-a03a-3f094c697595-kube-api-access-g8dzr" (OuterVolumeSpecName: "kube-api-access-g8dzr") pod "3a2a8e63-4f63-475d-a03a-3f094c697595" (UID: "3a2a8e63-4f63-475d-a03a-3f094c697595"). InnerVolumeSpecName "kube-api-access-g8dzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.601433 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a2a8e63-4f63-475d-a03a-3f094c697595-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3a2a8e63-4f63-475d-a03a-3f094c697595" (UID: "3a2a8e63-4f63-475d-a03a-3f094c697595"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.602340 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a2a8e63-4f63-475d-a03a-3f094c697595-scripts" (OuterVolumeSpecName: "scripts") pod "3a2a8e63-4f63-475d-a03a-3f094c697595" (UID: "3a2a8e63-4f63-475d-a03a-3f094c697595"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.626365 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a2a8e63-4f63-475d-a03a-3f094c697595-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a2a8e63-4f63-475d-a03a-3f094c697595" (UID: "3a2a8e63-4f63-475d-a03a-3f094c697595"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.668335 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a2a8e63-4f63-475d-a03a-3f094c697595-config-data" (OuterVolumeSpecName: "config-data") pod "3a2a8e63-4f63-475d-a03a-3f094c697595" (UID: "3a2a8e63-4f63-475d-a03a-3f094c697595"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.680434 4778 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a2a8e63-4f63-475d-a03a-3f094c697595-logs\") on node \"crc\" DevicePath \"\"" Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.680576 4778 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a2a8e63-4f63-475d-a03a-3f094c697595-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.680655 4778 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a2a8e63-4f63-475d-a03a-3f094c697595-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.680711 4778 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a2a8e63-4f63-475d-a03a-3f094c697595-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.680765 4778 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a2a8e63-4f63-475d-a03a-3f094c697595-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.680842 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8dzr\" (UniqueName: \"kubernetes.io/projected/3a2a8e63-4f63-475d-a03a-3f094c697595-kube-api-access-g8dzr\") on node \"crc\" DevicePath \"\"" Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.780833 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a3d67881-ce3f-4abe-b07b-a0b22f1f53d1","Type":"ContainerStarted","Data":"6d003c8be41ca71c54434a7c7a1f3fbe12f00352aa1f46649d39fc04831f2c1f"} Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 
13:33:18.785745 4778 generic.go:334] "Generic (PLEG): container finished" podID="3a2a8e63-4f63-475d-a03a-3f094c697595" containerID="b7f952975468ddb5173e7d7ed680242b58ea26be1e25587b9f3f0e647fb5dc2b" exitCode=0 Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.785772 4778 generic.go:334] "Generic (PLEG): container finished" podID="3a2a8e63-4f63-475d-a03a-3f094c697595" containerID="9bd2b44aa17d88e68154cd64ae134f6ebbf2fd2a7b21bc7b54275f7cfd102970" exitCode=143 Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.785791 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3a2a8e63-4f63-475d-a03a-3f094c697595","Type":"ContainerDied","Data":"b7f952975468ddb5173e7d7ed680242b58ea26be1e25587b9f3f0e647fb5dc2b"} Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.785816 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3a2a8e63-4f63-475d-a03a-3f094c697595","Type":"ContainerDied","Data":"9bd2b44aa17d88e68154cd64ae134f6ebbf2fd2a7b21bc7b54275f7cfd102970"} Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.785827 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3a2a8e63-4f63-475d-a03a-3f094c697595","Type":"ContainerDied","Data":"70300bcacb58d72f86e4ec213d3c35fe83a0cd2426b113dee9b8a89577ab08fd"} Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.785840 4778 scope.go:117] "RemoveContainer" containerID="b7f952975468ddb5173e7d7ed680242b58ea26be1e25587b9f3f0e647fb5dc2b" Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.785962 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.812035 4778 scope.go:117] "RemoveContainer" containerID="9bd2b44aa17d88e68154cd64ae134f6ebbf2fd2a7b21bc7b54275f7cfd102970" Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.819197 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.927133035 podStartE2EDuration="5.819158858s" podCreationTimestamp="2026-03-12 13:33:13 +0000 UTC" firstStartedPulling="2026-03-12 13:33:15.05554944 +0000 UTC m=+1413.504244836" lastFinishedPulling="2026-03-12 13:33:15.947575263 +0000 UTC m=+1414.396270659" observedRunningTime="2026-03-12 13:33:18.818605323 +0000 UTC m=+1417.267300739" watchObservedRunningTime="2026-03-12 13:33:18.819158858 +0000 UTC m=+1417.267854254" Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.842348 4778 scope.go:117] "RemoveContainer" containerID="b7f952975468ddb5173e7d7ed680242b58ea26be1e25587b9f3f0e647fb5dc2b" Mar 12 13:33:18 crc kubenswrapper[4778]: E0312 13:33:18.845256 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7f952975468ddb5173e7d7ed680242b58ea26be1e25587b9f3f0e647fb5dc2b\": container with ID starting with b7f952975468ddb5173e7d7ed680242b58ea26be1e25587b9f3f0e647fb5dc2b not found: ID does not exist" containerID="b7f952975468ddb5173e7d7ed680242b58ea26be1e25587b9f3f0e647fb5dc2b" Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.845322 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7f952975468ddb5173e7d7ed680242b58ea26be1e25587b9f3f0e647fb5dc2b"} err="failed to get container status \"b7f952975468ddb5173e7d7ed680242b58ea26be1e25587b9f3f0e647fb5dc2b\": rpc error: code = NotFound desc = could not find container \"b7f952975468ddb5173e7d7ed680242b58ea26be1e25587b9f3f0e647fb5dc2b\": container with ID starting with 
b7f952975468ddb5173e7d7ed680242b58ea26be1e25587b9f3f0e647fb5dc2b not found: ID does not exist" Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.845354 4778 scope.go:117] "RemoveContainer" containerID="9bd2b44aa17d88e68154cd64ae134f6ebbf2fd2a7b21bc7b54275f7cfd102970" Mar 12 13:33:18 crc kubenswrapper[4778]: E0312 13:33:18.849645 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bd2b44aa17d88e68154cd64ae134f6ebbf2fd2a7b21bc7b54275f7cfd102970\": container with ID starting with 9bd2b44aa17d88e68154cd64ae134f6ebbf2fd2a7b21bc7b54275f7cfd102970 not found: ID does not exist" containerID="9bd2b44aa17d88e68154cd64ae134f6ebbf2fd2a7b21bc7b54275f7cfd102970" Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.849694 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bd2b44aa17d88e68154cd64ae134f6ebbf2fd2a7b21bc7b54275f7cfd102970"} err="failed to get container status \"9bd2b44aa17d88e68154cd64ae134f6ebbf2fd2a7b21bc7b54275f7cfd102970\": rpc error: code = NotFound desc = could not find container \"9bd2b44aa17d88e68154cd64ae134f6ebbf2fd2a7b21bc7b54275f7cfd102970\": container with ID starting with 9bd2b44aa17d88e68154cd64ae134f6ebbf2fd2a7b21bc7b54275f7cfd102970 not found: ID does not exist" Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.849725 4778 scope.go:117] "RemoveContainer" containerID="b7f952975468ddb5173e7d7ed680242b58ea26be1e25587b9f3f0e647fb5dc2b" Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.850456 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7f952975468ddb5173e7d7ed680242b58ea26be1e25587b9f3f0e647fb5dc2b"} err="failed to get container status \"b7f952975468ddb5173e7d7ed680242b58ea26be1e25587b9f3f0e647fb5dc2b\": rpc error: code = NotFound desc = could not find container \"b7f952975468ddb5173e7d7ed680242b58ea26be1e25587b9f3f0e647fb5dc2b\": container with ID 
starting with b7f952975468ddb5173e7d7ed680242b58ea26be1e25587b9f3f0e647fb5dc2b not found: ID does not exist" Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.850482 4778 scope.go:117] "RemoveContainer" containerID="9bd2b44aa17d88e68154cd64ae134f6ebbf2fd2a7b21bc7b54275f7cfd102970" Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.854304 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bd2b44aa17d88e68154cd64ae134f6ebbf2fd2a7b21bc7b54275f7cfd102970"} err="failed to get container status \"9bd2b44aa17d88e68154cd64ae134f6ebbf2fd2a7b21bc7b54275f7cfd102970\": rpc error: code = NotFound desc = could not find container \"9bd2b44aa17d88e68154cd64ae134f6ebbf2fd2a7b21bc7b54275f7cfd102970\": container with ID starting with 9bd2b44aa17d88e68154cd64ae134f6ebbf2fd2a7b21bc7b54275f7cfd102970 not found: ID does not exist" Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.854348 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.864459 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.873250 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Mar 12 13:33:18 crc kubenswrapper[4778]: E0312 13:33:18.873657 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c667b0e-f02d-4e71-959f-5d24b702bd73" containerName="dnsmasq-dns" Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.873673 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c667b0e-f02d-4e71-959f-5d24b702bd73" containerName="dnsmasq-dns" Mar 12 13:33:18 crc kubenswrapper[4778]: E0312 13:33:18.873688 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a2a8e63-4f63-475d-a03a-3f094c697595" containerName="cinder-api-log" Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.873694 4778 
state_mem.go:107] "Deleted CPUSet assignment" podUID="3a2a8e63-4f63-475d-a03a-3f094c697595" containerName="cinder-api-log" Mar 12 13:33:18 crc kubenswrapper[4778]: E0312 13:33:18.873704 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c667b0e-f02d-4e71-959f-5d24b702bd73" containerName="init" Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.873709 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c667b0e-f02d-4e71-959f-5d24b702bd73" containerName="init" Mar 12 13:33:18 crc kubenswrapper[4778]: E0312 13:33:18.873734 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a2a8e63-4f63-475d-a03a-3f094c697595" containerName="cinder-api" Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.873739 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a2a8e63-4f63-475d-a03a-3f094c697595" containerName="cinder-api" Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.873953 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c667b0e-f02d-4e71-959f-5d24b702bd73" containerName="dnsmasq-dns" Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.873981 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a2a8e63-4f63-475d-a03a-3f094c697595" containerName="cinder-api" Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.873996 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a2a8e63-4f63-475d-a03a-3f094c697595" containerName="cinder-api-log" Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.874956 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.878651 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.878749 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.878808 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.878941 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.986245 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/99f72014-50e8-4dd4-9764-1b2c7d546b30-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"99f72014-50e8-4dd4-9764-1b2c7d546b30\") " pod="openstack/cinder-api-0" Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.986327 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99f72014-50e8-4dd4-9764-1b2c7d546b30-logs\") pod \"cinder-api-0\" (UID: \"99f72014-50e8-4dd4-9764-1b2c7d546b30\") " pod="openstack/cinder-api-0" Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.986351 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/99f72014-50e8-4dd4-9764-1b2c7d546b30-etc-machine-id\") pod \"cinder-api-0\" (UID: \"99f72014-50e8-4dd4-9764-1b2c7d546b30\") " pod="openstack/cinder-api-0" Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.986402 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/99f72014-50e8-4dd4-9764-1b2c7d546b30-public-tls-certs\") pod \"cinder-api-0\" (UID: \"99f72014-50e8-4dd4-9764-1b2c7d546b30\") " pod="openstack/cinder-api-0" Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.986468 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99f72014-50e8-4dd4-9764-1b2c7d546b30-config-data\") pod \"cinder-api-0\" (UID: \"99f72014-50e8-4dd4-9764-1b2c7d546b30\") " pod="openstack/cinder-api-0" Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.986503 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99f72014-50e8-4dd4-9764-1b2c7d546b30-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"99f72014-50e8-4dd4-9764-1b2c7d546b30\") " pod="openstack/cinder-api-0" Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.986537 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/99f72014-50e8-4dd4-9764-1b2c7d546b30-scripts\") pod \"cinder-api-0\" (UID: \"99f72014-50e8-4dd4-9764-1b2c7d546b30\") " pod="openstack/cinder-api-0" Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.986576 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/99f72014-50e8-4dd4-9764-1b2c7d546b30-config-data-custom\") pod \"cinder-api-0\" (UID: \"99f72014-50e8-4dd4-9764-1b2c7d546b30\") " pod="openstack/cinder-api-0" Mar 12 13:33:18 crc kubenswrapper[4778]: I0312 13:33:18.986594 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdkjf\" (UniqueName: \"kubernetes.io/projected/99f72014-50e8-4dd4-9764-1b2c7d546b30-kube-api-access-xdkjf\") pod 
\"cinder-api-0\" (UID: \"99f72014-50e8-4dd4-9764-1b2c7d546b30\") " pod="openstack/cinder-api-0" Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.090234 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/99f72014-50e8-4dd4-9764-1b2c7d546b30-scripts\") pod \"cinder-api-0\" (UID: \"99f72014-50e8-4dd4-9764-1b2c7d546b30\") " pod="openstack/cinder-api-0" Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.090524 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/99f72014-50e8-4dd4-9764-1b2c7d546b30-config-data-custom\") pod \"cinder-api-0\" (UID: \"99f72014-50e8-4dd4-9764-1b2c7d546b30\") " pod="openstack/cinder-api-0" Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.090547 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdkjf\" (UniqueName: \"kubernetes.io/projected/99f72014-50e8-4dd4-9764-1b2c7d546b30-kube-api-access-xdkjf\") pod \"cinder-api-0\" (UID: \"99f72014-50e8-4dd4-9764-1b2c7d546b30\") " pod="openstack/cinder-api-0" Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.090579 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/99f72014-50e8-4dd4-9764-1b2c7d546b30-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"99f72014-50e8-4dd4-9764-1b2c7d546b30\") " pod="openstack/cinder-api-0" Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.090609 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/99f72014-50e8-4dd4-9764-1b2c7d546b30-etc-machine-id\") pod \"cinder-api-0\" (UID: \"99f72014-50e8-4dd4-9764-1b2c7d546b30\") " pod="openstack/cinder-api-0" Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.090622 4778 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99f72014-50e8-4dd4-9764-1b2c7d546b30-logs\") pod \"cinder-api-0\" (UID: \"99f72014-50e8-4dd4-9764-1b2c7d546b30\") " pod="openstack/cinder-api-0" Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.090665 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/99f72014-50e8-4dd4-9764-1b2c7d546b30-public-tls-certs\") pod \"cinder-api-0\" (UID: \"99f72014-50e8-4dd4-9764-1b2c7d546b30\") " pod="openstack/cinder-api-0" Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.090703 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99f72014-50e8-4dd4-9764-1b2c7d546b30-config-data\") pod \"cinder-api-0\" (UID: \"99f72014-50e8-4dd4-9764-1b2c7d546b30\") " pod="openstack/cinder-api-0" Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.090734 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99f72014-50e8-4dd4-9764-1b2c7d546b30-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"99f72014-50e8-4dd4-9764-1b2c7d546b30\") " pod="openstack/cinder-api-0" Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.091039 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/99f72014-50e8-4dd4-9764-1b2c7d546b30-etc-machine-id\") pod \"cinder-api-0\" (UID: \"99f72014-50e8-4dd4-9764-1b2c7d546b30\") " pod="openstack/cinder-api-0" Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.093890 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99f72014-50e8-4dd4-9764-1b2c7d546b30-logs\") pod \"cinder-api-0\" (UID: \"99f72014-50e8-4dd4-9764-1b2c7d546b30\") " pod="openstack/cinder-api-0" Mar 12 
13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.094018 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/99f72014-50e8-4dd4-9764-1b2c7d546b30-config-data-custom\") pod \"cinder-api-0\" (UID: \"99f72014-50e8-4dd4-9764-1b2c7d546b30\") " pod="openstack/cinder-api-0" Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.095081 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99f72014-50e8-4dd4-9764-1b2c7d546b30-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"99f72014-50e8-4dd4-9764-1b2c7d546b30\") " pod="openstack/cinder-api-0" Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.095502 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/99f72014-50e8-4dd4-9764-1b2c7d546b30-scripts\") pod \"cinder-api-0\" (UID: \"99f72014-50e8-4dd4-9764-1b2c7d546b30\") " pod="openstack/cinder-api-0" Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.097341 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/99f72014-50e8-4dd4-9764-1b2c7d546b30-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"99f72014-50e8-4dd4-9764-1b2c7d546b30\") " pod="openstack/cinder-api-0" Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.100232 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99f72014-50e8-4dd4-9764-1b2c7d546b30-config-data\") pod \"cinder-api-0\" (UID: \"99f72014-50e8-4dd4-9764-1b2c7d546b30\") " pod="openstack/cinder-api-0" Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.102685 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/99f72014-50e8-4dd4-9764-1b2c7d546b30-public-tls-certs\") pod \"cinder-api-0\" 
(UID: \"99f72014-50e8-4dd4-9764-1b2c7d546b30\") " pod="openstack/cinder-api-0" Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.111710 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdkjf\" (UniqueName: \"kubernetes.io/projected/99f72014-50e8-4dd4-9764-1b2c7d546b30-kube-api-access-xdkjf\") pod \"cinder-api-0\" (UID: \"99f72014-50e8-4dd4-9764-1b2c7d546b30\") " pod="openstack/cinder-api-0" Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.197811 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.300164 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.333107 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.395604 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4cb6d6d-bc05-4809-83a7-5aacda62cc10-log-httpd\") pod \"b4cb6d6d-bc05-4809-83a7-5aacda62cc10\" (UID: \"b4cb6d6d-bc05-4809-83a7-5aacda62cc10\") " Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.395717 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4cb6d6d-bc05-4809-83a7-5aacda62cc10-run-httpd\") pod \"b4cb6d6d-bc05-4809-83a7-5aacda62cc10\" (UID: \"b4cb6d6d-bc05-4809-83a7-5aacda62cc10\") " Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.395778 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4cb6d6d-bc05-4809-83a7-5aacda62cc10-combined-ca-bundle\") pod \"b4cb6d6d-bc05-4809-83a7-5aacda62cc10\" (UID: 
\"b4cb6d6d-bc05-4809-83a7-5aacda62cc10\") " Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.395831 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4cb6d6d-bc05-4809-83a7-5aacda62cc10-scripts\") pod \"b4cb6d6d-bc05-4809-83a7-5aacda62cc10\" (UID: \"b4cb6d6d-bc05-4809-83a7-5aacda62cc10\") " Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.395848 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4cb6d6d-bc05-4809-83a7-5aacda62cc10-config-data\") pod \"b4cb6d6d-bc05-4809-83a7-5aacda62cc10\" (UID: \"b4cb6d6d-bc05-4809-83a7-5aacda62cc10\") " Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.395884 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b4cb6d6d-bc05-4809-83a7-5aacda62cc10-sg-core-conf-yaml\") pod \"b4cb6d6d-bc05-4809-83a7-5aacda62cc10\" (UID: \"b4cb6d6d-bc05-4809-83a7-5aacda62cc10\") " Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.395903 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62wfc\" (UniqueName: \"kubernetes.io/projected/b4cb6d6d-bc05-4809-83a7-5aacda62cc10-kube-api-access-62wfc\") pod \"b4cb6d6d-bc05-4809-83a7-5aacda62cc10\" (UID: \"b4cb6d6d-bc05-4809-83a7-5aacda62cc10\") " Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.399934 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4cb6d6d-bc05-4809-83a7-5aacda62cc10-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b4cb6d6d-bc05-4809-83a7-5aacda62cc10" (UID: "b4cb6d6d-bc05-4809-83a7-5aacda62cc10"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.400490 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4cb6d6d-bc05-4809-83a7-5aacda62cc10-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b4cb6d6d-bc05-4809-83a7-5aacda62cc10" (UID: "b4cb6d6d-bc05-4809-83a7-5aacda62cc10"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.409245 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4cb6d6d-bc05-4809-83a7-5aacda62cc10-kube-api-access-62wfc" (OuterVolumeSpecName: "kube-api-access-62wfc") pod "b4cb6d6d-bc05-4809-83a7-5aacda62cc10" (UID: "b4cb6d6d-bc05-4809-83a7-5aacda62cc10"). InnerVolumeSpecName "kube-api-access-62wfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.409335 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4cb6d6d-bc05-4809-83a7-5aacda62cc10-scripts" (OuterVolumeSpecName: "scripts") pod "b4cb6d6d-bc05-4809-83a7-5aacda62cc10" (UID: "b4cb6d6d-bc05-4809-83a7-5aacda62cc10"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.502797 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4cb6d6d-bc05-4809-83a7-5aacda62cc10-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b4cb6d6d-bc05-4809-83a7-5aacda62cc10" (UID: "b4cb6d6d-bc05-4809-83a7-5aacda62cc10"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.509890 4778 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4cb6d6d-bc05-4809-83a7-5aacda62cc10-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.509933 4778 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4cb6d6d-bc05-4809-83a7-5aacda62cc10-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.509943 4778 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4cb6d6d-bc05-4809-83a7-5aacda62cc10-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.509952 4778 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b4cb6d6d-bc05-4809-83a7-5aacda62cc10-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.509963 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62wfc\" (UniqueName: \"kubernetes.io/projected/b4cb6d6d-bc05-4809-83a7-5aacda62cc10-kube-api-access-62wfc\") on node \"crc\" DevicePath \"\"" Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.548529 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4cb6d6d-bc05-4809-83a7-5aacda62cc10-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b4cb6d6d-bc05-4809-83a7-5aacda62cc10" (UID: "b4cb6d6d-bc05-4809-83a7-5aacda62cc10"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.564748 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4cb6d6d-bc05-4809-83a7-5aacda62cc10-config-data" (OuterVolumeSpecName: "config-data") pod "b4cb6d6d-bc05-4809-83a7-5aacda62cc10" (UID: "b4cb6d6d-bc05-4809-83a7-5aacda62cc10"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.611979 4778 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4cb6d6d-bc05-4809-83a7-5aacda62cc10-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.612043 4778 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4cb6d6d-bc05-4809-83a7-5aacda62cc10-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.741304 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.800898 4778 generic.go:334] "Generic (PLEG): container finished" podID="b4cb6d6d-bc05-4809-83a7-5aacda62cc10" containerID="f73331cf93a94b368140f81472e855149bad846c050d72495e3f1fdfaa6cf4d0" exitCode=0 Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.801117 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b4cb6d6d-bc05-4809-83a7-5aacda62cc10","Type":"ContainerDied","Data":"f73331cf93a94b368140f81472e855149bad846c050d72495e3f1fdfaa6cf4d0"} Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.801145 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"b4cb6d6d-bc05-4809-83a7-5aacda62cc10","Type":"ContainerDied","Data":"a7e5a5f0fc47985a7306f104e3261cd746e20017382e7ac550b97742b3f6f6e4"} Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.801163 4778 scope.go:117] "RemoveContainer" containerID="53f309e530cf4b086d2af500ee0b6839a2f6cb85c731eee64b12d5c2ca9527ce" Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.801349 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.816495 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"99f72014-50e8-4dd4-9764-1b2c7d546b30","Type":"ContainerStarted","Data":"ffc5edd8376b259d2e817c5933836529d349e1e5587b0e6cb096aa98c0eb7270"} Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.849608 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.857822 4778 scope.go:117] "RemoveContainer" containerID="78dbc9cf48d678718d746451597636002d39908c130023e24550012d03edab70" Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.891554 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.917263 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 12 13:33:19 crc kubenswrapper[4778]: E0312 13:33:19.917620 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4cb6d6d-bc05-4809-83a7-5aacda62cc10" containerName="ceilometer-central-agent" Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.917636 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4cb6d6d-bc05-4809-83a7-5aacda62cc10" containerName="ceilometer-central-agent" Mar 12 13:33:19 crc kubenswrapper[4778]: E0312 13:33:19.917653 4778 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b4cb6d6d-bc05-4809-83a7-5aacda62cc10" containerName="proxy-httpd" Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.917659 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4cb6d6d-bc05-4809-83a7-5aacda62cc10" containerName="proxy-httpd" Mar 12 13:33:19 crc kubenswrapper[4778]: E0312 13:33:19.917678 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4cb6d6d-bc05-4809-83a7-5aacda62cc10" containerName="sg-core" Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.917684 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4cb6d6d-bc05-4809-83a7-5aacda62cc10" containerName="sg-core" Mar 12 13:33:19 crc kubenswrapper[4778]: E0312 13:33:19.917705 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4cb6d6d-bc05-4809-83a7-5aacda62cc10" containerName="ceilometer-notification-agent" Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.917711 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4cb6d6d-bc05-4809-83a7-5aacda62cc10" containerName="ceilometer-notification-agent" Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.917857 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4cb6d6d-bc05-4809-83a7-5aacda62cc10" containerName="ceilometer-notification-agent" Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.917874 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4cb6d6d-bc05-4809-83a7-5aacda62cc10" containerName="sg-core" Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.917888 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4cb6d6d-bc05-4809-83a7-5aacda62cc10" containerName="proxy-httpd" Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.917895 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4cb6d6d-bc05-4809-83a7-5aacda62cc10" containerName="ceilometer-central-agent" Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.919729 4778 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.924474 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.924680 4778 scope.go:117] "RemoveContainer" containerID="f73331cf93a94b368140f81472e855149bad846c050d72495e3f1fdfaa6cf4d0" Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.931124 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.945225 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.974991 4778 scope.go:117] "RemoveContainer" containerID="a2afa1efaa5e813d9e93bd765e7abf6c5129c2365e3e4d71622e5bbd682b89f8" Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.993282 4778 scope.go:117] "RemoveContainer" containerID="53f309e530cf4b086d2af500ee0b6839a2f6cb85c731eee64b12d5c2ca9527ce" Mar 12 13:33:19 crc kubenswrapper[4778]: E0312 13:33:19.993752 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53f309e530cf4b086d2af500ee0b6839a2f6cb85c731eee64b12d5c2ca9527ce\": container with ID starting with 53f309e530cf4b086d2af500ee0b6839a2f6cb85c731eee64b12d5c2ca9527ce not found: ID does not exist" containerID="53f309e530cf4b086d2af500ee0b6839a2f6cb85c731eee64b12d5c2ca9527ce" Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.993783 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53f309e530cf4b086d2af500ee0b6839a2f6cb85c731eee64b12d5c2ca9527ce"} err="failed to get container status \"53f309e530cf4b086d2af500ee0b6839a2f6cb85c731eee64b12d5c2ca9527ce\": rpc error: code = NotFound desc = could not find container 
\"53f309e530cf4b086d2af500ee0b6839a2f6cb85c731eee64b12d5c2ca9527ce\": container with ID starting with 53f309e530cf4b086d2af500ee0b6839a2f6cb85c731eee64b12d5c2ca9527ce not found: ID does not exist" Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.993841 4778 scope.go:117] "RemoveContainer" containerID="78dbc9cf48d678718d746451597636002d39908c130023e24550012d03edab70" Mar 12 13:33:19 crc kubenswrapper[4778]: E0312 13:33:19.994326 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78dbc9cf48d678718d746451597636002d39908c130023e24550012d03edab70\": container with ID starting with 78dbc9cf48d678718d746451597636002d39908c130023e24550012d03edab70 not found: ID does not exist" containerID="78dbc9cf48d678718d746451597636002d39908c130023e24550012d03edab70" Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.994347 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78dbc9cf48d678718d746451597636002d39908c130023e24550012d03edab70"} err="failed to get container status \"78dbc9cf48d678718d746451597636002d39908c130023e24550012d03edab70\": rpc error: code = NotFound desc = could not find container \"78dbc9cf48d678718d746451597636002d39908c130023e24550012d03edab70\": container with ID starting with 78dbc9cf48d678718d746451597636002d39908c130023e24550012d03edab70 not found: ID does not exist" Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.994360 4778 scope.go:117] "RemoveContainer" containerID="f73331cf93a94b368140f81472e855149bad846c050d72495e3f1fdfaa6cf4d0" Mar 12 13:33:19 crc kubenswrapper[4778]: E0312 13:33:19.994572 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f73331cf93a94b368140f81472e855149bad846c050d72495e3f1fdfaa6cf4d0\": container with ID starting with f73331cf93a94b368140f81472e855149bad846c050d72495e3f1fdfaa6cf4d0 not found: ID does not exist" 
containerID="f73331cf93a94b368140f81472e855149bad846c050d72495e3f1fdfaa6cf4d0" Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.994590 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f73331cf93a94b368140f81472e855149bad846c050d72495e3f1fdfaa6cf4d0"} err="failed to get container status \"f73331cf93a94b368140f81472e855149bad846c050d72495e3f1fdfaa6cf4d0\": rpc error: code = NotFound desc = could not find container \"f73331cf93a94b368140f81472e855149bad846c050d72495e3f1fdfaa6cf4d0\": container with ID starting with f73331cf93a94b368140f81472e855149bad846c050d72495e3f1fdfaa6cf4d0 not found: ID does not exist" Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.994604 4778 scope.go:117] "RemoveContainer" containerID="a2afa1efaa5e813d9e93bd765e7abf6c5129c2365e3e4d71622e5bbd682b89f8" Mar 12 13:33:19 crc kubenswrapper[4778]: E0312 13:33:19.994791 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2afa1efaa5e813d9e93bd765e7abf6c5129c2365e3e4d71622e5bbd682b89f8\": container with ID starting with a2afa1efaa5e813d9e93bd765e7abf6c5129c2365e3e4d71622e5bbd682b89f8 not found: ID does not exist" containerID="a2afa1efaa5e813d9e93bd765e7abf6c5129c2365e3e4d71622e5bbd682b89f8" Mar 12 13:33:19 crc kubenswrapper[4778]: I0312 13:33:19.994810 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2afa1efaa5e813d9e93bd765e7abf6c5129c2365e3e4d71622e5bbd682b89f8"} err="failed to get container status \"a2afa1efaa5e813d9e93bd765e7abf6c5129c2365e3e4d71622e5bbd682b89f8\": rpc error: code = NotFound desc = could not find container \"a2afa1efaa5e813d9e93bd765e7abf6c5129c2365e3e4d71622e5bbd682b89f8\": container with ID starting with a2afa1efaa5e813d9e93bd765e7abf6c5129c2365e3e4d71622e5bbd682b89f8 not found: ID does not exist" Mar 12 13:33:20 crc kubenswrapper[4778]: I0312 13:33:20.017058 4778 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-86cb765474-5pq5z" Mar 12 13:33:20 crc kubenswrapper[4778]: I0312 13:33:20.031124 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfq6z\" (UniqueName: \"kubernetes.io/projected/7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c-kube-api-access-rfq6z\") pod \"ceilometer-0\" (UID: \"7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c\") " pod="openstack/ceilometer-0" Mar 12 13:33:20 crc kubenswrapper[4778]: I0312 13:33:20.031174 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c\") " pod="openstack/ceilometer-0" Mar 12 13:33:20 crc kubenswrapper[4778]: I0312 13:33:20.031221 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c\") " pod="openstack/ceilometer-0" Mar 12 13:33:20 crc kubenswrapper[4778]: I0312 13:33:20.031258 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c-scripts\") pod \"ceilometer-0\" (UID: \"7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c\") " pod="openstack/ceilometer-0" Mar 12 13:33:20 crc kubenswrapper[4778]: I0312 13:33:20.031277 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c-log-httpd\") pod \"ceilometer-0\" (UID: \"7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c\") " pod="openstack/ceilometer-0" Mar 12 13:33:20 crc 
kubenswrapper[4778]: I0312 13:33:20.031301 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c-run-httpd\") pod \"ceilometer-0\" (UID: \"7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c\") " pod="openstack/ceilometer-0" Mar 12 13:33:20 crc kubenswrapper[4778]: I0312 13:33:20.031332 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c-config-data\") pod \"ceilometer-0\" (UID: \"7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c\") " pod="openstack/ceilometer-0" Mar 12 13:33:20 crc kubenswrapper[4778]: I0312 13:33:20.132582 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfq6z\" (UniqueName: \"kubernetes.io/projected/7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c-kube-api-access-rfq6z\") pod \"ceilometer-0\" (UID: \"7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c\") " pod="openstack/ceilometer-0" Mar 12 13:33:20 crc kubenswrapper[4778]: I0312 13:33:20.132638 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c\") " pod="openstack/ceilometer-0" Mar 12 13:33:20 crc kubenswrapper[4778]: I0312 13:33:20.132672 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c\") " pod="openstack/ceilometer-0" Mar 12 13:33:20 crc kubenswrapper[4778]: I0312 13:33:20.132712 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c-scripts\") pod \"ceilometer-0\" (UID: \"7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c\") " pod="openstack/ceilometer-0" Mar 12 13:33:20 crc kubenswrapper[4778]: I0312 13:33:20.132739 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c-log-httpd\") pod \"ceilometer-0\" (UID: \"7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c\") " pod="openstack/ceilometer-0" Mar 12 13:33:20 crc kubenswrapper[4778]: I0312 13:33:20.132775 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c-run-httpd\") pod \"ceilometer-0\" (UID: \"7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c\") " pod="openstack/ceilometer-0" Mar 12 13:33:20 crc kubenswrapper[4778]: I0312 13:33:20.132810 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c-config-data\") pod \"ceilometer-0\" (UID: \"7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c\") " pod="openstack/ceilometer-0" Mar 12 13:33:20 crc kubenswrapper[4778]: I0312 13:33:20.136679 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c-run-httpd\") pod \"ceilometer-0\" (UID: \"7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c\") " pod="openstack/ceilometer-0" Mar 12 13:33:20 crc kubenswrapper[4778]: I0312 13:33:20.137780 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c-log-httpd\") pod \"ceilometer-0\" (UID: \"7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c\") " pod="openstack/ceilometer-0" Mar 12 13:33:20 crc kubenswrapper[4778]: I0312 13:33:20.139913 4778 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c\") " pod="openstack/ceilometer-0" Mar 12 13:33:20 crc kubenswrapper[4778]: I0312 13:33:20.139941 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c\") " pod="openstack/ceilometer-0" Mar 12 13:33:20 crc kubenswrapper[4778]: I0312 13:33:20.140798 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c-config-data\") pod \"ceilometer-0\" (UID: \"7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c\") " pod="openstack/ceilometer-0" Mar 12 13:33:20 crc kubenswrapper[4778]: I0312 13:33:20.141543 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c-scripts\") pod \"ceilometer-0\" (UID: \"7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c\") " pod="openstack/ceilometer-0" Mar 12 13:33:20 crc kubenswrapper[4778]: I0312 13:33:20.153400 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfq6z\" (UniqueName: \"kubernetes.io/projected/7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c-kube-api-access-rfq6z\") pod \"ceilometer-0\" (UID: \"7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c\") " pod="openstack/ceilometer-0" Mar 12 13:33:20 crc kubenswrapper[4778]: I0312 13:33:20.243552 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 13:33:20 crc kubenswrapper[4778]: I0312 13:33:20.267506 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a2a8e63-4f63-475d-a03a-3f094c697595" path="/var/lib/kubelet/pods/3a2a8e63-4f63-475d-a03a-3f094c697595/volumes" Mar 12 13:33:20 crc kubenswrapper[4778]: I0312 13:33:20.268332 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4cb6d6d-bc05-4809-83a7-5aacda62cc10" path="/var/lib/kubelet/pods/b4cb6d6d-bc05-4809-83a7-5aacda62cc10/volumes" Mar 12 13:33:20 crc kubenswrapper[4778]: I0312 13:33:20.334171 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-86cb765474-5pq5z" Mar 12 13:33:20 crc kubenswrapper[4778]: I0312 13:33:20.414442 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5f884f5564-dxzpv"] Mar 12 13:33:20 crc kubenswrapper[4778]: I0312 13:33:20.414979 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5f884f5564-dxzpv" podUID="ef2e3c21-ccc6-4dcc-a476-7393bb481441" containerName="barbican-api-log" containerID="cri-o://555085059a0c8494fcbd31c46657e06bdebc21317a675fa20661619d5dc02586" gracePeriod=30 Mar 12 13:33:20 crc kubenswrapper[4778]: I0312 13:33:20.417087 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5f884f5564-dxzpv" podUID="ef2e3c21-ccc6-4dcc-a476-7393bb481441" containerName="barbican-api" containerID="cri-o://c0edf91d21f7ba54f7ae8ead172101f785145fe82241acf1f7236f38396130a9" gracePeriod=30 Mar 12 13:33:20 crc kubenswrapper[4778]: I0312 13:33:20.428964 4778 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5f884f5564-dxzpv" podUID="ef2e3c21-ccc6-4dcc-a476-7393bb481441" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": EOF" Mar 12 13:33:20 crc kubenswrapper[4778]: I0312 
13:33:20.849604 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"99f72014-50e8-4dd4-9764-1b2c7d546b30","Type":"ContainerStarted","Data":"62fa57c95e565e21b55673b674363abc6eb8ad44ca2bbf998f2181e0eacf6026"} Mar 12 13:33:20 crc kubenswrapper[4778]: I0312 13:33:20.853472 4778 generic.go:334] "Generic (PLEG): container finished" podID="ef2e3c21-ccc6-4dcc-a476-7393bb481441" containerID="555085059a0c8494fcbd31c46657e06bdebc21317a675fa20661619d5dc02586" exitCode=143 Mar 12 13:33:20 crc kubenswrapper[4778]: I0312 13:33:20.854307 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5f884f5564-dxzpv" event={"ID":"ef2e3c21-ccc6-4dcc-a476-7393bb481441","Type":"ContainerDied","Data":"555085059a0c8494fcbd31c46657e06bdebc21317a675fa20661619d5dc02586"} Mar 12 13:33:20 crc kubenswrapper[4778]: I0312 13:33:20.902747 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 13:33:20 crc kubenswrapper[4778]: W0312 13:33:20.928877 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b2fa220_02b1_4940_9ae0_3d9e5b4bcd9c.slice/crio-824d71ac269215e859c8ef1b41498f4804c8adec49c2375a8307421f28798e4b WatchSource:0}: Error finding container 824d71ac269215e859c8ef1b41498f4804c8adec49c2375a8307421f28798e4b: Status 404 returned error can't find the container with id 824d71ac269215e859c8ef1b41498f4804c8adec49c2375a8307421f28798e4b Mar 12 13:33:21 crc kubenswrapper[4778]: I0312 13:33:21.867875 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"99f72014-50e8-4dd4-9764-1b2c7d546b30","Type":"ContainerStarted","Data":"384c98ecb5fea55607fd01b63ae71a434b6a533af87e4618a66f51cb22530b62"} Mar 12 13:33:21 crc kubenswrapper[4778]: I0312 13:33:21.868181 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Mar 12 13:33:21 crc 
kubenswrapper[4778]: I0312 13:33:21.872739 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c","Type":"ContainerStarted","Data":"824d71ac269215e859c8ef1b41498f4804c8adec49c2375a8307421f28798e4b"} Mar 12 13:33:22 crc kubenswrapper[4778]: I0312 13:33:22.299880 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.2998594709999995 podStartE2EDuration="4.299859471s" podCreationTimestamp="2026-03-12 13:33:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:33:21.896224591 +0000 UTC m=+1420.344919997" watchObservedRunningTime="2026-03-12 13:33:22.299859471 +0000 UTC m=+1420.748554867" Mar 12 13:33:22 crc kubenswrapper[4778]: I0312 13:33:22.857981 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-678c76989b-8x56d" Mar 12 13:33:22 crc kubenswrapper[4778]: I0312 13:33:22.886634 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c","Type":"ContainerStarted","Data":"92af10636577795c46a7d7213efc323d787b43d9aee552320b8e29e2d94b148c"} Mar 12 13:33:23 crc kubenswrapper[4778]: I0312 13:33:23.087661 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7955c84d65-qfgcn"] Mar 12 13:33:23 crc kubenswrapper[4778]: I0312 13:33:23.088161 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7955c84d65-qfgcn" podUID="d582b80a-57bd-4cd4-9e72-8a963cae187d" containerName="neutron-api" containerID="cri-o://938c7e0b5c63a6fd5773476e5ae689de9d1155fb4dbd3f7bca4dc6764bc762cd" gracePeriod=30 Mar 12 13:33:23 crc kubenswrapper[4778]: I0312 13:33:23.088941 4778 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/neutron-7955c84d65-qfgcn" podUID="d582b80a-57bd-4cd4-9e72-8a963cae187d" containerName="neutron-httpd" containerID="cri-o://71d475b828218d4b5f04543cac9306418884b36e07b75eda675a3ad92ddced09" gracePeriod=30 Mar 12 13:33:23 crc kubenswrapper[4778]: I0312 13:33:23.114785 4778 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-7955c84d65-qfgcn" podUID="d582b80a-57bd-4cd4-9e72-8a963cae187d" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.157:9696/\": EOF" Mar 12 13:33:23 crc kubenswrapper[4778]: I0312 13:33:23.117411 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-769c65dfd5-t7d9g"] Mar 12 13:33:23 crc kubenswrapper[4778]: I0312 13:33:23.118849 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-769c65dfd5-t7d9g" Mar 12 13:33:23 crc kubenswrapper[4778]: I0312 13:33:23.145076 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-769c65dfd5-t7d9g"] Mar 12 13:33:23 crc kubenswrapper[4778]: I0312 13:33:23.207205 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3118f8b-6bd2-4fba-8300-114513770916-combined-ca-bundle\") pod \"neutron-769c65dfd5-t7d9g\" (UID: \"e3118f8b-6bd2-4fba-8300-114513770916\") " pod="openstack/neutron-769c65dfd5-t7d9g" Mar 12 13:33:23 crc kubenswrapper[4778]: I0312 13:33:23.207285 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f76t\" (UniqueName: \"kubernetes.io/projected/e3118f8b-6bd2-4fba-8300-114513770916-kube-api-access-4f76t\") pod \"neutron-769c65dfd5-t7d9g\" (UID: \"e3118f8b-6bd2-4fba-8300-114513770916\") " pod="openstack/neutron-769c65dfd5-t7d9g" Mar 12 13:33:23 crc kubenswrapper[4778]: I0312 13:33:23.207364 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3118f8b-6bd2-4fba-8300-114513770916-public-tls-certs\") pod \"neutron-769c65dfd5-t7d9g\" (UID: \"e3118f8b-6bd2-4fba-8300-114513770916\") " pod="openstack/neutron-769c65dfd5-t7d9g" Mar 12 13:33:23 crc kubenswrapper[4778]: I0312 13:33:23.207415 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e3118f8b-6bd2-4fba-8300-114513770916-httpd-config\") pod \"neutron-769c65dfd5-t7d9g\" (UID: \"e3118f8b-6bd2-4fba-8300-114513770916\") " pod="openstack/neutron-769c65dfd5-t7d9g" Mar 12 13:33:23 crc kubenswrapper[4778]: I0312 13:33:23.207449 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e3118f8b-6bd2-4fba-8300-114513770916-config\") pod \"neutron-769c65dfd5-t7d9g\" (UID: \"e3118f8b-6bd2-4fba-8300-114513770916\") " pod="openstack/neutron-769c65dfd5-t7d9g" Mar 12 13:33:23 crc kubenswrapper[4778]: I0312 13:33:23.207684 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3118f8b-6bd2-4fba-8300-114513770916-internal-tls-certs\") pod \"neutron-769c65dfd5-t7d9g\" (UID: \"e3118f8b-6bd2-4fba-8300-114513770916\") " pod="openstack/neutron-769c65dfd5-t7d9g" Mar 12 13:33:23 crc kubenswrapper[4778]: I0312 13:33:23.207848 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3118f8b-6bd2-4fba-8300-114513770916-ovndb-tls-certs\") pod \"neutron-769c65dfd5-t7d9g\" (UID: \"e3118f8b-6bd2-4fba-8300-114513770916\") " pod="openstack/neutron-769c65dfd5-t7d9g" Mar 12 13:33:23 crc kubenswrapper[4778]: I0312 13:33:23.313284 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e3118f8b-6bd2-4fba-8300-114513770916-ovndb-tls-certs\") pod \"neutron-769c65dfd5-t7d9g\" (UID: \"e3118f8b-6bd2-4fba-8300-114513770916\") " pod="openstack/neutron-769c65dfd5-t7d9g" Mar 12 13:33:23 crc kubenswrapper[4778]: I0312 13:33:23.313350 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3118f8b-6bd2-4fba-8300-114513770916-combined-ca-bundle\") pod \"neutron-769c65dfd5-t7d9g\" (UID: \"e3118f8b-6bd2-4fba-8300-114513770916\") " pod="openstack/neutron-769c65dfd5-t7d9g" Mar 12 13:33:23 crc kubenswrapper[4778]: I0312 13:33:23.313383 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4f76t\" (UniqueName: \"kubernetes.io/projected/e3118f8b-6bd2-4fba-8300-114513770916-kube-api-access-4f76t\") pod \"neutron-769c65dfd5-t7d9g\" (UID: \"e3118f8b-6bd2-4fba-8300-114513770916\") " pod="openstack/neutron-769c65dfd5-t7d9g" Mar 12 13:33:23 crc kubenswrapper[4778]: I0312 13:33:23.313420 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3118f8b-6bd2-4fba-8300-114513770916-public-tls-certs\") pod \"neutron-769c65dfd5-t7d9g\" (UID: \"e3118f8b-6bd2-4fba-8300-114513770916\") " pod="openstack/neutron-769c65dfd5-t7d9g" Mar 12 13:33:23 crc kubenswrapper[4778]: I0312 13:33:23.313458 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e3118f8b-6bd2-4fba-8300-114513770916-httpd-config\") pod \"neutron-769c65dfd5-t7d9g\" (UID: \"e3118f8b-6bd2-4fba-8300-114513770916\") " pod="openstack/neutron-769c65dfd5-t7d9g" Mar 12 13:33:23 crc kubenswrapper[4778]: I0312 13:33:23.313484 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e3118f8b-6bd2-4fba-8300-114513770916-config\") pod 
\"neutron-769c65dfd5-t7d9g\" (UID: \"e3118f8b-6bd2-4fba-8300-114513770916\") " pod="openstack/neutron-769c65dfd5-t7d9g" Mar 12 13:33:23 crc kubenswrapper[4778]: I0312 13:33:23.313540 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3118f8b-6bd2-4fba-8300-114513770916-internal-tls-certs\") pod \"neutron-769c65dfd5-t7d9g\" (UID: \"e3118f8b-6bd2-4fba-8300-114513770916\") " pod="openstack/neutron-769c65dfd5-t7d9g" Mar 12 13:33:23 crc kubenswrapper[4778]: I0312 13:33:23.321167 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3118f8b-6bd2-4fba-8300-114513770916-internal-tls-certs\") pod \"neutron-769c65dfd5-t7d9g\" (UID: \"e3118f8b-6bd2-4fba-8300-114513770916\") " pod="openstack/neutron-769c65dfd5-t7d9g" Mar 12 13:33:23 crc kubenswrapper[4778]: I0312 13:33:23.325342 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3118f8b-6bd2-4fba-8300-114513770916-combined-ca-bundle\") pod \"neutron-769c65dfd5-t7d9g\" (UID: \"e3118f8b-6bd2-4fba-8300-114513770916\") " pod="openstack/neutron-769c65dfd5-t7d9g" Mar 12 13:33:23 crc kubenswrapper[4778]: I0312 13:33:23.325985 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e3118f8b-6bd2-4fba-8300-114513770916-httpd-config\") pod \"neutron-769c65dfd5-t7d9g\" (UID: \"e3118f8b-6bd2-4fba-8300-114513770916\") " pod="openstack/neutron-769c65dfd5-t7d9g" Mar 12 13:33:23 crc kubenswrapper[4778]: I0312 13:33:23.326876 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3118f8b-6bd2-4fba-8300-114513770916-public-tls-certs\") pod \"neutron-769c65dfd5-t7d9g\" (UID: \"e3118f8b-6bd2-4fba-8300-114513770916\") " 
pod="openstack/neutron-769c65dfd5-t7d9g" Mar 12 13:33:23 crc kubenswrapper[4778]: I0312 13:33:23.329386 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e3118f8b-6bd2-4fba-8300-114513770916-config\") pod \"neutron-769c65dfd5-t7d9g\" (UID: \"e3118f8b-6bd2-4fba-8300-114513770916\") " pod="openstack/neutron-769c65dfd5-t7d9g" Mar 12 13:33:23 crc kubenswrapper[4778]: I0312 13:33:23.339007 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3118f8b-6bd2-4fba-8300-114513770916-ovndb-tls-certs\") pod \"neutron-769c65dfd5-t7d9g\" (UID: \"e3118f8b-6bd2-4fba-8300-114513770916\") " pod="openstack/neutron-769c65dfd5-t7d9g" Mar 12 13:33:23 crc kubenswrapper[4778]: I0312 13:33:23.349074 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4f76t\" (UniqueName: \"kubernetes.io/projected/e3118f8b-6bd2-4fba-8300-114513770916-kube-api-access-4f76t\") pod \"neutron-769c65dfd5-t7d9g\" (UID: \"e3118f8b-6bd2-4fba-8300-114513770916\") " pod="openstack/neutron-769c65dfd5-t7d9g" Mar 12 13:33:23 crc kubenswrapper[4778]: I0312 13:33:23.461333 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-769c65dfd5-t7d9g" Mar 12 13:33:23 crc kubenswrapper[4778]: I0312 13:33:23.800639 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-79ccdbbbbd-gl27l" Mar 12 13:33:23 crc kubenswrapper[4778]: I0312 13:33:23.805122 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-79ccdbbbbd-gl27l" Mar 12 13:33:23 crc kubenswrapper[4778]: I0312 13:33:23.899225 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c","Type":"ContainerStarted","Data":"e22d50b8fe9f90a6aab9adf00774ed799ec453df3b0b299a334bf282330ef1b7"} Mar 12 13:33:23 crc kubenswrapper[4778]: I0312 13:33:23.899265 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c","Type":"ContainerStarted","Data":"a1d99c14b9faebc510bf1668c9726ad77d4df3734c20a3bc0e28ff53683f982e"} Mar 12 13:33:23 crc kubenswrapper[4778]: I0312 13:33:23.902006 4778 generic.go:334] "Generic (PLEG): container finished" podID="d582b80a-57bd-4cd4-9e72-8a963cae187d" containerID="71d475b828218d4b5f04543cac9306418884b36e07b75eda675a3ad92ddced09" exitCode=0 Mar 12 13:33:23 crc kubenswrapper[4778]: I0312 13:33:23.902097 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7955c84d65-qfgcn" event={"ID":"d582b80a-57bd-4cd4-9e72-8a963cae187d","Type":"ContainerDied","Data":"71d475b828218d4b5f04543cac9306418884b36e07b75eda675a3ad92ddced09"} Mar 12 13:33:24 crc kubenswrapper[4778]: I0312 13:33:24.055930 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-d4d765698-l7bjx"] Mar 12 13:33:24 crc kubenswrapper[4778]: I0312 13:33:24.057449 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-d4d765698-l7bjx" Mar 12 13:33:24 crc kubenswrapper[4778]: I0312 13:33:24.068126 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-d4d765698-l7bjx"] Mar 12 13:33:24 crc kubenswrapper[4778]: I0312 13:33:24.138381 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/267e7df2-d35c-45c4-af65-e8af31f8f6cf-public-tls-certs\") pod \"placement-d4d765698-l7bjx\" (UID: \"267e7df2-d35c-45c4-af65-e8af31f8f6cf\") " pod="openstack/placement-d4d765698-l7bjx" Mar 12 13:33:24 crc kubenswrapper[4778]: I0312 13:33:24.138421 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/267e7df2-d35c-45c4-af65-e8af31f8f6cf-scripts\") pod \"placement-d4d765698-l7bjx\" (UID: \"267e7df2-d35c-45c4-af65-e8af31f8f6cf\") " pod="openstack/placement-d4d765698-l7bjx" Mar 12 13:33:24 crc kubenswrapper[4778]: I0312 13:33:24.138461 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/267e7df2-d35c-45c4-af65-e8af31f8f6cf-config-data\") pod \"placement-d4d765698-l7bjx\" (UID: \"267e7df2-d35c-45c4-af65-e8af31f8f6cf\") " pod="openstack/placement-d4d765698-l7bjx" Mar 12 13:33:24 crc kubenswrapper[4778]: I0312 13:33:24.138526 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxx2q\" (UniqueName: \"kubernetes.io/projected/267e7df2-d35c-45c4-af65-e8af31f8f6cf-kube-api-access-xxx2q\") pod \"placement-d4d765698-l7bjx\" (UID: \"267e7df2-d35c-45c4-af65-e8af31f8f6cf\") " pod="openstack/placement-d4d765698-l7bjx" Mar 12 13:33:24 crc kubenswrapper[4778]: I0312 13:33:24.138568 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/267e7df2-d35c-45c4-af65-e8af31f8f6cf-internal-tls-certs\") pod \"placement-d4d765698-l7bjx\" (UID: \"267e7df2-d35c-45c4-af65-e8af31f8f6cf\") " pod="openstack/placement-d4d765698-l7bjx" Mar 12 13:33:24 crc kubenswrapper[4778]: I0312 13:33:24.138602 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/267e7df2-d35c-45c4-af65-e8af31f8f6cf-combined-ca-bundle\") pod \"placement-d4d765698-l7bjx\" (UID: \"267e7df2-d35c-45c4-af65-e8af31f8f6cf\") " pod="openstack/placement-d4d765698-l7bjx" Mar 12 13:33:24 crc kubenswrapper[4778]: I0312 13:33:24.138712 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/267e7df2-d35c-45c4-af65-e8af31f8f6cf-logs\") pod \"placement-d4d765698-l7bjx\" (UID: \"267e7df2-d35c-45c4-af65-e8af31f8f6cf\") " pod="openstack/placement-d4d765698-l7bjx" Mar 12 13:33:24 crc kubenswrapper[4778]: I0312 13:33:24.198980 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-769c65dfd5-t7d9g"] Mar 12 13:33:24 crc kubenswrapper[4778]: I0312 13:33:24.240270 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/267e7df2-d35c-45c4-af65-e8af31f8f6cf-public-tls-certs\") pod \"placement-d4d765698-l7bjx\" (UID: \"267e7df2-d35c-45c4-af65-e8af31f8f6cf\") " pod="openstack/placement-d4d765698-l7bjx" Mar 12 13:33:24 crc kubenswrapper[4778]: I0312 13:33:24.240332 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/267e7df2-d35c-45c4-af65-e8af31f8f6cf-scripts\") pod \"placement-d4d765698-l7bjx\" (UID: \"267e7df2-d35c-45c4-af65-e8af31f8f6cf\") " pod="openstack/placement-d4d765698-l7bjx" Mar 12 13:33:24 crc kubenswrapper[4778]: I0312 
13:33:24.240388 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/267e7df2-d35c-45c4-af65-e8af31f8f6cf-config-data\") pod \"placement-d4d765698-l7bjx\" (UID: \"267e7df2-d35c-45c4-af65-e8af31f8f6cf\") " pod="openstack/placement-d4d765698-l7bjx" Mar 12 13:33:24 crc kubenswrapper[4778]: I0312 13:33:24.240442 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxx2q\" (UniqueName: \"kubernetes.io/projected/267e7df2-d35c-45c4-af65-e8af31f8f6cf-kube-api-access-xxx2q\") pod \"placement-d4d765698-l7bjx\" (UID: \"267e7df2-d35c-45c4-af65-e8af31f8f6cf\") " pod="openstack/placement-d4d765698-l7bjx" Mar 12 13:33:24 crc kubenswrapper[4778]: I0312 13:33:24.240477 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/267e7df2-d35c-45c4-af65-e8af31f8f6cf-internal-tls-certs\") pod \"placement-d4d765698-l7bjx\" (UID: \"267e7df2-d35c-45c4-af65-e8af31f8f6cf\") " pod="openstack/placement-d4d765698-l7bjx" Mar 12 13:33:24 crc kubenswrapper[4778]: I0312 13:33:24.240506 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/267e7df2-d35c-45c4-af65-e8af31f8f6cf-combined-ca-bundle\") pod \"placement-d4d765698-l7bjx\" (UID: \"267e7df2-d35c-45c4-af65-e8af31f8f6cf\") " pod="openstack/placement-d4d765698-l7bjx" Mar 12 13:33:24 crc kubenswrapper[4778]: I0312 13:33:24.240620 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/267e7df2-d35c-45c4-af65-e8af31f8f6cf-logs\") pod \"placement-d4d765698-l7bjx\" (UID: \"267e7df2-d35c-45c4-af65-e8af31f8f6cf\") " pod="openstack/placement-d4d765698-l7bjx" Mar 12 13:33:24 crc kubenswrapper[4778]: I0312 13:33:24.241053 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/267e7df2-d35c-45c4-af65-e8af31f8f6cf-logs\") pod \"placement-d4d765698-l7bjx\" (UID: \"267e7df2-d35c-45c4-af65-e8af31f8f6cf\") " pod="openstack/placement-d4d765698-l7bjx" Mar 12 13:33:24 crc kubenswrapper[4778]: I0312 13:33:24.245812 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/267e7df2-d35c-45c4-af65-e8af31f8f6cf-internal-tls-certs\") pod \"placement-d4d765698-l7bjx\" (UID: \"267e7df2-d35c-45c4-af65-e8af31f8f6cf\") " pod="openstack/placement-d4d765698-l7bjx" Mar 12 13:33:24 crc kubenswrapper[4778]: I0312 13:33:24.246176 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/267e7df2-d35c-45c4-af65-e8af31f8f6cf-combined-ca-bundle\") pod \"placement-d4d765698-l7bjx\" (UID: \"267e7df2-d35c-45c4-af65-e8af31f8f6cf\") " pod="openstack/placement-d4d765698-l7bjx" Mar 12 13:33:24 crc kubenswrapper[4778]: I0312 13:33:24.247151 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/267e7df2-d35c-45c4-af65-e8af31f8f6cf-config-data\") pod \"placement-d4d765698-l7bjx\" (UID: \"267e7df2-d35c-45c4-af65-e8af31f8f6cf\") " pod="openstack/placement-d4d765698-l7bjx" Mar 12 13:33:24 crc kubenswrapper[4778]: I0312 13:33:24.247701 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/267e7df2-d35c-45c4-af65-e8af31f8f6cf-public-tls-certs\") pod \"placement-d4d765698-l7bjx\" (UID: \"267e7df2-d35c-45c4-af65-e8af31f8f6cf\") " pod="openstack/placement-d4d765698-l7bjx" Mar 12 13:33:24 crc kubenswrapper[4778]: I0312 13:33:24.251489 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/267e7df2-d35c-45c4-af65-e8af31f8f6cf-scripts\") pod \"placement-d4d765698-l7bjx\" (UID: 
\"267e7df2-d35c-45c4-af65-e8af31f8f6cf\") " pod="openstack/placement-d4d765698-l7bjx" Mar 12 13:33:24 crc kubenswrapper[4778]: I0312 13:33:24.261554 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxx2q\" (UniqueName: \"kubernetes.io/projected/267e7df2-d35c-45c4-af65-e8af31f8f6cf-kube-api-access-xxx2q\") pod \"placement-d4d765698-l7bjx\" (UID: \"267e7df2-d35c-45c4-af65-e8af31f8f6cf\") " pod="openstack/placement-d4d765698-l7bjx" Mar 12 13:33:24 crc kubenswrapper[4778]: I0312 13:33:24.482271 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-246x7" Mar 12 13:33:24 crc kubenswrapper[4778]: I0312 13:33:24.498799 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d4d765698-l7bjx" Mar 12 13:33:24 crc kubenswrapper[4778]: I0312 13:33:24.557805 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-56bl9"] Mar 12 13:33:24 crc kubenswrapper[4778]: I0312 13:33:24.558084 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-56bl9" podUID="811bc15c-050c-4d37-a19f-095086748286" containerName="dnsmasq-dns" containerID="cri-o://512c2c0cf187f0ee46cccf1da3f29d083846818126627409ab7b1bb5fa1ef052" gracePeriod=10 Mar 12 13:33:24 crc kubenswrapper[4778]: I0312 13:33:24.586170 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Mar 12 13:33:24 crc kubenswrapper[4778]: I0312 13:33:24.644901 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 12 13:33:24 crc kubenswrapper[4778]: I0312 13:33:24.742719 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7955c84d65-qfgcn" Mar 12 13:33:24 crc kubenswrapper[4778]: I0312 13:33:24.861474 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d582b80a-57bd-4cd4-9e72-8a963cae187d-ovndb-tls-certs\") pod \"d582b80a-57bd-4cd4-9e72-8a963cae187d\" (UID: \"d582b80a-57bd-4cd4-9e72-8a963cae187d\") " Mar 12 13:33:24 crc kubenswrapper[4778]: I0312 13:33:24.861529 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d582b80a-57bd-4cd4-9e72-8a963cae187d-internal-tls-certs\") pod \"d582b80a-57bd-4cd4-9e72-8a963cae187d\" (UID: \"d582b80a-57bd-4cd4-9e72-8a963cae187d\") " Mar 12 13:33:24 crc kubenswrapper[4778]: I0312 13:33:24.861592 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d582b80a-57bd-4cd4-9e72-8a963cae187d-config\") pod \"d582b80a-57bd-4cd4-9e72-8a963cae187d\" (UID: \"d582b80a-57bd-4cd4-9e72-8a963cae187d\") " Mar 12 13:33:24 crc kubenswrapper[4778]: I0312 13:33:24.861650 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h89hz\" (UniqueName: \"kubernetes.io/projected/d582b80a-57bd-4cd4-9e72-8a963cae187d-kube-api-access-h89hz\") pod \"d582b80a-57bd-4cd4-9e72-8a963cae187d\" (UID: \"d582b80a-57bd-4cd4-9e72-8a963cae187d\") " Mar 12 13:33:24 crc kubenswrapper[4778]: I0312 13:33:24.861748 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d582b80a-57bd-4cd4-9e72-8a963cae187d-combined-ca-bundle\") pod \"d582b80a-57bd-4cd4-9e72-8a963cae187d\" (UID: \"d582b80a-57bd-4cd4-9e72-8a963cae187d\") " Mar 12 13:33:24 crc kubenswrapper[4778]: I0312 13:33:24.861795 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d582b80a-57bd-4cd4-9e72-8a963cae187d-public-tls-certs\") pod \"d582b80a-57bd-4cd4-9e72-8a963cae187d\" (UID: \"d582b80a-57bd-4cd4-9e72-8a963cae187d\") " Mar 12 13:33:24 crc kubenswrapper[4778]: I0312 13:33:24.861831 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d582b80a-57bd-4cd4-9e72-8a963cae187d-httpd-config\") pod \"d582b80a-57bd-4cd4-9e72-8a963cae187d\" (UID: \"d582b80a-57bd-4cd4-9e72-8a963cae187d\") " Mar 12 13:33:24 crc kubenswrapper[4778]: I0312 13:33:24.872513 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d582b80a-57bd-4cd4-9e72-8a963cae187d-kube-api-access-h89hz" (OuterVolumeSpecName: "kube-api-access-h89hz") pod "d582b80a-57bd-4cd4-9e72-8a963cae187d" (UID: "d582b80a-57bd-4cd4-9e72-8a963cae187d"). InnerVolumeSpecName "kube-api-access-h89hz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:33:24 crc kubenswrapper[4778]: I0312 13:33:24.878570 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d582b80a-57bd-4cd4-9e72-8a963cae187d-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "d582b80a-57bd-4cd4-9e72-8a963cae187d" (UID: "d582b80a-57bd-4cd4-9e72-8a963cae187d"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:33:24 crc kubenswrapper[4778]: I0312 13:33:24.918948 4778 generic.go:334] "Generic (PLEG): container finished" podID="811bc15c-050c-4d37-a19f-095086748286" containerID="512c2c0cf187f0ee46cccf1da3f29d083846818126627409ab7b1bb5fa1ef052" exitCode=0 Mar 12 13:33:24 crc kubenswrapper[4778]: I0312 13:33:24.919017 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-56bl9" event={"ID":"811bc15c-050c-4d37-a19f-095086748286","Type":"ContainerDied","Data":"512c2c0cf187f0ee46cccf1da3f29d083846818126627409ab7b1bb5fa1ef052"} Mar 12 13:33:24 crc kubenswrapper[4778]: I0312 13:33:24.932843 4778 generic.go:334] "Generic (PLEG): container finished" podID="d582b80a-57bd-4cd4-9e72-8a963cae187d" containerID="938c7e0b5c63a6fd5773476e5ae689de9d1155fb4dbd3f7bca4dc6764bc762cd" exitCode=0 Mar 12 13:33:24 crc kubenswrapper[4778]: I0312 13:33:24.932923 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7955c84d65-qfgcn" event={"ID":"d582b80a-57bd-4cd4-9e72-8a963cae187d","Type":"ContainerDied","Data":"938c7e0b5c63a6fd5773476e5ae689de9d1155fb4dbd3f7bca4dc6764bc762cd"} Mar 12 13:33:24 crc kubenswrapper[4778]: I0312 13:33:24.932957 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7955c84d65-qfgcn" event={"ID":"d582b80a-57bd-4cd4-9e72-8a963cae187d","Type":"ContainerDied","Data":"284482a4b85498fbfd683802fcf5305643f5a4cf33d63effbb2a1f2fd1071a11"} Mar 12 13:33:24 crc kubenswrapper[4778]: I0312 13:33:24.932980 4778 scope.go:117] "RemoveContainer" containerID="71d475b828218d4b5f04543cac9306418884b36e07b75eda675a3ad92ddced09" Mar 12 13:33:24 crc kubenswrapper[4778]: I0312 13:33:24.933137 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7955c84d65-qfgcn" Mar 12 13:33:24 crc kubenswrapper[4778]: I0312 13:33:24.946304 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d582b80a-57bd-4cd4-9e72-8a963cae187d-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d582b80a-57bd-4cd4-9e72-8a963cae187d" (UID: "d582b80a-57bd-4cd4-9e72-8a963cae187d"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:33:24 crc kubenswrapper[4778]: I0312 13:33:24.947916 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="a3d67881-ce3f-4abe-b07b-a0b22f1f53d1" containerName="cinder-scheduler" containerID="cri-o://579bc12eaa8aab0c50eb9ede8c49b9d7ccb94f4d26f7a5f51955978076c57a52" gracePeriod=30 Mar 12 13:33:24 crc kubenswrapper[4778]: I0312 13:33:24.949071 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-769c65dfd5-t7d9g" event={"ID":"e3118f8b-6bd2-4fba-8300-114513770916","Type":"ContainerStarted","Data":"6738f9dd946d748869f4b26f4030a90ea55b7a4599f29ac178ad859657a706f7"} Mar 12 13:33:24 crc kubenswrapper[4778]: I0312 13:33:24.949105 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-769c65dfd5-t7d9g" Mar 12 13:33:24 crc kubenswrapper[4778]: I0312 13:33:24.949118 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-769c65dfd5-t7d9g" event={"ID":"e3118f8b-6bd2-4fba-8300-114513770916","Type":"ContainerStarted","Data":"7559ac32cffd7eca339ac8d8d2f5491100a0167d9ce788c2eb95e805cc071cda"} Mar 12 13:33:24 crc kubenswrapper[4778]: I0312 13:33:24.949129 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-769c65dfd5-t7d9g" event={"ID":"e3118f8b-6bd2-4fba-8300-114513770916","Type":"ContainerStarted","Data":"8300c5c0870d3a0dc15fa6bca84b387efeba0222c0e9b918971777a65c2fcb29"} Mar 12 13:33:24 crc 
kubenswrapper[4778]: I0312 13:33:24.949470 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="a3d67881-ce3f-4abe-b07b-a0b22f1f53d1" containerName="probe" containerID="cri-o://6d003c8be41ca71c54434a7c7a1f3fbe12f00352aa1f46649d39fc04831f2c1f" gracePeriod=30 Mar 12 13:33:24 crc kubenswrapper[4778]: I0312 13:33:24.954871 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d582b80a-57bd-4cd4-9e72-8a963cae187d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d582b80a-57bd-4cd4-9e72-8a963cae187d" (UID: "d582b80a-57bd-4cd4-9e72-8a963cae187d"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:33:24 crc kubenswrapper[4778]: I0312 13:33:24.964537 4778 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d582b80a-57bd-4cd4-9e72-8a963cae187d-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 13:33:24 crc kubenswrapper[4778]: I0312 13:33:24.964568 4778 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d582b80a-57bd-4cd4-9e72-8a963cae187d-httpd-config\") on node \"crc\" DevicePath \"\"" Mar 12 13:33:24 crc kubenswrapper[4778]: I0312 13:33:24.964576 4778 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d582b80a-57bd-4cd4-9e72-8a963cae187d-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 13:33:24 crc kubenswrapper[4778]: I0312 13:33:24.964585 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h89hz\" (UniqueName: \"kubernetes.io/projected/d582b80a-57bd-4cd4-9e72-8a963cae187d-kube-api-access-h89hz\") on node \"crc\" DevicePath \"\"" Mar 12 13:33:24 crc kubenswrapper[4778]: I0312 13:33:24.976825 4778 scope.go:117] "RemoveContainer" 
containerID="938c7e0b5c63a6fd5773476e5ae689de9d1155fb4dbd3f7bca4dc6764bc762cd" Mar 12 13:33:24 crc kubenswrapper[4778]: I0312 13:33:24.986842 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d582b80a-57bd-4cd4-9e72-8a963cae187d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d582b80a-57bd-4cd4-9e72-8a963cae187d" (UID: "d582b80a-57bd-4cd4-9e72-8a963cae187d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:33:25 crc kubenswrapper[4778]: I0312 13:33:24.999873 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d582b80a-57bd-4cd4-9e72-8a963cae187d-config" (OuterVolumeSpecName: "config") pod "d582b80a-57bd-4cd4-9e72-8a963cae187d" (UID: "d582b80a-57bd-4cd4-9e72-8a963cae187d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:33:25 crc kubenswrapper[4778]: I0312 13:33:25.000066 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-769c65dfd5-t7d9g" podStartSLOduration=2.000040923 podStartE2EDuration="2.000040923s" podCreationTimestamp="2026-03-12 13:33:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:33:24.983676089 +0000 UTC m=+1423.432371485" watchObservedRunningTime="2026-03-12 13:33:25.000040923 +0000 UTC m=+1423.448736339" Mar 12 13:33:25 crc kubenswrapper[4778]: I0312 13:33:25.010903 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d582b80a-57bd-4cd4-9e72-8a963cae187d-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "d582b80a-57bd-4cd4-9e72-8a963cae187d" (UID: "d582b80a-57bd-4cd4-9e72-8a963cae187d"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:33:25 crc kubenswrapper[4778]: I0312 13:33:25.017648 4778 scope.go:117] "RemoveContainer" containerID="71d475b828218d4b5f04543cac9306418884b36e07b75eda675a3ad92ddced09" Mar 12 13:33:25 crc kubenswrapper[4778]: E0312 13:33:25.027358 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71d475b828218d4b5f04543cac9306418884b36e07b75eda675a3ad92ddced09\": container with ID starting with 71d475b828218d4b5f04543cac9306418884b36e07b75eda675a3ad92ddced09 not found: ID does not exist" containerID="71d475b828218d4b5f04543cac9306418884b36e07b75eda675a3ad92ddced09" Mar 12 13:33:25 crc kubenswrapper[4778]: I0312 13:33:25.027400 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71d475b828218d4b5f04543cac9306418884b36e07b75eda675a3ad92ddced09"} err="failed to get container status \"71d475b828218d4b5f04543cac9306418884b36e07b75eda675a3ad92ddced09\": rpc error: code = NotFound desc = could not find container \"71d475b828218d4b5f04543cac9306418884b36e07b75eda675a3ad92ddced09\": container with ID starting with 71d475b828218d4b5f04543cac9306418884b36e07b75eda675a3ad92ddced09 not found: ID does not exist" Mar 12 13:33:25 crc kubenswrapper[4778]: I0312 13:33:25.027423 4778 scope.go:117] "RemoveContainer" containerID="938c7e0b5c63a6fd5773476e5ae689de9d1155fb4dbd3f7bca4dc6764bc762cd" Mar 12 13:33:25 crc kubenswrapper[4778]: E0312 13:33:25.039294 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"938c7e0b5c63a6fd5773476e5ae689de9d1155fb4dbd3f7bca4dc6764bc762cd\": container with ID starting with 938c7e0b5c63a6fd5773476e5ae689de9d1155fb4dbd3f7bca4dc6764bc762cd not found: ID does not exist" containerID="938c7e0b5c63a6fd5773476e5ae689de9d1155fb4dbd3f7bca4dc6764bc762cd" Mar 12 13:33:25 crc kubenswrapper[4778]: I0312 13:33:25.039333 
4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"938c7e0b5c63a6fd5773476e5ae689de9d1155fb4dbd3f7bca4dc6764bc762cd"} err="failed to get container status \"938c7e0b5c63a6fd5773476e5ae689de9d1155fb4dbd3f7bca4dc6764bc762cd\": rpc error: code = NotFound desc = could not find container \"938c7e0b5c63a6fd5773476e5ae689de9d1155fb4dbd3f7bca4dc6764bc762cd\": container with ID starting with 938c7e0b5c63a6fd5773476e5ae689de9d1155fb4dbd3f7bca4dc6764bc762cd not found: ID does not exist" Mar 12 13:33:25 crc kubenswrapper[4778]: I0312 13:33:25.066677 4778 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/d582b80a-57bd-4cd4-9e72-8a963cae187d-config\") on node \"crc\" DevicePath \"\"" Mar 12 13:33:25 crc kubenswrapper[4778]: I0312 13:33:25.067322 4778 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d582b80a-57bd-4cd4-9e72-8a963cae187d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:33:25 crc kubenswrapper[4778]: I0312 13:33:25.067334 4778 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d582b80a-57bd-4cd4-9e72-8a963cae187d-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 13:33:25 crc kubenswrapper[4778]: I0312 13:33:25.117304 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-d4d765698-l7bjx"] Mar 12 13:33:25 crc kubenswrapper[4778]: I0312 13:33:25.357711 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-56bl9" Mar 12 13:33:25 crc kubenswrapper[4778]: I0312 13:33:25.376227 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7955c84d65-qfgcn"] Mar 12 13:33:25 crc kubenswrapper[4778]: I0312 13:33:25.383336 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7955c84d65-qfgcn"] Mar 12 13:33:25 crc kubenswrapper[4778]: I0312 13:33:25.478686 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsg69\" (UniqueName: \"kubernetes.io/projected/811bc15c-050c-4d37-a19f-095086748286-kube-api-access-bsg69\") pod \"811bc15c-050c-4d37-a19f-095086748286\" (UID: \"811bc15c-050c-4d37-a19f-095086748286\") " Mar 12 13:33:25 crc kubenswrapper[4778]: I0312 13:33:25.478788 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/811bc15c-050c-4d37-a19f-095086748286-dns-svc\") pod \"811bc15c-050c-4d37-a19f-095086748286\" (UID: \"811bc15c-050c-4d37-a19f-095086748286\") " Mar 12 13:33:25 crc kubenswrapper[4778]: I0312 13:33:25.478820 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/811bc15c-050c-4d37-a19f-095086748286-ovsdbserver-sb\") pod \"811bc15c-050c-4d37-a19f-095086748286\" (UID: \"811bc15c-050c-4d37-a19f-095086748286\") " Mar 12 13:33:25 crc kubenswrapper[4778]: I0312 13:33:25.478855 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/811bc15c-050c-4d37-a19f-095086748286-ovsdbserver-nb\") pod \"811bc15c-050c-4d37-a19f-095086748286\" (UID: \"811bc15c-050c-4d37-a19f-095086748286\") " Mar 12 13:33:25 crc kubenswrapper[4778]: I0312 13:33:25.478875 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/811bc15c-050c-4d37-a19f-095086748286-dns-swift-storage-0\") pod \"811bc15c-050c-4d37-a19f-095086748286\" (UID: \"811bc15c-050c-4d37-a19f-095086748286\") " Mar 12 13:33:25 crc kubenswrapper[4778]: I0312 13:33:25.479008 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/811bc15c-050c-4d37-a19f-095086748286-config\") pod \"811bc15c-050c-4d37-a19f-095086748286\" (UID: \"811bc15c-050c-4d37-a19f-095086748286\") " Mar 12 13:33:25 crc kubenswrapper[4778]: I0312 13:33:25.487736 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/811bc15c-050c-4d37-a19f-095086748286-kube-api-access-bsg69" (OuterVolumeSpecName: "kube-api-access-bsg69") pod "811bc15c-050c-4d37-a19f-095086748286" (UID: "811bc15c-050c-4d37-a19f-095086748286"). InnerVolumeSpecName "kube-api-access-bsg69". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:33:25 crc kubenswrapper[4778]: I0312 13:33:25.514422 4778 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5f884f5564-dxzpv" podUID="ef2e3c21-ccc6-4dcc-a476-7393bb481441" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 13:33:25 crc kubenswrapper[4778]: I0312 13:33:25.514473 4778 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5f884f5564-dxzpv" podUID="ef2e3c21-ccc6-4dcc-a476-7393bb481441" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 13:33:25 crc kubenswrapper[4778]: I0312 13:33:25.541831 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/811bc15c-050c-4d37-a19f-095086748286-ovsdbserver-sb" 
(OuterVolumeSpecName: "ovsdbserver-sb") pod "811bc15c-050c-4d37-a19f-095086748286" (UID: "811bc15c-050c-4d37-a19f-095086748286"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:33:25 crc kubenswrapper[4778]: I0312 13:33:25.552543 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/811bc15c-050c-4d37-a19f-095086748286-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "811bc15c-050c-4d37-a19f-095086748286" (UID: "811bc15c-050c-4d37-a19f-095086748286"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:33:25 crc kubenswrapper[4778]: I0312 13:33:25.554772 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/811bc15c-050c-4d37-a19f-095086748286-config" (OuterVolumeSpecName: "config") pod "811bc15c-050c-4d37-a19f-095086748286" (UID: "811bc15c-050c-4d37-a19f-095086748286"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:33:25 crc kubenswrapper[4778]: I0312 13:33:25.555300 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/811bc15c-050c-4d37-a19f-095086748286-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "811bc15c-050c-4d37-a19f-095086748286" (UID: "811bc15c-050c-4d37-a19f-095086748286"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:33:25 crc kubenswrapper[4778]: I0312 13:33:25.570043 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/811bc15c-050c-4d37-a19f-095086748286-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "811bc15c-050c-4d37-a19f-095086748286" (UID: "811bc15c-050c-4d37-a19f-095086748286"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:33:25 crc kubenswrapper[4778]: I0312 13:33:25.582432 4778 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/811bc15c-050c-4d37-a19f-095086748286-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 12 13:33:25 crc kubenswrapper[4778]: I0312 13:33:25.582462 4778 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/811bc15c-050c-4d37-a19f-095086748286-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 12 13:33:25 crc kubenswrapper[4778]: I0312 13:33:25.582472 4778 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/811bc15c-050c-4d37-a19f-095086748286-config\") on node \"crc\" DevicePath \"\"" Mar 12 13:33:25 crc kubenswrapper[4778]: I0312 13:33:25.582483 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bsg69\" (UniqueName: \"kubernetes.io/projected/811bc15c-050c-4d37-a19f-095086748286-kube-api-access-bsg69\") on node \"crc\" DevicePath \"\"" Mar 12 13:33:25 crc kubenswrapper[4778]: I0312 13:33:25.582498 4778 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/811bc15c-050c-4d37-a19f-095086748286-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 12 13:33:25 crc kubenswrapper[4778]: I0312 13:33:25.582509 4778 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/811bc15c-050c-4d37-a19f-095086748286-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 12 13:33:25 crc kubenswrapper[4778]: I0312 13:33:25.846659 4778 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5f884f5564-dxzpv" podUID="ef2e3c21-ccc6-4dcc-a476-7393bb481441" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": read tcp 
10.217.0.2:57122->10.217.0.162:9311: read: connection reset by peer" Mar 12 13:33:25 crc kubenswrapper[4778]: I0312 13:33:25.847019 4778 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5f884f5564-dxzpv" podUID="ef2e3c21-ccc6-4dcc-a476-7393bb481441" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": read tcp 10.217.0.2:57136->10.217.0.162:9311: read: connection reset by peer" Mar 12 13:33:25 crc kubenswrapper[4778]: I0312 13:33:25.959866 4778 generic.go:334] "Generic (PLEG): container finished" podID="ef2e3c21-ccc6-4dcc-a476-7393bb481441" containerID="c0edf91d21f7ba54f7ae8ead172101f785145fe82241acf1f7236f38396130a9" exitCode=0 Mar 12 13:33:25 crc kubenswrapper[4778]: I0312 13:33:25.959909 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5f884f5564-dxzpv" event={"ID":"ef2e3c21-ccc6-4dcc-a476-7393bb481441","Type":"ContainerDied","Data":"c0edf91d21f7ba54f7ae8ead172101f785145fe82241acf1f7236f38396130a9"} Mar 12 13:33:25 crc kubenswrapper[4778]: I0312 13:33:25.962196 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-56bl9" event={"ID":"811bc15c-050c-4d37-a19f-095086748286","Type":"ContainerDied","Data":"9e0eacf82432587cd58359c3985b8def0ae32125ba66b4e86532ed5c793bbd04"} Mar 12 13:33:25 crc kubenswrapper[4778]: I0312 13:33:25.962314 4778 scope.go:117] "RemoveContainer" containerID="512c2c0cf187f0ee46cccf1da3f29d083846818126627409ab7b1bb5fa1ef052" Mar 12 13:33:25 crc kubenswrapper[4778]: I0312 13:33:25.962488 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-56bl9" Mar 12 13:33:25 crc kubenswrapper[4778]: I0312 13:33:25.978312 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d4d765698-l7bjx" event={"ID":"267e7df2-d35c-45c4-af65-e8af31f8f6cf","Type":"ContainerStarted","Data":"492cef66a73d9ef8d4c1ff75ec30e17e6b5471c575d66ec71eaee6ad116d9dba"} Mar 12 13:33:25 crc kubenswrapper[4778]: I0312 13:33:25.978364 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-d4d765698-l7bjx" Mar 12 13:33:25 crc kubenswrapper[4778]: I0312 13:33:25.978378 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d4d765698-l7bjx" event={"ID":"267e7df2-d35c-45c4-af65-e8af31f8f6cf","Type":"ContainerStarted","Data":"de153839f5b74d673c597a2017225c7a54792e9e45e4e6d0c0727c267094029f"} Mar 12 13:33:25 crc kubenswrapper[4778]: I0312 13:33:25.978389 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d4d765698-l7bjx" event={"ID":"267e7df2-d35c-45c4-af65-e8af31f8f6cf","Type":"ContainerStarted","Data":"e035032f2ad7923ebe4ad0e88eba6de6fc1c7f2ecded1d871ece8d6c5cf60deb"} Mar 12 13:33:25 crc kubenswrapper[4778]: I0312 13:33:25.978413 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-d4d765698-l7bjx" Mar 12 13:33:25 crc kubenswrapper[4778]: I0312 13:33:25.985510 4778 generic.go:334] "Generic (PLEG): container finished" podID="a3d67881-ce3f-4abe-b07b-a0b22f1f53d1" containerID="6d003c8be41ca71c54434a7c7a1f3fbe12f00352aa1f46649d39fc04831f2c1f" exitCode=0 Mar 12 13:33:25 crc kubenswrapper[4778]: I0312 13:33:25.985598 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a3d67881-ce3f-4abe-b07b-a0b22f1f53d1","Type":"ContainerDied","Data":"6d003c8be41ca71c54434a7c7a1f3fbe12f00352aa1f46649d39fc04831f2c1f"} Mar 12 13:33:25 crc kubenswrapper[4778]: I0312 13:33:25.991285 4778 scope.go:117] 
"RemoveContainer" containerID="52a29e484c375a20ac3f8fc8c2aa037eb3038bed507119d164be5bd117815abc" Mar 12 13:33:26 crc kubenswrapper[4778]: I0312 13:33:26.008404 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-d4d765698-l7bjx" podStartSLOduration=2.008383186 podStartE2EDuration="2.008383186s" podCreationTimestamp="2026-03-12 13:33:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:33:26.004579488 +0000 UTC m=+1424.453274904" watchObservedRunningTime="2026-03-12 13:33:26.008383186 +0000 UTC m=+1424.457078572" Mar 12 13:33:26 crc kubenswrapper[4778]: I0312 13:33:26.058921 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-56bl9"] Mar 12 13:33:26 crc kubenswrapper[4778]: I0312 13:33:26.065822 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-56bl9"] Mar 12 13:33:26 crc kubenswrapper[4778]: I0312 13:33:26.264892 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="811bc15c-050c-4d37-a19f-095086748286" path="/var/lib/kubelet/pods/811bc15c-050c-4d37-a19f-095086748286/volumes" Mar 12 13:33:26 crc kubenswrapper[4778]: I0312 13:33:26.265669 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d582b80a-57bd-4cd4-9e72-8a963cae187d" path="/var/lib/kubelet/pods/d582b80a-57bd-4cd4-9e72-8a963cae187d/volumes" Mar 12 13:33:26 crc kubenswrapper[4778]: I0312 13:33:26.376496 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5f884f5564-dxzpv" Mar 12 13:33:26 crc kubenswrapper[4778]: I0312 13:33:26.502969 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ef2e3c21-ccc6-4dcc-a476-7393bb481441-config-data-custom\") pod \"ef2e3c21-ccc6-4dcc-a476-7393bb481441\" (UID: \"ef2e3c21-ccc6-4dcc-a476-7393bb481441\") " Mar 12 13:33:26 crc kubenswrapper[4778]: I0312 13:33:26.503398 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czsp7\" (UniqueName: \"kubernetes.io/projected/ef2e3c21-ccc6-4dcc-a476-7393bb481441-kube-api-access-czsp7\") pod \"ef2e3c21-ccc6-4dcc-a476-7393bb481441\" (UID: \"ef2e3c21-ccc6-4dcc-a476-7393bb481441\") " Mar 12 13:33:26 crc kubenswrapper[4778]: I0312 13:33:26.503492 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef2e3c21-ccc6-4dcc-a476-7393bb481441-config-data\") pod \"ef2e3c21-ccc6-4dcc-a476-7393bb481441\" (UID: \"ef2e3c21-ccc6-4dcc-a476-7393bb481441\") " Mar 12 13:33:26 crc kubenswrapper[4778]: I0312 13:33:26.503525 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef2e3c21-ccc6-4dcc-a476-7393bb481441-logs\") pod \"ef2e3c21-ccc6-4dcc-a476-7393bb481441\" (UID: \"ef2e3c21-ccc6-4dcc-a476-7393bb481441\") " Mar 12 13:33:26 crc kubenswrapper[4778]: I0312 13:33:26.503568 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef2e3c21-ccc6-4dcc-a476-7393bb481441-combined-ca-bundle\") pod \"ef2e3c21-ccc6-4dcc-a476-7393bb481441\" (UID: \"ef2e3c21-ccc6-4dcc-a476-7393bb481441\") " Mar 12 13:33:26 crc kubenswrapper[4778]: I0312 13:33:26.504449 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/ef2e3c21-ccc6-4dcc-a476-7393bb481441-logs" (OuterVolumeSpecName: "logs") pod "ef2e3c21-ccc6-4dcc-a476-7393bb481441" (UID: "ef2e3c21-ccc6-4dcc-a476-7393bb481441"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:33:26 crc kubenswrapper[4778]: I0312 13:33:26.510943 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef2e3c21-ccc6-4dcc-a476-7393bb481441-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ef2e3c21-ccc6-4dcc-a476-7393bb481441" (UID: "ef2e3c21-ccc6-4dcc-a476-7393bb481441"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:33:26 crc kubenswrapper[4778]: I0312 13:33:26.510971 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef2e3c21-ccc6-4dcc-a476-7393bb481441-kube-api-access-czsp7" (OuterVolumeSpecName: "kube-api-access-czsp7") pod "ef2e3c21-ccc6-4dcc-a476-7393bb481441" (UID: "ef2e3c21-ccc6-4dcc-a476-7393bb481441"). InnerVolumeSpecName "kube-api-access-czsp7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:33:26 crc kubenswrapper[4778]: I0312 13:33:26.528164 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef2e3c21-ccc6-4dcc-a476-7393bb481441-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ef2e3c21-ccc6-4dcc-a476-7393bb481441" (UID: "ef2e3c21-ccc6-4dcc-a476-7393bb481441"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:33:26 crc kubenswrapper[4778]: I0312 13:33:26.552420 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef2e3c21-ccc6-4dcc-a476-7393bb481441-config-data" (OuterVolumeSpecName: "config-data") pod "ef2e3c21-ccc6-4dcc-a476-7393bb481441" (UID: "ef2e3c21-ccc6-4dcc-a476-7393bb481441"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:33:26 crc kubenswrapper[4778]: I0312 13:33:26.605587 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czsp7\" (UniqueName: \"kubernetes.io/projected/ef2e3c21-ccc6-4dcc-a476-7393bb481441-kube-api-access-czsp7\") on node \"crc\" DevicePath \"\"" Mar 12 13:33:26 crc kubenswrapper[4778]: I0312 13:33:26.605634 4778 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef2e3c21-ccc6-4dcc-a476-7393bb481441-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 13:33:26 crc kubenswrapper[4778]: I0312 13:33:26.605648 4778 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef2e3c21-ccc6-4dcc-a476-7393bb481441-logs\") on node \"crc\" DevicePath \"\"" Mar 12 13:33:26 crc kubenswrapper[4778]: I0312 13:33:26.605661 4778 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef2e3c21-ccc6-4dcc-a476-7393bb481441-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:33:26 crc kubenswrapper[4778]: I0312 13:33:26.605674 4778 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ef2e3c21-ccc6-4dcc-a476-7393bb481441-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 12 13:33:26 crc kubenswrapper[4778]: I0312 13:33:26.997000 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5f884f5564-dxzpv" Mar 12 13:33:26 crc kubenswrapper[4778]: I0312 13:33:26.996991 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5f884f5564-dxzpv" event={"ID":"ef2e3c21-ccc6-4dcc-a476-7393bb481441","Type":"ContainerDied","Data":"91360286d4706715ddbf7b7dd1e71ab18f2b12552f2316ff72136087f9c79c95"} Mar 12 13:33:26 crc kubenswrapper[4778]: I0312 13:33:26.997226 4778 scope.go:117] "RemoveContainer" containerID="c0edf91d21f7ba54f7ae8ead172101f785145fe82241acf1f7236f38396130a9" Mar 12 13:33:27 crc kubenswrapper[4778]: I0312 13:33:27.001682 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c","Type":"ContainerStarted","Data":"cda3d15fbae3fde616e8dc2f2ce28f309b9b98d6a94b912a86966cec07509e84"} Mar 12 13:33:27 crc kubenswrapper[4778]: I0312 13:33:27.002082 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 12 13:33:27 crc kubenswrapper[4778]: I0312 13:33:27.036003 4778 scope.go:117] "RemoveContainer" containerID="555085059a0c8494fcbd31c46657e06bdebc21317a675fa20661619d5dc02586" Mar 12 13:33:27 crc kubenswrapper[4778]: I0312 13:33:27.037369 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.00158709 podStartE2EDuration="8.037342883s" podCreationTimestamp="2026-03-12 13:33:19 +0000 UTC" firstStartedPulling="2026-03-12 13:33:20.932628618 +0000 UTC m=+1419.381324014" lastFinishedPulling="2026-03-12 13:33:25.968384411 +0000 UTC m=+1424.417079807" observedRunningTime="2026-03-12 13:33:27.026844805 +0000 UTC m=+1425.475540201" watchObservedRunningTime="2026-03-12 13:33:27.037342883 +0000 UTC m=+1425.486038309" Mar 12 13:33:27 crc kubenswrapper[4778]: I0312 13:33:27.097966 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5f884f5564-dxzpv"] Mar 12 13:33:27 crc 
kubenswrapper[4778]: I0312 13:33:27.106743 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-5f884f5564-dxzpv"] Mar 12 13:33:28 crc kubenswrapper[4778]: I0312 13:33:28.268988 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef2e3c21-ccc6-4dcc-a476-7393bb481441" path="/var/lib/kubelet/pods/ef2e3c21-ccc6-4dcc-a476-7393bb481441/volumes" Mar 12 13:33:28 crc kubenswrapper[4778]: I0312 13:33:28.558003 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 13:33:28 crc kubenswrapper[4778]: I0312 13:33:28.558097 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 13:33:28 crc kubenswrapper[4778]: I0312 13:33:28.558217 4778 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" Mar 12 13:33:28 crc kubenswrapper[4778]: I0312 13:33:28.559289 4778 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"572aad6c3b1a3f7c9ef45b8b4feb0d367e7e7916d0ab8dd064e2b8ac87268c51"} pod="openshift-machine-config-operator/machine-config-daemon-2qx88" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 13:33:28 crc kubenswrapper[4778]: I0312 13:33:28.559410 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" 
podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" containerID="cri-o://572aad6c3b1a3f7c9ef45b8b4feb0d367e7e7916d0ab8dd064e2b8ac87268c51" gracePeriod=600 Mar 12 13:33:29 crc kubenswrapper[4778]: I0312 13:33:29.024457 4778 generic.go:334] "Generic (PLEG): container finished" podID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerID="572aad6c3b1a3f7c9ef45b8b4feb0d367e7e7916d0ab8dd064e2b8ac87268c51" exitCode=0 Mar 12 13:33:29 crc kubenswrapper[4778]: I0312 13:33:29.024549 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" event={"ID":"24438fc6-dab0-4a9e-8b97-2532da76d9cd","Type":"ContainerDied","Data":"572aad6c3b1a3f7c9ef45b8b4feb0d367e7e7916d0ab8dd064e2b8ac87268c51"} Mar 12 13:33:29 crc kubenswrapper[4778]: I0312 13:33:29.024935 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" event={"ID":"24438fc6-dab0-4a9e-8b97-2532da76d9cd","Type":"ContainerStarted","Data":"fbdf0765f9c2ff5952a8a2a2b43d61ef771ac404cabeb86051f9ffe5a9fd882e"} Mar 12 13:33:29 crc kubenswrapper[4778]: I0312 13:33:29.024958 4778 scope.go:117] "RemoveContainer" containerID="3b4b372cac8f288fc2585670d5ab7c00c41331f173130d39b164aa74e4e3e398" Mar 12 13:33:29 crc kubenswrapper[4778]: I0312 13:33:29.026774 4778 generic.go:334] "Generic (PLEG): container finished" podID="a3d67881-ce3f-4abe-b07b-a0b22f1f53d1" containerID="579bc12eaa8aab0c50eb9ede8c49b9d7ccb94f4d26f7a5f51955978076c57a52" exitCode=0 Mar 12 13:33:29 crc kubenswrapper[4778]: I0312 13:33:29.026814 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a3d67881-ce3f-4abe-b07b-a0b22f1f53d1","Type":"ContainerDied","Data":"579bc12eaa8aab0c50eb9ede8c49b9d7ccb94f4d26f7a5f51955978076c57a52"} Mar 12 13:33:29 crc kubenswrapper[4778]: I0312 13:33:29.263139 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 12 13:33:29 crc kubenswrapper[4778]: I0312 13:33:29.365256 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a3d67881-ce3f-4abe-b07b-a0b22f1f53d1-etc-machine-id\") pod \"a3d67881-ce3f-4abe-b07b-a0b22f1f53d1\" (UID: \"a3d67881-ce3f-4abe-b07b-a0b22f1f53d1\") " Mar 12 13:33:29 crc kubenswrapper[4778]: I0312 13:33:29.365604 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3d67881-ce3f-4abe-b07b-a0b22f1f53d1-config-data\") pod \"a3d67881-ce3f-4abe-b07b-a0b22f1f53d1\" (UID: \"a3d67881-ce3f-4abe-b07b-a0b22f1f53d1\") " Mar 12 13:33:29 crc kubenswrapper[4778]: I0312 13:33:29.365425 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3d67881-ce3f-4abe-b07b-a0b22f1f53d1-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a3d67881-ce3f-4abe-b07b-a0b22f1f53d1" (UID: "a3d67881-ce3f-4abe-b07b-a0b22f1f53d1"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 13:33:29 crc kubenswrapper[4778]: I0312 13:33:29.365640 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a3d67881-ce3f-4abe-b07b-a0b22f1f53d1-config-data-custom\") pod \"a3d67881-ce3f-4abe-b07b-a0b22f1f53d1\" (UID: \"a3d67881-ce3f-4abe-b07b-a0b22f1f53d1\") " Mar 12 13:33:29 crc kubenswrapper[4778]: I0312 13:33:29.365802 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3d67881-ce3f-4abe-b07b-a0b22f1f53d1-combined-ca-bundle\") pod \"a3d67881-ce3f-4abe-b07b-a0b22f1f53d1\" (UID: \"a3d67881-ce3f-4abe-b07b-a0b22f1f53d1\") " Mar 12 13:33:29 crc kubenswrapper[4778]: I0312 13:33:29.365931 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6r9bc\" (UniqueName: \"kubernetes.io/projected/a3d67881-ce3f-4abe-b07b-a0b22f1f53d1-kube-api-access-6r9bc\") pod \"a3d67881-ce3f-4abe-b07b-a0b22f1f53d1\" (UID: \"a3d67881-ce3f-4abe-b07b-a0b22f1f53d1\") " Mar 12 13:33:29 crc kubenswrapper[4778]: I0312 13:33:29.365962 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3d67881-ce3f-4abe-b07b-a0b22f1f53d1-scripts\") pod \"a3d67881-ce3f-4abe-b07b-a0b22f1f53d1\" (UID: \"a3d67881-ce3f-4abe-b07b-a0b22f1f53d1\") " Mar 12 13:33:29 crc kubenswrapper[4778]: I0312 13:33:29.366403 4778 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a3d67881-ce3f-4abe-b07b-a0b22f1f53d1-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 12 13:33:29 crc kubenswrapper[4778]: I0312 13:33:29.371591 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3d67881-ce3f-4abe-b07b-a0b22f1f53d1-config-data-custom" (OuterVolumeSpecName: 
"config-data-custom") pod "a3d67881-ce3f-4abe-b07b-a0b22f1f53d1" (UID: "a3d67881-ce3f-4abe-b07b-a0b22f1f53d1"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:33:29 crc kubenswrapper[4778]: I0312 13:33:29.371606 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3d67881-ce3f-4abe-b07b-a0b22f1f53d1-kube-api-access-6r9bc" (OuterVolumeSpecName: "kube-api-access-6r9bc") pod "a3d67881-ce3f-4abe-b07b-a0b22f1f53d1" (UID: "a3d67881-ce3f-4abe-b07b-a0b22f1f53d1"). InnerVolumeSpecName "kube-api-access-6r9bc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:33:29 crc kubenswrapper[4778]: I0312 13:33:29.371605 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3d67881-ce3f-4abe-b07b-a0b22f1f53d1-scripts" (OuterVolumeSpecName: "scripts") pod "a3d67881-ce3f-4abe-b07b-a0b22f1f53d1" (UID: "a3d67881-ce3f-4abe-b07b-a0b22f1f53d1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:33:29 crc kubenswrapper[4778]: I0312 13:33:29.415916 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3d67881-ce3f-4abe-b07b-a0b22f1f53d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a3d67881-ce3f-4abe-b07b-a0b22f1f53d1" (UID: "a3d67881-ce3f-4abe-b07b-a0b22f1f53d1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:33:29 crc kubenswrapper[4778]: I0312 13:33:29.467834 4778 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3d67881-ce3f-4abe-b07b-a0b22f1f53d1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:33:29 crc kubenswrapper[4778]: I0312 13:33:29.468052 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6r9bc\" (UniqueName: \"kubernetes.io/projected/a3d67881-ce3f-4abe-b07b-a0b22f1f53d1-kube-api-access-6r9bc\") on node \"crc\" DevicePath \"\"" Mar 12 13:33:29 crc kubenswrapper[4778]: I0312 13:33:29.468144 4778 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3d67881-ce3f-4abe-b07b-a0b22f1f53d1-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 13:33:29 crc kubenswrapper[4778]: I0312 13:33:29.468218 4778 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a3d67881-ce3f-4abe-b07b-a0b22f1f53d1-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 12 13:33:29 crc kubenswrapper[4778]: I0312 13:33:29.475935 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3d67881-ce3f-4abe-b07b-a0b22f1f53d1-config-data" (OuterVolumeSpecName: "config-data") pod "a3d67881-ce3f-4abe-b07b-a0b22f1f53d1" (UID: "a3d67881-ce3f-4abe-b07b-a0b22f1f53d1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:33:29 crc kubenswrapper[4778]: I0312 13:33:29.569773 4778 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3d67881-ce3f-4abe-b07b-a0b22f1f53d1-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 13:33:30 crc kubenswrapper[4778]: I0312 13:33:30.038292 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a3d67881-ce3f-4abe-b07b-a0b22f1f53d1","Type":"ContainerDied","Data":"64a915f04d0f3e7d3a300bf442920e4fae54d16b184cc83aeb6a0de63549b7fc"} Mar 12 13:33:30 crc kubenswrapper[4778]: I0312 13:33:30.038347 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 12 13:33:30 crc kubenswrapper[4778]: I0312 13:33:30.038551 4778 scope.go:117] "RemoveContainer" containerID="6d003c8be41ca71c54434a7c7a1f3fbe12f00352aa1f46649d39fc04831f2c1f" Mar 12 13:33:30 crc kubenswrapper[4778]: I0312 13:33:30.068779 4778 scope.go:117] "RemoveContainer" containerID="579bc12eaa8aab0c50eb9ede8c49b9d7ccb94f4d26f7a5f51955978076c57a52" Mar 12 13:33:30 crc kubenswrapper[4778]: I0312 13:33:30.086400 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 12 13:33:30 crc kubenswrapper[4778]: I0312 13:33:30.094146 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 12 13:33:30 crc kubenswrapper[4778]: I0312 13:33:30.119648 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Mar 12 13:33:30 crc kubenswrapper[4778]: E0312 13:33:30.120025 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d582b80a-57bd-4cd4-9e72-8a963cae187d" containerName="neutron-httpd" Mar 12 13:33:30 crc kubenswrapper[4778]: I0312 13:33:30.120037 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="d582b80a-57bd-4cd4-9e72-8a963cae187d" 
containerName="neutron-httpd" Mar 12 13:33:30 crc kubenswrapper[4778]: E0312 13:33:30.120054 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="811bc15c-050c-4d37-a19f-095086748286" containerName="init" Mar 12 13:33:30 crc kubenswrapper[4778]: I0312 13:33:30.120062 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="811bc15c-050c-4d37-a19f-095086748286" containerName="init" Mar 12 13:33:30 crc kubenswrapper[4778]: E0312 13:33:30.120068 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef2e3c21-ccc6-4dcc-a476-7393bb481441" containerName="barbican-api-log" Mar 12 13:33:30 crc kubenswrapper[4778]: I0312 13:33:30.120076 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef2e3c21-ccc6-4dcc-a476-7393bb481441" containerName="barbican-api-log" Mar 12 13:33:30 crc kubenswrapper[4778]: E0312 13:33:30.120084 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3d67881-ce3f-4abe-b07b-a0b22f1f53d1" containerName="cinder-scheduler" Mar 12 13:33:30 crc kubenswrapper[4778]: I0312 13:33:30.120090 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3d67881-ce3f-4abe-b07b-a0b22f1f53d1" containerName="cinder-scheduler" Mar 12 13:33:30 crc kubenswrapper[4778]: E0312 13:33:30.120101 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d582b80a-57bd-4cd4-9e72-8a963cae187d" containerName="neutron-api" Mar 12 13:33:30 crc kubenswrapper[4778]: I0312 13:33:30.120107 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="d582b80a-57bd-4cd4-9e72-8a963cae187d" containerName="neutron-api" Mar 12 13:33:30 crc kubenswrapper[4778]: E0312 13:33:30.120134 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="811bc15c-050c-4d37-a19f-095086748286" containerName="dnsmasq-dns" Mar 12 13:33:30 crc kubenswrapper[4778]: I0312 13:33:30.120139 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="811bc15c-050c-4d37-a19f-095086748286" containerName="dnsmasq-dns" Mar 12 13:33:30 
crc kubenswrapper[4778]: E0312 13:33:30.120149 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef2e3c21-ccc6-4dcc-a476-7393bb481441" containerName="barbican-api" Mar 12 13:33:30 crc kubenswrapper[4778]: I0312 13:33:30.120155 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef2e3c21-ccc6-4dcc-a476-7393bb481441" containerName="barbican-api" Mar 12 13:33:30 crc kubenswrapper[4778]: E0312 13:33:30.120164 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3d67881-ce3f-4abe-b07b-a0b22f1f53d1" containerName="probe" Mar 12 13:33:30 crc kubenswrapper[4778]: I0312 13:33:30.120170 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3d67881-ce3f-4abe-b07b-a0b22f1f53d1" containerName="probe" Mar 12 13:33:30 crc kubenswrapper[4778]: I0312 13:33:30.120321 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="d582b80a-57bd-4cd4-9e72-8a963cae187d" containerName="neutron-httpd" Mar 12 13:33:30 crc kubenswrapper[4778]: I0312 13:33:30.120332 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef2e3c21-ccc6-4dcc-a476-7393bb481441" containerName="barbican-api-log" Mar 12 13:33:30 crc kubenswrapper[4778]: I0312 13:33:30.120342 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="d582b80a-57bd-4cd4-9e72-8a963cae187d" containerName="neutron-api" Mar 12 13:33:30 crc kubenswrapper[4778]: I0312 13:33:30.120351 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="811bc15c-050c-4d37-a19f-095086748286" containerName="dnsmasq-dns" Mar 12 13:33:30 crc kubenswrapper[4778]: I0312 13:33:30.120368 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef2e3c21-ccc6-4dcc-a476-7393bb481441" containerName="barbican-api" Mar 12 13:33:30 crc kubenswrapper[4778]: I0312 13:33:30.120378 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3d67881-ce3f-4abe-b07b-a0b22f1f53d1" containerName="cinder-scheduler" Mar 12 13:33:30 crc kubenswrapper[4778]: 
I0312 13:33:30.120385 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3d67881-ce3f-4abe-b07b-a0b22f1f53d1" containerName="probe" Mar 12 13:33:30 crc kubenswrapper[4778]: I0312 13:33:30.121229 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 12 13:33:30 crc kubenswrapper[4778]: I0312 13:33:30.125985 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Mar 12 13:33:30 crc kubenswrapper[4778]: I0312 13:33:30.146835 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 12 13:33:30 crc kubenswrapper[4778]: I0312 13:33:30.268209 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3d67881-ce3f-4abe-b07b-a0b22f1f53d1" path="/var/lib/kubelet/pods/a3d67881-ce3f-4abe-b07b-a0b22f1f53d1/volumes" Mar 12 13:33:30 crc kubenswrapper[4778]: I0312 13:33:30.284673 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/39ee2404-53a8-4598-8c4b-c3a34fbf3480-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"39ee2404-53a8-4598-8c4b-c3a34fbf3480\") " pod="openstack/cinder-scheduler-0" Mar 12 13:33:30 crc kubenswrapper[4778]: I0312 13:33:30.284770 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39ee2404-53a8-4598-8c4b-c3a34fbf3480-scripts\") pod \"cinder-scheduler-0\" (UID: \"39ee2404-53a8-4598-8c4b-c3a34fbf3480\") " pod="openstack/cinder-scheduler-0" Mar 12 13:33:30 crc kubenswrapper[4778]: I0312 13:33:30.284795 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28r84\" (UniqueName: \"kubernetes.io/projected/39ee2404-53a8-4598-8c4b-c3a34fbf3480-kube-api-access-28r84\") pod \"cinder-scheduler-0\" (UID: 
\"39ee2404-53a8-4598-8c4b-c3a34fbf3480\") " pod="openstack/cinder-scheduler-0" Mar 12 13:33:30 crc kubenswrapper[4778]: I0312 13:33:30.284821 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/39ee2404-53a8-4598-8c4b-c3a34fbf3480-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"39ee2404-53a8-4598-8c4b-c3a34fbf3480\") " pod="openstack/cinder-scheduler-0" Mar 12 13:33:30 crc kubenswrapper[4778]: I0312 13:33:30.285007 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39ee2404-53a8-4598-8c4b-c3a34fbf3480-config-data\") pod \"cinder-scheduler-0\" (UID: \"39ee2404-53a8-4598-8c4b-c3a34fbf3480\") " pod="openstack/cinder-scheduler-0" Mar 12 13:33:30 crc kubenswrapper[4778]: I0312 13:33:30.285054 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39ee2404-53a8-4598-8c4b-c3a34fbf3480-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"39ee2404-53a8-4598-8c4b-c3a34fbf3480\") " pod="openstack/cinder-scheduler-0" Mar 12 13:33:30 crc kubenswrapper[4778]: I0312 13:33:30.306144 4778 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-56bl9" podUID="811bc15c-050c-4d37-a19f-095086748286" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.132:5353: i/o timeout" Mar 12 13:33:30 crc kubenswrapper[4778]: I0312 13:33:30.386732 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39ee2404-53a8-4598-8c4b-c3a34fbf3480-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"39ee2404-53a8-4598-8c4b-c3a34fbf3480\") " pod="openstack/cinder-scheduler-0" Mar 12 13:33:30 crc kubenswrapper[4778]: I0312 13:33:30.386886 4778 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/39ee2404-53a8-4598-8c4b-c3a34fbf3480-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"39ee2404-53a8-4598-8c4b-c3a34fbf3480\") " pod="openstack/cinder-scheduler-0" Mar 12 13:33:30 crc kubenswrapper[4778]: I0312 13:33:30.386931 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39ee2404-53a8-4598-8c4b-c3a34fbf3480-scripts\") pod \"cinder-scheduler-0\" (UID: \"39ee2404-53a8-4598-8c4b-c3a34fbf3480\") " pod="openstack/cinder-scheduler-0" Mar 12 13:33:30 crc kubenswrapper[4778]: I0312 13:33:30.386948 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28r84\" (UniqueName: \"kubernetes.io/projected/39ee2404-53a8-4598-8c4b-c3a34fbf3480-kube-api-access-28r84\") pod \"cinder-scheduler-0\" (UID: \"39ee2404-53a8-4598-8c4b-c3a34fbf3480\") " pod="openstack/cinder-scheduler-0" Mar 12 13:33:30 crc kubenswrapper[4778]: I0312 13:33:30.386967 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/39ee2404-53a8-4598-8c4b-c3a34fbf3480-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"39ee2404-53a8-4598-8c4b-c3a34fbf3480\") " pod="openstack/cinder-scheduler-0" Mar 12 13:33:30 crc kubenswrapper[4778]: I0312 13:33:30.387010 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39ee2404-53a8-4598-8c4b-c3a34fbf3480-config-data\") pod \"cinder-scheduler-0\" (UID: \"39ee2404-53a8-4598-8c4b-c3a34fbf3480\") " pod="openstack/cinder-scheduler-0" Mar 12 13:33:30 crc kubenswrapper[4778]: I0312 13:33:30.387549 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/39ee2404-53a8-4598-8c4b-c3a34fbf3480-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"39ee2404-53a8-4598-8c4b-c3a34fbf3480\") " pod="openstack/cinder-scheduler-0" Mar 12 13:33:30 crc kubenswrapper[4778]: I0312 13:33:30.391950 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39ee2404-53a8-4598-8c4b-c3a34fbf3480-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"39ee2404-53a8-4598-8c4b-c3a34fbf3480\") " pod="openstack/cinder-scheduler-0" Mar 12 13:33:30 crc kubenswrapper[4778]: I0312 13:33:30.392679 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39ee2404-53a8-4598-8c4b-c3a34fbf3480-config-data\") pod \"cinder-scheduler-0\" (UID: \"39ee2404-53a8-4598-8c4b-c3a34fbf3480\") " pod="openstack/cinder-scheduler-0" Mar 12 13:33:30 crc kubenswrapper[4778]: I0312 13:33:30.407139 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39ee2404-53a8-4598-8c4b-c3a34fbf3480-scripts\") pod \"cinder-scheduler-0\" (UID: \"39ee2404-53a8-4598-8c4b-c3a34fbf3480\") " pod="openstack/cinder-scheduler-0" Mar 12 13:33:30 crc kubenswrapper[4778]: I0312 13:33:30.408078 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/39ee2404-53a8-4598-8c4b-c3a34fbf3480-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"39ee2404-53a8-4598-8c4b-c3a34fbf3480\") " pod="openstack/cinder-scheduler-0" Mar 12 13:33:30 crc kubenswrapper[4778]: I0312 13:33:30.416638 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28r84\" (UniqueName: \"kubernetes.io/projected/39ee2404-53a8-4598-8c4b-c3a34fbf3480-kube-api-access-28r84\") pod \"cinder-scheduler-0\" (UID: \"39ee2404-53a8-4598-8c4b-c3a34fbf3480\") " pod="openstack/cinder-scheduler-0" Mar 
12 13:33:30 crc kubenswrapper[4778]: I0312 13:33:30.459943 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 12 13:33:30 crc kubenswrapper[4778]: I0312 13:33:30.950826 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 12 13:33:31 crc kubenswrapper[4778]: I0312 13:33:31.056028 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"39ee2404-53a8-4598-8c4b-c3a34fbf3480","Type":"ContainerStarted","Data":"9369f3c2bdb81c52320468f6a0a675db90bbc2790b389d857f47ec6c2986b217"} Mar 12 13:33:31 crc kubenswrapper[4778]: I0312 13:33:31.336583 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Mar 12 13:33:31 crc kubenswrapper[4778]: I0312 13:33:31.463587 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-69b6dc4885-6lrlq" Mar 12 13:33:32 crc kubenswrapper[4778]: I0312 13:33:32.070947 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"39ee2404-53a8-4598-8c4b-c3a34fbf3480","Type":"ContainerStarted","Data":"5caf7b2dd6a6e32cbb2140761105fd053f1b58a4ef8c454f45f75eb7d2e8e8e9"} Mar 12 13:33:33 crc kubenswrapper[4778]: I0312 13:33:33.086011 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"39ee2404-53a8-4598-8c4b-c3a34fbf3480","Type":"ContainerStarted","Data":"bffbd49096f6cfd6157d754703da0c11466db663a3d4974d2922be039c3c1c55"} Mar 12 13:33:33 crc kubenswrapper[4778]: I0312 13:33:33.123573 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.123550524 podStartE2EDuration="3.123550524s" podCreationTimestamp="2026-03-12 13:33:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-12 13:33:33.119154529 +0000 UTC m=+1431.567849925" watchObservedRunningTime="2026-03-12 13:33:33.123550524 +0000 UTC m=+1431.572245930" Mar 12 13:33:34 crc kubenswrapper[4778]: I0312 13:33:34.882725 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-77f887c49f-fw2qd"] Mar 12 13:33:34 crc kubenswrapper[4778]: I0312 13:33:34.884368 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-77f887c49f-fw2qd" Mar 12 13:33:34 crc kubenswrapper[4778]: I0312 13:33:34.887304 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Mar 12 13:33:34 crc kubenswrapper[4778]: I0312 13:33:34.891746 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Mar 12 13:33:34 crc kubenswrapper[4778]: I0312 13:33:34.904556 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Mar 12 13:33:34 crc kubenswrapper[4778]: I0312 13:33:34.914301 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-77f887c49f-fw2qd"] Mar 12 13:33:34 crc kubenswrapper[4778]: I0312 13:33:34.998070 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbd76cb8-462f-4e60-b755-ef3170e70d11-log-httpd\") pod \"swift-proxy-77f887c49f-fw2qd\" (UID: \"bbd76cb8-462f-4e60-b755-ef3170e70d11\") " pod="openstack/swift-proxy-77f887c49f-fw2qd" Mar 12 13:33:34 crc kubenswrapper[4778]: I0312 13:33:34.998323 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbd76cb8-462f-4e60-b755-ef3170e70d11-public-tls-certs\") pod \"swift-proxy-77f887c49f-fw2qd\" (UID: \"bbd76cb8-462f-4e60-b755-ef3170e70d11\") " pod="openstack/swift-proxy-77f887c49f-fw2qd" Mar 12 13:33:34 
crc kubenswrapper[4778]: I0312 13:33:34.998421 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbd76cb8-462f-4e60-b755-ef3170e70d11-run-httpd\") pod \"swift-proxy-77f887c49f-fw2qd\" (UID: \"bbd76cb8-462f-4e60-b755-ef3170e70d11\") " pod="openstack/swift-proxy-77f887c49f-fw2qd" Mar 12 13:33:34 crc kubenswrapper[4778]: I0312 13:33:34.998494 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbd76cb8-462f-4e60-b755-ef3170e70d11-config-data\") pod \"swift-proxy-77f887c49f-fw2qd\" (UID: \"bbd76cb8-462f-4e60-b755-ef3170e70d11\") " pod="openstack/swift-proxy-77f887c49f-fw2qd" Mar 12 13:33:34 crc kubenswrapper[4778]: I0312 13:33:34.998552 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbd76cb8-462f-4e60-b755-ef3170e70d11-combined-ca-bundle\") pod \"swift-proxy-77f887c49f-fw2qd\" (UID: \"bbd76cb8-462f-4e60-b755-ef3170e70d11\") " pod="openstack/swift-proxy-77f887c49f-fw2qd" Mar 12 13:33:34 crc kubenswrapper[4778]: I0312 13:33:34.998582 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bbd76cb8-462f-4e60-b755-ef3170e70d11-etc-swift\") pod \"swift-proxy-77f887c49f-fw2qd\" (UID: \"bbd76cb8-462f-4e60-b755-ef3170e70d11\") " pod="openstack/swift-proxy-77f887c49f-fw2qd" Mar 12 13:33:34 crc kubenswrapper[4778]: I0312 13:33:34.998623 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbd76cb8-462f-4e60-b755-ef3170e70d11-internal-tls-certs\") pod \"swift-proxy-77f887c49f-fw2qd\" (UID: \"bbd76cb8-462f-4e60-b755-ef3170e70d11\") " 
pod="openstack/swift-proxy-77f887c49f-fw2qd" Mar 12 13:33:34 crc kubenswrapper[4778]: I0312 13:33:34.998656 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptf49\" (UniqueName: \"kubernetes.io/projected/bbd76cb8-462f-4e60-b755-ef3170e70d11-kube-api-access-ptf49\") pod \"swift-proxy-77f887c49f-fw2qd\" (UID: \"bbd76cb8-462f-4e60-b755-ef3170e70d11\") " pod="openstack/swift-proxy-77f887c49f-fw2qd" Mar 12 13:33:35 crc kubenswrapper[4778]: I0312 13:33:35.099981 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbd76cb8-462f-4e60-b755-ef3170e70d11-combined-ca-bundle\") pod \"swift-proxy-77f887c49f-fw2qd\" (UID: \"bbd76cb8-462f-4e60-b755-ef3170e70d11\") " pod="openstack/swift-proxy-77f887c49f-fw2qd" Mar 12 13:33:35 crc kubenswrapper[4778]: I0312 13:33:35.100039 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bbd76cb8-462f-4e60-b755-ef3170e70d11-etc-swift\") pod \"swift-proxy-77f887c49f-fw2qd\" (UID: \"bbd76cb8-462f-4e60-b755-ef3170e70d11\") " pod="openstack/swift-proxy-77f887c49f-fw2qd" Mar 12 13:33:35 crc kubenswrapper[4778]: I0312 13:33:35.100091 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbd76cb8-462f-4e60-b755-ef3170e70d11-internal-tls-certs\") pod \"swift-proxy-77f887c49f-fw2qd\" (UID: \"bbd76cb8-462f-4e60-b755-ef3170e70d11\") " pod="openstack/swift-proxy-77f887c49f-fw2qd" Mar 12 13:33:35 crc kubenswrapper[4778]: I0312 13:33:35.100122 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptf49\" (UniqueName: \"kubernetes.io/projected/bbd76cb8-462f-4e60-b755-ef3170e70d11-kube-api-access-ptf49\") pod \"swift-proxy-77f887c49f-fw2qd\" (UID: \"bbd76cb8-462f-4e60-b755-ef3170e70d11\") " 
pod="openstack/swift-proxy-77f887c49f-fw2qd" Mar 12 13:33:35 crc kubenswrapper[4778]: I0312 13:33:35.100157 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbd76cb8-462f-4e60-b755-ef3170e70d11-log-httpd\") pod \"swift-proxy-77f887c49f-fw2qd\" (UID: \"bbd76cb8-462f-4e60-b755-ef3170e70d11\") " pod="openstack/swift-proxy-77f887c49f-fw2qd" Mar 12 13:33:35 crc kubenswrapper[4778]: I0312 13:33:35.100225 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbd76cb8-462f-4e60-b755-ef3170e70d11-public-tls-certs\") pod \"swift-proxy-77f887c49f-fw2qd\" (UID: \"bbd76cb8-462f-4e60-b755-ef3170e70d11\") " pod="openstack/swift-proxy-77f887c49f-fw2qd" Mar 12 13:33:35 crc kubenswrapper[4778]: I0312 13:33:35.100268 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbd76cb8-462f-4e60-b755-ef3170e70d11-run-httpd\") pod \"swift-proxy-77f887c49f-fw2qd\" (UID: \"bbd76cb8-462f-4e60-b755-ef3170e70d11\") " pod="openstack/swift-proxy-77f887c49f-fw2qd" Mar 12 13:33:35 crc kubenswrapper[4778]: I0312 13:33:35.100328 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbd76cb8-462f-4e60-b755-ef3170e70d11-config-data\") pod \"swift-proxy-77f887c49f-fw2qd\" (UID: \"bbd76cb8-462f-4e60-b755-ef3170e70d11\") " pod="openstack/swift-proxy-77f887c49f-fw2qd" Mar 12 13:33:35 crc kubenswrapper[4778]: I0312 13:33:35.100887 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbd76cb8-462f-4e60-b755-ef3170e70d11-run-httpd\") pod \"swift-proxy-77f887c49f-fw2qd\" (UID: \"bbd76cb8-462f-4e60-b755-ef3170e70d11\") " pod="openstack/swift-proxy-77f887c49f-fw2qd" Mar 12 13:33:35 crc kubenswrapper[4778]: I0312 
13:33:35.101081 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbd76cb8-462f-4e60-b755-ef3170e70d11-log-httpd\") pod \"swift-proxy-77f887c49f-fw2qd\" (UID: \"bbd76cb8-462f-4e60-b755-ef3170e70d11\") " pod="openstack/swift-proxy-77f887c49f-fw2qd" Mar 12 13:33:35 crc kubenswrapper[4778]: I0312 13:33:35.106127 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbd76cb8-462f-4e60-b755-ef3170e70d11-combined-ca-bundle\") pod \"swift-proxy-77f887c49f-fw2qd\" (UID: \"bbd76cb8-462f-4e60-b755-ef3170e70d11\") " pod="openstack/swift-proxy-77f887c49f-fw2qd" Mar 12 13:33:35 crc kubenswrapper[4778]: I0312 13:33:35.106534 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbd76cb8-462f-4e60-b755-ef3170e70d11-config-data\") pod \"swift-proxy-77f887c49f-fw2qd\" (UID: \"bbd76cb8-462f-4e60-b755-ef3170e70d11\") " pod="openstack/swift-proxy-77f887c49f-fw2qd" Mar 12 13:33:35 crc kubenswrapper[4778]: I0312 13:33:35.107203 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbd76cb8-462f-4e60-b755-ef3170e70d11-internal-tls-certs\") pod \"swift-proxy-77f887c49f-fw2qd\" (UID: \"bbd76cb8-462f-4e60-b755-ef3170e70d11\") " pod="openstack/swift-proxy-77f887c49f-fw2qd" Mar 12 13:33:35 crc kubenswrapper[4778]: I0312 13:33:35.108479 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbd76cb8-462f-4e60-b755-ef3170e70d11-public-tls-certs\") pod \"swift-proxy-77f887c49f-fw2qd\" (UID: \"bbd76cb8-462f-4e60-b755-ef3170e70d11\") " pod="openstack/swift-proxy-77f887c49f-fw2qd" Mar 12 13:33:35 crc kubenswrapper[4778]: I0312 13:33:35.109805 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" 
(UniqueName: \"kubernetes.io/projected/bbd76cb8-462f-4e60-b755-ef3170e70d11-etc-swift\") pod \"swift-proxy-77f887c49f-fw2qd\" (UID: \"bbd76cb8-462f-4e60-b755-ef3170e70d11\") " pod="openstack/swift-proxy-77f887c49f-fw2qd" Mar 12 13:33:35 crc kubenswrapper[4778]: I0312 13:33:35.127041 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptf49\" (UniqueName: \"kubernetes.io/projected/bbd76cb8-462f-4e60-b755-ef3170e70d11-kube-api-access-ptf49\") pod \"swift-proxy-77f887c49f-fw2qd\" (UID: \"bbd76cb8-462f-4e60-b755-ef3170e70d11\") " pod="openstack/swift-proxy-77f887c49f-fw2qd" Mar 12 13:33:35 crc kubenswrapper[4778]: I0312 13:33:35.202048 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-77f887c49f-fw2qd" Mar 12 13:33:35 crc kubenswrapper[4778]: I0312 13:33:35.460298 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Mar 12 13:33:35 crc kubenswrapper[4778]: I0312 13:33:35.765745 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-77f887c49f-fw2qd"] Mar 12 13:33:36 crc kubenswrapper[4778]: I0312 13:33:36.112573 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-77f887c49f-fw2qd" event={"ID":"bbd76cb8-462f-4e60-b755-ef3170e70d11","Type":"ContainerStarted","Data":"054545cc67ddd38768251482093252364fbb2d00a10b98246b4c1ad92100dd4b"} Mar 12 13:33:36 crc kubenswrapper[4778]: I0312 13:33:36.113967 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-77f887c49f-fw2qd" event={"ID":"bbd76cb8-462f-4e60-b755-ef3170e70d11","Type":"ContainerStarted","Data":"d69d23f632a141289fa25126779fdfc5f0cf08505643eae471d81045ee93350c"} Mar 12 13:33:36 crc kubenswrapper[4778]: I0312 13:33:36.184666 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Mar 12 13:33:36 crc kubenswrapper[4778]: I0312 13:33:36.186038 4778 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 12 13:33:36 crc kubenswrapper[4778]: I0312 13:33:36.188040 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-lmqm5" Mar 12 13:33:36 crc kubenswrapper[4778]: I0312 13:33:36.188276 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Mar 12 13:33:36 crc kubenswrapper[4778]: I0312 13:33:36.192287 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Mar 12 13:33:36 crc kubenswrapper[4778]: I0312 13:33:36.205237 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 12 13:33:36 crc kubenswrapper[4778]: I0312 13:33:36.321410 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/856cd6d1-db21-4503-94d7-cbf27ca96cc2-openstack-config-secret\") pod \"openstackclient\" (UID: \"856cd6d1-db21-4503-94d7-cbf27ca96cc2\") " pod="openstack/openstackclient" Mar 12 13:33:36 crc kubenswrapper[4778]: I0312 13:33:36.321848 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jd5n6\" (UniqueName: \"kubernetes.io/projected/856cd6d1-db21-4503-94d7-cbf27ca96cc2-kube-api-access-jd5n6\") pod \"openstackclient\" (UID: \"856cd6d1-db21-4503-94d7-cbf27ca96cc2\") " pod="openstack/openstackclient" Mar 12 13:33:36 crc kubenswrapper[4778]: I0312 13:33:36.322007 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/856cd6d1-db21-4503-94d7-cbf27ca96cc2-combined-ca-bundle\") pod \"openstackclient\" (UID: \"856cd6d1-db21-4503-94d7-cbf27ca96cc2\") " pod="openstack/openstackclient" Mar 12 13:33:36 crc kubenswrapper[4778]: I0312 
13:33:36.322315 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/856cd6d1-db21-4503-94d7-cbf27ca96cc2-openstack-config\") pod \"openstackclient\" (UID: \"856cd6d1-db21-4503-94d7-cbf27ca96cc2\") " pod="openstack/openstackclient" Mar 12 13:33:36 crc kubenswrapper[4778]: I0312 13:33:36.424213 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/856cd6d1-db21-4503-94d7-cbf27ca96cc2-combined-ca-bundle\") pod \"openstackclient\" (UID: \"856cd6d1-db21-4503-94d7-cbf27ca96cc2\") " pod="openstack/openstackclient" Mar 12 13:33:36 crc kubenswrapper[4778]: I0312 13:33:36.424301 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/856cd6d1-db21-4503-94d7-cbf27ca96cc2-openstack-config\") pod \"openstackclient\" (UID: \"856cd6d1-db21-4503-94d7-cbf27ca96cc2\") " pod="openstack/openstackclient" Mar 12 13:33:36 crc kubenswrapper[4778]: I0312 13:33:36.425379 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/856cd6d1-db21-4503-94d7-cbf27ca96cc2-openstack-config\") pod \"openstackclient\" (UID: \"856cd6d1-db21-4503-94d7-cbf27ca96cc2\") " pod="openstack/openstackclient" Mar 12 13:33:36 crc kubenswrapper[4778]: I0312 13:33:36.425461 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/856cd6d1-db21-4503-94d7-cbf27ca96cc2-openstack-config-secret\") pod \"openstackclient\" (UID: \"856cd6d1-db21-4503-94d7-cbf27ca96cc2\") " pod="openstack/openstackclient" Mar 12 13:33:36 crc kubenswrapper[4778]: I0312 13:33:36.425889 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jd5n6\" (UniqueName: 
\"kubernetes.io/projected/856cd6d1-db21-4503-94d7-cbf27ca96cc2-kube-api-access-jd5n6\") pod \"openstackclient\" (UID: \"856cd6d1-db21-4503-94d7-cbf27ca96cc2\") " pod="openstack/openstackclient" Mar 12 13:33:36 crc kubenswrapper[4778]: I0312 13:33:36.429634 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/856cd6d1-db21-4503-94d7-cbf27ca96cc2-openstack-config-secret\") pod \"openstackclient\" (UID: \"856cd6d1-db21-4503-94d7-cbf27ca96cc2\") " pod="openstack/openstackclient" Mar 12 13:33:36 crc kubenswrapper[4778]: I0312 13:33:36.435846 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/856cd6d1-db21-4503-94d7-cbf27ca96cc2-combined-ca-bundle\") pod \"openstackclient\" (UID: \"856cd6d1-db21-4503-94d7-cbf27ca96cc2\") " pod="openstack/openstackclient" Mar 12 13:33:36 crc kubenswrapper[4778]: I0312 13:33:36.458846 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jd5n6\" (UniqueName: \"kubernetes.io/projected/856cd6d1-db21-4503-94d7-cbf27ca96cc2-kube-api-access-jd5n6\") pod \"openstackclient\" (UID: \"856cd6d1-db21-4503-94d7-cbf27ca96cc2\") " pod="openstack/openstackclient" Mar 12 13:33:36 crc kubenswrapper[4778]: I0312 13:33:36.507638 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Mar 12 13:33:36 crc kubenswrapper[4778]: I0312 13:33:36.926591 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 13:33:36 crc kubenswrapper[4778]: I0312 13:33:36.927822 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c" containerName="ceilometer-central-agent" containerID="cri-o://92af10636577795c46a7d7213efc323d787b43d9aee552320b8e29e2d94b148c" gracePeriod=30 Mar 12 13:33:36 crc kubenswrapper[4778]: I0312 13:33:36.927950 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c" containerName="proxy-httpd" containerID="cri-o://cda3d15fbae3fde616e8dc2f2ce28f309b9b98d6a94b912a86966cec07509e84" gracePeriod=30 Mar 12 13:33:36 crc kubenswrapper[4778]: I0312 13:33:36.927994 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c" containerName="sg-core" containerID="cri-o://e22d50b8fe9f90a6aab9adf00774ed799ec453df3b0b299a334bf282330ef1b7" gracePeriod=30 Mar 12 13:33:36 crc kubenswrapper[4778]: I0312 13:33:36.928029 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c" containerName="ceilometer-notification-agent" containerID="cri-o://a1d99c14b9faebc510bf1668c9726ad77d4df3734c20a3bc0e28ff53683f982e" gracePeriod=30 Mar 12 13:33:36 crc kubenswrapper[4778]: I0312 13:33:36.934623 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Mar 12 13:33:36 crc kubenswrapper[4778]: I0312 13:33:36.993210 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 12 13:33:37 crc kubenswrapper[4778]: I0312 13:33:37.002097 4778 
provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 13:33:37 crc kubenswrapper[4778]: I0312 13:33:37.122721 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"856cd6d1-db21-4503-94d7-cbf27ca96cc2","Type":"ContainerStarted","Data":"5fee156d13d3b3eafc7b482a7d9e15c3c8309e71dc17da054fde272c1e944ca3"} Mar 12 13:33:37 crc kubenswrapper[4778]: I0312 13:33:37.124849 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-77f887c49f-fw2qd" event={"ID":"bbd76cb8-462f-4e60-b755-ef3170e70d11","Type":"ContainerStarted","Data":"af03b29d9375025c0539ee0addb00329a847129df3c98ac51efce5d92af6fdd7"} Mar 12 13:33:37 crc kubenswrapper[4778]: I0312 13:33:37.124915 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-77f887c49f-fw2qd" Mar 12 13:33:37 crc kubenswrapper[4778]: I0312 13:33:37.124930 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-77f887c49f-fw2qd" Mar 12 13:33:37 crc kubenswrapper[4778]: I0312 13:33:37.127741 4778 generic.go:334] "Generic (PLEG): container finished" podID="7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c" containerID="cda3d15fbae3fde616e8dc2f2ce28f309b9b98d6a94b912a86966cec07509e84" exitCode=0 Mar 12 13:33:37 crc kubenswrapper[4778]: I0312 13:33:37.127770 4778 generic.go:334] "Generic (PLEG): container finished" podID="7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c" containerID="e22d50b8fe9f90a6aab9adf00774ed799ec453df3b0b299a334bf282330ef1b7" exitCode=2 Mar 12 13:33:37 crc kubenswrapper[4778]: I0312 13:33:37.127790 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c","Type":"ContainerDied","Data":"cda3d15fbae3fde616e8dc2f2ce28f309b9b98d6a94b912a86966cec07509e84"} Mar 12 13:33:37 crc kubenswrapper[4778]: I0312 13:33:37.127813 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c","Type":"ContainerDied","Data":"e22d50b8fe9f90a6aab9adf00774ed799ec453df3b0b299a334bf282330ef1b7"} Mar 12 13:33:37 crc kubenswrapper[4778]: I0312 13:33:37.147916 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-77f887c49f-fw2qd" podStartSLOduration=3.147895426 podStartE2EDuration="3.147895426s" podCreationTimestamp="2026-03-12 13:33:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:33:37.143756359 +0000 UTC m=+1435.592451765" watchObservedRunningTime="2026-03-12 13:33:37.147895426 +0000 UTC m=+1435.596590822" Mar 12 13:33:37 crc kubenswrapper[4778]: I0312 13:33:37.925587 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.057404 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c-log-httpd\") pod \"7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c\" (UID: \"7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c\") " Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.057474 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c-sg-core-conf-yaml\") pod \"7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c\" (UID: \"7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c\") " Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.057652 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c-scripts\") pod \"7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c\" (UID: \"7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c\") " Mar 12 13:33:38 crc 
kubenswrapper[4778]: I0312 13:33:38.057701 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c-run-httpd\") pod \"7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c\" (UID: \"7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c\") " Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.058153 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c" (UID: "7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.058219 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c-config-data\") pod \"7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c\" (UID: \"7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c\") " Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.058216 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c" (UID: "7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.058314 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfq6z\" (UniqueName: \"kubernetes.io/projected/7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c-kube-api-access-rfq6z\") pod \"7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c\" (UID: \"7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c\") " Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.058731 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c-combined-ca-bundle\") pod \"7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c\" (UID: \"7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c\") " Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.059396 4778 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.059415 4778 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.067309 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c-scripts" (OuterVolumeSpecName: "scripts") pod "7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c" (UID: "7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.067406 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c-kube-api-access-rfq6z" (OuterVolumeSpecName: "kube-api-access-rfq6z") pod "7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c" (UID: "7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c"). InnerVolumeSpecName "kube-api-access-rfq6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.091278 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c" (UID: "7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.159261 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c" (UID: "7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.160968 4778 generic.go:334] "Generic (PLEG): container finished" podID="7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c" containerID="a1d99c14b9faebc510bf1668c9726ad77d4df3734c20a3bc0e28ff53683f982e" exitCode=0 Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.161006 4778 generic.go:334] "Generic (PLEG): container finished" podID="7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c" containerID="92af10636577795c46a7d7213efc323d787b43d9aee552320b8e29e2d94b148c" exitCode=0 Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.161143 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c","Type":"ContainerDied","Data":"a1d99c14b9faebc510bf1668c9726ad77d4df3734c20a3bc0e28ff53683f982e"} Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.161267 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c","Type":"ContainerDied","Data":"92af10636577795c46a7d7213efc323d787b43d9aee552320b8e29e2d94b148c"} Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.161353 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c","Type":"ContainerDied","Data":"824d71ac269215e859c8ef1b41498f4804c8adec49c2375a8307421f28798e4b"} Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.161331 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.161317 4778 scope.go:117] "RemoveContainer" containerID="cda3d15fbae3fde616e8dc2f2ce28f309b9b98d6a94b912a86966cec07509e84" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.162656 4778 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.162683 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfq6z\" (UniqueName: \"kubernetes.io/projected/7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c-kube-api-access-rfq6z\") on node \"crc\" DevicePath \"\"" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.162698 4778 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.162711 4778 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.236172 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c-config-data" (OuterVolumeSpecName: "config-data") pod "7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c" (UID: "7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.264328 4778 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.323833 4778 scope.go:117] "RemoveContainer" containerID="e22d50b8fe9f90a6aab9adf00774ed799ec453df3b0b299a334bf282330ef1b7" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.344576 4778 scope.go:117] "RemoveContainer" containerID="a1d99c14b9faebc510bf1668c9726ad77d4df3734c20a3bc0e28ff53683f982e" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.366799 4778 scope.go:117] "RemoveContainer" containerID="92af10636577795c46a7d7213efc323d787b43d9aee552320b8e29e2d94b148c" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.385038 4778 scope.go:117] "RemoveContainer" containerID="cda3d15fbae3fde616e8dc2f2ce28f309b9b98d6a94b912a86966cec07509e84" Mar 12 13:33:38 crc kubenswrapper[4778]: E0312 13:33:38.385697 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cda3d15fbae3fde616e8dc2f2ce28f309b9b98d6a94b912a86966cec07509e84\": container with ID starting with cda3d15fbae3fde616e8dc2f2ce28f309b9b98d6a94b912a86966cec07509e84 not found: ID does not exist" containerID="cda3d15fbae3fde616e8dc2f2ce28f309b9b98d6a94b912a86966cec07509e84" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.385790 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cda3d15fbae3fde616e8dc2f2ce28f309b9b98d6a94b912a86966cec07509e84"} err="failed to get container status \"cda3d15fbae3fde616e8dc2f2ce28f309b9b98d6a94b912a86966cec07509e84\": rpc error: code = NotFound desc = could not find container \"cda3d15fbae3fde616e8dc2f2ce28f309b9b98d6a94b912a86966cec07509e84\": container with ID starting with 
cda3d15fbae3fde616e8dc2f2ce28f309b9b98d6a94b912a86966cec07509e84 not found: ID does not exist" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.385869 4778 scope.go:117] "RemoveContainer" containerID="e22d50b8fe9f90a6aab9adf00774ed799ec453df3b0b299a334bf282330ef1b7" Mar 12 13:33:38 crc kubenswrapper[4778]: E0312 13:33:38.386224 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e22d50b8fe9f90a6aab9adf00774ed799ec453df3b0b299a334bf282330ef1b7\": container with ID starting with e22d50b8fe9f90a6aab9adf00774ed799ec453df3b0b299a334bf282330ef1b7 not found: ID does not exist" containerID="e22d50b8fe9f90a6aab9adf00774ed799ec453df3b0b299a334bf282330ef1b7" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.386277 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e22d50b8fe9f90a6aab9adf00774ed799ec453df3b0b299a334bf282330ef1b7"} err="failed to get container status \"e22d50b8fe9f90a6aab9adf00774ed799ec453df3b0b299a334bf282330ef1b7\": rpc error: code = NotFound desc = could not find container \"e22d50b8fe9f90a6aab9adf00774ed799ec453df3b0b299a334bf282330ef1b7\": container with ID starting with e22d50b8fe9f90a6aab9adf00774ed799ec453df3b0b299a334bf282330ef1b7 not found: ID does not exist" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.386311 4778 scope.go:117] "RemoveContainer" containerID="a1d99c14b9faebc510bf1668c9726ad77d4df3734c20a3bc0e28ff53683f982e" Mar 12 13:33:38 crc kubenswrapper[4778]: E0312 13:33:38.386645 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1d99c14b9faebc510bf1668c9726ad77d4df3734c20a3bc0e28ff53683f982e\": container with ID starting with a1d99c14b9faebc510bf1668c9726ad77d4df3734c20a3bc0e28ff53683f982e not found: ID does not exist" containerID="a1d99c14b9faebc510bf1668c9726ad77d4df3734c20a3bc0e28ff53683f982e" Mar 12 13:33:38 crc 
kubenswrapper[4778]: I0312 13:33:38.386729 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1d99c14b9faebc510bf1668c9726ad77d4df3734c20a3bc0e28ff53683f982e"} err="failed to get container status \"a1d99c14b9faebc510bf1668c9726ad77d4df3734c20a3bc0e28ff53683f982e\": rpc error: code = NotFound desc = could not find container \"a1d99c14b9faebc510bf1668c9726ad77d4df3734c20a3bc0e28ff53683f982e\": container with ID starting with a1d99c14b9faebc510bf1668c9726ad77d4df3734c20a3bc0e28ff53683f982e not found: ID does not exist" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.386805 4778 scope.go:117] "RemoveContainer" containerID="92af10636577795c46a7d7213efc323d787b43d9aee552320b8e29e2d94b148c" Mar 12 13:33:38 crc kubenswrapper[4778]: E0312 13:33:38.387122 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92af10636577795c46a7d7213efc323d787b43d9aee552320b8e29e2d94b148c\": container with ID starting with 92af10636577795c46a7d7213efc323d787b43d9aee552320b8e29e2d94b148c not found: ID does not exist" containerID="92af10636577795c46a7d7213efc323d787b43d9aee552320b8e29e2d94b148c" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.387165 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92af10636577795c46a7d7213efc323d787b43d9aee552320b8e29e2d94b148c"} err="failed to get container status \"92af10636577795c46a7d7213efc323d787b43d9aee552320b8e29e2d94b148c\": rpc error: code = NotFound desc = could not find container \"92af10636577795c46a7d7213efc323d787b43d9aee552320b8e29e2d94b148c\": container with ID starting with 92af10636577795c46a7d7213efc323d787b43d9aee552320b8e29e2d94b148c not found: ID does not exist" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.387208 4778 scope.go:117] "RemoveContainer" containerID="cda3d15fbae3fde616e8dc2f2ce28f309b9b98d6a94b912a86966cec07509e84" Mar 12 
13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.387509 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cda3d15fbae3fde616e8dc2f2ce28f309b9b98d6a94b912a86966cec07509e84"} err="failed to get container status \"cda3d15fbae3fde616e8dc2f2ce28f309b9b98d6a94b912a86966cec07509e84\": rpc error: code = NotFound desc = could not find container \"cda3d15fbae3fde616e8dc2f2ce28f309b9b98d6a94b912a86966cec07509e84\": container with ID starting with cda3d15fbae3fde616e8dc2f2ce28f309b9b98d6a94b912a86966cec07509e84 not found: ID does not exist" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.387531 4778 scope.go:117] "RemoveContainer" containerID="e22d50b8fe9f90a6aab9adf00774ed799ec453df3b0b299a334bf282330ef1b7" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.387870 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e22d50b8fe9f90a6aab9adf00774ed799ec453df3b0b299a334bf282330ef1b7"} err="failed to get container status \"e22d50b8fe9f90a6aab9adf00774ed799ec453df3b0b299a334bf282330ef1b7\": rpc error: code = NotFound desc = could not find container \"e22d50b8fe9f90a6aab9adf00774ed799ec453df3b0b299a334bf282330ef1b7\": container with ID starting with e22d50b8fe9f90a6aab9adf00774ed799ec453df3b0b299a334bf282330ef1b7 not found: ID does not exist" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.387900 4778 scope.go:117] "RemoveContainer" containerID="a1d99c14b9faebc510bf1668c9726ad77d4df3734c20a3bc0e28ff53683f982e" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.388206 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1d99c14b9faebc510bf1668c9726ad77d4df3734c20a3bc0e28ff53683f982e"} err="failed to get container status \"a1d99c14b9faebc510bf1668c9726ad77d4df3734c20a3bc0e28ff53683f982e\": rpc error: code = NotFound desc = could not find container 
\"a1d99c14b9faebc510bf1668c9726ad77d4df3734c20a3bc0e28ff53683f982e\": container with ID starting with a1d99c14b9faebc510bf1668c9726ad77d4df3734c20a3bc0e28ff53683f982e not found: ID does not exist" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.388230 4778 scope.go:117] "RemoveContainer" containerID="92af10636577795c46a7d7213efc323d787b43d9aee552320b8e29e2d94b148c" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.388540 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92af10636577795c46a7d7213efc323d787b43d9aee552320b8e29e2d94b148c"} err="failed to get container status \"92af10636577795c46a7d7213efc323d787b43d9aee552320b8e29e2d94b148c\": rpc error: code = NotFound desc = could not find container \"92af10636577795c46a7d7213efc323d787b43d9aee552320b8e29e2d94b148c\": container with ID starting with 92af10636577795c46a7d7213efc323d787b43d9aee552320b8e29e2d94b148c not found: ID does not exist" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.491348 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.498923 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.515525 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 12 13:33:38 crc kubenswrapper[4778]: E0312 13:33:38.515896 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c" containerName="proxy-httpd" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.515912 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c" containerName="proxy-httpd" Mar 12 13:33:38 crc kubenswrapper[4778]: E0312 13:33:38.515926 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c" 
containerName="ceilometer-notification-agent" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.515933 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c" containerName="ceilometer-notification-agent" Mar 12 13:33:38 crc kubenswrapper[4778]: E0312 13:33:38.515947 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c" containerName="ceilometer-central-agent" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.515954 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c" containerName="ceilometer-central-agent" Mar 12 13:33:38 crc kubenswrapper[4778]: E0312 13:33:38.515966 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c" containerName="sg-core" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.515972 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c" containerName="sg-core" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.516176 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c" containerName="proxy-httpd" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.516209 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c" containerName="sg-core" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.516220 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c" containerName="ceilometer-central-agent" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.516237 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c" containerName="ceilometer-notification-agent" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.517734 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.519633 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.519841 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.531048 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.670983 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76973a17-7173-486f-af83-14c0378fa581-run-httpd\") pod \"ceilometer-0\" (UID: \"76973a17-7173-486f-af83-14c0378fa581\") " pod="openstack/ceilometer-0" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.671130 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76973a17-7173-486f-af83-14c0378fa581-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"76973a17-7173-486f-af83-14c0378fa581\") " pod="openstack/ceilometer-0" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.671160 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/76973a17-7173-486f-af83-14c0378fa581-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"76973a17-7173-486f-af83-14c0378fa581\") " pod="openstack/ceilometer-0" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.671435 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76973a17-7173-486f-af83-14c0378fa581-config-data\") pod \"ceilometer-0\" (UID: \"76973a17-7173-486f-af83-14c0378fa581\") " 
pod="openstack/ceilometer-0" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.671495 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76973a17-7173-486f-af83-14c0378fa581-log-httpd\") pod \"ceilometer-0\" (UID: \"76973a17-7173-486f-af83-14c0378fa581\") " pod="openstack/ceilometer-0" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.671615 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvbsh\" (UniqueName: \"kubernetes.io/projected/76973a17-7173-486f-af83-14c0378fa581-kube-api-access-lvbsh\") pod \"ceilometer-0\" (UID: \"76973a17-7173-486f-af83-14c0378fa581\") " pod="openstack/ceilometer-0" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.671658 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76973a17-7173-486f-af83-14c0378fa581-scripts\") pod \"ceilometer-0\" (UID: \"76973a17-7173-486f-af83-14c0378fa581\") " pod="openstack/ceilometer-0" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.773118 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvbsh\" (UniqueName: \"kubernetes.io/projected/76973a17-7173-486f-af83-14c0378fa581-kube-api-access-lvbsh\") pod \"ceilometer-0\" (UID: \"76973a17-7173-486f-af83-14c0378fa581\") " pod="openstack/ceilometer-0" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.773169 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76973a17-7173-486f-af83-14c0378fa581-scripts\") pod \"ceilometer-0\" (UID: \"76973a17-7173-486f-af83-14c0378fa581\") " pod="openstack/ceilometer-0" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.773247 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76973a17-7173-486f-af83-14c0378fa581-run-httpd\") pod \"ceilometer-0\" (UID: \"76973a17-7173-486f-af83-14c0378fa581\") " pod="openstack/ceilometer-0" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.773293 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76973a17-7173-486f-af83-14c0378fa581-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"76973a17-7173-486f-af83-14c0378fa581\") " pod="openstack/ceilometer-0" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.773316 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/76973a17-7173-486f-af83-14c0378fa581-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"76973a17-7173-486f-af83-14c0378fa581\") " pod="openstack/ceilometer-0" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.773509 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76973a17-7173-486f-af83-14c0378fa581-config-data\") pod \"ceilometer-0\" (UID: \"76973a17-7173-486f-af83-14c0378fa581\") " pod="openstack/ceilometer-0" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.773558 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76973a17-7173-486f-af83-14c0378fa581-log-httpd\") pod \"ceilometer-0\" (UID: \"76973a17-7173-486f-af83-14c0378fa581\") " pod="openstack/ceilometer-0" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.774196 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76973a17-7173-486f-af83-14c0378fa581-run-httpd\") pod \"ceilometer-0\" (UID: \"76973a17-7173-486f-af83-14c0378fa581\") " pod="openstack/ceilometer-0" Mar 12 13:33:38 crc kubenswrapper[4778]: 
I0312 13:33:38.774429 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76973a17-7173-486f-af83-14c0378fa581-log-httpd\") pod \"ceilometer-0\" (UID: \"76973a17-7173-486f-af83-14c0378fa581\") " pod="openstack/ceilometer-0" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.778255 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/76973a17-7173-486f-af83-14c0378fa581-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"76973a17-7173-486f-af83-14c0378fa581\") " pod="openstack/ceilometer-0" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.778398 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76973a17-7173-486f-af83-14c0378fa581-config-data\") pod \"ceilometer-0\" (UID: \"76973a17-7173-486f-af83-14c0378fa581\") " pod="openstack/ceilometer-0" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.779387 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76973a17-7173-486f-af83-14c0378fa581-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"76973a17-7173-486f-af83-14c0378fa581\") " pod="openstack/ceilometer-0" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.792370 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76973a17-7173-486f-af83-14c0378fa581-scripts\") pod \"ceilometer-0\" (UID: \"76973a17-7173-486f-af83-14c0378fa581\") " pod="openstack/ceilometer-0" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.800394 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvbsh\" (UniqueName: \"kubernetes.io/projected/76973a17-7173-486f-af83-14c0378fa581-kube-api-access-lvbsh\") pod \"ceilometer-0\" (UID: 
\"76973a17-7173-486f-af83-14c0378fa581\") " pod="openstack/ceilometer-0" Mar 12 13:33:38 crc kubenswrapper[4778]: I0312 13:33:38.848149 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 13:33:39 crc kubenswrapper[4778]: I0312 13:33:39.308252 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 13:33:39 crc kubenswrapper[4778]: W0312 13:33:39.319570 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76973a17_7173_486f_af83_14c0378fa581.slice/crio-0c674ab196e6e3ca8f4c50f5d0e86fbac274fa04d05f2445f7220ac78d3eb924 WatchSource:0}: Error finding container 0c674ab196e6e3ca8f4c50f5d0e86fbac274fa04d05f2445f7220ac78d3eb924: Status 404 returned error can't find the container with id 0c674ab196e6e3ca8f4c50f5d0e86fbac274fa04d05f2445f7220ac78d3eb924 Mar 12 13:33:40 crc kubenswrapper[4778]: I0312 13:33:40.192161 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"76973a17-7173-486f-af83-14c0378fa581","Type":"ContainerStarted","Data":"0c674ab196e6e3ca8f4c50f5d0e86fbac274fa04d05f2445f7220ac78d3eb924"} Mar 12 13:33:40 crc kubenswrapper[4778]: I0312 13:33:40.212448 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-77f887c49f-fw2qd" Mar 12 13:33:40 crc kubenswrapper[4778]: I0312 13:33:40.277651 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c" path="/var/lib/kubelet/pods/7b2fa220-02b1-4940-9ae0-3d9e5b4bcd9c/volumes" Mar 12 13:33:40 crc kubenswrapper[4778]: I0312 13:33:40.677491 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Mar 12 13:33:41 crc kubenswrapper[4778]: I0312 13:33:41.203073 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"76973a17-7173-486f-af83-14c0378fa581","Type":"ContainerStarted","Data":"7f745be7f683445e4da01145bbe31250962c059abf58eb460e2f1333fb155a79"} Mar 12 13:33:41 crc kubenswrapper[4778]: I0312 13:33:41.203608 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"76973a17-7173-486f-af83-14c0378fa581","Type":"ContainerStarted","Data":"a5fd58e9033319df013906b8f30fe6475ac6a783b70a23d68bd819715773d59a"} Mar 12 13:33:44 crc kubenswrapper[4778]: I0312 13:33:44.692733 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 13:33:45 crc kubenswrapper[4778]: I0312 13:33:45.214540 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-77f887c49f-fw2qd" Mar 12 13:33:48 crc kubenswrapper[4778]: I0312 13:33:48.279895 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"856cd6d1-db21-4503-94d7-cbf27ca96cc2","Type":"ContainerStarted","Data":"483cf45dad2053f4851724913051cb9f082ec5ef16f52defcc123d6751b912b0"} Mar 12 13:33:48 crc kubenswrapper[4778]: I0312 13:33:48.293287 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"76973a17-7173-486f-af83-14c0378fa581","Type":"ContainerStarted","Data":"9c68d01c3251c2dcefdd1f4d92409f7a771f6097f592a70c23676aaa51a086c1"} Mar 12 13:33:48 crc kubenswrapper[4778]: I0312 13:33:48.305099 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.439150789 podStartE2EDuration="12.305082559s" podCreationTimestamp="2026-03-12 13:33:36 +0000 UTC" firstStartedPulling="2026-03-12 13:33:37.001907866 +0000 UTC m=+1435.450603262" lastFinishedPulling="2026-03-12 13:33:47.867839626 +0000 UTC m=+1446.316535032" observedRunningTime="2026-03-12 13:33:48.298121972 +0000 UTC m=+1446.746817378" watchObservedRunningTime="2026-03-12 13:33:48.305082559 +0000 UTC m=+1446.753777955" Mar 
12 13:33:49 crc kubenswrapper[4778]: I0312 13:33:49.916466 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 12 13:33:49 crc kubenswrapper[4778]: I0312 13:33:49.917418 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c2b13038-d271-48f5-bd28-a38e2b9dff02" containerName="glance-httpd" containerID="cri-o://d321738b43c55df790b0a01418c177d18aaa7772e4cf7fca03bdeedb1c32e127" gracePeriod=30 Mar 12 13:33:49 crc kubenswrapper[4778]: I0312 13:33:49.922196 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c2b13038-d271-48f5-bd28-a38e2b9dff02" containerName="glance-log" containerID="cri-o://cad2d2b9a9ac73ae35a814e1cadf9d57066e520b238036be878f7dfdb34aabb4" gracePeriod=30 Mar 12 13:33:50 crc kubenswrapper[4778]: I0312 13:33:50.313742 4778 generic.go:334] "Generic (PLEG): container finished" podID="c2b13038-d271-48f5-bd28-a38e2b9dff02" containerID="cad2d2b9a9ac73ae35a814e1cadf9d57066e520b238036be878f7dfdb34aabb4" exitCode=143 Mar 12 13:33:50 crc kubenswrapper[4778]: I0312 13:33:50.313816 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c2b13038-d271-48f5-bd28-a38e2b9dff02","Type":"ContainerDied","Data":"cad2d2b9a9ac73ae35a814e1cadf9d57066e520b238036be878f7dfdb34aabb4"} Mar 12 13:33:50 crc kubenswrapper[4778]: I0312 13:33:50.316860 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"76973a17-7173-486f-af83-14c0378fa581","Type":"ContainerStarted","Data":"1a8e0d7e1f3aab5c5624ca9847d187e08f63f5617d5b4835b74a5019bd409028"} Mar 12 13:33:50 crc kubenswrapper[4778]: I0312 13:33:50.317018 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="76973a17-7173-486f-af83-14c0378fa581" 
containerName="ceilometer-central-agent" containerID="cri-o://a5fd58e9033319df013906b8f30fe6475ac6a783b70a23d68bd819715773d59a" gracePeriod=30 Mar 12 13:33:50 crc kubenswrapper[4778]: I0312 13:33:50.317039 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 12 13:33:50 crc kubenswrapper[4778]: I0312 13:33:50.317062 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="76973a17-7173-486f-af83-14c0378fa581" containerName="sg-core" containerID="cri-o://9c68d01c3251c2dcefdd1f4d92409f7a771f6097f592a70c23676aaa51a086c1" gracePeriod=30 Mar 12 13:33:50 crc kubenswrapper[4778]: I0312 13:33:50.317091 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="76973a17-7173-486f-af83-14c0378fa581" containerName="proxy-httpd" containerID="cri-o://1a8e0d7e1f3aab5c5624ca9847d187e08f63f5617d5b4835b74a5019bd409028" gracePeriod=30 Mar 12 13:33:50 crc kubenswrapper[4778]: I0312 13:33:50.317111 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="76973a17-7173-486f-af83-14c0378fa581" containerName="ceilometer-notification-agent" containerID="cri-o://7f745be7f683445e4da01145bbe31250962c059abf58eb460e2f1333fb155a79" gracePeriod=30 Mar 12 13:33:50 crc kubenswrapper[4778]: I0312 13:33:50.338638 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.983690607 podStartE2EDuration="12.338590782s" podCreationTimestamp="2026-03-12 13:33:38 +0000 UTC" firstStartedPulling="2026-03-12 13:33:39.322125761 +0000 UTC m=+1437.770821157" lastFinishedPulling="2026-03-12 13:33:49.677025936 +0000 UTC m=+1448.125721332" observedRunningTime="2026-03-12 13:33:50.335140664 +0000 UTC m=+1448.783836060" watchObservedRunningTime="2026-03-12 13:33:50.338590782 +0000 UTC m=+1448.787286188" Mar 12 13:33:51 crc kubenswrapper[4778]: 
I0312 13:33:51.252466 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.318274 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76973a17-7173-486f-af83-14c0378fa581-combined-ca-bundle\") pod \"76973a17-7173-486f-af83-14c0378fa581\" (UID: \"76973a17-7173-486f-af83-14c0378fa581\") " Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.318331 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/76973a17-7173-486f-af83-14c0378fa581-sg-core-conf-yaml\") pod \"76973a17-7173-486f-af83-14c0378fa581\" (UID: \"76973a17-7173-486f-af83-14c0378fa581\") " Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.318403 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76973a17-7173-486f-af83-14c0378fa581-scripts\") pod \"76973a17-7173-486f-af83-14c0378fa581\" (UID: \"76973a17-7173-486f-af83-14c0378fa581\") " Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.318434 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76973a17-7173-486f-af83-14c0378fa581-config-data\") pod \"76973a17-7173-486f-af83-14c0378fa581\" (UID: \"76973a17-7173-486f-af83-14c0378fa581\") " Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.318469 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvbsh\" (UniqueName: \"kubernetes.io/projected/76973a17-7173-486f-af83-14c0378fa581-kube-api-access-lvbsh\") pod \"76973a17-7173-486f-af83-14c0378fa581\" (UID: \"76973a17-7173-486f-af83-14c0378fa581\") " Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.318528 4778 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76973a17-7173-486f-af83-14c0378fa581-log-httpd\") pod \"76973a17-7173-486f-af83-14c0378fa581\" (UID: \"76973a17-7173-486f-af83-14c0378fa581\") " Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.318580 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76973a17-7173-486f-af83-14c0378fa581-run-httpd\") pod \"76973a17-7173-486f-af83-14c0378fa581\" (UID: \"76973a17-7173-486f-af83-14c0378fa581\") " Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.319083 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76973a17-7173-486f-af83-14c0378fa581-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "76973a17-7173-486f-af83-14c0378fa581" (UID: "76973a17-7173-486f-af83-14c0378fa581"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.319102 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76973a17-7173-486f-af83-14c0378fa581-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "76973a17-7173-486f-af83-14c0378fa581" (UID: "76973a17-7173-486f-af83-14c0378fa581"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.320482 4778 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76973a17-7173-486f-af83-14c0378fa581-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.320890 4778 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76973a17-7173-486f-af83-14c0378fa581-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.326395 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76973a17-7173-486f-af83-14c0378fa581-kube-api-access-lvbsh" (OuterVolumeSpecName: "kube-api-access-lvbsh") pod "76973a17-7173-486f-af83-14c0378fa581" (UID: "76973a17-7173-486f-af83-14c0378fa581"). InnerVolumeSpecName "kube-api-access-lvbsh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.326531 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76973a17-7173-486f-af83-14c0378fa581-scripts" (OuterVolumeSpecName: "scripts") pod "76973a17-7173-486f-af83-14c0378fa581" (UID: "76973a17-7173-486f-af83-14c0378fa581"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.335868 4778 generic.go:334] "Generic (PLEG): container finished" podID="76973a17-7173-486f-af83-14c0378fa581" containerID="1a8e0d7e1f3aab5c5624ca9847d187e08f63f5617d5b4835b74a5019bd409028" exitCode=0 Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.335901 4778 generic.go:334] "Generic (PLEG): container finished" podID="76973a17-7173-486f-af83-14c0378fa581" containerID="9c68d01c3251c2dcefdd1f4d92409f7a771f6097f592a70c23676aaa51a086c1" exitCode=2 Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.335911 4778 generic.go:334] "Generic (PLEG): container finished" podID="76973a17-7173-486f-af83-14c0378fa581" containerID="7f745be7f683445e4da01145bbe31250962c059abf58eb460e2f1333fb155a79" exitCode=0 Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.335938 4778 generic.go:334] "Generic (PLEG): container finished" podID="76973a17-7173-486f-af83-14c0378fa581" containerID="a5fd58e9033319df013906b8f30fe6475ac6a783b70a23d68bd819715773d59a" exitCode=0 Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.335960 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"76973a17-7173-486f-af83-14c0378fa581","Type":"ContainerDied","Data":"1a8e0d7e1f3aab5c5624ca9847d187e08f63f5617d5b4835b74a5019bd409028"} Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.335987 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"76973a17-7173-486f-af83-14c0378fa581","Type":"ContainerDied","Data":"9c68d01c3251c2dcefdd1f4d92409f7a771f6097f592a70c23676aaa51a086c1"} Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.336016 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"76973a17-7173-486f-af83-14c0378fa581","Type":"ContainerDied","Data":"7f745be7f683445e4da01145bbe31250962c059abf58eb460e2f1333fb155a79"} Mar 12 13:33:51 crc 
kubenswrapper[4778]: I0312 13:33:51.336028 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"76973a17-7173-486f-af83-14c0378fa581","Type":"ContainerDied","Data":"a5fd58e9033319df013906b8f30fe6475ac6a783b70a23d68bd819715773d59a"} Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.336036 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"76973a17-7173-486f-af83-14c0378fa581","Type":"ContainerDied","Data":"0c674ab196e6e3ca8f4c50f5d0e86fbac274fa04d05f2445f7220ac78d3eb924"} Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.336053 4778 scope.go:117] "RemoveContainer" containerID="1a8e0d7e1f3aab5c5624ca9847d187e08f63f5617d5b4835b74a5019bd409028" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.336233 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.350307 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76973a17-7173-486f-af83-14c0378fa581-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "76973a17-7173-486f-af83-14c0378fa581" (UID: "76973a17-7173-486f-af83-14c0378fa581"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.372613 4778 scope.go:117] "RemoveContainer" containerID="9c68d01c3251c2dcefdd1f4d92409f7a771f6097f592a70c23676aaa51a086c1" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.392163 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76973a17-7173-486f-af83-14c0378fa581-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "76973a17-7173-486f-af83-14c0378fa581" (UID: "76973a17-7173-486f-af83-14c0378fa581"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.397427 4778 scope.go:117] "RemoveContainer" containerID="7f745be7f683445e4da01145bbe31250962c059abf58eb460e2f1333fb155a79" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.422331 4778 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76973a17-7173-486f-af83-14c0378fa581-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.422372 4778 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/76973a17-7173-486f-af83-14c0378fa581-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.422387 4778 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76973a17-7173-486f-af83-14c0378fa581-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.422399 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvbsh\" (UniqueName: \"kubernetes.io/projected/76973a17-7173-486f-af83-14c0378fa581-kube-api-access-lvbsh\") on node \"crc\" DevicePath \"\"" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.435723 4778 scope.go:117] "RemoveContainer" containerID="a5fd58e9033319df013906b8f30fe6475ac6a783b70a23d68bd819715773d59a" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.440143 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76973a17-7173-486f-af83-14c0378fa581-config-data" (OuterVolumeSpecName: "config-data") pod "76973a17-7173-486f-af83-14c0378fa581" (UID: "76973a17-7173-486f-af83-14c0378fa581"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.460636 4778 scope.go:117] "RemoveContainer" containerID="1a8e0d7e1f3aab5c5624ca9847d187e08f63f5617d5b4835b74a5019bd409028" Mar 12 13:33:51 crc kubenswrapper[4778]: E0312 13:33:51.461206 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a8e0d7e1f3aab5c5624ca9847d187e08f63f5617d5b4835b74a5019bd409028\": container with ID starting with 1a8e0d7e1f3aab5c5624ca9847d187e08f63f5617d5b4835b74a5019bd409028 not found: ID does not exist" containerID="1a8e0d7e1f3aab5c5624ca9847d187e08f63f5617d5b4835b74a5019bd409028" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.461273 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a8e0d7e1f3aab5c5624ca9847d187e08f63f5617d5b4835b74a5019bd409028"} err="failed to get container status \"1a8e0d7e1f3aab5c5624ca9847d187e08f63f5617d5b4835b74a5019bd409028\": rpc error: code = NotFound desc = could not find container \"1a8e0d7e1f3aab5c5624ca9847d187e08f63f5617d5b4835b74a5019bd409028\": container with ID starting with 1a8e0d7e1f3aab5c5624ca9847d187e08f63f5617d5b4835b74a5019bd409028 not found: ID does not exist" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.461307 4778 scope.go:117] "RemoveContainer" containerID="9c68d01c3251c2dcefdd1f4d92409f7a771f6097f592a70c23676aaa51a086c1" Mar 12 13:33:51 crc kubenswrapper[4778]: E0312 13:33:51.461816 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c68d01c3251c2dcefdd1f4d92409f7a771f6097f592a70c23676aaa51a086c1\": container with ID starting with 9c68d01c3251c2dcefdd1f4d92409f7a771f6097f592a70c23676aaa51a086c1 not found: ID does not exist" containerID="9c68d01c3251c2dcefdd1f4d92409f7a771f6097f592a70c23676aaa51a086c1" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.461848 
4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c68d01c3251c2dcefdd1f4d92409f7a771f6097f592a70c23676aaa51a086c1"} err="failed to get container status \"9c68d01c3251c2dcefdd1f4d92409f7a771f6097f592a70c23676aaa51a086c1\": rpc error: code = NotFound desc = could not find container \"9c68d01c3251c2dcefdd1f4d92409f7a771f6097f592a70c23676aaa51a086c1\": container with ID starting with 9c68d01c3251c2dcefdd1f4d92409f7a771f6097f592a70c23676aaa51a086c1 not found: ID does not exist" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.461865 4778 scope.go:117] "RemoveContainer" containerID="7f745be7f683445e4da01145bbe31250962c059abf58eb460e2f1333fb155a79" Mar 12 13:33:51 crc kubenswrapper[4778]: E0312 13:33:51.462254 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f745be7f683445e4da01145bbe31250962c059abf58eb460e2f1333fb155a79\": container with ID starting with 7f745be7f683445e4da01145bbe31250962c059abf58eb460e2f1333fb155a79 not found: ID does not exist" containerID="7f745be7f683445e4da01145bbe31250962c059abf58eb460e2f1333fb155a79" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.462313 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f745be7f683445e4da01145bbe31250962c059abf58eb460e2f1333fb155a79"} err="failed to get container status \"7f745be7f683445e4da01145bbe31250962c059abf58eb460e2f1333fb155a79\": rpc error: code = NotFound desc = could not find container \"7f745be7f683445e4da01145bbe31250962c059abf58eb460e2f1333fb155a79\": container with ID starting with 7f745be7f683445e4da01145bbe31250962c059abf58eb460e2f1333fb155a79 not found: ID does not exist" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.462350 4778 scope.go:117] "RemoveContainer" containerID="a5fd58e9033319df013906b8f30fe6475ac6a783b70a23d68bd819715773d59a" Mar 12 13:33:51 crc kubenswrapper[4778]: E0312 
13:33:51.462633 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5fd58e9033319df013906b8f30fe6475ac6a783b70a23d68bd819715773d59a\": container with ID starting with a5fd58e9033319df013906b8f30fe6475ac6a783b70a23d68bd819715773d59a not found: ID does not exist" containerID="a5fd58e9033319df013906b8f30fe6475ac6a783b70a23d68bd819715773d59a" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.462664 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5fd58e9033319df013906b8f30fe6475ac6a783b70a23d68bd819715773d59a"} err="failed to get container status \"a5fd58e9033319df013906b8f30fe6475ac6a783b70a23d68bd819715773d59a\": rpc error: code = NotFound desc = could not find container \"a5fd58e9033319df013906b8f30fe6475ac6a783b70a23d68bd819715773d59a\": container with ID starting with a5fd58e9033319df013906b8f30fe6475ac6a783b70a23d68bd819715773d59a not found: ID does not exist" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.462683 4778 scope.go:117] "RemoveContainer" containerID="1a8e0d7e1f3aab5c5624ca9847d187e08f63f5617d5b4835b74a5019bd409028" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.463294 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a8e0d7e1f3aab5c5624ca9847d187e08f63f5617d5b4835b74a5019bd409028"} err="failed to get container status \"1a8e0d7e1f3aab5c5624ca9847d187e08f63f5617d5b4835b74a5019bd409028\": rpc error: code = NotFound desc = could not find container \"1a8e0d7e1f3aab5c5624ca9847d187e08f63f5617d5b4835b74a5019bd409028\": container with ID starting with 1a8e0d7e1f3aab5c5624ca9847d187e08f63f5617d5b4835b74a5019bd409028 not found: ID does not exist" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.463326 4778 scope.go:117] "RemoveContainer" containerID="9c68d01c3251c2dcefdd1f4d92409f7a771f6097f592a70c23676aaa51a086c1" Mar 12 13:33:51 crc 
kubenswrapper[4778]: I0312 13:33:51.463641 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c68d01c3251c2dcefdd1f4d92409f7a771f6097f592a70c23676aaa51a086c1"} err="failed to get container status \"9c68d01c3251c2dcefdd1f4d92409f7a771f6097f592a70c23676aaa51a086c1\": rpc error: code = NotFound desc = could not find container \"9c68d01c3251c2dcefdd1f4d92409f7a771f6097f592a70c23676aaa51a086c1\": container with ID starting with 9c68d01c3251c2dcefdd1f4d92409f7a771f6097f592a70c23676aaa51a086c1 not found: ID does not exist" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.463667 4778 scope.go:117] "RemoveContainer" containerID="7f745be7f683445e4da01145bbe31250962c059abf58eb460e2f1333fb155a79" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.464060 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f745be7f683445e4da01145bbe31250962c059abf58eb460e2f1333fb155a79"} err="failed to get container status \"7f745be7f683445e4da01145bbe31250962c059abf58eb460e2f1333fb155a79\": rpc error: code = NotFound desc = could not find container \"7f745be7f683445e4da01145bbe31250962c059abf58eb460e2f1333fb155a79\": container with ID starting with 7f745be7f683445e4da01145bbe31250962c059abf58eb460e2f1333fb155a79 not found: ID does not exist" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.464082 4778 scope.go:117] "RemoveContainer" containerID="a5fd58e9033319df013906b8f30fe6475ac6a783b70a23d68bd819715773d59a" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.464398 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5fd58e9033319df013906b8f30fe6475ac6a783b70a23d68bd819715773d59a"} err="failed to get container status \"a5fd58e9033319df013906b8f30fe6475ac6a783b70a23d68bd819715773d59a\": rpc error: code = NotFound desc = could not find container \"a5fd58e9033319df013906b8f30fe6475ac6a783b70a23d68bd819715773d59a\": container 
with ID starting with a5fd58e9033319df013906b8f30fe6475ac6a783b70a23d68bd819715773d59a not found: ID does not exist" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.464424 4778 scope.go:117] "RemoveContainer" containerID="1a8e0d7e1f3aab5c5624ca9847d187e08f63f5617d5b4835b74a5019bd409028" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.464670 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a8e0d7e1f3aab5c5624ca9847d187e08f63f5617d5b4835b74a5019bd409028"} err="failed to get container status \"1a8e0d7e1f3aab5c5624ca9847d187e08f63f5617d5b4835b74a5019bd409028\": rpc error: code = NotFound desc = could not find container \"1a8e0d7e1f3aab5c5624ca9847d187e08f63f5617d5b4835b74a5019bd409028\": container with ID starting with 1a8e0d7e1f3aab5c5624ca9847d187e08f63f5617d5b4835b74a5019bd409028 not found: ID does not exist" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.464695 4778 scope.go:117] "RemoveContainer" containerID="9c68d01c3251c2dcefdd1f4d92409f7a771f6097f592a70c23676aaa51a086c1" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.464963 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c68d01c3251c2dcefdd1f4d92409f7a771f6097f592a70c23676aaa51a086c1"} err="failed to get container status \"9c68d01c3251c2dcefdd1f4d92409f7a771f6097f592a70c23676aaa51a086c1\": rpc error: code = NotFound desc = could not find container \"9c68d01c3251c2dcefdd1f4d92409f7a771f6097f592a70c23676aaa51a086c1\": container with ID starting with 9c68d01c3251c2dcefdd1f4d92409f7a771f6097f592a70c23676aaa51a086c1 not found: ID does not exist" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.464988 4778 scope.go:117] "RemoveContainer" containerID="7f745be7f683445e4da01145bbe31250962c059abf58eb460e2f1333fb155a79" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.465294 4778 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7f745be7f683445e4da01145bbe31250962c059abf58eb460e2f1333fb155a79"} err="failed to get container status \"7f745be7f683445e4da01145bbe31250962c059abf58eb460e2f1333fb155a79\": rpc error: code = NotFound desc = could not find container \"7f745be7f683445e4da01145bbe31250962c059abf58eb460e2f1333fb155a79\": container with ID starting with 7f745be7f683445e4da01145bbe31250962c059abf58eb460e2f1333fb155a79 not found: ID does not exist" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.465316 4778 scope.go:117] "RemoveContainer" containerID="a5fd58e9033319df013906b8f30fe6475ac6a783b70a23d68bd819715773d59a" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.465553 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5fd58e9033319df013906b8f30fe6475ac6a783b70a23d68bd819715773d59a"} err="failed to get container status \"a5fd58e9033319df013906b8f30fe6475ac6a783b70a23d68bd819715773d59a\": rpc error: code = NotFound desc = could not find container \"a5fd58e9033319df013906b8f30fe6475ac6a783b70a23d68bd819715773d59a\": container with ID starting with a5fd58e9033319df013906b8f30fe6475ac6a783b70a23d68bd819715773d59a not found: ID does not exist" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.465574 4778 scope.go:117] "RemoveContainer" containerID="1a8e0d7e1f3aab5c5624ca9847d187e08f63f5617d5b4835b74a5019bd409028" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.465815 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a8e0d7e1f3aab5c5624ca9847d187e08f63f5617d5b4835b74a5019bd409028"} err="failed to get container status \"1a8e0d7e1f3aab5c5624ca9847d187e08f63f5617d5b4835b74a5019bd409028\": rpc error: code = NotFound desc = could not find container \"1a8e0d7e1f3aab5c5624ca9847d187e08f63f5617d5b4835b74a5019bd409028\": container with ID starting with 1a8e0d7e1f3aab5c5624ca9847d187e08f63f5617d5b4835b74a5019bd409028 not found: ID does not 
exist" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.465837 4778 scope.go:117] "RemoveContainer" containerID="9c68d01c3251c2dcefdd1f4d92409f7a771f6097f592a70c23676aaa51a086c1" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.466090 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c68d01c3251c2dcefdd1f4d92409f7a771f6097f592a70c23676aaa51a086c1"} err="failed to get container status \"9c68d01c3251c2dcefdd1f4d92409f7a771f6097f592a70c23676aaa51a086c1\": rpc error: code = NotFound desc = could not find container \"9c68d01c3251c2dcefdd1f4d92409f7a771f6097f592a70c23676aaa51a086c1\": container with ID starting with 9c68d01c3251c2dcefdd1f4d92409f7a771f6097f592a70c23676aaa51a086c1 not found: ID does not exist" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.466113 4778 scope.go:117] "RemoveContainer" containerID="7f745be7f683445e4da01145bbe31250962c059abf58eb460e2f1333fb155a79" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.466383 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f745be7f683445e4da01145bbe31250962c059abf58eb460e2f1333fb155a79"} err="failed to get container status \"7f745be7f683445e4da01145bbe31250962c059abf58eb460e2f1333fb155a79\": rpc error: code = NotFound desc = could not find container \"7f745be7f683445e4da01145bbe31250962c059abf58eb460e2f1333fb155a79\": container with ID starting with 7f745be7f683445e4da01145bbe31250962c059abf58eb460e2f1333fb155a79 not found: ID does not exist" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.466406 4778 scope.go:117] "RemoveContainer" containerID="a5fd58e9033319df013906b8f30fe6475ac6a783b70a23d68bd819715773d59a" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.466698 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5fd58e9033319df013906b8f30fe6475ac6a783b70a23d68bd819715773d59a"} err="failed to get container status 
\"a5fd58e9033319df013906b8f30fe6475ac6a783b70a23d68bd819715773d59a\": rpc error: code = NotFound desc = could not find container \"a5fd58e9033319df013906b8f30fe6475ac6a783b70a23d68bd819715773d59a\": container with ID starting with a5fd58e9033319df013906b8f30fe6475ac6a783b70a23d68bd819715773d59a not found: ID does not exist" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.524625 4778 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76973a17-7173-486f-af83-14c0378fa581-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.676486 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.696423 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.729629 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 12 13:33:51 crc kubenswrapper[4778]: E0312 13:33:51.730040 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76973a17-7173-486f-af83-14c0378fa581" containerName="ceilometer-central-agent" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.730062 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="76973a17-7173-486f-af83-14c0378fa581" containerName="ceilometer-central-agent" Mar 12 13:33:51 crc kubenswrapper[4778]: E0312 13:33:51.730089 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76973a17-7173-486f-af83-14c0378fa581" containerName="ceilometer-notification-agent" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.730099 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="76973a17-7173-486f-af83-14c0378fa581" containerName="ceilometer-notification-agent" Mar 12 13:33:51 crc kubenswrapper[4778]: E0312 13:33:51.730115 4778 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="76973a17-7173-486f-af83-14c0378fa581" containerName="proxy-httpd" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.730124 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="76973a17-7173-486f-af83-14c0378fa581" containerName="proxy-httpd" Mar 12 13:33:51 crc kubenswrapper[4778]: E0312 13:33:51.730155 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76973a17-7173-486f-af83-14c0378fa581" containerName="sg-core" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.730163 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="76973a17-7173-486f-af83-14c0378fa581" containerName="sg-core" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.730394 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="76973a17-7173-486f-af83-14c0378fa581" containerName="sg-core" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.730411 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="76973a17-7173-486f-af83-14c0378fa581" containerName="ceilometer-central-agent" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.730431 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="76973a17-7173-486f-af83-14c0378fa581" containerName="proxy-httpd" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.730445 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="76973a17-7173-486f-af83-14c0378fa581" containerName="ceilometer-notification-agent" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.732372 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.746195 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.746643 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.746963 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.830248 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bf8c182-c9d5-4011-b28c-c4f557a8071c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0bf8c182-c9d5-4011-b28c-c4f557a8071c\") " pod="openstack/ceilometer-0" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.830308 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0bf8c182-c9d5-4011-b28c-c4f557a8071c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0bf8c182-c9d5-4011-b28c-c4f557a8071c\") " pod="openstack/ceilometer-0" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.830503 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0bf8c182-c9d5-4011-b28c-c4f557a8071c-run-httpd\") pod \"ceilometer-0\" (UID: \"0bf8c182-c9d5-4011-b28c-c4f557a8071c\") " pod="openstack/ceilometer-0" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.830564 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bf8c182-c9d5-4011-b28c-c4f557a8071c-scripts\") pod \"ceilometer-0\" (UID: \"0bf8c182-c9d5-4011-b28c-c4f557a8071c\") " 
pod="openstack/ceilometer-0" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.830762 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bf8c182-c9d5-4011-b28c-c4f557a8071c-config-data\") pod \"ceilometer-0\" (UID: \"0bf8c182-c9d5-4011-b28c-c4f557a8071c\") " pod="openstack/ceilometer-0" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.830878 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0bf8c182-c9d5-4011-b28c-c4f557a8071c-log-httpd\") pod \"ceilometer-0\" (UID: \"0bf8c182-c9d5-4011-b28c-c4f557a8071c\") " pod="openstack/ceilometer-0" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.830978 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrpfb\" (UniqueName: \"kubernetes.io/projected/0bf8c182-c9d5-4011-b28c-c4f557a8071c-kube-api-access-nrpfb\") pod \"ceilometer-0\" (UID: \"0bf8c182-c9d5-4011-b28c-c4f557a8071c\") " pod="openstack/ceilometer-0" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.933403 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrpfb\" (UniqueName: \"kubernetes.io/projected/0bf8c182-c9d5-4011-b28c-c4f557a8071c-kube-api-access-nrpfb\") pod \"ceilometer-0\" (UID: \"0bf8c182-c9d5-4011-b28c-c4f557a8071c\") " pod="openstack/ceilometer-0" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.933841 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bf8c182-c9d5-4011-b28c-c4f557a8071c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0bf8c182-c9d5-4011-b28c-c4f557a8071c\") " pod="openstack/ceilometer-0" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.933877 4778 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0bf8c182-c9d5-4011-b28c-c4f557a8071c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0bf8c182-c9d5-4011-b28c-c4f557a8071c\") " pod="openstack/ceilometer-0" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.933937 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0bf8c182-c9d5-4011-b28c-c4f557a8071c-run-httpd\") pod \"ceilometer-0\" (UID: \"0bf8c182-c9d5-4011-b28c-c4f557a8071c\") " pod="openstack/ceilometer-0" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.933959 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bf8c182-c9d5-4011-b28c-c4f557a8071c-scripts\") pod \"ceilometer-0\" (UID: \"0bf8c182-c9d5-4011-b28c-c4f557a8071c\") " pod="openstack/ceilometer-0" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.934015 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bf8c182-c9d5-4011-b28c-c4f557a8071c-config-data\") pod \"ceilometer-0\" (UID: \"0bf8c182-c9d5-4011-b28c-c4f557a8071c\") " pod="openstack/ceilometer-0" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.934055 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0bf8c182-c9d5-4011-b28c-c4f557a8071c-log-httpd\") pod \"ceilometer-0\" (UID: \"0bf8c182-c9d5-4011-b28c-c4f557a8071c\") " pod="openstack/ceilometer-0" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.934636 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0bf8c182-c9d5-4011-b28c-c4f557a8071c-log-httpd\") pod \"ceilometer-0\" (UID: \"0bf8c182-c9d5-4011-b28c-c4f557a8071c\") " pod="openstack/ceilometer-0" Mar 12 
13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.934898 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0bf8c182-c9d5-4011-b28c-c4f557a8071c-run-httpd\") pod \"ceilometer-0\" (UID: \"0bf8c182-c9d5-4011-b28c-c4f557a8071c\") " pod="openstack/ceilometer-0" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.938042 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bf8c182-c9d5-4011-b28c-c4f557a8071c-scripts\") pod \"ceilometer-0\" (UID: \"0bf8c182-c9d5-4011-b28c-c4f557a8071c\") " pod="openstack/ceilometer-0" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.938740 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bf8c182-c9d5-4011-b28c-c4f557a8071c-config-data\") pod \"ceilometer-0\" (UID: \"0bf8c182-c9d5-4011-b28c-c4f557a8071c\") " pod="openstack/ceilometer-0" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.939988 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0bf8c182-c9d5-4011-b28c-c4f557a8071c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0bf8c182-c9d5-4011-b28c-c4f557a8071c\") " pod="openstack/ceilometer-0" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.941876 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bf8c182-c9d5-4011-b28c-c4f557a8071c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0bf8c182-c9d5-4011-b28c-c4f557a8071c\") " pod="openstack/ceilometer-0" Mar 12 13:33:51 crc kubenswrapper[4778]: I0312 13:33:51.951415 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrpfb\" (UniqueName: \"kubernetes.io/projected/0bf8c182-c9d5-4011-b28c-c4f557a8071c-kube-api-access-nrpfb\") pod \"ceilometer-0\" (UID: 
\"0bf8c182-c9d5-4011-b28c-c4f557a8071c\") " pod="openstack/ceilometer-0" Mar 12 13:33:52 crc kubenswrapper[4778]: I0312 13:33:52.088099 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 13:33:52 crc kubenswrapper[4778]: I0312 13:33:52.272297 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76973a17-7173-486f-af83-14c0378fa581" path="/var/lib/kubelet/pods/76973a17-7173-486f-af83-14c0378fa581/volumes" Mar 12 13:33:52 crc kubenswrapper[4778]: I0312 13:33:52.604304 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 13:33:53 crc kubenswrapper[4778]: I0312 13:33:53.067899 4778 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="c2b13038-d271-48f5-bd28-a38e2b9dff02" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.153:9292/healthcheck\": read tcp 10.217.0.2:38626->10.217.0.153:9292: read: connection reset by peer" Mar 12 13:33:53 crc kubenswrapper[4778]: I0312 13:33:53.067953 4778 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="c2b13038-d271-48f5-bd28-a38e2b9dff02" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.153:9292/healthcheck\": read tcp 10.217.0.2:38636->10.217.0.153:9292: read: connection reset by peer" Mar 12 13:33:53 crc kubenswrapper[4778]: I0312 13:33:53.362170 4778 generic.go:334] "Generic (PLEG): container finished" podID="c2b13038-d271-48f5-bd28-a38e2b9dff02" containerID="d321738b43c55df790b0a01418c177d18aaa7772e4cf7fca03bdeedb1c32e127" exitCode=0 Mar 12 13:33:53 crc kubenswrapper[4778]: I0312 13:33:53.362490 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c2b13038-d271-48f5-bd28-a38e2b9dff02","Type":"ContainerDied","Data":"d321738b43c55df790b0a01418c177d18aaa7772e4cf7fca03bdeedb1c32e127"} 
Mar 12 13:33:53 crc kubenswrapper[4778]: I0312 13:33:53.363567 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0bf8c182-c9d5-4011-b28c-c4f557a8071c","Type":"ContainerStarted","Data":"0f3aa121caf2c1a6a7f5f32c4c791af4c518cf20357d26f2062f2e017c408468"} Mar 12 13:33:53 crc kubenswrapper[4778]: I0312 13:33:53.477947 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-769c65dfd5-t7d9g" Mar 12 13:33:53 crc kubenswrapper[4778]: I0312 13:33:53.557679 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-678c76989b-8x56d"] Mar 12 13:33:53 crc kubenswrapper[4778]: I0312 13:33:53.558046 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-678c76989b-8x56d" podUID="e34be903-da25-4cdb-9298-2d53fdce0276" containerName="neutron-httpd" containerID="cri-o://7423051fcfb7c12e56b049e90be94c641f82520ceab5181c7fcca6713588c77f" gracePeriod=30 Mar 12 13:33:53 crc kubenswrapper[4778]: I0312 13:33:53.558277 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-678c76989b-8x56d" podUID="e34be903-da25-4cdb-9298-2d53fdce0276" containerName="neutron-api" containerID="cri-o://76d710be6da7b239e82f6228977b9799ccd95f2824b23913a0585897e926dd74" gracePeriod=30 Mar 12 13:33:53 crc kubenswrapper[4778]: I0312 13:33:53.659535 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 12 13:33:53 crc kubenswrapper[4778]: I0312 13:33:53.781678 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2b13038-d271-48f5-bd28-a38e2b9dff02-logs\") pod \"c2b13038-d271-48f5-bd28-a38e2b9dff02\" (UID: \"c2b13038-d271-48f5-bd28-a38e2b9dff02\") " Mar 12 13:33:53 crc kubenswrapper[4778]: I0312 13:33:53.781726 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c2b13038-d271-48f5-bd28-a38e2b9dff02-httpd-run\") pod \"c2b13038-d271-48f5-bd28-a38e2b9dff02\" (UID: \"c2b13038-d271-48f5-bd28-a38e2b9dff02\") " Mar 12 13:33:53 crc kubenswrapper[4778]: I0312 13:33:53.781779 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2b13038-d271-48f5-bd28-a38e2b9dff02-combined-ca-bundle\") pod \"c2b13038-d271-48f5-bd28-a38e2b9dff02\" (UID: \"c2b13038-d271-48f5-bd28-a38e2b9dff02\") " Mar 12 13:33:53 crc kubenswrapper[4778]: I0312 13:33:53.781805 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfwzz\" (UniqueName: \"kubernetes.io/projected/c2b13038-d271-48f5-bd28-a38e2b9dff02-kube-api-access-xfwzz\") pod \"c2b13038-d271-48f5-bd28-a38e2b9dff02\" (UID: \"c2b13038-d271-48f5-bd28-a38e2b9dff02\") " Mar 12 13:33:53 crc kubenswrapper[4778]: I0312 13:33:53.781906 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2b13038-d271-48f5-bd28-a38e2b9dff02-scripts\") pod \"c2b13038-d271-48f5-bd28-a38e2b9dff02\" (UID: \"c2b13038-d271-48f5-bd28-a38e2b9dff02\") " Mar 12 13:33:53 crc kubenswrapper[4778]: I0312 13:33:53.781966 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c2b13038-d271-48f5-bd28-a38e2b9dff02-config-data\") pod \"c2b13038-d271-48f5-bd28-a38e2b9dff02\" (UID: \"c2b13038-d271-48f5-bd28-a38e2b9dff02\") " Mar 12 13:33:53 crc kubenswrapper[4778]: I0312 13:33:53.781991 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2b13038-d271-48f5-bd28-a38e2b9dff02-internal-tls-certs\") pod \"c2b13038-d271-48f5-bd28-a38e2b9dff02\" (UID: \"c2b13038-d271-48f5-bd28-a38e2b9dff02\") " Mar 12 13:33:53 crc kubenswrapper[4778]: I0312 13:33:53.782020 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"c2b13038-d271-48f5-bd28-a38e2b9dff02\" (UID: \"c2b13038-d271-48f5-bd28-a38e2b9dff02\") " Mar 12 13:33:53 crc kubenswrapper[4778]: I0312 13:33:53.783503 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2b13038-d271-48f5-bd28-a38e2b9dff02-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c2b13038-d271-48f5-bd28-a38e2b9dff02" (UID: "c2b13038-d271-48f5-bd28-a38e2b9dff02"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:33:53 crc kubenswrapper[4778]: I0312 13:33:53.783930 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2b13038-d271-48f5-bd28-a38e2b9dff02-logs" (OuterVolumeSpecName: "logs") pod "c2b13038-d271-48f5-bd28-a38e2b9dff02" (UID: "c2b13038-d271-48f5-bd28-a38e2b9dff02"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:33:53 crc kubenswrapper[4778]: I0312 13:33:53.789872 4778 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2b13038-d271-48f5-bd28-a38e2b9dff02-logs\") on node \"crc\" DevicePath \"\"" Mar 12 13:33:53 crc kubenswrapper[4778]: I0312 13:33:53.790107 4778 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c2b13038-d271-48f5-bd28-a38e2b9dff02-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 12 13:33:53 crc kubenswrapper[4778]: I0312 13:33:53.795040 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2b13038-d271-48f5-bd28-a38e2b9dff02-kube-api-access-xfwzz" (OuterVolumeSpecName: "kube-api-access-xfwzz") pod "c2b13038-d271-48f5-bd28-a38e2b9dff02" (UID: "c2b13038-d271-48f5-bd28-a38e2b9dff02"). InnerVolumeSpecName "kube-api-access-xfwzz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:33:53 crc kubenswrapper[4778]: I0312 13:33:53.795148 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "c2b13038-d271-48f5-bd28-a38e2b9dff02" (UID: "c2b13038-d271-48f5-bd28-a38e2b9dff02"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 12 13:33:53 crc kubenswrapper[4778]: I0312 13:33:53.815347 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2b13038-d271-48f5-bd28-a38e2b9dff02-scripts" (OuterVolumeSpecName: "scripts") pod "c2b13038-d271-48f5-bd28-a38e2b9dff02" (UID: "c2b13038-d271-48f5-bd28-a38e2b9dff02"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 13:33:53 crc kubenswrapper[4778]: I0312 13:33:53.822946 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2b13038-d271-48f5-bd28-a38e2b9dff02-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c2b13038-d271-48f5-bd28-a38e2b9dff02" (UID: "c2b13038-d271-48f5-bd28-a38e2b9dff02"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 13:33:53 crc kubenswrapper[4778]: I0312 13:33:53.864030 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2b13038-d271-48f5-bd28-a38e2b9dff02-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c2b13038-d271-48f5-bd28-a38e2b9dff02" (UID: "c2b13038-d271-48f5-bd28-a38e2b9dff02"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 13:33:53 crc kubenswrapper[4778]: I0312 13:33:53.892804 4778 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2b13038-d271-48f5-bd28-a38e2b9dff02-scripts\") on node \"crc\" DevicePath \"\""
Mar 12 13:33:53 crc kubenswrapper[4778]: I0312 13:33:53.892835 4778 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2b13038-d271-48f5-bd28-a38e2b9dff02-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 12 13:33:53 crc kubenswrapper[4778]: I0312 13:33:53.892865 4778 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" "
Mar 12 13:33:53 crc kubenswrapper[4778]: I0312 13:33:53.892875 4778 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2b13038-d271-48f5-bd28-a38e2b9dff02-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 12 13:33:53 crc kubenswrapper[4778]: I0312 13:33:53.892885 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfwzz\" (UniqueName: \"kubernetes.io/projected/c2b13038-d271-48f5-bd28-a38e2b9dff02-kube-api-access-xfwzz\") on node \"crc\" DevicePath \"\""
Mar 12 13:33:53 crc kubenswrapper[4778]: I0312 13:33:53.904025 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2b13038-d271-48f5-bd28-a38e2b9dff02-config-data" (OuterVolumeSpecName: "config-data") pod "c2b13038-d271-48f5-bd28-a38e2b9dff02" (UID: "c2b13038-d271-48f5-bd28-a38e2b9dff02"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 13:33:53 crc kubenswrapper[4778]: I0312 13:33:53.916340 4778 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc"
Mar 12 13:33:53 crc kubenswrapper[4778]: I0312 13:33:53.994698 4778 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2b13038-d271-48f5-bd28-a38e2b9dff02-config-data\") on node \"crc\" DevicePath \"\""
Mar 12 13:33:53 crc kubenswrapper[4778]: I0312 13:33:53.994729 4778 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\""
Mar 12 13:33:54 crc kubenswrapper[4778]: I0312 13:33:54.375917 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c2b13038-d271-48f5-bd28-a38e2b9dff02","Type":"ContainerDied","Data":"15eaae81b5ec94e32bcb75db667617fbe51c32c5f0cac153a8a191ff89576b97"}
Mar 12 13:33:54 crc kubenswrapper[4778]: I0312 13:33:54.375962 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Mar 12 13:33:54 crc kubenswrapper[4778]: I0312 13:33:54.377283 4778 scope.go:117] "RemoveContainer" containerID="d321738b43c55df790b0a01418c177d18aaa7772e4cf7fca03bdeedb1c32e127"
Mar 12 13:33:54 crc kubenswrapper[4778]: I0312 13:33:54.378525 4778 generic.go:334] "Generic (PLEG): container finished" podID="e34be903-da25-4cdb-9298-2d53fdce0276" containerID="7423051fcfb7c12e56b049e90be94c641f82520ceab5181c7fcca6713588c77f" exitCode=0
Mar 12 13:33:54 crc kubenswrapper[4778]: I0312 13:33:54.378629 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-678c76989b-8x56d" event={"ID":"e34be903-da25-4cdb-9298-2d53fdce0276","Type":"ContainerDied","Data":"7423051fcfb7c12e56b049e90be94c641f82520ceab5181c7fcca6713588c77f"}
Mar 12 13:33:54 crc kubenswrapper[4778]: I0312 13:33:54.382080 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0bf8c182-c9d5-4011-b28c-c4f557a8071c","Type":"ContainerStarted","Data":"8581110fc2e8206867368b8c4ae7af28cb79d5341dbf6b92ea91def7d2e28eb6"}
Mar 12 13:33:54 crc kubenswrapper[4778]: I0312 13:33:54.382124 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0bf8c182-c9d5-4011-b28c-c4f557a8071c","Type":"ContainerStarted","Data":"e925c9c4c7aa08744211c517c124058aade623d45fd2e02df2777b4f2df794b2"}
Mar 12 13:33:54 crc kubenswrapper[4778]: I0312 13:33:54.406122 4778 scope.go:117] "RemoveContainer" containerID="cad2d2b9a9ac73ae35a814e1cadf9d57066e520b238036be878f7dfdb34aabb4"
Mar 12 13:33:54 crc kubenswrapper[4778]: I0312 13:33:54.418964 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Mar 12 13:33:54 crc kubenswrapper[4778]: I0312 13:33:54.438434 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Mar 12 13:33:54 crc kubenswrapper[4778]: I0312 13:33:54.452510 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Mar 12 13:33:54 crc kubenswrapper[4778]: E0312 13:33:54.453069 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2b13038-d271-48f5-bd28-a38e2b9dff02" containerName="glance-log"
Mar 12 13:33:54 crc kubenswrapper[4778]: I0312 13:33:54.453138 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2b13038-d271-48f5-bd28-a38e2b9dff02" containerName="glance-log"
Mar 12 13:33:54 crc kubenswrapper[4778]: E0312 13:33:54.453269 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2b13038-d271-48f5-bd28-a38e2b9dff02" containerName="glance-httpd"
Mar 12 13:33:54 crc kubenswrapper[4778]: I0312 13:33:54.453324 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2b13038-d271-48f5-bd28-a38e2b9dff02" containerName="glance-httpd"
Mar 12 13:33:54 crc kubenswrapper[4778]: I0312 13:33:54.453543 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2b13038-d271-48f5-bd28-a38e2b9dff02" containerName="glance-log"
Mar 12 13:33:54 crc kubenswrapper[4778]: I0312 13:33:54.453618 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2b13038-d271-48f5-bd28-a38e2b9dff02" containerName="glance-httpd"
Mar 12 13:33:54 crc kubenswrapper[4778]: I0312 13:33:54.454574 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Mar 12 13:33:54 crc kubenswrapper[4778]: I0312 13:33:54.457396 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Mar 12 13:33:54 crc kubenswrapper[4778]: I0312 13:33:54.457967 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Mar 12 13:33:54 crc kubenswrapper[4778]: I0312 13:33:54.479556 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Mar 12 13:33:54 crc kubenswrapper[4778]: I0312 13:33:54.605364 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"7fa757af-1c91-4b93-8916-5bbd99b8522e\") " pod="openstack/glance-default-internal-api-0"
Mar 12 13:33:54 crc kubenswrapper[4778]: I0312 13:33:54.605676 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fa757af-1c91-4b93-8916-5bbd99b8522e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7fa757af-1c91-4b93-8916-5bbd99b8522e\") " pod="openstack/glance-default-internal-api-0"
Mar 12 13:33:54 crc kubenswrapper[4778]: I0312 13:33:54.605811 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fa757af-1c91-4b93-8916-5bbd99b8522e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7fa757af-1c91-4b93-8916-5bbd99b8522e\") " pod="openstack/glance-default-internal-api-0"
Mar 12 13:33:54 crc kubenswrapper[4778]: I0312 13:33:54.605929 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fa757af-1c91-4b93-8916-5bbd99b8522e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7fa757af-1c91-4b93-8916-5bbd99b8522e\") " pod="openstack/glance-default-internal-api-0"
Mar 12 13:33:54 crc kubenswrapper[4778]: I0312 13:33:54.606026 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7fa757af-1c91-4b93-8916-5bbd99b8522e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7fa757af-1c91-4b93-8916-5bbd99b8522e\") " pod="openstack/glance-default-internal-api-0"
Mar 12 13:33:54 crc kubenswrapper[4778]: I0312 13:33:54.606129 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48c68\" (UniqueName: \"kubernetes.io/projected/7fa757af-1c91-4b93-8916-5bbd99b8522e-kube-api-access-48c68\") pod \"glance-default-internal-api-0\" (UID: \"7fa757af-1c91-4b93-8916-5bbd99b8522e\") " pod="openstack/glance-default-internal-api-0"
Mar 12 13:33:54 crc kubenswrapper[4778]: I0312 13:33:54.606270 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fa757af-1c91-4b93-8916-5bbd99b8522e-logs\") pod \"glance-default-internal-api-0\" (UID: \"7fa757af-1c91-4b93-8916-5bbd99b8522e\") " pod="openstack/glance-default-internal-api-0"
Mar 12 13:33:54 crc kubenswrapper[4778]: I0312 13:33:54.606430 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7fa757af-1c91-4b93-8916-5bbd99b8522e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7fa757af-1c91-4b93-8916-5bbd99b8522e\") " pod="openstack/glance-default-internal-api-0"
Mar 12 13:33:54 crc kubenswrapper[4778]: I0312 13:33:54.709379 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"7fa757af-1c91-4b93-8916-5bbd99b8522e\") " pod="openstack/glance-default-internal-api-0"
Mar 12 13:33:54 crc kubenswrapper[4778]: I0312 13:33:54.709482 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fa757af-1c91-4b93-8916-5bbd99b8522e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7fa757af-1c91-4b93-8916-5bbd99b8522e\") " pod="openstack/glance-default-internal-api-0"
Mar 12 13:33:54 crc kubenswrapper[4778]: I0312 13:33:54.709539 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fa757af-1c91-4b93-8916-5bbd99b8522e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7fa757af-1c91-4b93-8916-5bbd99b8522e\") " pod="openstack/glance-default-internal-api-0"
Mar 12 13:33:54 crc kubenswrapper[4778]: I0312 13:33:54.709588 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fa757af-1c91-4b93-8916-5bbd99b8522e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7fa757af-1c91-4b93-8916-5bbd99b8522e\") " pod="openstack/glance-default-internal-api-0"
Mar 12 13:33:54 crc kubenswrapper[4778]: I0312 13:33:54.709613 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7fa757af-1c91-4b93-8916-5bbd99b8522e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7fa757af-1c91-4b93-8916-5bbd99b8522e\") " pod="openstack/glance-default-internal-api-0"
Mar 12 13:33:54 crc kubenswrapper[4778]: I0312 13:33:54.709655 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48c68\" (UniqueName: \"kubernetes.io/projected/7fa757af-1c91-4b93-8916-5bbd99b8522e-kube-api-access-48c68\") pod \"glance-default-internal-api-0\" (UID: \"7fa757af-1c91-4b93-8916-5bbd99b8522e\") " pod="openstack/glance-default-internal-api-0"
Mar 12 13:33:54 crc kubenswrapper[4778]: I0312 13:33:54.709719 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fa757af-1c91-4b93-8916-5bbd99b8522e-logs\") pod \"glance-default-internal-api-0\" (UID: \"7fa757af-1c91-4b93-8916-5bbd99b8522e\") " pod="openstack/glance-default-internal-api-0"
Mar 12 13:33:54 crc kubenswrapper[4778]: I0312 13:33:54.709735 4778 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"7fa757af-1c91-4b93-8916-5bbd99b8522e\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0"
Mar 12 13:33:54 crc kubenswrapper[4778]: I0312 13:33:54.710113 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7fa757af-1c91-4b93-8916-5bbd99b8522e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7fa757af-1c91-4b93-8916-5bbd99b8522e\") " pod="openstack/glance-default-internal-api-0"
Mar 12 13:33:54 crc kubenswrapper[4778]: I0312 13:33:54.710438 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fa757af-1c91-4b93-8916-5bbd99b8522e-logs\") pod \"glance-default-internal-api-0\" (UID: \"7fa757af-1c91-4b93-8916-5bbd99b8522e\") " pod="openstack/glance-default-internal-api-0"
Mar 12 13:33:54 crc kubenswrapper[4778]: I0312 13:33:54.710553 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7fa757af-1c91-4b93-8916-5bbd99b8522e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7fa757af-1c91-4b93-8916-5bbd99b8522e\") " pod="openstack/glance-default-internal-api-0"
Mar 12 13:33:54 crc kubenswrapper[4778]: I0312 13:33:54.717751 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fa757af-1c91-4b93-8916-5bbd99b8522e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7fa757af-1c91-4b93-8916-5bbd99b8522e\") " pod="openstack/glance-default-internal-api-0"
Mar 12 13:33:54 crc kubenswrapper[4778]: I0312 13:33:54.718904 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7fa757af-1c91-4b93-8916-5bbd99b8522e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7fa757af-1c91-4b93-8916-5bbd99b8522e\") " pod="openstack/glance-default-internal-api-0"
Mar 12 13:33:54 crc kubenswrapper[4778]: I0312 13:33:54.720214 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fa757af-1c91-4b93-8916-5bbd99b8522e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7fa757af-1c91-4b93-8916-5bbd99b8522e\") " pod="openstack/glance-default-internal-api-0"
Mar 12 13:33:54 crc kubenswrapper[4778]: I0312 13:33:54.720432 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fa757af-1c91-4b93-8916-5bbd99b8522e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7fa757af-1c91-4b93-8916-5bbd99b8522e\") " pod="openstack/glance-default-internal-api-0"
Mar 12 13:33:54 crc kubenswrapper[4778]: I0312 13:33:54.752938 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48c68\" (UniqueName: \"kubernetes.io/projected/7fa757af-1c91-4b93-8916-5bbd99b8522e-kube-api-access-48c68\") pod \"glance-default-internal-api-0\" (UID: \"7fa757af-1c91-4b93-8916-5bbd99b8522e\") " pod="openstack/glance-default-internal-api-0"
Mar 12 13:33:54 crc kubenswrapper[4778]: I0312 13:33:54.757098 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"7fa757af-1c91-4b93-8916-5bbd99b8522e\") " pod="openstack/glance-default-internal-api-0"
Mar 12 13:33:54 crc kubenswrapper[4778]: I0312 13:33:54.772085 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Mar 12 13:33:55 crc kubenswrapper[4778]: I0312 13:33:55.392996 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0bf8c182-c9d5-4011-b28c-c4f557a8071c","Type":"ContainerStarted","Data":"86ce4b4705307dac2aa00c6fc4314d927b33960ed30dc0799f79715a9adfcdf9"}
Mar 12 13:33:55 crc kubenswrapper[4778]: I0312 13:33:55.471330 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Mar 12 13:33:55 crc kubenswrapper[4778]: I0312 13:33:55.975982 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-d4d765698-l7bjx"
Mar 12 13:33:56 crc kubenswrapper[4778]: I0312 13:33:56.268274 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2b13038-d271-48f5-bd28-a38e2b9dff02" path="/var/lib/kubelet/pods/c2b13038-d271-48f5-bd28-a38e2b9dff02/volumes"
Mar 12 13:33:56 crc kubenswrapper[4778]: I0312 13:33:56.334154 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-d4d765698-l7bjx"
Mar 12 13:33:56 crc kubenswrapper[4778]: I0312 13:33:56.421034 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7fa757af-1c91-4b93-8916-5bbd99b8522e","Type":"ContainerStarted","Data":"423ab013fd652a6661ad67d357330604f44bea50f0c07b9d7091f54614bcf3b5"}
Mar 12 13:33:56 crc kubenswrapper[4778]: I0312 13:33:56.421100 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7fa757af-1c91-4b93-8916-5bbd99b8522e","Type":"ContainerStarted","Data":"3e099ffe3a3d1c8b6878dca7cbc40faffead2664c2494336300caa2723a6629b"}
Mar 12 13:33:56 crc kubenswrapper[4778]: I0312 13:33:56.484403 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-sckbb"]
Mar 12 13:33:56 crc kubenswrapper[4778]: I0312 13:33:56.486932 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-sckbb"
Mar 12 13:33:56 crc kubenswrapper[4778]: I0312 13:33:56.549423 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-79ccdbbbbd-gl27l"]
Mar 12 13:33:56 crc kubenswrapper[4778]: I0312 13:33:56.565796 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-79ccdbbbbd-gl27l" podUID="68092e68-04e5-4530-8d94-859789faeb94" containerName="placement-log" containerID="cri-o://be846a255557e511860dc7bc1b884d65bc6e48bfb1b98ae1316cb74617623c2b" gracePeriod=30
Mar 12 13:33:56 crc kubenswrapper[4778]: I0312 13:33:56.566106 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-79ccdbbbbd-gl27l" podUID="68092e68-04e5-4530-8d94-859789faeb94" containerName="placement-api" containerID="cri-o://8cdda802eadd8c68b3ba4b5b69b6a0fd021902af043f1083daaae42e4e3ba4bc" gracePeriod=30
Mar 12 13:33:56 crc kubenswrapper[4778]: I0312 13:33:56.609843 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-sckbb"]
Mar 12 13:33:56 crc kubenswrapper[4778]: I0312 13:33:56.657904 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln84f\" (UniqueName: \"kubernetes.io/projected/2d11f6c3-3911-4a29-a65d-ef1f570d9b02-kube-api-access-ln84f\") pod \"nova-api-db-create-sckbb\" (UID: \"2d11f6c3-3911-4a29-a65d-ef1f570d9b02\") " pod="openstack/nova-api-db-create-sckbb"
Mar 12 13:33:56 crc kubenswrapper[4778]: I0312 13:33:56.657975 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d11f6c3-3911-4a29-a65d-ef1f570d9b02-operator-scripts\") pod \"nova-api-db-create-sckbb\" (UID: \"2d11f6c3-3911-4a29-a65d-ef1f570d9b02\") " pod="openstack/nova-api-db-create-sckbb"
Mar 12 13:33:56 crc kubenswrapper[4778]: I0312 13:33:56.704993 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-dcf9-account-create-update-2rmjd"]
Mar 12 13:33:56 crc kubenswrapper[4778]: I0312 13:33:56.706278 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-dcf9-account-create-update-2rmjd"
Mar 12 13:33:56 crc kubenswrapper[4778]: I0312 13:33:56.718751 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Mar 12 13:33:56 crc kubenswrapper[4778]: I0312 13:33:56.736132 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-dcf9-account-create-update-2rmjd"]
Mar 12 13:33:56 crc kubenswrapper[4778]: I0312 13:33:56.759996 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20ab681f-51c2-4723-b5b6-58c841185455-operator-scripts\") pod \"nova-api-dcf9-account-create-update-2rmjd\" (UID: \"20ab681f-51c2-4723-b5b6-58c841185455\") " pod="openstack/nova-api-dcf9-account-create-update-2rmjd"
Mar 12 13:33:56 crc kubenswrapper[4778]: I0312 13:33:56.760119 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d11f6c3-3911-4a29-a65d-ef1f570d9b02-operator-scripts\") pod \"nova-api-db-create-sckbb\" (UID: \"2d11f6c3-3911-4a29-a65d-ef1f570d9b02\") " pod="openstack/nova-api-db-create-sckbb"
Mar 12 13:33:56 crc kubenswrapper[4778]: I0312 13:33:56.760148 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln84f\" (UniqueName: \"kubernetes.io/projected/2d11f6c3-3911-4a29-a65d-ef1f570d9b02-kube-api-access-ln84f\") pod \"nova-api-db-create-sckbb\" (UID: \"2d11f6c3-3911-4a29-a65d-ef1f570d9b02\") " pod="openstack/nova-api-db-create-sckbb"
Mar 12 13:33:56 crc kubenswrapper[4778]: I0312 13:33:56.760177 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvxrm\" (UniqueName: \"kubernetes.io/projected/20ab681f-51c2-4723-b5b6-58c841185455-kube-api-access-hvxrm\") pod \"nova-api-dcf9-account-create-update-2rmjd\" (UID: \"20ab681f-51c2-4723-b5b6-58c841185455\") " pod="openstack/nova-api-dcf9-account-create-update-2rmjd"
Mar 12 13:33:56 crc kubenswrapper[4778]: I0312 13:33:56.761157 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d11f6c3-3911-4a29-a65d-ef1f570d9b02-operator-scripts\") pod \"nova-api-db-create-sckbb\" (UID: \"2d11f6c3-3911-4a29-a65d-ef1f570d9b02\") " pod="openstack/nova-api-db-create-sckbb"
Mar 12 13:33:56 crc kubenswrapper[4778]: I0312 13:33:56.823796 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln84f\" (UniqueName: \"kubernetes.io/projected/2d11f6c3-3911-4a29-a65d-ef1f570d9b02-kube-api-access-ln84f\") pod \"nova-api-db-create-sckbb\" (UID: \"2d11f6c3-3911-4a29-a65d-ef1f570d9b02\") " pod="openstack/nova-api-db-create-sckbb"
Mar 12 13:33:56 crc kubenswrapper[4778]: I0312 13:33:56.855869 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-sckbb"
Mar 12 13:33:56 crc kubenswrapper[4778]: I0312 13:33:56.870624 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-x8nht"]
Mar 12 13:33:56 crc kubenswrapper[4778]: I0312 13:33:56.871878 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-x8nht"
Mar 12 13:33:56 crc kubenswrapper[4778]: I0312 13:33:56.876100 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvxrm\" (UniqueName: \"kubernetes.io/projected/20ab681f-51c2-4723-b5b6-58c841185455-kube-api-access-hvxrm\") pod \"nova-api-dcf9-account-create-update-2rmjd\" (UID: \"20ab681f-51c2-4723-b5b6-58c841185455\") " pod="openstack/nova-api-dcf9-account-create-update-2rmjd"
Mar 12 13:33:56 crc kubenswrapper[4778]: I0312 13:33:56.876289 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20ab681f-51c2-4723-b5b6-58c841185455-operator-scripts\") pod \"nova-api-dcf9-account-create-update-2rmjd\" (UID: \"20ab681f-51c2-4723-b5b6-58c841185455\") " pod="openstack/nova-api-dcf9-account-create-update-2rmjd"
Mar 12 13:33:56 crc kubenswrapper[4778]: I0312 13:33:56.877214 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20ab681f-51c2-4723-b5b6-58c841185455-operator-scripts\") pod \"nova-api-dcf9-account-create-update-2rmjd\" (UID: \"20ab681f-51c2-4723-b5b6-58c841185455\") " pod="openstack/nova-api-dcf9-account-create-update-2rmjd"
Mar 12 13:33:56 crc kubenswrapper[4778]: I0312 13:33:56.932833 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-x8nht"]
Mar 12 13:33:56 crc kubenswrapper[4778]: I0312 13:33:56.947308 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvxrm\" (UniqueName: \"kubernetes.io/projected/20ab681f-51c2-4723-b5b6-58c841185455-kube-api-access-hvxrm\") pod \"nova-api-dcf9-account-create-update-2rmjd\" (UID: \"20ab681f-51c2-4723-b5b6-58c841185455\") " pod="openstack/nova-api-dcf9-account-create-update-2rmjd"
Mar 12 13:33:56 crc kubenswrapper[4778]: I0312 13:33:56.961280 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-2dh9w"]
Mar 12 13:33:56 crc kubenswrapper[4778]: I0312 13:33:56.978901 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwnk2\" (UniqueName: \"kubernetes.io/projected/4ad4ff5d-b816-4bdd-97a7-8afd73afe583-kube-api-access-zwnk2\") pod \"nova-cell0-db-create-x8nht\" (UID: \"4ad4ff5d-b816-4bdd-97a7-8afd73afe583\") " pod="openstack/nova-cell0-db-create-x8nht"
Mar 12 13:33:56 crc kubenswrapper[4778]: I0312 13:33:56.979040 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ad4ff5d-b816-4bdd-97a7-8afd73afe583-operator-scripts\") pod \"nova-cell0-db-create-x8nht\" (UID: \"4ad4ff5d-b816-4bdd-97a7-8afd73afe583\") " pod="openstack/nova-cell0-db-create-x8nht"
Mar 12 13:33:56 crc kubenswrapper[4778]: I0312 13:33:56.982804 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-2dh9w"
Mar 12 13:33:57 crc kubenswrapper[4778]: I0312 13:33:57.030481 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-2dh9w"]
Mar 12 13:33:57 crc kubenswrapper[4778]: I0312 13:33:57.074515 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-7d0f-account-create-update-t2rrl"]
Mar 12 13:33:57 crc kubenswrapper[4778]: I0312 13:33:57.076308 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7d0f-account-create-update-t2rrl"
Mar 12 13:33:57 crc kubenswrapper[4778]: I0312 13:33:57.082335 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Mar 12 13:33:57 crc kubenswrapper[4778]: I0312 13:33:57.090302 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klzfm\" (UniqueName: \"kubernetes.io/projected/068c02bc-1daf-4029-84f9-39a395d5de3e-kube-api-access-klzfm\") pod \"nova-cell1-db-create-2dh9w\" (UID: \"068c02bc-1daf-4029-84f9-39a395d5de3e\") " pod="openstack/nova-cell1-db-create-2dh9w"
Mar 12 13:33:57 crc kubenswrapper[4778]: I0312 13:33:57.090501 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/068c02bc-1daf-4029-84f9-39a395d5de3e-operator-scripts\") pod \"nova-cell1-db-create-2dh9w\" (UID: \"068c02bc-1daf-4029-84f9-39a395d5de3e\") " pod="openstack/nova-cell1-db-create-2dh9w"
Mar 12 13:33:57 crc kubenswrapper[4778]: I0312 13:33:57.090771 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwnk2\" (UniqueName: \"kubernetes.io/projected/4ad4ff5d-b816-4bdd-97a7-8afd73afe583-kube-api-access-zwnk2\") pod \"nova-cell0-db-create-x8nht\" (UID: \"4ad4ff5d-b816-4bdd-97a7-8afd73afe583\") " pod="openstack/nova-cell0-db-create-x8nht"
Mar 12 13:33:57 crc kubenswrapper[4778]: I0312 13:33:57.090931 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ad4ff5d-b816-4bdd-97a7-8afd73afe583-operator-scripts\") pod \"nova-cell0-db-create-x8nht\" (UID: \"4ad4ff5d-b816-4bdd-97a7-8afd73afe583\") " pod="openstack/nova-cell0-db-create-x8nht"
Mar 12 13:33:57 crc kubenswrapper[4778]: I0312 13:33:57.092143 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ad4ff5d-b816-4bdd-97a7-8afd73afe583-operator-scripts\") pod \"nova-cell0-db-create-x8nht\" (UID: \"4ad4ff5d-b816-4bdd-97a7-8afd73afe583\") " pod="openstack/nova-cell0-db-create-x8nht"
Mar 12 13:33:57 crc kubenswrapper[4778]: I0312 13:33:57.103952 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-7d0f-account-create-update-t2rrl"]
Mar 12 13:33:57 crc kubenswrapper[4778]: I0312 13:33:57.133808 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwnk2\" (UniqueName: \"kubernetes.io/projected/4ad4ff5d-b816-4bdd-97a7-8afd73afe583-kube-api-access-zwnk2\") pod \"nova-cell0-db-create-x8nht\" (UID: \"4ad4ff5d-b816-4bdd-97a7-8afd73afe583\") " pod="openstack/nova-cell0-db-create-x8nht"
Mar 12 13:33:57 crc kubenswrapper[4778]: I0312 13:33:57.146174 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-dcf9-account-create-update-2rmjd"
Mar 12 13:33:57 crc kubenswrapper[4778]: I0312 13:33:57.193814 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klzfm\" (UniqueName: \"kubernetes.io/projected/068c02bc-1daf-4029-84f9-39a395d5de3e-kube-api-access-klzfm\") pod \"nova-cell1-db-create-2dh9w\" (UID: \"068c02bc-1daf-4029-84f9-39a395d5de3e\") " pod="openstack/nova-cell1-db-create-2dh9w"
Mar 12 13:33:57 crc kubenswrapper[4778]: I0312 13:33:57.193882 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/092c3556-0255-4e2f-b2c7-e22b8a3d8418-operator-scripts\") pod \"nova-cell0-7d0f-account-create-update-t2rrl\" (UID: \"092c3556-0255-4e2f-b2c7-e22b8a3d8418\") " pod="openstack/nova-cell0-7d0f-account-create-update-t2rrl"
Mar 12 13:33:57 crc kubenswrapper[4778]: I0312 13:33:57.193950 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/068c02bc-1daf-4029-84f9-39a395d5de3e-operator-scripts\") pod \"nova-cell1-db-create-2dh9w\" (UID: \"068c02bc-1daf-4029-84f9-39a395d5de3e\") " pod="openstack/nova-cell1-db-create-2dh9w"
Mar 12 13:33:57 crc kubenswrapper[4778]: I0312 13:33:57.193986 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfdqt\" (UniqueName: \"kubernetes.io/projected/092c3556-0255-4e2f-b2c7-e22b8a3d8418-kube-api-access-nfdqt\") pod \"nova-cell0-7d0f-account-create-update-t2rrl\" (UID: \"092c3556-0255-4e2f-b2c7-e22b8a3d8418\") " pod="openstack/nova-cell0-7d0f-account-create-update-t2rrl"
Mar 12 13:33:57 crc kubenswrapper[4778]: I0312 13:33:57.195754 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/068c02bc-1daf-4029-84f9-39a395d5de3e-operator-scripts\") pod \"nova-cell1-db-create-2dh9w\" (UID: \"068c02bc-1daf-4029-84f9-39a395d5de3e\") " pod="openstack/nova-cell1-db-create-2dh9w"
Mar 12 13:33:57 crc kubenswrapper[4778]: I0312 13:33:57.220731 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klzfm\" (UniqueName: \"kubernetes.io/projected/068c02bc-1daf-4029-84f9-39a395d5de3e-kube-api-access-klzfm\") pod \"nova-cell1-db-create-2dh9w\" (UID: \"068c02bc-1daf-4029-84f9-39a395d5de3e\") " pod="openstack/nova-cell1-db-create-2dh9w"
Mar 12 13:33:57 crc kubenswrapper[4778]: I0312 13:33:57.223290 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-x8nht"
Mar 12 13:33:57 crc kubenswrapper[4778]: I0312 13:33:57.294235 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-94ac-account-create-update-rxvgg"]
Mar 12 13:33:57 crc kubenswrapper[4778]: I0312 13:33:57.302501 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-94ac-account-create-update-rxvgg"
Mar 12 13:33:57 crc kubenswrapper[4778]: I0312 13:33:57.305284 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Mar 12 13:33:57 crc kubenswrapper[4778]: I0312 13:33:57.312133 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-94ac-account-create-update-rxvgg"]
Mar 12 13:33:57 crc kubenswrapper[4778]: I0312 13:33:57.313150 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/092c3556-0255-4e2f-b2c7-e22b8a3d8418-operator-scripts\") pod \"nova-cell0-7d0f-account-create-update-t2rrl\" (UID: \"092c3556-0255-4e2f-b2c7-e22b8a3d8418\") " pod="openstack/nova-cell0-7d0f-account-create-update-t2rrl"
Mar 12 13:33:57 crc kubenswrapper[4778]: I0312 13:33:57.313868 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfdqt\" (UniqueName: \"kubernetes.io/projected/092c3556-0255-4e2f-b2c7-e22b8a3d8418-kube-api-access-nfdqt\") pod \"nova-cell0-7d0f-account-create-update-t2rrl\" (UID: \"092c3556-0255-4e2f-b2c7-e22b8a3d8418\") " pod="openstack/nova-cell0-7d0f-account-create-update-t2rrl"
Mar 12 13:33:57 crc kubenswrapper[4778]: I0312 13:33:57.321490 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/092c3556-0255-4e2f-b2c7-e22b8a3d8418-operator-scripts\") pod \"nova-cell0-7d0f-account-create-update-t2rrl\" (UID: \"092c3556-0255-4e2f-b2c7-e22b8a3d8418\") " pod="openstack/nova-cell0-7d0f-account-create-update-t2rrl"
Mar 12 13:33:57 crc kubenswrapper[4778]: I0312 13:33:57.346573 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-2dh9w"
Mar 12 13:33:57 crc kubenswrapper[4778]: I0312 13:33:57.354161 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfdqt\" (UniqueName: \"kubernetes.io/projected/092c3556-0255-4e2f-b2c7-e22b8a3d8418-kube-api-access-nfdqt\") pod \"nova-cell0-7d0f-account-create-update-t2rrl\" (UID: \"092c3556-0255-4e2f-b2c7-e22b8a3d8418\") " pod="openstack/nova-cell0-7d0f-account-create-update-t2rrl"
Mar 12 13:33:57 crc kubenswrapper[4778]: I0312 13:33:57.417643 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9dbbc5fa-b903-4296-a3af-75524920938d-operator-scripts\") pod \"nova-cell1-94ac-account-create-update-rxvgg\" (UID: \"9dbbc5fa-b903-4296-a3af-75524920938d\") " pod="openstack/nova-cell1-94ac-account-create-update-rxvgg"
Mar 12 13:33:57 crc kubenswrapper[4778]: I0312 13:33:57.417686 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn6jh\" (UniqueName: \"kubernetes.io/projected/9dbbc5fa-b903-4296-a3af-75524920938d-kube-api-access-rn6jh\") pod \"nova-cell1-94ac-account-create-update-rxvgg\" (UID: \"9dbbc5fa-b903-4296-a3af-75524920938d\") " pod="openstack/nova-cell1-94ac-account-create-update-rxvgg"
Mar 12 13:33:57 crc kubenswrapper[4778]: I0312 13:33:57.443684 4778 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-cell0-7d0f-account-create-update-t2rrl" Mar 12 13:33:57 crc kubenswrapper[4778]: I0312 13:33:57.466486 4778 generic.go:334] "Generic (PLEG): container finished" podID="68092e68-04e5-4530-8d94-859789faeb94" containerID="be846a255557e511860dc7bc1b884d65bc6e48bfb1b98ae1316cb74617623c2b" exitCode=143 Mar 12 13:33:57 crc kubenswrapper[4778]: I0312 13:33:57.466548 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-79ccdbbbbd-gl27l" event={"ID":"68092e68-04e5-4530-8d94-859789faeb94","Type":"ContainerDied","Data":"be846a255557e511860dc7bc1b884d65bc6e48bfb1b98ae1316cb74617623c2b"} Mar 12 13:33:57 crc kubenswrapper[4778]: I0312 13:33:57.496397 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0bf8c182-c9d5-4011-b28c-c4f557a8071c","Type":"ContainerStarted","Data":"013c8995ca90639ba33078e18954ed308111d321639179db05cd00d19ef56702"} Mar 12 13:33:57 crc kubenswrapper[4778]: I0312 13:33:57.496874 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 12 13:33:57 crc kubenswrapper[4778]: I0312 13:33:57.521436 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9dbbc5fa-b903-4296-a3af-75524920938d-operator-scripts\") pod \"nova-cell1-94ac-account-create-update-rxvgg\" (UID: \"9dbbc5fa-b903-4296-a3af-75524920938d\") " pod="openstack/nova-cell1-94ac-account-create-update-rxvgg" Mar 12 13:33:57 crc kubenswrapper[4778]: I0312 13:33:57.521501 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn6jh\" (UniqueName: \"kubernetes.io/projected/9dbbc5fa-b903-4296-a3af-75524920938d-kube-api-access-rn6jh\") pod \"nova-cell1-94ac-account-create-update-rxvgg\" (UID: \"9dbbc5fa-b903-4296-a3af-75524920938d\") " pod="openstack/nova-cell1-94ac-account-create-update-rxvgg" Mar 12 13:33:57 crc 
kubenswrapper[4778]: I0312 13:33:57.522156 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9dbbc5fa-b903-4296-a3af-75524920938d-operator-scripts\") pod \"nova-cell1-94ac-account-create-update-rxvgg\" (UID: \"9dbbc5fa-b903-4296-a3af-75524920938d\") " pod="openstack/nova-cell1-94ac-account-create-update-rxvgg" Mar 12 13:33:57 crc kubenswrapper[4778]: I0312 13:33:57.551789 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn6jh\" (UniqueName: \"kubernetes.io/projected/9dbbc5fa-b903-4296-a3af-75524920938d-kube-api-access-rn6jh\") pod \"nova-cell1-94ac-account-create-update-rxvgg\" (UID: \"9dbbc5fa-b903-4296-a3af-75524920938d\") " pod="openstack/nova-cell1-94ac-account-create-update-rxvgg" Mar 12 13:33:57 crc kubenswrapper[4778]: I0312 13:33:57.564329 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.420564405 podStartE2EDuration="6.564309775s" podCreationTimestamp="2026-03-12 13:33:51 +0000 UTC" firstStartedPulling="2026-03-12 13:33:52.603779366 +0000 UTC m=+1451.052474762" lastFinishedPulling="2026-03-12 13:33:56.747524736 +0000 UTC m=+1455.196220132" observedRunningTime="2026-03-12 13:33:57.517081986 +0000 UTC m=+1455.965777382" watchObservedRunningTime="2026-03-12 13:33:57.564309775 +0000 UTC m=+1456.013005171" Mar 12 13:33:57 crc kubenswrapper[4778]: I0312 13:33:57.608627 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-sckbb"] Mar 12 13:33:57 crc kubenswrapper[4778]: W0312 13:33:57.641936 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d11f6c3_3911_4a29_a65d_ef1f570d9b02.slice/crio-4880f510662a5f7132d196ce16de62950ef057f7d7af9da4c281a5ecfaedcca2 WatchSource:0}: Error finding container 
4880f510662a5f7132d196ce16de62950ef057f7d7af9da4c281a5ecfaedcca2: Status 404 returned error can't find the container with id 4880f510662a5f7132d196ce16de62950ef057f7d7af9da4c281a5ecfaedcca2 Mar 12 13:33:57 crc kubenswrapper[4778]: I0312 13:33:57.791862 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-94ac-account-create-update-rxvgg" Mar 12 13:33:57 crc kubenswrapper[4778]: I0312 13:33:57.931757 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-x8nht"] Mar 12 13:33:58 crc kubenswrapper[4778]: I0312 13:33:58.017416 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-dcf9-account-create-update-2rmjd"] Mar 12 13:33:58 crc kubenswrapper[4778]: W0312 13:33:58.032110 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20ab681f_51c2_4723_b5b6_58c841185455.slice/crio-82b63dc5beaf8ac79964255d2c25b2ecfed64153aeb0366bfed3a3b034e6a33d WatchSource:0}: Error finding container 82b63dc5beaf8ac79964255d2c25b2ecfed64153aeb0366bfed3a3b034e6a33d: Status 404 returned error can't find the container with id 82b63dc5beaf8ac79964255d2c25b2ecfed64153aeb0366bfed3a3b034e6a33d Mar 12 13:33:58 crc kubenswrapper[4778]: I0312 13:33:58.219399 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-7d0f-account-create-update-t2rrl"] Mar 12 13:33:58 crc kubenswrapper[4778]: I0312 13:33:58.245882 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-2dh9w"] Mar 12 13:33:58 crc kubenswrapper[4778]: I0312 13:33:58.285479 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-94ac-account-create-update-rxvgg"] Mar 12 13:33:58 crc kubenswrapper[4778]: I0312 13:33:58.526463 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-dcf9-account-create-update-2rmjd" 
event={"ID":"20ab681f-51c2-4723-b5b6-58c841185455","Type":"ContainerStarted","Data":"82b63dc5beaf8ac79964255d2c25b2ecfed64153aeb0366bfed3a3b034e6a33d"} Mar 12 13:33:58 crc kubenswrapper[4778]: I0312 13:33:58.531042 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 13:33:58 crc kubenswrapper[4778]: I0312 13:33:58.531481 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ac92f5c5-e457-4915-a919-0dbe3df23ce8" containerName="glance-log" containerID="cri-o://0aad8b26d122f41726307150aa529e2cdb5f197081e19ece7255f5b8aa07d260" gracePeriod=30 Mar 12 13:33:58 crc kubenswrapper[4778]: I0312 13:33:58.531712 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ac92f5c5-e457-4915-a919-0dbe3df23ce8" containerName="glance-httpd" containerID="cri-o://7ffa53ad28bc0a9b2bc486f510ba4d00037b24ab1288b231f7af2e96baabc19f" gracePeriod=30 Mar 12 13:33:58 crc kubenswrapper[4778]: I0312 13:33:58.542060 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-94ac-account-create-update-rxvgg" event={"ID":"9dbbc5fa-b903-4296-a3af-75524920938d","Type":"ContainerStarted","Data":"12e7d50aae58d04731b12f84ab30471cfbbda676ec2f653ddf3e5df4decd21b9"} Mar 12 13:33:58 crc kubenswrapper[4778]: I0312 13:33:58.560673 4778 generic.go:334] "Generic (PLEG): container finished" podID="2d11f6c3-3911-4a29-a65d-ef1f570d9b02" containerID="bc734e634b97b1a5646716a6fc635d874255724a3ef890cee0802c7190db7d7c" exitCode=0 Mar 12 13:33:58 crc kubenswrapper[4778]: I0312 13:33:58.561144 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-sckbb" event={"ID":"2d11f6c3-3911-4a29-a65d-ef1f570d9b02","Type":"ContainerDied","Data":"bc734e634b97b1a5646716a6fc635d874255724a3ef890cee0802c7190db7d7c"} Mar 12 13:33:58 crc kubenswrapper[4778]: I0312 13:33:58.561177 4778 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-sckbb" event={"ID":"2d11f6c3-3911-4a29-a65d-ef1f570d9b02","Type":"ContainerStarted","Data":"4880f510662a5f7132d196ce16de62950ef057f7d7af9da4c281a5ecfaedcca2"} Mar 12 13:33:58 crc kubenswrapper[4778]: I0312 13:33:58.563475 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7d0f-account-create-update-t2rrl" event={"ID":"092c3556-0255-4e2f-b2c7-e22b8a3d8418","Type":"ContainerStarted","Data":"d33dd868a9cad44b520d800f2c0ee298247a48b4345e044c0c3ed20a58fc82c5"} Mar 12 13:33:58 crc kubenswrapper[4778]: I0312 13:33:58.567485 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-x8nht" event={"ID":"4ad4ff5d-b816-4bdd-97a7-8afd73afe583","Type":"ContainerStarted","Data":"e2a8d1e05ff7ff80a86b71f26e5fb5c7484878b8a9632420829088d85ad0fbaf"} Mar 12 13:33:58 crc kubenswrapper[4778]: I0312 13:33:58.567531 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-x8nht" event={"ID":"4ad4ff5d-b816-4bdd-97a7-8afd73afe583","Type":"ContainerStarted","Data":"d10b3450ecd206da6b0e8847141e39d6a2c9d193ad5e15dd86118dc2ded48a9f"} Mar 12 13:33:58 crc kubenswrapper[4778]: I0312 13:33:58.568987 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-2dh9w" event={"ID":"068c02bc-1daf-4029-84f9-39a395d5de3e","Type":"ContainerStarted","Data":"9ffe001ed68c4701eb0b1fec6393649f539478d01421e0e3619aa9a88a221722"} Mar 12 13:33:58 crc kubenswrapper[4778]: I0312 13:33:58.578912 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7fa757af-1c91-4b93-8916-5bbd99b8522e","Type":"ContainerStarted","Data":"91d9bb0428b135b688074df3a50b06e035e7a170ffbeb2f7181ae94e57fada1b"} Mar 12 13:33:58 crc kubenswrapper[4778]: I0312 13:33:58.629740 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.629719716 podStartE2EDuration="4.629719716s" podCreationTimestamp="2026-03-12 13:33:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:33:58.624394405 +0000 UTC m=+1457.073089821" watchObservedRunningTime="2026-03-12 13:33:58.629719716 +0000 UTC m=+1457.078415122" Mar 12 13:33:58 crc kubenswrapper[4778]: I0312 13:33:58.644539 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-x8nht" podStartSLOduration=2.644517186 podStartE2EDuration="2.644517186s" podCreationTimestamp="2026-03-12 13:33:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:33:58.641199692 +0000 UTC m=+1457.089895088" watchObservedRunningTime="2026-03-12 13:33:58.644517186 +0000 UTC m=+1457.093212582" Mar 12 13:33:59 crc kubenswrapper[4778]: I0312 13:33:59.588785 4778 generic.go:334] "Generic (PLEG): container finished" podID="ac92f5c5-e457-4915-a919-0dbe3df23ce8" containerID="0aad8b26d122f41726307150aa529e2cdb5f197081e19ece7255f5b8aa07d260" exitCode=143 Mar 12 13:33:59 crc kubenswrapper[4778]: I0312 13:33:59.588995 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ac92f5c5-e457-4915-a919-0dbe3df23ce8","Type":"ContainerDied","Data":"0aad8b26d122f41726307150aa529e2cdb5f197081e19ece7255f5b8aa07d260"} Mar 12 13:33:59 crc kubenswrapper[4778]: I0312 13:33:59.590568 4778 generic.go:334] "Generic (PLEG): container finished" podID="4ad4ff5d-b816-4bdd-97a7-8afd73afe583" containerID="e2a8d1e05ff7ff80a86b71f26e5fb5c7484878b8a9632420829088d85ad0fbaf" exitCode=0 Mar 12 13:33:59 crc kubenswrapper[4778]: I0312 13:33:59.590638 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-x8nht" 
event={"ID":"4ad4ff5d-b816-4bdd-97a7-8afd73afe583","Type":"ContainerDied","Data":"e2a8d1e05ff7ff80a86b71f26e5fb5c7484878b8a9632420829088d85ad0fbaf"} Mar 12 13:33:59 crc kubenswrapper[4778]: I0312 13:33:59.594197 4778 generic.go:334] "Generic (PLEG): container finished" podID="068c02bc-1daf-4029-84f9-39a395d5de3e" containerID="37e7dd5198914cc6a22b8658dd88edbdbdabb2bfe43c9c4d07a686c73a997ca2" exitCode=0 Mar 12 13:33:59 crc kubenswrapper[4778]: I0312 13:33:59.594261 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-2dh9w" event={"ID":"068c02bc-1daf-4029-84f9-39a395d5de3e","Type":"ContainerDied","Data":"37e7dd5198914cc6a22b8658dd88edbdbdabb2bfe43c9c4d07a686c73a997ca2"} Mar 12 13:33:59 crc kubenswrapper[4778]: I0312 13:33:59.596910 4778 generic.go:334] "Generic (PLEG): container finished" podID="20ab681f-51c2-4723-b5b6-58c841185455" containerID="be1aecf0f9c3a392b6320f1bb26caafd070dc71ba9db9be7a31ee5daf79e1a2d" exitCode=0 Mar 12 13:33:59 crc kubenswrapper[4778]: I0312 13:33:59.596986 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-dcf9-account-create-update-2rmjd" event={"ID":"20ab681f-51c2-4723-b5b6-58c841185455","Type":"ContainerDied","Data":"be1aecf0f9c3a392b6320f1bb26caafd070dc71ba9db9be7a31ee5daf79e1a2d"} Mar 12 13:33:59 crc kubenswrapper[4778]: I0312 13:33:59.598629 4778 generic.go:334] "Generic (PLEG): container finished" podID="9dbbc5fa-b903-4296-a3af-75524920938d" containerID="099862fee239f9d58b6485a586d53c0613281de24cf1629f41917394af426901" exitCode=0 Mar 12 13:33:59 crc kubenswrapper[4778]: I0312 13:33:59.598731 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-94ac-account-create-update-rxvgg" event={"ID":"9dbbc5fa-b903-4296-a3af-75524920938d","Type":"ContainerDied","Data":"099862fee239f9d58b6485a586d53c0613281de24cf1629f41917394af426901"} Mar 12 13:33:59 crc kubenswrapper[4778]: I0312 13:33:59.602846 4778 generic.go:334] "Generic (PLEG): container finished" 
podID="092c3556-0255-4e2f-b2c7-e22b8a3d8418" containerID="2919ec7bf1dcd65b4aaec3b3c75478bba66c6d492f7b5c0064c9c993485c3e21" exitCode=0 Mar 12 13:33:59 crc kubenswrapper[4778]: I0312 13:33:59.603052 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7d0f-account-create-update-t2rrl" event={"ID":"092c3556-0255-4e2f-b2c7-e22b8a3d8418","Type":"ContainerDied","Data":"2919ec7bf1dcd65b4aaec3b3c75478bba66c6d492f7b5c0064c9c993485c3e21"} Mar 12 13:34:00 crc kubenswrapper[4778]: I0312 13:34:00.112423 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-sckbb" Mar 12 13:34:00 crc kubenswrapper[4778]: I0312 13:34:00.148813 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555374-lf8vj"] Mar 12 13:34:00 crc kubenswrapper[4778]: E0312 13:34:00.149390 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d11f6c3-3911-4a29-a65d-ef1f570d9b02" containerName="mariadb-database-create" Mar 12 13:34:00 crc kubenswrapper[4778]: I0312 13:34:00.149406 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d11f6c3-3911-4a29-a65d-ef1f570d9b02" containerName="mariadb-database-create" Mar 12 13:34:00 crc kubenswrapper[4778]: I0312 13:34:00.149595 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d11f6c3-3911-4a29-a65d-ef1f570d9b02" containerName="mariadb-database-create" Mar 12 13:34:00 crc kubenswrapper[4778]: I0312 13:34:00.150421 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555374-lf8vj" Mar 12 13:34:00 crc kubenswrapper[4778]: I0312 13:34:00.152819 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 13:34:00 crc kubenswrapper[4778]: I0312 13:34:00.155603 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 13:34:00 crc kubenswrapper[4778]: I0312 13:34:00.155833 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 13:34:00 crc kubenswrapper[4778]: I0312 13:34:00.168465 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555374-lf8vj"] Mar 12 13:34:00 crc kubenswrapper[4778]: I0312 13:34:00.218217 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ln84f\" (UniqueName: \"kubernetes.io/projected/2d11f6c3-3911-4a29-a65d-ef1f570d9b02-kube-api-access-ln84f\") pod \"2d11f6c3-3911-4a29-a65d-ef1f570d9b02\" (UID: \"2d11f6c3-3911-4a29-a65d-ef1f570d9b02\") " Mar 12 13:34:00 crc kubenswrapper[4778]: I0312 13:34:00.218309 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d11f6c3-3911-4a29-a65d-ef1f570d9b02-operator-scripts\") pod \"2d11f6c3-3911-4a29-a65d-ef1f570d9b02\" (UID: \"2d11f6c3-3911-4a29-a65d-ef1f570d9b02\") " Mar 12 13:34:00 crc kubenswrapper[4778]: I0312 13:34:00.218682 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9h9r\" (UniqueName: \"kubernetes.io/projected/9d627011-802e-4075-9c56-43373d4c368e-kube-api-access-r9h9r\") pod \"auto-csr-approver-29555374-lf8vj\" (UID: \"9d627011-802e-4075-9c56-43373d4c368e\") " pod="openshift-infra/auto-csr-approver-29555374-lf8vj" Mar 12 13:34:00 crc kubenswrapper[4778]: I0312 
13:34:00.219153 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d11f6c3-3911-4a29-a65d-ef1f570d9b02-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2d11f6c3-3911-4a29-a65d-ef1f570d9b02" (UID: "2d11f6c3-3911-4a29-a65d-ef1f570d9b02"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:34:00 crc kubenswrapper[4778]: I0312 13:34:00.230571 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d11f6c3-3911-4a29-a65d-ef1f570d9b02-kube-api-access-ln84f" (OuterVolumeSpecName: "kube-api-access-ln84f") pod "2d11f6c3-3911-4a29-a65d-ef1f570d9b02" (UID: "2d11f6c3-3911-4a29-a65d-ef1f570d9b02"). InnerVolumeSpecName "kube-api-access-ln84f". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:34:00 crc kubenswrapper[4778]: I0312 13:34:00.326226 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9h9r\" (UniqueName: \"kubernetes.io/projected/9d627011-802e-4075-9c56-43373d4c368e-kube-api-access-r9h9r\") pod \"auto-csr-approver-29555374-lf8vj\" (UID: \"9d627011-802e-4075-9c56-43373d4c368e\") " pod="openshift-infra/auto-csr-approver-29555374-lf8vj" Mar 12 13:34:00 crc kubenswrapper[4778]: I0312 13:34:00.326645 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ln84f\" (UniqueName: \"kubernetes.io/projected/2d11f6c3-3911-4a29-a65d-ef1f570d9b02-kube-api-access-ln84f\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:00 crc kubenswrapper[4778]: I0312 13:34:00.326665 4778 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d11f6c3-3911-4a29-a65d-ef1f570d9b02-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:00 crc kubenswrapper[4778]: I0312 13:34:00.349106 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9h9r\" 
(UniqueName: \"kubernetes.io/projected/9d627011-802e-4075-9c56-43373d4c368e-kube-api-access-r9h9r\") pod \"auto-csr-approver-29555374-lf8vj\" (UID: \"9d627011-802e-4075-9c56-43373d4c368e\") " pod="openshift-infra/auto-csr-approver-29555374-lf8vj" Mar 12 13:34:00 crc kubenswrapper[4778]: I0312 13:34:00.471737 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555374-lf8vj" Mar 12 13:34:00 crc kubenswrapper[4778]: I0312 13:34:00.615342 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-sckbb" Mar 12 13:34:00 crc kubenswrapper[4778]: I0312 13:34:00.616149 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-sckbb" event={"ID":"2d11f6c3-3911-4a29-a65d-ef1f570d9b02","Type":"ContainerDied","Data":"4880f510662a5f7132d196ce16de62950ef057f7d7af9da4c281a5ecfaedcca2"} Mar 12 13:34:00 crc kubenswrapper[4778]: I0312 13:34:00.616173 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4880f510662a5f7132d196ce16de62950ef057f7d7af9da4c281a5ecfaedcca2" Mar 12 13:34:01 crc kubenswrapper[4778]: E0312 13:34:01.131683 4778 kubelet_node_status.go:756] "Failed to set some node status fields" err="failed to validate nodeIP: route ip+net: no such network interface" node="crc" Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.490158 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-94ac-account-create-update-rxvgg" Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.505886 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7d0f-account-create-update-t2rrl" Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.560129 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-x8nht" Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.560315 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-2dh9w" Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.580491 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555374-lf8vj"] Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.584954 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-dcf9-account-create-update-2rmjd" Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.605899 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfdqt\" (UniqueName: \"kubernetes.io/projected/092c3556-0255-4e2f-b2c7-e22b8a3d8418-kube-api-access-nfdqt\") pod \"092c3556-0255-4e2f-b2c7-e22b8a3d8418\" (UID: \"092c3556-0255-4e2f-b2c7-e22b8a3d8418\") " Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.606052 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rn6jh\" (UniqueName: \"kubernetes.io/projected/9dbbc5fa-b903-4296-a3af-75524920938d-kube-api-access-rn6jh\") pod \"9dbbc5fa-b903-4296-a3af-75524920938d\" (UID: \"9dbbc5fa-b903-4296-a3af-75524920938d\") " Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.607137 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/092c3556-0255-4e2f-b2c7-e22b8a3d8418-operator-scripts\") pod \"092c3556-0255-4e2f-b2c7-e22b8a3d8418\" (UID: \"092c3556-0255-4e2f-b2c7-e22b8a3d8418\") " Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.607290 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9dbbc5fa-b903-4296-a3af-75524920938d-operator-scripts\") pod 
\"9dbbc5fa-b903-4296-a3af-75524920938d\" (UID: \"9dbbc5fa-b903-4296-a3af-75524920938d\") " Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.614734 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/092c3556-0255-4e2f-b2c7-e22b8a3d8418-kube-api-access-nfdqt" (OuterVolumeSpecName: "kube-api-access-nfdqt") pod "092c3556-0255-4e2f-b2c7-e22b8a3d8418" (UID: "092c3556-0255-4e2f-b2c7-e22b8a3d8418"). InnerVolumeSpecName "kube-api-access-nfdqt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.615335 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dbbc5fa-b903-4296-a3af-75524920938d-kube-api-access-rn6jh" (OuterVolumeSpecName: "kube-api-access-rn6jh") pod "9dbbc5fa-b903-4296-a3af-75524920938d" (UID: "9dbbc5fa-b903-4296-a3af-75524920938d"). InnerVolumeSpecName "kube-api-access-rn6jh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.615579 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9dbbc5fa-b903-4296-a3af-75524920938d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9dbbc5fa-b903-4296-a3af-75524920938d" (UID: "9dbbc5fa-b903-4296-a3af-75524920938d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.615635 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/092c3556-0255-4e2f-b2c7-e22b8a3d8418-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "092c3556-0255-4e2f-b2c7-e22b8a3d8418" (UID: "092c3556-0255-4e2f-b2c7-e22b8a3d8418"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.654255 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-2dh9w" event={"ID":"068c02bc-1daf-4029-84f9-39a395d5de3e","Type":"ContainerDied","Data":"9ffe001ed68c4701eb0b1fec6393649f539478d01421e0e3619aa9a88a221722"} Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.654558 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ffe001ed68c4701eb0b1fec6393649f539478d01421e0e3619aa9a88a221722" Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.654705 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-2dh9w" Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.664456 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-dcf9-account-create-update-2rmjd" event={"ID":"20ab681f-51c2-4723-b5b6-58c841185455","Type":"ContainerDied","Data":"82b63dc5beaf8ac79964255d2c25b2ecfed64153aeb0366bfed3a3b034e6a33d"} Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.664678 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82b63dc5beaf8ac79964255d2c25b2ecfed64153aeb0366bfed3a3b034e6a33d" Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.664967 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-dcf9-account-create-update-2rmjd" Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.667537 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-94ac-account-create-update-rxvgg" event={"ID":"9dbbc5fa-b903-4296-a3af-75524920938d","Type":"ContainerDied","Data":"12e7d50aae58d04731b12f84ab30471cfbbda676ec2f653ddf3e5df4decd21b9"} Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.667649 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12e7d50aae58d04731b12f84ab30471cfbbda676ec2f653ddf3e5df4decd21b9" Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.667795 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-94ac-account-create-update-rxvgg" Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.672837 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7d0f-account-create-update-t2rrl" event={"ID":"092c3556-0255-4e2f-b2c7-e22b8a3d8418","Type":"ContainerDied","Data":"d33dd868a9cad44b520d800f2c0ee298247a48b4345e044c0c3ed20a58fc82c5"} Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.673248 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d33dd868a9cad44b520d800f2c0ee298247a48b4345e044c0c3ed20a58fc82c5" Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.673360 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-7d0f-account-create-update-t2rrl" Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.681234 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555374-lf8vj" event={"ID":"9d627011-802e-4075-9c56-43373d4c368e","Type":"ContainerStarted","Data":"67e6d8b5b310afbd3ec49c3d001f4cd624f27618d4c040503f6dc47cb73ea130"} Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.690553 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-x8nht" event={"ID":"4ad4ff5d-b816-4bdd-97a7-8afd73afe583","Type":"ContainerDied","Data":"d10b3450ecd206da6b0e8847141e39d6a2c9d193ad5e15dd86118dc2ded48a9f"} Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.690773 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d10b3450ecd206da6b0e8847141e39d6a2c9d193ad5e15dd86118dc2ded48a9f" Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.690885 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-x8nht" Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.702339 4778 generic.go:334] "Generic (PLEG): container finished" podID="68092e68-04e5-4530-8d94-859789faeb94" containerID="8cdda802eadd8c68b3ba4b5b69b6a0fd021902af043f1083daaae42e4e3ba4bc" exitCode=0 Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.702454 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-79ccdbbbbd-gl27l" event={"ID":"68092e68-04e5-4530-8d94-859789faeb94","Type":"ContainerDied","Data":"8cdda802eadd8c68b3ba4b5b69b6a0fd021902af043f1083daaae42e4e3ba4bc"} Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.708693 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-klzfm\" (UniqueName: \"kubernetes.io/projected/068c02bc-1daf-4029-84f9-39a395d5de3e-kube-api-access-klzfm\") pod \"068c02bc-1daf-4029-84f9-39a395d5de3e\" (UID: \"068c02bc-1daf-4029-84f9-39a395d5de3e\") " Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.709025 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20ab681f-51c2-4723-b5b6-58c841185455-operator-scripts\") pod \"20ab681f-51c2-4723-b5b6-58c841185455\" (UID: \"20ab681f-51c2-4723-b5b6-58c841185455\") " Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.709238 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvxrm\" (UniqueName: \"kubernetes.io/projected/20ab681f-51c2-4723-b5b6-58c841185455-kube-api-access-hvxrm\") pod \"20ab681f-51c2-4723-b5b6-58c841185455\" (UID: \"20ab681f-51c2-4723-b5b6-58c841185455\") " Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.709345 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/068c02bc-1daf-4029-84f9-39a395d5de3e-operator-scripts\") pod 
\"068c02bc-1daf-4029-84f9-39a395d5de3e\" (UID: \"068c02bc-1daf-4029-84f9-39a395d5de3e\") " Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.709584 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ad4ff5d-b816-4bdd-97a7-8afd73afe583-operator-scripts\") pod \"4ad4ff5d-b816-4bdd-97a7-8afd73afe583\" (UID: \"4ad4ff5d-b816-4bdd-97a7-8afd73afe583\") " Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.709763 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwnk2\" (UniqueName: \"kubernetes.io/projected/4ad4ff5d-b816-4bdd-97a7-8afd73afe583-kube-api-access-zwnk2\") pod \"4ad4ff5d-b816-4bdd-97a7-8afd73afe583\" (UID: \"4ad4ff5d-b816-4bdd-97a7-8afd73afe583\") " Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.710438 4778 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/092c3556-0255-4e2f-b2c7-e22b8a3d8418-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.710783 4778 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9dbbc5fa-b903-4296-a3af-75524920938d-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.710880 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfdqt\" (UniqueName: \"kubernetes.io/projected/092c3556-0255-4e2f-b2c7-e22b8a3d8418-kube-api-access-nfdqt\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.710961 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rn6jh\" (UniqueName: \"kubernetes.io/projected/9dbbc5fa-b903-4296-a3af-75524920938d-kube-api-access-rn6jh\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.713591 4778 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/068c02bc-1daf-4029-84f9-39a395d5de3e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "068c02bc-1daf-4029-84f9-39a395d5de3e" (UID: "068c02bc-1daf-4029-84f9-39a395d5de3e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.713661 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20ab681f-51c2-4723-b5b6-58c841185455-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "20ab681f-51c2-4723-b5b6-58c841185455" (UID: "20ab681f-51c2-4723-b5b6-58c841185455"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.713970 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ad4ff5d-b816-4bdd-97a7-8afd73afe583-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4ad4ff5d-b816-4bdd-97a7-8afd73afe583" (UID: "4ad4ff5d-b816-4bdd-97a7-8afd73afe583"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.716080 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ad4ff5d-b816-4bdd-97a7-8afd73afe583-kube-api-access-zwnk2" (OuterVolumeSpecName: "kube-api-access-zwnk2") pod "4ad4ff5d-b816-4bdd-97a7-8afd73afe583" (UID: "4ad4ff5d-b816-4bdd-97a7-8afd73afe583"). InnerVolumeSpecName "kube-api-access-zwnk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.717040 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/068c02bc-1daf-4029-84f9-39a395d5de3e-kube-api-access-klzfm" (OuterVolumeSpecName: "kube-api-access-klzfm") pod "068c02bc-1daf-4029-84f9-39a395d5de3e" (UID: "068c02bc-1daf-4029-84f9-39a395d5de3e"). InnerVolumeSpecName "kube-api-access-klzfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.719285 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ab681f-51c2-4723-b5b6-58c841185455-kube-api-access-hvxrm" (OuterVolumeSpecName: "kube-api-access-hvxrm") pod "20ab681f-51c2-4723-b5b6-58c841185455" (UID: "20ab681f-51c2-4723-b5b6-58c841185455"). InnerVolumeSpecName "kube-api-access-hvxrm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.722305 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-79ccdbbbbd-gl27l" Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.812477 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68092e68-04e5-4530-8d94-859789faeb94-logs\") pod \"68092e68-04e5-4530-8d94-859789faeb94\" (UID: \"68092e68-04e5-4530-8d94-859789faeb94\") " Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.812833 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/68092e68-04e5-4530-8d94-859789faeb94-public-tls-certs\") pod \"68092e68-04e5-4530-8d94-859789faeb94\" (UID: \"68092e68-04e5-4530-8d94-859789faeb94\") " Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.812966 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68092e68-04e5-4530-8d94-859789faeb94-combined-ca-bundle\") pod \"68092e68-04e5-4530-8d94-859789faeb94\" (UID: \"68092e68-04e5-4530-8d94-859789faeb94\") " Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.813817 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68092e68-04e5-4530-8d94-859789faeb94-config-data\") pod \"68092e68-04e5-4530-8d94-859789faeb94\" (UID: \"68092e68-04e5-4530-8d94-859789faeb94\") " Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.814118 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnrxw\" (UniqueName: \"kubernetes.io/projected/68092e68-04e5-4530-8d94-859789faeb94-kube-api-access-jnrxw\") pod \"68092e68-04e5-4530-8d94-859789faeb94\" (UID: \"68092e68-04e5-4530-8d94-859789faeb94\") " Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.813413 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/68092e68-04e5-4530-8d94-859789faeb94-logs" (OuterVolumeSpecName: "logs") pod "68092e68-04e5-4530-8d94-859789faeb94" (UID: "68092e68-04e5-4530-8d94-859789faeb94"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.814482 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/68092e68-04e5-4530-8d94-859789faeb94-internal-tls-certs\") pod \"68092e68-04e5-4530-8d94-859789faeb94\" (UID: \"68092e68-04e5-4530-8d94-859789faeb94\") " Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.814989 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68092e68-04e5-4530-8d94-859789faeb94-scripts\") pod \"68092e68-04e5-4530-8d94-859789faeb94\" (UID: \"68092e68-04e5-4530-8d94-859789faeb94\") " Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.816511 4778 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ad4ff5d-b816-4bdd-97a7-8afd73afe583-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.816922 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwnk2\" (UniqueName: \"kubernetes.io/projected/4ad4ff5d-b816-4bdd-97a7-8afd73afe583-kube-api-access-zwnk2\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.817043 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-klzfm\" (UniqueName: \"kubernetes.io/projected/068c02bc-1daf-4029-84f9-39a395d5de3e-kube-api-access-klzfm\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.817361 4778 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/68092e68-04e5-4530-8d94-859789faeb94-logs\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.817825 4778 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20ab681f-51c2-4723-b5b6-58c841185455-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.817933 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvxrm\" (UniqueName: \"kubernetes.io/projected/20ab681f-51c2-4723-b5b6-58c841185455-kube-api-access-hvxrm\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.818241 4778 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/068c02bc-1daf-4029-84f9-39a395d5de3e-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.819813 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68092e68-04e5-4530-8d94-859789faeb94-scripts" (OuterVolumeSpecName: "scripts") pod "68092e68-04e5-4530-8d94-859789faeb94" (UID: "68092e68-04e5-4530-8d94-859789faeb94"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.820985 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68092e68-04e5-4530-8d94-859789faeb94-kube-api-access-jnrxw" (OuterVolumeSpecName: "kube-api-access-jnrxw") pod "68092e68-04e5-4530-8d94-859789faeb94" (UID: "68092e68-04e5-4530-8d94-859789faeb94"). InnerVolumeSpecName "kube-api-access-jnrxw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.902739 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68092e68-04e5-4530-8d94-859789faeb94-config-data" (OuterVolumeSpecName: "config-data") pod "68092e68-04e5-4530-8d94-859789faeb94" (UID: "68092e68-04e5-4530-8d94-859789faeb94"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.907414 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68092e68-04e5-4530-8d94-859789faeb94-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "68092e68-04e5-4530-8d94-859789faeb94" (UID: "68092e68-04e5-4530-8d94-859789faeb94"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.920148 4778 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68092e68-04e5-4530-8d94-859789faeb94-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.920208 4778 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68092e68-04e5-4530-8d94-859789faeb94-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.920222 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnrxw\" (UniqueName: \"kubernetes.io/projected/68092e68-04e5-4530-8d94-859789faeb94-kube-api-access-jnrxw\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.920236 4778 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68092e68-04e5-4530-8d94-859789faeb94-scripts\") on node \"crc\" DevicePath \"\"" Mar 
12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.944982 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68092e68-04e5-4530-8d94-859789faeb94-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "68092e68-04e5-4530-8d94-859789faeb94" (UID: "68092e68-04e5-4530-8d94-859789faeb94"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:34:01 crc kubenswrapper[4778]: I0312 13:34:01.966261 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68092e68-04e5-4530-8d94-859789faeb94-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "68092e68-04e5-4530-8d94-859789faeb94" (UID: "68092e68-04e5-4530-8d94-859789faeb94"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.022566 4778 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/68092e68-04e5-4530-8d94-859789faeb94-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.022645 4778 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/68092e68-04e5-4530-8d94-859789faeb94-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.107714 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.227736 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"ac92f5c5-e457-4915-a919-0dbe3df23ce8\" (UID: \"ac92f5c5-e457-4915-a919-0dbe3df23ce8\") " Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.227804 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac92f5c5-e457-4915-a919-0dbe3df23ce8-logs\") pod \"ac92f5c5-e457-4915-a919-0dbe3df23ce8\" (UID: \"ac92f5c5-e457-4915-a919-0dbe3df23ce8\") " Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.227935 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac92f5c5-e457-4915-a919-0dbe3df23ce8-public-tls-certs\") pod \"ac92f5c5-e457-4915-a919-0dbe3df23ce8\" (UID: \"ac92f5c5-e457-4915-a919-0dbe3df23ce8\") " Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.227960 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ac92f5c5-e457-4915-a919-0dbe3df23ce8-httpd-run\") pod \"ac92f5c5-e457-4915-a919-0dbe3df23ce8\" (UID: \"ac92f5c5-e457-4915-a919-0dbe3df23ce8\") " Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.227977 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pklz\" (UniqueName: \"kubernetes.io/projected/ac92f5c5-e457-4915-a919-0dbe3df23ce8-kube-api-access-5pklz\") pod \"ac92f5c5-e457-4915-a919-0dbe3df23ce8\" (UID: \"ac92f5c5-e457-4915-a919-0dbe3df23ce8\") " Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.228035 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ac92f5c5-e457-4915-a919-0dbe3df23ce8-combined-ca-bundle\") pod \"ac92f5c5-e457-4915-a919-0dbe3df23ce8\" (UID: \"ac92f5c5-e457-4915-a919-0dbe3df23ce8\") " Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.228125 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac92f5c5-e457-4915-a919-0dbe3df23ce8-config-data\") pod \"ac92f5c5-e457-4915-a919-0dbe3df23ce8\" (UID: \"ac92f5c5-e457-4915-a919-0dbe3df23ce8\") " Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.228171 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac92f5c5-e457-4915-a919-0dbe3df23ce8-scripts\") pod \"ac92f5c5-e457-4915-a919-0dbe3df23ce8\" (UID: \"ac92f5c5-e457-4915-a919-0dbe3df23ce8\") " Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.230891 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac92f5c5-e457-4915-a919-0dbe3df23ce8-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ac92f5c5-e457-4915-a919-0dbe3df23ce8" (UID: "ac92f5c5-e457-4915-a919-0dbe3df23ce8"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.233030 4778 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ac92f5c5-e457-4915-a919-0dbe3df23ce8-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.233621 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac92f5c5-e457-4915-a919-0dbe3df23ce8-logs" (OuterVolumeSpecName: "logs") pod "ac92f5c5-e457-4915-a919-0dbe3df23ce8" (UID: "ac92f5c5-e457-4915-a919-0dbe3df23ce8"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.236444 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage17-crc" (OuterVolumeSpecName: "glance") pod "ac92f5c5-e457-4915-a919-0dbe3df23ce8" (UID: "ac92f5c5-e457-4915-a919-0dbe3df23ce8"). InnerVolumeSpecName "local-storage17-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.251427 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac92f5c5-e457-4915-a919-0dbe3df23ce8-scripts" (OuterVolumeSpecName: "scripts") pod "ac92f5c5-e457-4915-a919-0dbe3df23ce8" (UID: "ac92f5c5-e457-4915-a919-0dbe3df23ce8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.259044 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac92f5c5-e457-4915-a919-0dbe3df23ce8-kube-api-access-5pklz" (OuterVolumeSpecName: "kube-api-access-5pklz") pod "ac92f5c5-e457-4915-a919-0dbe3df23ce8" (UID: "ac92f5c5-e457-4915-a919-0dbe3df23ce8"). InnerVolumeSpecName "kube-api-access-5pklz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.286093 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac92f5c5-e457-4915-a919-0dbe3df23ce8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ac92f5c5-e457-4915-a919-0dbe3df23ce8" (UID: "ac92f5c5-e457-4915-a919-0dbe3df23ce8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.324471 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac92f5c5-e457-4915-a919-0dbe3df23ce8-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ac92f5c5-e457-4915-a919-0dbe3df23ce8" (UID: "ac92f5c5-e457-4915-a919-0dbe3df23ce8"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.334719 4778 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac92f5c5-e457-4915-a919-0dbe3df23ce8-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.334778 4778 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") on node \"crc\" " Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.334795 4778 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac92f5c5-e457-4915-a919-0dbe3df23ce8-logs\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.334806 4778 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac92f5c5-e457-4915-a919-0dbe3df23ce8-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.335010 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pklz\" (UniqueName: \"kubernetes.io/projected/ac92f5c5-e457-4915-a919-0dbe3df23ce8-kube-api-access-5pklz\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.335023 4778 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ac92f5c5-e457-4915-a919-0dbe3df23ce8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.352442 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac92f5c5-e457-4915-a919-0dbe3df23ce8-config-data" (OuterVolumeSpecName: "config-data") pod "ac92f5c5-e457-4915-a919-0dbe3df23ce8" (UID: "ac92f5c5-e457-4915-a919-0dbe3df23ce8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.359931 4778 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage17-crc" (UniqueName: "kubernetes.io/local-volume/local-storage17-crc") on node "crc" Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.436440 4778 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac92f5c5-e457-4915-a919-0dbe3df23ce8-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.436473 4778 reconciler_common.go:293] "Volume detached for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.716132 4778 generic.go:334] "Generic (PLEG): container finished" podID="ac92f5c5-e457-4915-a919-0dbe3df23ce8" containerID="7ffa53ad28bc0a9b2bc486f510ba4d00037b24ab1288b231f7af2e96baabc19f" exitCode=0 Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.716213 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ac92f5c5-e457-4915-a919-0dbe3df23ce8","Type":"ContainerDied","Data":"7ffa53ad28bc0a9b2bc486f510ba4d00037b24ab1288b231f7af2e96baabc19f"} Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.716244 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-external-api-0" event={"ID":"ac92f5c5-e457-4915-a919-0dbe3df23ce8","Type":"ContainerDied","Data":"de7bb235534c3c0c1a6530e35fd6d03d222f02129ca88b49fda3a8c136ab05b7"} Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.716263 4778 scope.go:117] "RemoveContainer" containerID="7ffa53ad28bc0a9b2bc486f510ba4d00037b24ab1288b231f7af2e96baabc19f" Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.716401 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.722845 4778 generic.go:334] "Generic (PLEG): container finished" podID="e34be903-da25-4cdb-9298-2d53fdce0276" containerID="76d710be6da7b239e82f6228977b9799ccd95f2824b23913a0585897e926dd74" exitCode=0 Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.722923 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-678c76989b-8x56d" event={"ID":"e34be903-da25-4cdb-9298-2d53fdce0276","Type":"ContainerDied","Data":"76d710be6da7b239e82f6228977b9799ccd95f2824b23913a0585897e926dd74"} Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.730915 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-79ccdbbbbd-gl27l" event={"ID":"68092e68-04e5-4530-8d94-859789faeb94","Type":"ContainerDied","Data":"6225b0b7ab31929807b7000d1c797565cb38b8453f9487cc91d0a8fcf517ace6"} Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.731023 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-79ccdbbbbd-gl27l" Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.804836 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-678c76989b-8x56d" Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.823771 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.827777 4778 scope.go:117] "RemoveContainer" containerID="0aad8b26d122f41726307150aa529e2cdb5f197081e19ece7255f5b8aa07d260" Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.846718 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.869486 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-79ccdbbbbd-gl27l"] Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.881312 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 13:34:02 crc kubenswrapper[4778]: E0312 13:34:02.883282 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac92f5c5-e457-4915-a919-0dbe3df23ce8" containerName="glance-httpd" Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.883316 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac92f5c5-e457-4915-a919-0dbe3df23ce8" containerName="glance-httpd" Mar 12 13:34:02 crc kubenswrapper[4778]: E0312 13:34:02.883356 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20ab681f-51c2-4723-b5b6-58c841185455" containerName="mariadb-account-create-update" Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.883369 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="20ab681f-51c2-4723-b5b6-58c841185455" containerName="mariadb-account-create-update" Mar 12 13:34:02 crc kubenswrapper[4778]: E0312 13:34:02.883393 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68092e68-04e5-4530-8d94-859789faeb94" containerName="placement-log" Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.883406 4778 
state_mem.go:107] "Deleted CPUSet assignment" podUID="68092e68-04e5-4530-8d94-859789faeb94" containerName="placement-log" Mar 12 13:34:02 crc kubenswrapper[4778]: E0312 13:34:02.883442 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="092c3556-0255-4e2f-b2c7-e22b8a3d8418" containerName="mariadb-account-create-update" Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.883454 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="092c3556-0255-4e2f-b2c7-e22b8a3d8418" containerName="mariadb-account-create-update" Mar 12 13:34:02 crc kubenswrapper[4778]: E0312 13:34:02.883468 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e34be903-da25-4cdb-9298-2d53fdce0276" containerName="neutron-api" Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.883480 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="e34be903-da25-4cdb-9298-2d53fdce0276" containerName="neutron-api" Mar 12 13:34:02 crc kubenswrapper[4778]: E0312 13:34:02.883504 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dbbc5fa-b903-4296-a3af-75524920938d" containerName="mariadb-account-create-update" Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.883512 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dbbc5fa-b903-4296-a3af-75524920938d" containerName="mariadb-account-create-update" Mar 12 13:34:02 crc kubenswrapper[4778]: E0312 13:34:02.883522 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="068c02bc-1daf-4029-84f9-39a395d5de3e" containerName="mariadb-database-create" Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.883531 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="068c02bc-1daf-4029-84f9-39a395d5de3e" containerName="mariadb-database-create" Mar 12 13:34:02 crc kubenswrapper[4778]: E0312 13:34:02.883562 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac92f5c5-e457-4915-a919-0dbe3df23ce8" containerName="glance-log" Mar 12 13:34:02 crc 
kubenswrapper[4778]: I0312 13:34:02.883575 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac92f5c5-e457-4915-a919-0dbe3df23ce8" containerName="glance-log" Mar 12 13:34:02 crc kubenswrapper[4778]: E0312 13:34:02.883602 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e34be903-da25-4cdb-9298-2d53fdce0276" containerName="neutron-httpd" Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.883615 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="e34be903-da25-4cdb-9298-2d53fdce0276" containerName="neutron-httpd" Mar 12 13:34:02 crc kubenswrapper[4778]: E0312 13:34:02.883645 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ad4ff5d-b816-4bdd-97a7-8afd73afe583" containerName="mariadb-database-create" Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.883654 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ad4ff5d-b816-4bdd-97a7-8afd73afe583" containerName="mariadb-database-create" Mar 12 13:34:02 crc kubenswrapper[4778]: E0312 13:34:02.883667 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68092e68-04e5-4530-8d94-859789faeb94" containerName="placement-api" Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.883675 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="68092e68-04e5-4530-8d94-859789faeb94" containerName="placement-api" Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.885383 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="092c3556-0255-4e2f-b2c7-e22b8a3d8418" containerName="mariadb-account-create-update" Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.885410 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac92f5c5-e457-4915-a919-0dbe3df23ce8" containerName="glance-log" Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.885420 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="68092e68-04e5-4530-8d94-859789faeb94" containerName="placement-api" Mar 12 13:34:02 
crc kubenswrapper[4778]: I0312 13:34:02.885431 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ad4ff5d-b816-4bdd-97a7-8afd73afe583" containerName="mariadb-database-create" Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.885456 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dbbc5fa-b903-4296-a3af-75524920938d" containerName="mariadb-account-create-update" Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.885467 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="e34be903-da25-4cdb-9298-2d53fdce0276" containerName="neutron-httpd" Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.885488 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="068c02bc-1daf-4029-84f9-39a395d5de3e" containerName="mariadb-database-create" Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.885508 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="20ab681f-51c2-4723-b5b6-58c841185455" containerName="mariadb-account-create-update" Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.885530 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac92f5c5-e457-4915-a919-0dbe3df23ce8" containerName="glance-httpd" Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.885544 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="e34be903-da25-4cdb-9298-2d53fdce0276" containerName="neutron-api" Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.885560 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="68092e68-04e5-4530-8d94-859789faeb94" containerName="placement-log" Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.902931 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.910626 4778 scope.go:117] "RemoveContainer" containerID="7ffa53ad28bc0a9b2bc486f510ba4d00037b24ab1288b231f7af2e96baabc19f" Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.911097 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.911268 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 12 13:34:02 crc kubenswrapper[4778]: E0312 13:34:02.911540 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ffa53ad28bc0a9b2bc486f510ba4d00037b24ab1288b231f7af2e96baabc19f\": container with ID starting with 7ffa53ad28bc0a9b2bc486f510ba4d00037b24ab1288b231f7af2e96baabc19f not found: ID does not exist" containerID="7ffa53ad28bc0a9b2bc486f510ba4d00037b24ab1288b231f7af2e96baabc19f" Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.913232 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ffa53ad28bc0a9b2bc486f510ba4d00037b24ab1288b231f7af2e96baabc19f"} err="failed to get container status \"7ffa53ad28bc0a9b2bc486f510ba4d00037b24ab1288b231f7af2e96baabc19f\": rpc error: code = NotFound desc = could not find container \"7ffa53ad28bc0a9b2bc486f510ba4d00037b24ab1288b231f7af2e96baabc19f\": container with ID starting with 7ffa53ad28bc0a9b2bc486f510ba4d00037b24ab1288b231f7af2e96baabc19f not found: ID does not exist" Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.913305 4778 scope.go:117] "RemoveContainer" containerID="0aad8b26d122f41726307150aa529e2cdb5f197081e19ece7255f5b8aa07d260" Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.911602 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/placement-79ccdbbbbd-gl27l"] Mar 12 13:34:02 crc kubenswrapper[4778]: E0312 13:34:02.924334 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0aad8b26d122f41726307150aa529e2cdb5f197081e19ece7255f5b8aa07d260\": container with ID starting with 0aad8b26d122f41726307150aa529e2cdb5f197081e19ece7255f5b8aa07d260 not found: ID does not exist" containerID="0aad8b26d122f41726307150aa529e2cdb5f197081e19ece7255f5b8aa07d260" Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.924426 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0aad8b26d122f41726307150aa529e2cdb5f197081e19ece7255f5b8aa07d260"} err="failed to get container status \"0aad8b26d122f41726307150aa529e2cdb5f197081e19ece7255f5b8aa07d260\": rpc error: code = NotFound desc = could not find container \"0aad8b26d122f41726307150aa529e2cdb5f197081e19ece7255f5b8aa07d260\": container with ID starting with 0aad8b26d122f41726307150aa529e2cdb5f197081e19ece7255f5b8aa07d260 not found: ID does not exist" Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.924472 4778 scope.go:117] "RemoveContainer" containerID="8cdda802eadd8c68b3ba4b5b69b6a0fd021902af043f1083daaae42e4e3ba4bc" Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.941117 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.950456 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e34be903-da25-4cdb-9298-2d53fdce0276-combined-ca-bundle\") pod \"e34be903-da25-4cdb-9298-2d53fdce0276\" (UID: \"e34be903-da25-4cdb-9298-2d53fdce0276\") " Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.950570 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/e34be903-da25-4cdb-9298-2d53fdce0276-httpd-config\") pod \"e34be903-da25-4cdb-9298-2d53fdce0276\" (UID: \"e34be903-da25-4cdb-9298-2d53fdce0276\") " Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.950649 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e34be903-da25-4cdb-9298-2d53fdce0276-config\") pod \"e34be903-da25-4cdb-9298-2d53fdce0276\" (UID: \"e34be903-da25-4cdb-9298-2d53fdce0276\") " Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.950734 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e34be903-da25-4cdb-9298-2d53fdce0276-ovndb-tls-certs\") pod \"e34be903-da25-4cdb-9298-2d53fdce0276\" (UID: \"e34be903-da25-4cdb-9298-2d53fdce0276\") " Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.950787 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgmt8\" (UniqueName: \"kubernetes.io/projected/e34be903-da25-4cdb-9298-2d53fdce0276-kube-api-access-cgmt8\") pod \"e34be903-da25-4cdb-9298-2d53fdce0276\" (UID: \"e34be903-da25-4cdb-9298-2d53fdce0276\") " Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.957617 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e34be903-da25-4cdb-9298-2d53fdce0276-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "e34be903-da25-4cdb-9298-2d53fdce0276" (UID: "e34be903-da25-4cdb-9298-2d53fdce0276"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.958301 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e34be903-da25-4cdb-9298-2d53fdce0276-kube-api-access-cgmt8" (OuterVolumeSpecName: "kube-api-access-cgmt8") pod "e34be903-da25-4cdb-9298-2d53fdce0276" (UID: "e34be903-da25-4cdb-9298-2d53fdce0276"). InnerVolumeSpecName "kube-api-access-cgmt8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:34:02 crc kubenswrapper[4778]: I0312 13:34:02.971538 4778 scope.go:117] "RemoveContainer" containerID="be846a255557e511860dc7bc1b884d65bc6e48bfb1b98ae1316cb74617623c2b" Mar 12 13:34:03 crc kubenswrapper[4778]: I0312 13:34:03.021109 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e34be903-da25-4cdb-9298-2d53fdce0276-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e34be903-da25-4cdb-9298-2d53fdce0276" (UID: "e34be903-da25-4cdb-9298-2d53fdce0276"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:34:03 crc kubenswrapper[4778]: I0312 13:34:03.025739 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e34be903-da25-4cdb-9298-2d53fdce0276-config" (OuterVolumeSpecName: "config") pod "e34be903-da25-4cdb-9298-2d53fdce0276" (UID: "e34be903-da25-4cdb-9298-2d53fdce0276"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:34:03 crc kubenswrapper[4778]: I0312 13:34:03.040722 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e34be903-da25-4cdb-9298-2d53fdce0276-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "e34be903-da25-4cdb-9298-2d53fdce0276" (UID: "e34be903-da25-4cdb-9298-2d53fdce0276"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:34:03 crc kubenswrapper[4778]: I0312 13:34:03.057233 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tjbq\" (UniqueName: \"kubernetes.io/projected/81c1a05c-5642-43d4-8a7b-229330168332-kube-api-access-9tjbq\") pod \"glance-default-external-api-0\" (UID: \"81c1a05c-5642-43d4-8a7b-229330168332\") " pod="openstack/glance-default-external-api-0" Mar 12 13:34:03 crc kubenswrapper[4778]: I0312 13:34:03.057305 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"glance-default-external-api-0\" (UID: \"81c1a05c-5642-43d4-8a7b-229330168332\") " pod="openstack/glance-default-external-api-0" Mar 12 13:34:03 crc kubenswrapper[4778]: I0312 13:34:03.057370 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/81c1a05c-5642-43d4-8a7b-229330168332-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"81c1a05c-5642-43d4-8a7b-229330168332\") " pod="openstack/glance-default-external-api-0" Mar 12 13:34:03 crc kubenswrapper[4778]: I0312 13:34:03.057461 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81c1a05c-5642-43d4-8a7b-229330168332-scripts\") pod \"glance-default-external-api-0\" (UID: \"81c1a05c-5642-43d4-8a7b-229330168332\") " pod="openstack/glance-default-external-api-0" Mar 12 13:34:03 crc kubenswrapper[4778]: I0312 13:34:03.057522 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/81c1a05c-5642-43d4-8a7b-229330168332-httpd-run\") pod \"glance-default-external-api-0\" (UID: 
\"81c1a05c-5642-43d4-8a7b-229330168332\") " pod="openstack/glance-default-external-api-0" Mar 12 13:34:03 crc kubenswrapper[4778]: I0312 13:34:03.057543 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81c1a05c-5642-43d4-8a7b-229330168332-logs\") pod \"glance-default-external-api-0\" (UID: \"81c1a05c-5642-43d4-8a7b-229330168332\") " pod="openstack/glance-default-external-api-0" Mar 12 13:34:03 crc kubenswrapper[4778]: I0312 13:34:03.057563 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81c1a05c-5642-43d4-8a7b-229330168332-config-data\") pod \"glance-default-external-api-0\" (UID: \"81c1a05c-5642-43d4-8a7b-229330168332\") " pod="openstack/glance-default-external-api-0" Mar 12 13:34:03 crc kubenswrapper[4778]: I0312 13:34:03.057587 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81c1a05c-5642-43d4-8a7b-229330168332-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"81c1a05c-5642-43d4-8a7b-229330168332\") " pod="openstack/glance-default-external-api-0" Mar 12 13:34:03 crc kubenswrapper[4778]: I0312 13:34:03.057639 4778 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e34be903-da25-4cdb-9298-2d53fdce0276-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:03 crc kubenswrapper[4778]: I0312 13:34:03.057662 4778 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e34be903-da25-4cdb-9298-2d53fdce0276-httpd-config\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:03 crc kubenswrapper[4778]: I0312 13:34:03.057672 4778 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/e34be903-da25-4cdb-9298-2d53fdce0276-config\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:03 crc kubenswrapper[4778]: I0312 13:34:03.057683 4778 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e34be903-da25-4cdb-9298-2d53fdce0276-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:03 crc kubenswrapper[4778]: I0312 13:34:03.057692 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgmt8\" (UniqueName: \"kubernetes.io/projected/e34be903-da25-4cdb-9298-2d53fdce0276-kube-api-access-cgmt8\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:03 crc kubenswrapper[4778]: I0312 13:34:03.158859 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81c1a05c-5642-43d4-8a7b-229330168332-scripts\") pod \"glance-default-external-api-0\" (UID: \"81c1a05c-5642-43d4-8a7b-229330168332\") " pod="openstack/glance-default-external-api-0" Mar 12 13:34:03 crc kubenswrapper[4778]: I0312 13:34:03.158925 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/81c1a05c-5642-43d4-8a7b-229330168332-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"81c1a05c-5642-43d4-8a7b-229330168332\") " pod="openstack/glance-default-external-api-0" Mar 12 13:34:03 crc kubenswrapper[4778]: I0312 13:34:03.158954 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81c1a05c-5642-43d4-8a7b-229330168332-logs\") pod \"glance-default-external-api-0\" (UID: \"81c1a05c-5642-43d4-8a7b-229330168332\") " pod="openstack/glance-default-external-api-0" Mar 12 13:34:03 crc kubenswrapper[4778]: I0312 13:34:03.158977 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/81c1a05c-5642-43d4-8a7b-229330168332-config-data\") pod \"glance-default-external-api-0\" (UID: \"81c1a05c-5642-43d4-8a7b-229330168332\") " pod="openstack/glance-default-external-api-0" Mar 12 13:34:03 crc kubenswrapper[4778]: I0312 13:34:03.159009 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81c1a05c-5642-43d4-8a7b-229330168332-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"81c1a05c-5642-43d4-8a7b-229330168332\") " pod="openstack/glance-default-external-api-0" Mar 12 13:34:03 crc kubenswrapper[4778]: I0312 13:34:03.159038 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tjbq\" (UniqueName: \"kubernetes.io/projected/81c1a05c-5642-43d4-8a7b-229330168332-kube-api-access-9tjbq\") pod \"glance-default-external-api-0\" (UID: \"81c1a05c-5642-43d4-8a7b-229330168332\") " pod="openstack/glance-default-external-api-0" Mar 12 13:34:03 crc kubenswrapper[4778]: I0312 13:34:03.159086 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"glance-default-external-api-0\" (UID: \"81c1a05c-5642-43d4-8a7b-229330168332\") " pod="openstack/glance-default-external-api-0" Mar 12 13:34:03 crc kubenswrapper[4778]: I0312 13:34:03.159150 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/81c1a05c-5642-43d4-8a7b-229330168332-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"81c1a05c-5642-43d4-8a7b-229330168332\") " pod="openstack/glance-default-external-api-0" Mar 12 13:34:03 crc kubenswrapper[4778]: I0312 13:34:03.160325 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/81c1a05c-5642-43d4-8a7b-229330168332-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"81c1a05c-5642-43d4-8a7b-229330168332\") " pod="openstack/glance-default-external-api-0" Mar 12 13:34:03 crc kubenswrapper[4778]: I0312 13:34:03.160364 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81c1a05c-5642-43d4-8a7b-229330168332-logs\") pod \"glance-default-external-api-0\" (UID: \"81c1a05c-5642-43d4-8a7b-229330168332\") " pod="openstack/glance-default-external-api-0" Mar 12 13:34:03 crc kubenswrapper[4778]: I0312 13:34:03.160916 4778 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"glance-default-external-api-0\" (UID: \"81c1a05c-5642-43d4-8a7b-229330168332\") device mount path \"/mnt/openstack/pv17\"" pod="openstack/glance-default-external-api-0" Mar 12 13:34:03 crc kubenswrapper[4778]: I0312 13:34:03.164146 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/81c1a05c-5642-43d4-8a7b-229330168332-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"81c1a05c-5642-43d4-8a7b-229330168332\") " pod="openstack/glance-default-external-api-0" Mar 12 13:34:03 crc kubenswrapper[4778]: I0312 13:34:03.172730 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81c1a05c-5642-43d4-8a7b-229330168332-config-data\") pod \"glance-default-external-api-0\" (UID: \"81c1a05c-5642-43d4-8a7b-229330168332\") " pod="openstack/glance-default-external-api-0" Mar 12 13:34:03 crc kubenswrapper[4778]: I0312 13:34:03.177527 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81c1a05c-5642-43d4-8a7b-229330168332-combined-ca-bundle\") pod 
\"glance-default-external-api-0\" (UID: \"81c1a05c-5642-43d4-8a7b-229330168332\") " pod="openstack/glance-default-external-api-0" Mar 12 13:34:03 crc kubenswrapper[4778]: I0312 13:34:03.178204 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81c1a05c-5642-43d4-8a7b-229330168332-scripts\") pod \"glance-default-external-api-0\" (UID: \"81c1a05c-5642-43d4-8a7b-229330168332\") " pod="openstack/glance-default-external-api-0" Mar 12 13:34:03 crc kubenswrapper[4778]: I0312 13:34:03.180354 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tjbq\" (UniqueName: \"kubernetes.io/projected/81c1a05c-5642-43d4-8a7b-229330168332-kube-api-access-9tjbq\") pod \"glance-default-external-api-0\" (UID: \"81c1a05c-5642-43d4-8a7b-229330168332\") " pod="openstack/glance-default-external-api-0" Mar 12 13:34:03 crc kubenswrapper[4778]: I0312 13:34:03.202915 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"glance-default-external-api-0\" (UID: \"81c1a05c-5642-43d4-8a7b-229330168332\") " pod="openstack/glance-default-external-api-0" Mar 12 13:34:03 crc kubenswrapper[4778]: I0312 13:34:03.233540 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 12 13:34:03 crc kubenswrapper[4778]: I0312 13:34:03.740730 4778 generic.go:334] "Generic (PLEG): container finished" podID="9d627011-802e-4075-9c56-43373d4c368e" containerID="d817d5a09b7856e71332e283d84fe3ea296ae040cb7e986cd73c433864a99c34" exitCode=0 Mar 12 13:34:03 crc kubenswrapper[4778]: I0312 13:34:03.741162 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555374-lf8vj" event={"ID":"9d627011-802e-4075-9c56-43373d4c368e","Type":"ContainerDied","Data":"d817d5a09b7856e71332e283d84fe3ea296ae040cb7e986cd73c433864a99c34"} Mar 12 13:34:03 crc kubenswrapper[4778]: I0312 13:34:03.744904 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-678c76989b-8x56d" event={"ID":"e34be903-da25-4cdb-9298-2d53fdce0276","Type":"ContainerDied","Data":"73ff3b874391ffdc31812d5d85f13741c2920b13dddb21f9bdace835187b0822"} Mar 12 13:34:03 crc kubenswrapper[4778]: I0312 13:34:03.744941 4778 scope.go:117] "RemoveContainer" containerID="7423051fcfb7c12e56b049e90be94c641f82520ceab5181c7fcca6713588c77f" Mar 12 13:34:03 crc kubenswrapper[4778]: I0312 13:34:03.745048 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-678c76989b-8x56d" Mar 12 13:34:03 crc kubenswrapper[4778]: I0312 13:34:03.777990 4778 scope.go:117] "RemoveContainer" containerID="76d710be6da7b239e82f6228977b9799ccd95f2824b23913a0585897e926dd74" Mar 12 13:34:03 crc kubenswrapper[4778]: I0312 13:34:03.809730 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 12 13:34:03 crc kubenswrapper[4778]: I0312 13:34:03.818407 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-678c76989b-8x56d"] Mar 12 13:34:03 crc kubenswrapper[4778]: I0312 13:34:03.826948 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 13:34:03 crc kubenswrapper[4778]: I0312 13:34:03.827297 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0bf8c182-c9d5-4011-b28c-c4f557a8071c" containerName="ceilometer-central-agent" containerID="cri-o://e925c9c4c7aa08744211c517c124058aade623d45fd2e02df2777b4f2df794b2" gracePeriod=30 Mar 12 13:34:03 crc kubenswrapper[4778]: I0312 13:34:03.827678 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0bf8c182-c9d5-4011-b28c-c4f557a8071c" containerName="proxy-httpd" containerID="cri-o://013c8995ca90639ba33078e18954ed308111d321639179db05cd00d19ef56702" gracePeriod=30 Mar 12 13:34:03 crc kubenswrapper[4778]: I0312 13:34:03.827726 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0bf8c182-c9d5-4011-b28c-c4f557a8071c" containerName="sg-core" containerID="cri-o://86ce4b4705307dac2aa00c6fc4314d927b33960ed30dc0799f79715a9adfcdf9" gracePeriod=30 Mar 12 13:34:03 crc kubenswrapper[4778]: I0312 13:34:03.827761 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0bf8c182-c9d5-4011-b28c-c4f557a8071c" 
containerName="ceilometer-notification-agent" containerID="cri-o://8581110fc2e8206867368b8c4ae7af28cb79d5341dbf6b92ea91def7d2e28eb6" gracePeriod=30 Mar 12 13:34:03 crc kubenswrapper[4778]: I0312 13:34:03.834089 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-678c76989b-8x56d"] Mar 12 13:34:04 crc kubenswrapper[4778]: I0312 13:34:04.285616 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68092e68-04e5-4530-8d94-859789faeb94" path="/var/lib/kubelet/pods/68092e68-04e5-4530-8d94-859789faeb94/volumes" Mar 12 13:34:04 crc kubenswrapper[4778]: I0312 13:34:04.286561 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac92f5c5-e457-4915-a919-0dbe3df23ce8" path="/var/lib/kubelet/pods/ac92f5c5-e457-4915-a919-0dbe3df23ce8/volumes" Mar 12 13:34:04 crc kubenswrapper[4778]: I0312 13:34:04.287176 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e34be903-da25-4cdb-9298-2d53fdce0276" path="/var/lib/kubelet/pods/e34be903-da25-4cdb-9298-2d53fdce0276/volumes" Mar 12 13:34:04 crc kubenswrapper[4778]: I0312 13:34:04.763315 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"81c1a05c-5642-43d4-8a7b-229330168332","Type":"ContainerStarted","Data":"84f7024aeceefe982e2cee4c0fa46027923ecf1b4ef431d7fe2e34b1d5e3f2e6"} Mar 12 13:34:04 crc kubenswrapper[4778]: I0312 13:34:04.763375 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"81c1a05c-5642-43d4-8a7b-229330168332","Type":"ContainerStarted","Data":"f128424220ef8224b8053627c5d48ddef6508a615314fe7e2b95951288127305"} Mar 12 13:34:04 crc kubenswrapper[4778]: I0312 13:34:04.767464 4778 generic.go:334] "Generic (PLEG): container finished" podID="0bf8c182-c9d5-4011-b28c-c4f557a8071c" containerID="013c8995ca90639ba33078e18954ed308111d321639179db05cd00d19ef56702" exitCode=0 Mar 12 13:34:04 crc kubenswrapper[4778]: I0312 
13:34:04.767492 4778 generic.go:334] "Generic (PLEG): container finished" podID="0bf8c182-c9d5-4011-b28c-c4f557a8071c" containerID="86ce4b4705307dac2aa00c6fc4314d927b33960ed30dc0799f79715a9adfcdf9" exitCode=2 Mar 12 13:34:04 crc kubenswrapper[4778]: I0312 13:34:04.767500 4778 generic.go:334] "Generic (PLEG): container finished" podID="0bf8c182-c9d5-4011-b28c-c4f557a8071c" containerID="8581110fc2e8206867368b8c4ae7af28cb79d5341dbf6b92ea91def7d2e28eb6" exitCode=0 Mar 12 13:34:04 crc kubenswrapper[4778]: I0312 13:34:04.767569 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0bf8c182-c9d5-4011-b28c-c4f557a8071c","Type":"ContainerDied","Data":"013c8995ca90639ba33078e18954ed308111d321639179db05cd00d19ef56702"} Mar 12 13:34:04 crc kubenswrapper[4778]: I0312 13:34:04.767671 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0bf8c182-c9d5-4011-b28c-c4f557a8071c","Type":"ContainerDied","Data":"86ce4b4705307dac2aa00c6fc4314d927b33960ed30dc0799f79715a9adfcdf9"} Mar 12 13:34:04 crc kubenswrapper[4778]: I0312 13:34:04.767692 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0bf8c182-c9d5-4011-b28c-c4f557a8071c","Type":"ContainerDied","Data":"8581110fc2e8206867368b8c4ae7af28cb79d5341dbf6b92ea91def7d2e28eb6"} Mar 12 13:34:04 crc kubenswrapper[4778]: I0312 13:34:04.773124 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 12 13:34:04 crc kubenswrapper[4778]: I0312 13:34:04.773165 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 12 13:34:04 crc kubenswrapper[4778]: I0312 13:34:04.820682 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 12 13:34:04 crc kubenswrapper[4778]: I0312 13:34:04.872382 4778 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 12 13:34:05 crc kubenswrapper[4778]: I0312 13:34:05.090991 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555374-lf8vj" Mar 12 13:34:05 crc kubenswrapper[4778]: I0312 13:34:05.209644 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9h9r\" (UniqueName: \"kubernetes.io/projected/9d627011-802e-4075-9c56-43373d4c368e-kube-api-access-r9h9r\") pod \"9d627011-802e-4075-9c56-43373d4c368e\" (UID: \"9d627011-802e-4075-9c56-43373d4c368e\") " Mar 12 13:34:05 crc kubenswrapper[4778]: I0312 13:34:05.269550 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d627011-802e-4075-9c56-43373d4c368e-kube-api-access-r9h9r" (OuterVolumeSpecName: "kube-api-access-r9h9r") pod "9d627011-802e-4075-9c56-43373d4c368e" (UID: "9d627011-802e-4075-9c56-43373d4c368e"). InnerVolumeSpecName "kube-api-access-r9h9r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:34:05 crc kubenswrapper[4778]: I0312 13:34:05.312483 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9h9r\" (UniqueName: \"kubernetes.io/projected/9d627011-802e-4075-9c56-43373d4c368e-kube-api-access-r9h9r\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:05 crc kubenswrapper[4778]: I0312 13:34:05.777088 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555374-lf8vj" event={"ID":"9d627011-802e-4075-9c56-43373d4c368e","Type":"ContainerDied","Data":"67e6d8b5b310afbd3ec49c3d001f4cd624f27618d4c040503f6dc47cb73ea130"} Mar 12 13:34:05 crc kubenswrapper[4778]: I0312 13:34:05.777179 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67e6d8b5b310afbd3ec49c3d001f4cd624f27618d4c040503f6dc47cb73ea130" Mar 12 13:34:05 crc kubenswrapper[4778]: I0312 13:34:05.777141 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555374-lf8vj" Mar 12 13:34:05 crc kubenswrapper[4778]: I0312 13:34:05.784215 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"81c1a05c-5642-43d4-8a7b-229330168332","Type":"ContainerStarted","Data":"c5c75538574d506f02760e3d1dc542bf94ef25dccf27637b10ea3a8078431e9e"} Mar 12 13:34:05 crc kubenswrapper[4778]: I0312 13:34:05.784676 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 12 13:34:05 crc kubenswrapper[4778]: I0312 13:34:05.784725 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 12 13:34:06 crc kubenswrapper[4778]: I0312 13:34:06.144169 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.144152669 podStartE2EDuration="4.144152669s" 
podCreationTimestamp="2026-03-12 13:34:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:34:05.808982752 +0000 UTC m=+1464.257678148" watchObservedRunningTime="2026-03-12 13:34:06.144152669 +0000 UTC m=+1464.592848065" Mar 12 13:34:06 crc kubenswrapper[4778]: I0312 13:34:06.180557 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555368-d2cpg"] Mar 12 13:34:06 crc kubenswrapper[4778]: I0312 13:34:06.188574 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555368-d2cpg"] Mar 12 13:34:06 crc kubenswrapper[4778]: I0312 13:34:06.266106 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20d587ee-7b57-4b99-a800-c6d46322d799" path="/var/lib/kubelet/pods/20d587ee-7b57-4b99-a800-c6d46322d799/volumes" Mar 12 13:34:07 crc kubenswrapper[4778]: I0312 13:34:07.414650 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-6b6mv"] Mar 12 13:34:07 crc kubenswrapper[4778]: E0312 13:34:07.415389 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d627011-802e-4075-9c56-43373d4c368e" containerName="oc" Mar 12 13:34:07 crc kubenswrapper[4778]: I0312 13:34:07.415402 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d627011-802e-4075-9c56-43373d4c368e" containerName="oc" Mar 12 13:34:07 crc kubenswrapper[4778]: I0312 13:34:07.415580 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d627011-802e-4075-9c56-43373d4c368e" containerName="oc" Mar 12 13:34:07 crc kubenswrapper[4778]: I0312 13:34:07.416292 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-6b6mv" Mar 12 13:34:07 crc kubenswrapper[4778]: I0312 13:34:07.418333 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Mar 12 13:34:07 crc kubenswrapper[4778]: I0312 13:34:07.418737 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Mar 12 13:34:07 crc kubenswrapper[4778]: I0312 13:34:07.418742 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-bjjj5" Mar 12 13:34:07 crc kubenswrapper[4778]: I0312 13:34:07.431740 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-6b6mv"] Mar 12 13:34:07 crc kubenswrapper[4778]: I0312 13:34:07.446512 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q225s\" (UniqueName: \"kubernetes.io/projected/fe24691f-9019-44ec-85bf-b477c53f05ec-kube-api-access-q225s\") pod \"nova-cell0-conductor-db-sync-6b6mv\" (UID: \"fe24691f-9019-44ec-85bf-b477c53f05ec\") " pod="openstack/nova-cell0-conductor-db-sync-6b6mv" Mar 12 13:34:07 crc kubenswrapper[4778]: I0312 13:34:07.446852 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe24691f-9019-44ec-85bf-b477c53f05ec-config-data\") pod \"nova-cell0-conductor-db-sync-6b6mv\" (UID: \"fe24691f-9019-44ec-85bf-b477c53f05ec\") " pod="openstack/nova-cell0-conductor-db-sync-6b6mv" Mar 12 13:34:07 crc kubenswrapper[4778]: I0312 13:34:07.447138 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe24691f-9019-44ec-85bf-b477c53f05ec-scripts\") pod \"nova-cell0-conductor-db-sync-6b6mv\" (UID: \"fe24691f-9019-44ec-85bf-b477c53f05ec\") " 
pod="openstack/nova-cell0-conductor-db-sync-6b6mv" Mar 12 13:34:07 crc kubenswrapper[4778]: I0312 13:34:07.447245 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe24691f-9019-44ec-85bf-b477c53f05ec-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-6b6mv\" (UID: \"fe24691f-9019-44ec-85bf-b477c53f05ec\") " pod="openstack/nova-cell0-conductor-db-sync-6b6mv" Mar 12 13:34:07 crc kubenswrapper[4778]: I0312 13:34:07.548705 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe24691f-9019-44ec-85bf-b477c53f05ec-scripts\") pod \"nova-cell0-conductor-db-sync-6b6mv\" (UID: \"fe24691f-9019-44ec-85bf-b477c53f05ec\") " pod="openstack/nova-cell0-conductor-db-sync-6b6mv" Mar 12 13:34:07 crc kubenswrapper[4778]: I0312 13:34:07.549730 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe24691f-9019-44ec-85bf-b477c53f05ec-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-6b6mv\" (UID: \"fe24691f-9019-44ec-85bf-b477c53f05ec\") " pod="openstack/nova-cell0-conductor-db-sync-6b6mv" Mar 12 13:34:07 crc kubenswrapper[4778]: I0312 13:34:07.549859 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q225s\" (UniqueName: \"kubernetes.io/projected/fe24691f-9019-44ec-85bf-b477c53f05ec-kube-api-access-q225s\") pod \"nova-cell0-conductor-db-sync-6b6mv\" (UID: \"fe24691f-9019-44ec-85bf-b477c53f05ec\") " pod="openstack/nova-cell0-conductor-db-sync-6b6mv" Mar 12 13:34:07 crc kubenswrapper[4778]: I0312 13:34:07.549943 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe24691f-9019-44ec-85bf-b477c53f05ec-config-data\") pod \"nova-cell0-conductor-db-sync-6b6mv\" (UID: 
\"fe24691f-9019-44ec-85bf-b477c53f05ec\") " pod="openstack/nova-cell0-conductor-db-sync-6b6mv" Mar 12 13:34:07 crc kubenswrapper[4778]: I0312 13:34:07.555725 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe24691f-9019-44ec-85bf-b477c53f05ec-config-data\") pod \"nova-cell0-conductor-db-sync-6b6mv\" (UID: \"fe24691f-9019-44ec-85bf-b477c53f05ec\") " pod="openstack/nova-cell0-conductor-db-sync-6b6mv" Mar 12 13:34:07 crc kubenswrapper[4778]: I0312 13:34:07.555738 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe24691f-9019-44ec-85bf-b477c53f05ec-scripts\") pod \"nova-cell0-conductor-db-sync-6b6mv\" (UID: \"fe24691f-9019-44ec-85bf-b477c53f05ec\") " pod="openstack/nova-cell0-conductor-db-sync-6b6mv" Mar 12 13:34:07 crc kubenswrapper[4778]: I0312 13:34:07.556736 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe24691f-9019-44ec-85bf-b477c53f05ec-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-6b6mv\" (UID: \"fe24691f-9019-44ec-85bf-b477c53f05ec\") " pod="openstack/nova-cell0-conductor-db-sync-6b6mv" Mar 12 13:34:07 crc kubenswrapper[4778]: I0312 13:34:07.571687 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q225s\" (UniqueName: \"kubernetes.io/projected/fe24691f-9019-44ec-85bf-b477c53f05ec-kube-api-access-q225s\") pod \"nova-cell0-conductor-db-sync-6b6mv\" (UID: \"fe24691f-9019-44ec-85bf-b477c53f05ec\") " pod="openstack/nova-cell0-conductor-db-sync-6b6mv" Mar 12 13:34:07 crc kubenswrapper[4778]: I0312 13:34:07.733716 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-6b6mv" Mar 12 13:34:07 crc kubenswrapper[4778]: I0312 13:34:07.823975 4778 generic.go:334] "Generic (PLEG): container finished" podID="0bf8c182-c9d5-4011-b28c-c4f557a8071c" containerID="e925c9c4c7aa08744211c517c124058aade623d45fd2e02df2777b4f2df794b2" exitCode=0 Mar 12 13:34:07 crc kubenswrapper[4778]: I0312 13:34:07.824027 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0bf8c182-c9d5-4011-b28c-c4f557a8071c","Type":"ContainerDied","Data":"e925c9c4c7aa08744211c517c124058aade623d45fd2e02df2777b4f2df794b2"} Mar 12 13:34:08 crc kubenswrapper[4778]: I0312 13:34:08.144490 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 13:34:08 crc kubenswrapper[4778]: I0312 13:34:08.160765 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0bf8c182-c9d5-4011-b28c-c4f557a8071c-log-httpd\") pod \"0bf8c182-c9d5-4011-b28c-c4f557a8071c\" (UID: \"0bf8c182-c9d5-4011-b28c-c4f557a8071c\") " Mar 12 13:34:08 crc kubenswrapper[4778]: I0312 13:34:08.160874 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bf8c182-c9d5-4011-b28c-c4f557a8071c-combined-ca-bundle\") pod \"0bf8c182-c9d5-4011-b28c-c4f557a8071c\" (UID: \"0bf8c182-c9d5-4011-b28c-c4f557a8071c\") " Mar 12 13:34:08 crc kubenswrapper[4778]: I0312 13:34:08.160938 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0bf8c182-c9d5-4011-b28c-c4f557a8071c-sg-core-conf-yaml\") pod \"0bf8c182-c9d5-4011-b28c-c4f557a8071c\" (UID: \"0bf8c182-c9d5-4011-b28c-c4f557a8071c\") " Mar 12 13:34:08 crc kubenswrapper[4778]: I0312 13:34:08.161004 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-nrpfb\" (UniqueName: \"kubernetes.io/projected/0bf8c182-c9d5-4011-b28c-c4f557a8071c-kube-api-access-nrpfb\") pod \"0bf8c182-c9d5-4011-b28c-c4f557a8071c\" (UID: \"0bf8c182-c9d5-4011-b28c-c4f557a8071c\") " Mar 12 13:34:08 crc kubenswrapper[4778]: I0312 13:34:08.161080 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bf8c182-c9d5-4011-b28c-c4f557a8071c-config-data\") pod \"0bf8c182-c9d5-4011-b28c-c4f557a8071c\" (UID: \"0bf8c182-c9d5-4011-b28c-c4f557a8071c\") " Mar 12 13:34:08 crc kubenswrapper[4778]: I0312 13:34:08.161157 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bf8c182-c9d5-4011-b28c-c4f557a8071c-scripts\") pod \"0bf8c182-c9d5-4011-b28c-c4f557a8071c\" (UID: \"0bf8c182-c9d5-4011-b28c-c4f557a8071c\") " Mar 12 13:34:08 crc kubenswrapper[4778]: I0312 13:34:08.161324 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0bf8c182-c9d5-4011-b28c-c4f557a8071c-run-httpd\") pod \"0bf8c182-c9d5-4011-b28c-c4f557a8071c\" (UID: \"0bf8c182-c9d5-4011-b28c-c4f557a8071c\") " Mar 12 13:34:08 crc kubenswrapper[4778]: I0312 13:34:08.161337 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bf8c182-c9d5-4011-b28c-c4f557a8071c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0bf8c182-c9d5-4011-b28c-c4f557a8071c" (UID: "0bf8c182-c9d5-4011-b28c-c4f557a8071c"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:34:08 crc kubenswrapper[4778]: I0312 13:34:08.161859 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bf8c182-c9d5-4011-b28c-c4f557a8071c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0bf8c182-c9d5-4011-b28c-c4f557a8071c" (UID: "0bf8c182-c9d5-4011-b28c-c4f557a8071c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:34:08 crc kubenswrapper[4778]: I0312 13:34:08.161881 4778 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0bf8c182-c9d5-4011-b28c-c4f557a8071c-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:08 crc kubenswrapper[4778]: I0312 13:34:08.168859 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bf8c182-c9d5-4011-b28c-c4f557a8071c-scripts" (OuterVolumeSpecName: "scripts") pod "0bf8c182-c9d5-4011-b28c-c4f557a8071c" (UID: "0bf8c182-c9d5-4011-b28c-c4f557a8071c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:34:08 crc kubenswrapper[4778]: I0312 13:34:08.180919 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bf8c182-c9d5-4011-b28c-c4f557a8071c-kube-api-access-nrpfb" (OuterVolumeSpecName: "kube-api-access-nrpfb") pod "0bf8c182-c9d5-4011-b28c-c4f557a8071c" (UID: "0bf8c182-c9d5-4011-b28c-c4f557a8071c"). InnerVolumeSpecName "kube-api-access-nrpfb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:34:08 crc kubenswrapper[4778]: I0312 13:34:08.204481 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bf8c182-c9d5-4011-b28c-c4f557a8071c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0bf8c182-c9d5-4011-b28c-c4f557a8071c" (UID: "0bf8c182-c9d5-4011-b28c-c4f557a8071c"). 
InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:34:08 crc kubenswrapper[4778]: I0312 13:34:08.256586 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bf8c182-c9d5-4011-b28c-c4f557a8071c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0bf8c182-c9d5-4011-b28c-c4f557a8071c" (UID: "0bf8c182-c9d5-4011-b28c-c4f557a8071c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:34:08 crc kubenswrapper[4778]: I0312 13:34:08.264375 4778 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0bf8c182-c9d5-4011-b28c-c4f557a8071c-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:08 crc kubenswrapper[4778]: I0312 13:34:08.264401 4778 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bf8c182-c9d5-4011-b28c-c4f557a8071c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:08 crc kubenswrapper[4778]: I0312 13:34:08.264411 4778 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0bf8c182-c9d5-4011-b28c-c4f557a8071c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:08 crc kubenswrapper[4778]: I0312 13:34:08.264437 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrpfb\" (UniqueName: \"kubernetes.io/projected/0bf8c182-c9d5-4011-b28c-c4f557a8071c-kube-api-access-nrpfb\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:08 crc kubenswrapper[4778]: I0312 13:34:08.264447 4778 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bf8c182-c9d5-4011-b28c-c4f557a8071c-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:08 crc kubenswrapper[4778]: I0312 13:34:08.300091 4778 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/secret/0bf8c182-c9d5-4011-b28c-c4f557a8071c-config-data" (OuterVolumeSpecName: "config-data") pod "0bf8c182-c9d5-4011-b28c-c4f557a8071c" (UID: "0bf8c182-c9d5-4011-b28c-c4f557a8071c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:34:08 crc kubenswrapper[4778]: W0312 13:34:08.304567 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe24691f_9019_44ec_85bf_b477c53f05ec.slice/crio-28ffeba46951880404aa9c0e4e9f8643f9909fdcbeeecb541b1919d958482b53 WatchSource:0}: Error finding container 28ffeba46951880404aa9c0e4e9f8643f9909fdcbeeecb541b1919d958482b53: Status 404 returned error can't find the container with id 28ffeba46951880404aa9c0e4e9f8643f9909fdcbeeecb541b1919d958482b53 Mar 12 13:34:08 crc kubenswrapper[4778]: I0312 13:34:08.366145 4778 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bf8c182-c9d5-4011-b28c-c4f557a8071c-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:08 crc kubenswrapper[4778]: I0312 13:34:08.380499 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-6b6mv"] Mar 12 13:34:08 crc kubenswrapper[4778]: I0312 13:34:08.380590 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 12 13:34:08 crc kubenswrapper[4778]: I0312 13:34:08.380659 4778 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 12 13:34:08 crc kubenswrapper[4778]: I0312 13:34:08.389552 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 12 13:34:08 crc kubenswrapper[4778]: I0312 13:34:08.839609 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"0bf8c182-c9d5-4011-b28c-c4f557a8071c","Type":"ContainerDied","Data":"0f3aa121caf2c1a6a7f5f32c4c791af4c518cf20357d26f2062f2e017c408468"} Mar 12 13:34:08 crc kubenswrapper[4778]: I0312 13:34:08.839697 4778 scope.go:117] "RemoveContainer" containerID="013c8995ca90639ba33078e18954ed308111d321639179db05cd00d19ef56702" Mar 12 13:34:08 crc kubenswrapper[4778]: I0312 13:34:08.840741 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 13:34:08 crc kubenswrapper[4778]: I0312 13:34:08.849949 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-6b6mv" event={"ID":"fe24691f-9019-44ec-85bf-b477c53f05ec","Type":"ContainerStarted","Data":"28ffeba46951880404aa9c0e4e9f8643f9909fdcbeeecb541b1919d958482b53"} Mar 12 13:34:08 crc kubenswrapper[4778]: I0312 13:34:08.864540 4778 scope.go:117] "RemoveContainer" containerID="86ce4b4705307dac2aa00c6fc4314d927b33960ed30dc0799f79715a9adfcdf9" Mar 12 13:34:08 crc kubenswrapper[4778]: I0312 13:34:08.891714 4778 scope.go:117] "RemoveContainer" containerID="8581110fc2e8206867368b8c4ae7af28cb79d5341dbf6b92ea91def7d2e28eb6" Mar 12 13:34:08 crc kubenswrapper[4778]: I0312 13:34:08.902301 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 13:34:08 crc kubenswrapper[4778]: I0312 13:34:08.915013 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 12 13:34:08 crc kubenswrapper[4778]: I0312 13:34:08.929905 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 12 13:34:08 crc kubenswrapper[4778]: I0312 13:34:08.930098 4778 scope.go:117] "RemoveContainer" containerID="e925c9c4c7aa08744211c517c124058aade623d45fd2e02df2777b4f2df794b2" Mar 12 13:34:08 crc kubenswrapper[4778]: E0312 13:34:08.930414 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bf8c182-c9d5-4011-b28c-c4f557a8071c" 
containerName="proxy-httpd" Mar 12 13:34:08 crc kubenswrapper[4778]: I0312 13:34:08.930436 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bf8c182-c9d5-4011-b28c-c4f557a8071c" containerName="proxy-httpd" Mar 12 13:34:08 crc kubenswrapper[4778]: E0312 13:34:08.930466 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bf8c182-c9d5-4011-b28c-c4f557a8071c" containerName="ceilometer-notification-agent" Mar 12 13:34:08 crc kubenswrapper[4778]: I0312 13:34:08.930476 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bf8c182-c9d5-4011-b28c-c4f557a8071c" containerName="ceilometer-notification-agent" Mar 12 13:34:08 crc kubenswrapper[4778]: E0312 13:34:08.930496 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bf8c182-c9d5-4011-b28c-c4f557a8071c" containerName="sg-core" Mar 12 13:34:08 crc kubenswrapper[4778]: I0312 13:34:08.930504 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bf8c182-c9d5-4011-b28c-c4f557a8071c" containerName="sg-core" Mar 12 13:34:08 crc kubenswrapper[4778]: E0312 13:34:08.930527 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bf8c182-c9d5-4011-b28c-c4f557a8071c" containerName="ceilometer-central-agent" Mar 12 13:34:08 crc kubenswrapper[4778]: I0312 13:34:08.930535 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bf8c182-c9d5-4011-b28c-c4f557a8071c" containerName="ceilometer-central-agent" Mar 12 13:34:08 crc kubenswrapper[4778]: I0312 13:34:08.930741 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bf8c182-c9d5-4011-b28c-c4f557a8071c" containerName="proxy-httpd" Mar 12 13:34:08 crc kubenswrapper[4778]: I0312 13:34:08.930757 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bf8c182-c9d5-4011-b28c-c4f557a8071c" containerName="ceilometer-notification-agent" Mar 12 13:34:08 crc kubenswrapper[4778]: I0312 13:34:08.930768 4778 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="0bf8c182-c9d5-4011-b28c-c4f557a8071c" containerName="ceilometer-central-agent" Mar 12 13:34:08 crc kubenswrapper[4778]: I0312 13:34:08.930790 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bf8c182-c9d5-4011-b28c-c4f557a8071c" containerName="sg-core" Mar 12 13:34:08 crc kubenswrapper[4778]: I0312 13:34:08.933443 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 13:34:08 crc kubenswrapper[4778]: I0312 13:34:08.936021 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 12 13:34:08 crc kubenswrapper[4778]: I0312 13:34:08.936180 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 12 13:34:08 crc kubenswrapper[4778]: I0312 13:34:08.942786 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 13:34:09 crc kubenswrapper[4778]: I0312 13:34:09.080663 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7a2b0f7-9321-4f29-aa01-0acbc528f757-scripts\") pod \"ceilometer-0\" (UID: \"f7a2b0f7-9321-4f29-aa01-0acbc528f757\") " pod="openstack/ceilometer-0" Mar 12 13:34:09 crc kubenswrapper[4778]: I0312 13:34:09.080718 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7a2b0f7-9321-4f29-aa01-0acbc528f757-log-httpd\") pod \"ceilometer-0\" (UID: \"f7a2b0f7-9321-4f29-aa01-0acbc528f757\") " pod="openstack/ceilometer-0" Mar 12 13:34:09 crc kubenswrapper[4778]: I0312 13:34:09.080750 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f7a2b0f7-9321-4f29-aa01-0acbc528f757-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"f7a2b0f7-9321-4f29-aa01-0acbc528f757\") " pod="openstack/ceilometer-0" Mar 12 13:34:09 crc kubenswrapper[4778]: I0312 13:34:09.080937 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft9qb\" (UniqueName: \"kubernetes.io/projected/f7a2b0f7-9321-4f29-aa01-0acbc528f757-kube-api-access-ft9qb\") pod \"ceilometer-0\" (UID: \"f7a2b0f7-9321-4f29-aa01-0acbc528f757\") " pod="openstack/ceilometer-0" Mar 12 13:34:09 crc kubenswrapper[4778]: I0312 13:34:09.081013 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7a2b0f7-9321-4f29-aa01-0acbc528f757-config-data\") pod \"ceilometer-0\" (UID: \"f7a2b0f7-9321-4f29-aa01-0acbc528f757\") " pod="openstack/ceilometer-0" Mar 12 13:34:09 crc kubenswrapper[4778]: I0312 13:34:09.081060 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7a2b0f7-9321-4f29-aa01-0acbc528f757-run-httpd\") pod \"ceilometer-0\" (UID: \"f7a2b0f7-9321-4f29-aa01-0acbc528f757\") " pod="openstack/ceilometer-0" Mar 12 13:34:09 crc kubenswrapper[4778]: I0312 13:34:09.081112 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7a2b0f7-9321-4f29-aa01-0acbc528f757-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f7a2b0f7-9321-4f29-aa01-0acbc528f757\") " pod="openstack/ceilometer-0" Mar 12 13:34:09 crc kubenswrapper[4778]: I0312 13:34:09.183065 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7a2b0f7-9321-4f29-aa01-0acbc528f757-config-data\") pod \"ceilometer-0\" (UID: \"f7a2b0f7-9321-4f29-aa01-0acbc528f757\") " pod="openstack/ceilometer-0" Mar 12 13:34:09 crc kubenswrapper[4778]: I0312 13:34:09.183134 4778 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7a2b0f7-9321-4f29-aa01-0acbc528f757-run-httpd\") pod \"ceilometer-0\" (UID: \"f7a2b0f7-9321-4f29-aa01-0acbc528f757\") " pod="openstack/ceilometer-0" Mar 12 13:34:09 crc kubenswrapper[4778]: I0312 13:34:09.183178 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7a2b0f7-9321-4f29-aa01-0acbc528f757-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f7a2b0f7-9321-4f29-aa01-0acbc528f757\") " pod="openstack/ceilometer-0" Mar 12 13:34:09 crc kubenswrapper[4778]: I0312 13:34:09.183774 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7a2b0f7-9321-4f29-aa01-0acbc528f757-run-httpd\") pod \"ceilometer-0\" (UID: \"f7a2b0f7-9321-4f29-aa01-0acbc528f757\") " pod="openstack/ceilometer-0" Mar 12 13:34:09 crc kubenswrapper[4778]: I0312 13:34:09.183864 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7a2b0f7-9321-4f29-aa01-0acbc528f757-scripts\") pod \"ceilometer-0\" (UID: \"f7a2b0f7-9321-4f29-aa01-0acbc528f757\") " pod="openstack/ceilometer-0" Mar 12 13:34:09 crc kubenswrapper[4778]: I0312 13:34:09.184355 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7a2b0f7-9321-4f29-aa01-0acbc528f757-log-httpd\") pod \"ceilometer-0\" (UID: \"f7a2b0f7-9321-4f29-aa01-0acbc528f757\") " pod="openstack/ceilometer-0" Mar 12 13:34:09 crc kubenswrapper[4778]: I0312 13:34:09.184891 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7a2b0f7-9321-4f29-aa01-0acbc528f757-log-httpd\") pod \"ceilometer-0\" (UID: \"f7a2b0f7-9321-4f29-aa01-0acbc528f757\") " 
pod="openstack/ceilometer-0" Mar 12 13:34:09 crc kubenswrapper[4778]: I0312 13:34:09.184981 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f7a2b0f7-9321-4f29-aa01-0acbc528f757-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f7a2b0f7-9321-4f29-aa01-0acbc528f757\") " pod="openstack/ceilometer-0" Mar 12 13:34:09 crc kubenswrapper[4778]: I0312 13:34:09.187243 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ft9qb\" (UniqueName: \"kubernetes.io/projected/f7a2b0f7-9321-4f29-aa01-0acbc528f757-kube-api-access-ft9qb\") pod \"ceilometer-0\" (UID: \"f7a2b0f7-9321-4f29-aa01-0acbc528f757\") " pod="openstack/ceilometer-0" Mar 12 13:34:09 crc kubenswrapper[4778]: I0312 13:34:09.191231 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7a2b0f7-9321-4f29-aa01-0acbc528f757-config-data\") pod \"ceilometer-0\" (UID: \"f7a2b0f7-9321-4f29-aa01-0acbc528f757\") " pod="openstack/ceilometer-0" Mar 12 13:34:09 crc kubenswrapper[4778]: I0312 13:34:09.191876 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7a2b0f7-9321-4f29-aa01-0acbc528f757-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f7a2b0f7-9321-4f29-aa01-0acbc528f757\") " pod="openstack/ceilometer-0" Mar 12 13:34:09 crc kubenswrapper[4778]: I0312 13:34:09.192577 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f7a2b0f7-9321-4f29-aa01-0acbc528f757-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f7a2b0f7-9321-4f29-aa01-0acbc528f757\") " pod="openstack/ceilometer-0" Mar 12 13:34:09 crc kubenswrapper[4778]: I0312 13:34:09.193005 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/f7a2b0f7-9321-4f29-aa01-0acbc528f757-scripts\") pod \"ceilometer-0\" (UID: \"f7a2b0f7-9321-4f29-aa01-0acbc528f757\") " pod="openstack/ceilometer-0" Mar 12 13:34:09 crc kubenswrapper[4778]: I0312 13:34:09.207827 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft9qb\" (UniqueName: \"kubernetes.io/projected/f7a2b0f7-9321-4f29-aa01-0acbc528f757-kube-api-access-ft9qb\") pod \"ceilometer-0\" (UID: \"f7a2b0f7-9321-4f29-aa01-0acbc528f757\") " pod="openstack/ceilometer-0" Mar 12 13:34:09 crc kubenswrapper[4778]: I0312 13:34:09.256922 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 13:34:09 crc kubenswrapper[4778]: I0312 13:34:09.708628 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 13:34:09 crc kubenswrapper[4778]: I0312 13:34:09.862365 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7a2b0f7-9321-4f29-aa01-0acbc528f757","Type":"ContainerStarted","Data":"b0892c0b95fcd75a23c8ced61d0a214155429e8763fde0aacd038b1f9445ab5d"} Mar 12 13:34:10 crc kubenswrapper[4778]: I0312 13:34:10.266167 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bf8c182-c9d5-4011-b28c-c4f557a8071c" path="/var/lib/kubelet/pods/0bf8c182-c9d5-4011-b28c-c4f557a8071c/volumes" Mar 12 13:34:10 crc kubenswrapper[4778]: I0312 13:34:10.877669 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7a2b0f7-9321-4f29-aa01-0acbc528f757","Type":"ContainerStarted","Data":"07b0c82b03265af2d0340c98ef0951f73004b6ed91b6e9f4e0518b57f5492a67"} Mar 12 13:34:11 crc kubenswrapper[4778]: I0312 13:34:11.888697 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7a2b0f7-9321-4f29-aa01-0acbc528f757","Type":"ContainerStarted","Data":"5a062ded3dfdf7e0b93cfe2d1cae5ba57a787eba6993d1798acc3431826d3e6d"} 
Mar 12 13:34:13 crc kubenswrapper[4778]: I0312 13:34:13.235049 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Mar 12 13:34:13 crc kubenswrapper[4778]: I0312 13:34:13.236625 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Mar 12 13:34:13 crc kubenswrapper[4778]: I0312 13:34:13.281577 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Mar 12 13:34:13 crc kubenswrapper[4778]: I0312 13:34:13.306545 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Mar 12 13:34:13 crc kubenswrapper[4778]: I0312 13:34:13.909150 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Mar 12 13:34:13 crc kubenswrapper[4778]: I0312 13:34:13.909210 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Mar 12 13:34:15 crc kubenswrapper[4778]: I0312 13:34:15.845264 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Mar 12 13:34:15 crc kubenswrapper[4778]: I0312 13:34:15.847854 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Mar 12 13:34:16 crc kubenswrapper[4778]: I0312 13:34:16.938114 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7a2b0f7-9321-4f29-aa01-0acbc528f757","Type":"ContainerStarted","Data":"90e25ee50c06dc750c5cd92c70ddda8f57bcd2a0439070e9a3f541bb4ee1e11c"}
Mar 12 13:34:16 crc kubenswrapper[4778]: I0312 13:34:16.941162 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-6b6mv" event={"ID":"fe24691f-9019-44ec-85bf-b477c53f05ec","Type":"ContainerStarted","Data":"e3b15e2b52f4e1dd648d1cbcdd4c757ead8e48ae1ed5c998744e64dfa8993e67"}
Mar 12 13:34:16 crc kubenswrapper[4778]: I0312 13:34:16.971247 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-6b6mv" podStartSLOduration=2.26765603 podStartE2EDuration="9.971226618s" podCreationTimestamp="2026-03-12 13:34:07 +0000 UTC" firstStartedPulling="2026-03-12 13:34:08.308093222 +0000 UTC m=+1466.756788618" lastFinishedPulling="2026-03-12 13:34:16.01166381 +0000 UTC m=+1474.460359206" observedRunningTime="2026-03-12 13:34:16.969641053 +0000 UTC m=+1475.418336449" watchObservedRunningTime="2026-03-12 13:34:16.971226618 +0000 UTC m=+1475.419922014"
Mar 12 13:34:18 crc kubenswrapper[4778]: I0312 13:34:18.962488 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7a2b0f7-9321-4f29-aa01-0acbc528f757","Type":"ContainerStarted","Data":"8982aa7ca4022874e570ac6c59742be94301e2efa10c45e382ecf26ed4330ecf"}
Mar 12 13:34:18 crc kubenswrapper[4778]: I0312 13:34:18.962930 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Mar 12 13:34:18 crc kubenswrapper[4778]: I0312 13:34:18.995326 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.206344647 podStartE2EDuration="10.994162761s" podCreationTimestamp="2026-03-12 13:34:08 +0000 UTC" firstStartedPulling="2026-03-12 13:34:09.726357941 +0000 UTC m=+1468.175053337" lastFinishedPulling="2026-03-12 13:34:18.514176055 +0000 UTC m=+1476.962871451" observedRunningTime="2026-03-12 13:34:18.983852428 +0000 UTC m=+1477.432547824" watchObservedRunningTime="2026-03-12 13:34:18.994162761 +0000 UTC m=+1477.442858157"
Mar 12 13:34:29 crc kubenswrapper[4778]: I0312 13:34:29.070671 4778 generic.go:334] "Generic (PLEG): container finished" podID="fe24691f-9019-44ec-85bf-b477c53f05ec" containerID="e3b15e2b52f4e1dd648d1cbcdd4c757ead8e48ae1ed5c998744e64dfa8993e67" exitCode=0
Mar 12 13:34:29 crc kubenswrapper[4778]: I0312 13:34:29.071438 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-6b6mv" event={"ID":"fe24691f-9019-44ec-85bf-b477c53f05ec","Type":"ContainerDied","Data":"e3b15e2b52f4e1dd648d1cbcdd4c757ead8e48ae1ed5c998744e64dfa8993e67"}
Mar 12 13:34:30 crc kubenswrapper[4778]: I0312 13:34:30.416036 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-6b6mv"
Mar 12 13:34:30 crc kubenswrapper[4778]: I0312 13:34:30.526751 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q225s\" (UniqueName: \"kubernetes.io/projected/fe24691f-9019-44ec-85bf-b477c53f05ec-kube-api-access-q225s\") pod \"fe24691f-9019-44ec-85bf-b477c53f05ec\" (UID: \"fe24691f-9019-44ec-85bf-b477c53f05ec\") "
Mar 12 13:34:30 crc kubenswrapper[4778]: I0312 13:34:30.526952 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe24691f-9019-44ec-85bf-b477c53f05ec-combined-ca-bundle\") pod \"fe24691f-9019-44ec-85bf-b477c53f05ec\" (UID: \"fe24691f-9019-44ec-85bf-b477c53f05ec\") "
Mar 12 13:34:30 crc kubenswrapper[4778]: I0312 13:34:30.527009 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe24691f-9019-44ec-85bf-b477c53f05ec-scripts\") pod \"fe24691f-9019-44ec-85bf-b477c53f05ec\" (UID: \"fe24691f-9019-44ec-85bf-b477c53f05ec\") "
Mar 12 13:34:30 crc kubenswrapper[4778]: I0312 13:34:30.527096 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe24691f-9019-44ec-85bf-b477c53f05ec-config-data\") pod \"fe24691f-9019-44ec-85bf-b477c53f05ec\" (UID: \"fe24691f-9019-44ec-85bf-b477c53f05ec\") "
Mar 12 13:34:30 crc kubenswrapper[4778]: I0312 13:34:30.536946 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe24691f-9019-44ec-85bf-b477c53f05ec-scripts" (OuterVolumeSpecName: "scripts") pod "fe24691f-9019-44ec-85bf-b477c53f05ec" (UID: "fe24691f-9019-44ec-85bf-b477c53f05ec"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 13:34:30 crc kubenswrapper[4778]: I0312 13:34:30.537061 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe24691f-9019-44ec-85bf-b477c53f05ec-kube-api-access-q225s" (OuterVolumeSpecName: "kube-api-access-q225s") pod "fe24691f-9019-44ec-85bf-b477c53f05ec" (UID: "fe24691f-9019-44ec-85bf-b477c53f05ec"). InnerVolumeSpecName "kube-api-access-q225s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 13:34:30 crc kubenswrapper[4778]: I0312 13:34:30.554750 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe24691f-9019-44ec-85bf-b477c53f05ec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe24691f-9019-44ec-85bf-b477c53f05ec" (UID: "fe24691f-9019-44ec-85bf-b477c53f05ec"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 13:34:30 crc kubenswrapper[4778]: I0312 13:34:30.555924 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe24691f-9019-44ec-85bf-b477c53f05ec-config-data" (OuterVolumeSpecName: "config-data") pod "fe24691f-9019-44ec-85bf-b477c53f05ec" (UID: "fe24691f-9019-44ec-85bf-b477c53f05ec"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 13:34:30 crc kubenswrapper[4778]: I0312 13:34:30.632785 4778 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe24691f-9019-44ec-85bf-b477c53f05ec-config-data\") on node \"crc\" DevicePath \"\""
Mar 12 13:34:30 crc kubenswrapper[4778]: I0312 13:34:30.632835 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q225s\" (UniqueName: \"kubernetes.io/projected/fe24691f-9019-44ec-85bf-b477c53f05ec-kube-api-access-q225s\") on node \"crc\" DevicePath \"\""
Mar 12 13:34:30 crc kubenswrapper[4778]: I0312 13:34:30.632851 4778 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe24691f-9019-44ec-85bf-b477c53f05ec-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 12 13:34:30 crc kubenswrapper[4778]: I0312 13:34:30.632862 4778 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe24691f-9019-44ec-85bf-b477c53f05ec-scripts\") on node \"crc\" DevicePath \"\""
Mar 12 13:34:31 crc kubenswrapper[4778]: I0312 13:34:31.091580 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-6b6mv" event={"ID":"fe24691f-9019-44ec-85bf-b477c53f05ec","Type":"ContainerDied","Data":"28ffeba46951880404aa9c0e4e9f8643f9909fdcbeeecb541b1919d958482b53"}
Mar 12 13:34:31 crc kubenswrapper[4778]: I0312 13:34:31.091627 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28ffeba46951880404aa9c0e4e9f8643f9909fdcbeeecb541b1919d958482b53"
Mar 12 13:34:31 crc kubenswrapper[4778]: I0312 13:34:31.091712 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-6b6mv"
Mar 12 13:34:31 crc kubenswrapper[4778]: I0312 13:34:31.195598 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Mar 12 13:34:31 crc kubenswrapper[4778]: E0312 13:34:31.196157 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe24691f-9019-44ec-85bf-b477c53f05ec" containerName="nova-cell0-conductor-db-sync"
Mar 12 13:34:31 crc kubenswrapper[4778]: I0312 13:34:31.196204 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe24691f-9019-44ec-85bf-b477c53f05ec" containerName="nova-cell0-conductor-db-sync"
Mar 12 13:34:31 crc kubenswrapper[4778]: I0312 13:34:31.196541 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe24691f-9019-44ec-85bf-b477c53f05ec" containerName="nova-cell0-conductor-db-sync"
Mar 12 13:34:31 crc kubenswrapper[4778]: I0312 13:34:31.197385 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Mar 12 13:34:31 crc kubenswrapper[4778]: I0312 13:34:31.200066 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-bjjj5"
Mar 12 13:34:31 crc kubenswrapper[4778]: I0312 13:34:31.200326 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Mar 12 13:34:31 crc kubenswrapper[4778]: I0312 13:34:31.207936 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Mar 12 13:34:31 crc kubenswrapper[4778]: I0312 13:34:31.349448 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7733a48b-2bc4-4372-a222-37bb8ea04b6d-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7733a48b-2bc4-4372-a222-37bb8ea04b6d\") " pod="openstack/nova-cell0-conductor-0"
Mar 12 13:34:31 crc kubenswrapper[4778]: I0312 13:34:31.351785 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7733a48b-2bc4-4372-a222-37bb8ea04b6d-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7733a48b-2bc4-4372-a222-37bb8ea04b6d\") " pod="openstack/nova-cell0-conductor-0"
Mar 12 13:34:31 crc kubenswrapper[4778]: I0312 13:34:31.352879 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xht95\" (UniqueName: \"kubernetes.io/projected/7733a48b-2bc4-4372-a222-37bb8ea04b6d-kube-api-access-xht95\") pod \"nova-cell0-conductor-0\" (UID: \"7733a48b-2bc4-4372-a222-37bb8ea04b6d\") " pod="openstack/nova-cell0-conductor-0"
Mar 12 13:34:31 crc kubenswrapper[4778]: I0312 13:34:31.457797 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xht95\" (UniqueName: \"kubernetes.io/projected/7733a48b-2bc4-4372-a222-37bb8ea04b6d-kube-api-access-xht95\") pod \"nova-cell0-conductor-0\" (UID: \"7733a48b-2bc4-4372-a222-37bb8ea04b6d\") " pod="openstack/nova-cell0-conductor-0"
Mar 12 13:34:31 crc kubenswrapper[4778]: I0312 13:34:31.457876 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7733a48b-2bc4-4372-a222-37bb8ea04b6d-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7733a48b-2bc4-4372-a222-37bb8ea04b6d\") " pod="openstack/nova-cell0-conductor-0"
Mar 12 13:34:31 crc kubenswrapper[4778]: I0312 13:34:31.458006 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7733a48b-2bc4-4372-a222-37bb8ea04b6d-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7733a48b-2bc4-4372-a222-37bb8ea04b6d\") " pod="openstack/nova-cell0-conductor-0"
Mar 12 13:34:31 crc kubenswrapper[4778]: I0312 13:34:31.469475 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7733a48b-2bc4-4372-a222-37bb8ea04b6d-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7733a48b-2bc4-4372-a222-37bb8ea04b6d\") " pod="openstack/nova-cell0-conductor-0"
Mar 12 13:34:31 crc kubenswrapper[4778]: I0312 13:34:31.469832 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7733a48b-2bc4-4372-a222-37bb8ea04b6d-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7733a48b-2bc4-4372-a222-37bb8ea04b6d\") " pod="openstack/nova-cell0-conductor-0"
Mar 12 13:34:31 crc kubenswrapper[4778]: I0312 13:34:31.474828 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xht95\" (UniqueName: \"kubernetes.io/projected/7733a48b-2bc4-4372-a222-37bb8ea04b6d-kube-api-access-xht95\") pod \"nova-cell0-conductor-0\" (UID: \"7733a48b-2bc4-4372-a222-37bb8ea04b6d\") " pod="openstack/nova-cell0-conductor-0"
Mar 12 13:34:31 crc kubenswrapper[4778]: I0312 13:34:31.516165 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Mar 12 13:34:32 crc kubenswrapper[4778]: I0312 13:34:32.021475 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Mar 12 13:34:32 crc kubenswrapper[4778]: I0312 13:34:32.101327 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7733a48b-2bc4-4372-a222-37bb8ea04b6d","Type":"ContainerStarted","Data":"be21932167b4499354351c16537055f7625655a6cd039664d1dd7fad790b8909"}
Mar 12 13:34:33 crc kubenswrapper[4778]: I0312 13:34:33.120039 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7733a48b-2bc4-4372-a222-37bb8ea04b6d","Type":"ContainerStarted","Data":"7873b03bdc080777c3f95848a3cb2368217a2ebb6bed5cf0ae4dec3d3c66d731"}
Mar 12 13:34:33 crc kubenswrapper[4778]: I0312 13:34:33.120518 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Mar 12 13:34:33 crc kubenswrapper[4778]: I0312 13:34:33.151384 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.151361701 podStartE2EDuration="2.151361701s" podCreationTimestamp="2026-03-12 13:34:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:34:33.144750353 +0000 UTC m=+1491.593445769" watchObservedRunningTime="2026-03-12 13:34:33.151361701 +0000 UTC m=+1491.600057097"
Mar 12 13:34:33 crc kubenswrapper[4778]: I0312 13:34:33.312532 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jfzqk"]
Mar 12 13:34:33 crc kubenswrapper[4778]: I0312 13:34:33.314444 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jfzqk"
Mar 12 13:34:33 crc kubenswrapper[4778]: I0312 13:34:33.323591 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jfzqk"]
Mar 12 13:34:33 crc kubenswrapper[4778]: I0312 13:34:33.401723 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b7d48c4-04cd-481a-976d-19e57a28a1d9-utilities\") pod \"redhat-operators-jfzqk\" (UID: \"1b7d48c4-04cd-481a-976d-19e57a28a1d9\") " pod="openshift-marketplace/redhat-operators-jfzqk"
Mar 12 13:34:33 crc kubenswrapper[4778]: I0312 13:34:33.401786 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b7d48c4-04cd-481a-976d-19e57a28a1d9-catalog-content\") pod \"redhat-operators-jfzqk\" (UID: \"1b7d48c4-04cd-481a-976d-19e57a28a1d9\") " pod="openshift-marketplace/redhat-operators-jfzqk"
Mar 12 13:34:33 crc kubenswrapper[4778]: I0312 13:34:33.402340 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frrb7\" (UniqueName: \"kubernetes.io/projected/1b7d48c4-04cd-481a-976d-19e57a28a1d9-kube-api-access-frrb7\") pod \"redhat-operators-jfzqk\" (UID: \"1b7d48c4-04cd-481a-976d-19e57a28a1d9\") " pod="openshift-marketplace/redhat-operators-jfzqk"
Mar 12 13:34:33 crc kubenswrapper[4778]: I0312 13:34:33.505147 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b7d48c4-04cd-481a-976d-19e57a28a1d9-utilities\") pod \"redhat-operators-jfzqk\" (UID: \"1b7d48c4-04cd-481a-976d-19e57a28a1d9\") " pod="openshift-marketplace/redhat-operators-jfzqk"
Mar 12 13:34:33 crc kubenswrapper[4778]: I0312 13:34:33.505212 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b7d48c4-04cd-481a-976d-19e57a28a1d9-catalog-content\") pod \"redhat-operators-jfzqk\" (UID: \"1b7d48c4-04cd-481a-976d-19e57a28a1d9\") " pod="openshift-marketplace/redhat-operators-jfzqk"
Mar 12 13:34:33 crc kubenswrapper[4778]: I0312 13:34:33.505297 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frrb7\" (UniqueName: \"kubernetes.io/projected/1b7d48c4-04cd-481a-976d-19e57a28a1d9-kube-api-access-frrb7\") pod \"redhat-operators-jfzqk\" (UID: \"1b7d48c4-04cd-481a-976d-19e57a28a1d9\") " pod="openshift-marketplace/redhat-operators-jfzqk"
Mar 12 13:34:33 crc kubenswrapper[4778]: I0312 13:34:33.505803 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b7d48c4-04cd-481a-976d-19e57a28a1d9-utilities\") pod \"redhat-operators-jfzqk\" (UID: \"1b7d48c4-04cd-481a-976d-19e57a28a1d9\") " pod="openshift-marketplace/redhat-operators-jfzqk"
Mar 12 13:34:33 crc kubenswrapper[4778]: I0312 13:34:33.505832 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b7d48c4-04cd-481a-976d-19e57a28a1d9-catalog-content\") pod \"redhat-operators-jfzqk\" (UID: \"1b7d48c4-04cd-481a-976d-19e57a28a1d9\") " pod="openshift-marketplace/redhat-operators-jfzqk"
Mar 12 13:34:33 crc kubenswrapper[4778]: I0312 13:34:33.525340 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frrb7\" (UniqueName: \"kubernetes.io/projected/1b7d48c4-04cd-481a-976d-19e57a28a1d9-kube-api-access-frrb7\") pod \"redhat-operators-jfzqk\" (UID: \"1b7d48c4-04cd-481a-976d-19e57a28a1d9\") " pod="openshift-marketplace/redhat-operators-jfzqk"
Mar 12 13:34:33 crc kubenswrapper[4778]: I0312 13:34:33.642005 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jfzqk"
Mar 12 13:34:34 crc kubenswrapper[4778]: I0312 13:34:34.156016 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jfzqk"]
Mar 12 13:34:34 crc kubenswrapper[4778]: W0312 13:34:34.161855 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b7d48c4_04cd_481a_976d_19e57a28a1d9.slice/crio-68397e437c2fb3791ad659ab6abc466e5cb77e5b97a5ba4bc1bb524e525fb6c3 WatchSource:0}: Error finding container 68397e437c2fb3791ad659ab6abc466e5cb77e5b97a5ba4bc1bb524e525fb6c3: Status 404 returned error can't find the container with id 68397e437c2fb3791ad659ab6abc466e5cb77e5b97a5ba4bc1bb524e525fb6c3
Mar 12 13:34:35 crc kubenswrapper[4778]: I0312 13:34:35.139839 4778 generic.go:334] "Generic (PLEG): container finished" podID="1b7d48c4-04cd-481a-976d-19e57a28a1d9" containerID="1ca532aa466af7c68cb8aa187e7cf3ea161e9610dcf97d902b18dad6b9250f81" exitCode=0
Mar 12 13:34:35 crc kubenswrapper[4778]: I0312 13:34:35.140562 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jfzqk" event={"ID":"1b7d48c4-04cd-481a-976d-19e57a28a1d9","Type":"ContainerDied","Data":"1ca532aa466af7c68cb8aa187e7cf3ea161e9610dcf97d902b18dad6b9250f81"}
Mar 12 13:34:35 crc kubenswrapper[4778]: I0312 13:34:35.140594 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jfzqk" event={"ID":"1b7d48c4-04cd-481a-976d-19e57a28a1d9","Type":"ContainerStarted","Data":"68397e437c2fb3791ad659ab6abc466e5cb77e5b97a5ba4bc1bb524e525fb6c3"}
Mar 12 13:34:37 crc kubenswrapper[4778]: I0312 13:34:37.168522 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jfzqk" event={"ID":"1b7d48c4-04cd-481a-976d-19e57a28a1d9","Type":"ContainerStarted","Data":"c66167331bd74d9b577eb48b304f2f99e28d6904a5ae9cd088d4f17df80842e1"}
Mar 12 13:34:38 crc kubenswrapper[4778]: I0312 13:34:38.177580 4778 generic.go:334] "Generic (PLEG): container finished" podID="1b7d48c4-04cd-481a-976d-19e57a28a1d9" containerID="c66167331bd74d9b577eb48b304f2f99e28d6904a5ae9cd088d4f17df80842e1" exitCode=0
Mar 12 13:34:38 crc kubenswrapper[4778]: I0312 13:34:38.177682 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jfzqk" event={"ID":"1b7d48c4-04cd-481a-976d-19e57a28a1d9","Type":"ContainerDied","Data":"c66167331bd74d9b577eb48b304f2f99e28d6904a5ae9cd088d4f17df80842e1"}
Mar 12 13:34:39 crc kubenswrapper[4778]: I0312 13:34:39.263503 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Mar 12 13:34:40 crc kubenswrapper[4778]: I0312 13:34:40.198768 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jfzqk" event={"ID":"1b7d48c4-04cd-481a-976d-19e57a28a1d9","Type":"ContainerStarted","Data":"8dce37445b314b16965ae024d78bbfd9bf5998d5da6305572acf12733671bc3d"}
Mar 12 13:34:40 crc kubenswrapper[4778]: I0312 13:34:40.223984 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jfzqk" podStartSLOduration=3.024149769 podStartE2EDuration="7.22395417s" podCreationTimestamp="2026-03-12 13:34:33 +0000 UTC" firstStartedPulling="2026-03-12 13:34:35.142746347 +0000 UTC m=+1493.591441743" lastFinishedPulling="2026-03-12 13:34:39.342550748 +0000 UTC m=+1497.791246144" observedRunningTime="2026-03-12 13:34:40.220747999 +0000 UTC m=+1498.669443405" watchObservedRunningTime="2026-03-12 13:34:40.22395417 +0000 UTC m=+1498.672649596"
Mar 12 13:34:41 crc kubenswrapper[4778]: I0312 13:34:41.562554 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.066781 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-qqx6r"]
Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.068749 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qqx6r"
Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.072864 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.074654 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.098868 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-qqx6r"]
Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.190778 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpk2x\" (UniqueName: \"kubernetes.io/projected/98a74774-1415-43d1-b278-bead87ab4385-kube-api-access-zpk2x\") pod \"nova-cell0-cell-mapping-qqx6r\" (UID: \"98a74774-1415-43d1-b278-bead87ab4385\") " pod="openstack/nova-cell0-cell-mapping-qqx6r"
Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.190861 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98a74774-1415-43d1-b278-bead87ab4385-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-qqx6r\" (UID: \"98a74774-1415-43d1-b278-bead87ab4385\") " pod="openstack/nova-cell0-cell-mapping-qqx6r"
Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.192144 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98a74774-1415-43d1-b278-bead87ab4385-config-data\") pod \"nova-cell0-cell-mapping-qqx6r\" (UID: \"98a74774-1415-43d1-b278-bead87ab4385\") " pod="openstack/nova-cell0-cell-mapping-qqx6r"
Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.192205 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98a74774-1415-43d1-b278-bead87ab4385-scripts\") pod \"nova-cell0-cell-mapping-qqx6r\" (UID: \"98a74774-1415-43d1-b278-bead87ab4385\") " pod="openstack/nova-cell0-cell-mapping-qqx6r"
Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.326333 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpk2x\" (UniqueName: \"kubernetes.io/projected/98a74774-1415-43d1-b278-bead87ab4385-kube-api-access-zpk2x\") pod \"nova-cell0-cell-mapping-qqx6r\" (UID: \"98a74774-1415-43d1-b278-bead87ab4385\") " pod="openstack/nova-cell0-cell-mapping-qqx6r"
Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.326670 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98a74774-1415-43d1-b278-bead87ab4385-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-qqx6r\" (UID: \"98a74774-1415-43d1-b278-bead87ab4385\") " pod="openstack/nova-cell0-cell-mapping-qqx6r"
Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.327207 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98a74774-1415-43d1-b278-bead87ab4385-config-data\") pod \"nova-cell0-cell-mapping-qqx6r\" (UID: \"98a74774-1415-43d1-b278-bead87ab4385\") " pod="openstack/nova-cell0-cell-mapping-qqx6r"
Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.327253 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98a74774-1415-43d1-b278-bead87ab4385-scripts\") pod \"nova-cell0-cell-mapping-qqx6r\" (UID: \"98a74774-1415-43d1-b278-bead87ab4385\") " pod="openstack/nova-cell0-cell-mapping-qqx6r"
Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.344883 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.345101 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.386356 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98a74774-1415-43d1-b278-bead87ab4385-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-qqx6r\" (UID: \"98a74774-1415-43d1-b278-bead87ab4385\") " pod="openstack/nova-cell0-cell-mapping-qqx6r"
Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.397835 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98a74774-1415-43d1-b278-bead87ab4385-config-data\") pod \"nova-cell0-cell-mapping-qqx6r\" (UID: \"98a74774-1415-43d1-b278-bead87ab4385\") " pod="openstack/nova-cell0-cell-mapping-qqx6r"
Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.400053 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpk2x\" (UniqueName: \"kubernetes.io/projected/98a74774-1415-43d1-b278-bead87ab4385-kube-api-access-zpk2x\") pod \"nova-cell0-cell-mapping-qqx6r\" (UID: \"98a74774-1415-43d1-b278-bead87ab4385\") " pod="openstack/nova-cell0-cell-mapping-qqx6r"
Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.410413 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98a74774-1415-43d1-b278-bead87ab4385-scripts\") pod \"nova-cell0-cell-mapping-qqx6r\" (UID: \"98a74774-1415-43d1-b278-bead87ab4385\") " pod="openstack/nova-cell0-cell-mapping-qqx6r"
Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.412999 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qqx6r"
Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.487471 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.488640 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.508597 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.517039 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.557550 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.558749 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.565683 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.577822 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.617319 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.619306 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.628564 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.639732 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.669488 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65ad5500-0148-42d4-a597-53e265081516-config-data\") pod \"nova-scheduler-0\" (UID: \"65ad5500-0148-42d4-a597-53e265081516\") " pod="openstack/nova-scheduler-0"
Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.669589 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv5rr\" (UniqueName: \"kubernetes.io/projected/d42d33e8-c530-4272-90a4-f0ef9b061927-kube-api-access-hv5rr\") pod \"nova-cell1-novncproxy-0\" (UID: \"d42d33e8-c530-4272-90a4-f0ef9b061927\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.669622 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps6cq\" (UniqueName: \"kubernetes.io/projected/65ad5500-0148-42d4-a597-53e265081516-kube-api-access-ps6cq\") pod \"nova-scheduler-0\" (UID: \"65ad5500-0148-42d4-a597-53e265081516\") " pod="openstack/nova-scheduler-0"
Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.669639 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d42d33e8-c530-4272-90a4-f0ef9b061927-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d42d33e8-c530-4272-90a4-f0ef9b061927\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.669737 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d42d33e8-c530-4272-90a4-f0ef9b061927-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d42d33e8-c530-4272-90a4-f0ef9b061927\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.669762 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65ad5500-0148-42d4-a597-53e265081516-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"65ad5500-0148-42d4-a597-53e265081516\") " pod="openstack/nova-scheduler-0"
Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.769218 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.771267 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.779645 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.783283 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d42d33e8-c530-4272-90a4-f0ef9b061927-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d42d33e8-c530-4272-90a4-f0ef9b061927\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.783337 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1-logs\") pod \"nova-api-0\" (UID: \"4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1\") " pod="openstack/nova-api-0"
Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.783363 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65ad5500-0148-42d4-a597-53e265081516-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"65ad5500-0148-42d4-a597-53e265081516\") " pod="openstack/nova-scheduler-0"
Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.783407 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65ad5500-0148-42d4-a597-53e265081516-config-data\") pod \"nova-scheduler-0\" (UID: \"65ad5500-0148-42d4-a597-53e265081516\") " pod="openstack/nova-scheduler-0"
Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.783437 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsj8q\" (UniqueName: \"kubernetes.io/projected/4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1-kube-api-access-qsj8q\") pod \"nova-api-0\" (UID: \"4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1\") " pod="openstack/nova-api-0"
Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.783473 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hv5rr\" (UniqueName: \"kubernetes.io/projected/d42d33e8-c530-4272-90a4-f0ef9b061927-kube-api-access-hv5rr\") pod \"nova-cell1-novncproxy-0\" (UID: \"d42d33e8-c530-4272-90a4-f0ef9b061927\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.783495 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1\") " pod="openstack/nova-api-0"
Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.783516 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ps6cq\" (UniqueName: \"kubernetes.io/projected/65ad5500-0148-42d4-a597-53e265081516-kube-api-access-ps6cq\") pod \"nova-scheduler-0\" (UID: \"65ad5500-0148-42d4-a597-53e265081516\") " pod="openstack/nova-scheduler-0"
Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.783535 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d42d33e8-c530-4272-90a4-f0ef9b061927-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d42d33e8-c530-4272-90a4-f0ef9b061927\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.783551 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1-config-data\") pod \"nova-api-0\" (UID: \"4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1\") " pod="openstack/nova-api-0"
Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.802023 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d42d33e8-c530-4272-90a4-f0ef9b061927-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d42d33e8-c530-4272-90a4-f0ef9b061927\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.809875 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65ad5500-0148-42d4-a597-53e265081516-config-data\") pod \"nova-scheduler-0\" (UID: \"65ad5500-0148-42d4-a597-53e265081516\") " pod="openstack/nova-scheduler-0"
Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.810398 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d42d33e8-c530-4272-90a4-f0ef9b061927-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d42d33e8-c530-4272-90a4-f0ef9b061927\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.832034 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.849317 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65ad5500-0148-42d4-a597-53e265081516-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"65ad5500-0148-42d4-a597-53e265081516\") " pod="openstack/nova-scheduler-0"
Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.855849 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hv5rr\" (UniqueName: \"kubernetes.io/projected/d42d33e8-c530-4272-90a4-f0ef9b061927-kube-api-access-hv5rr\") pod \"nova-cell1-novncproxy-0\" (UID: \"d42d33e8-c530-4272-90a4-f0ef9b061927\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.884827 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ps6cq\" (UniqueName: \"kubernetes.io/projected/65ad5500-0148-42d4-a597-53e265081516-kube-api-access-ps6cq\") pod \"nova-scheduler-0\" (UID: \"65ad5500-0148-42d4-a597-53e265081516\") " pod="openstack/nova-scheduler-0"
Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.887317 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1\") " pod="openstack/nova-api-0"
Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.887358 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eb26444-57b1-444a-ab45-586a64cd8857-combined-ca-bundle\") pod 
\"nova-metadata-0\" (UID: \"9eb26444-57b1-444a-ab45-586a64cd8857\") " pod="openstack/nova-metadata-0" Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.887377 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1-config-data\") pod \"nova-api-0\" (UID: \"4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1\") " pod="openstack/nova-api-0" Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.887393 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eb26444-57b1-444a-ab45-586a64cd8857-config-data\") pod \"nova-metadata-0\" (UID: \"9eb26444-57b1-444a-ab45-586a64cd8857\") " pod="openstack/nova-metadata-0" Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.887466 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1-logs\") pod \"nova-api-0\" (UID: \"4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1\") " pod="openstack/nova-api-0" Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.887506 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx6nx\" (UniqueName: \"kubernetes.io/projected/9eb26444-57b1-444a-ab45-586a64cd8857-kube-api-access-qx6nx\") pod \"nova-metadata-0\" (UID: \"9eb26444-57b1-444a-ab45-586a64cd8857\") " pod="openstack/nova-metadata-0" Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.887541 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsj8q\" (UniqueName: \"kubernetes.io/projected/4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1-kube-api-access-qsj8q\") pod \"nova-api-0\" (UID: \"4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1\") " pod="openstack/nova-api-0" Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.887568 4778 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9eb26444-57b1-444a-ab45-586a64cd8857-logs\") pod \"nova-metadata-0\" (UID: \"9eb26444-57b1-444a-ab45-586a64cd8857\") " pod="openstack/nova-metadata-0" Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.890402 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.890794 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1-logs\") pod \"nova-api-0\" (UID: \"4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1\") " pod="openstack/nova-api-0" Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.897993 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1\") " pod="openstack/nova-api-0" Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.931221 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1-config-data\") pod \"nova-api-0\" (UID: \"4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1\") " pod="openstack/nova-api-0" Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.940986 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsj8q\" (UniqueName: \"kubernetes.io/projected/4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1-kube-api-access-qsj8q\") pod \"nova-api-0\" (UID: \"4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1\") " pod="openstack/nova-api-0" Mar 12 13:34:42 crc kubenswrapper[4778]: I0312 13:34:42.999670 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 12 13:34:43 crc kubenswrapper[4778]: I0312 13:34:43.001058 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9eb26444-57b1-444a-ab45-586a64cd8857-logs\") pod \"nova-metadata-0\" (UID: \"9eb26444-57b1-444a-ab45-586a64cd8857\") " pod="openstack/nova-metadata-0" Mar 12 13:34:43 crc kubenswrapper[4778]: I0312 13:34:43.001113 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eb26444-57b1-444a-ab45-586a64cd8857-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9eb26444-57b1-444a-ab45-586a64cd8857\") " pod="openstack/nova-metadata-0" Mar 12 13:34:43 crc kubenswrapper[4778]: I0312 13:34:43.001135 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eb26444-57b1-444a-ab45-586a64cd8857-config-data\") pod \"nova-metadata-0\" (UID: \"9eb26444-57b1-444a-ab45-586a64cd8857\") " pod="openstack/nova-metadata-0" Mar 12 13:34:43 crc kubenswrapper[4778]: I0312 13:34:43.001243 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx6nx\" (UniqueName: \"kubernetes.io/projected/9eb26444-57b1-444a-ab45-586a64cd8857-kube-api-access-qx6nx\") pod \"nova-metadata-0\" (UID: \"9eb26444-57b1-444a-ab45-586a64cd8857\") " pod="openstack/nova-metadata-0" Mar 12 13:34:43 crc kubenswrapper[4778]: I0312 13:34:43.001858 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9eb26444-57b1-444a-ab45-586a64cd8857-logs\") pod \"nova-metadata-0\" (UID: \"9eb26444-57b1-444a-ab45-586a64cd8857\") " pod="openstack/nova-metadata-0" Mar 12 13:34:43 crc kubenswrapper[4778]: I0312 13:34:43.008669 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/9eb26444-57b1-444a-ab45-586a64cd8857-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9eb26444-57b1-444a-ab45-586a64cd8857\") " pod="openstack/nova-metadata-0" Mar 12 13:34:43 crc kubenswrapper[4778]: I0312 13:34:43.022251 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-xlfr7"] Mar 12 13:34:43 crc kubenswrapper[4778]: I0312 13:34:43.023758 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-xlfr7" Mar 12 13:34:43 crc kubenswrapper[4778]: I0312 13:34:43.039372 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-xlfr7"] Mar 12 13:34:43 crc kubenswrapper[4778]: I0312 13:34:43.051939 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eb26444-57b1-444a-ab45-586a64cd8857-config-data\") pod \"nova-metadata-0\" (UID: \"9eb26444-57b1-444a-ab45-586a64cd8857\") " pod="openstack/nova-metadata-0" Mar 12 13:34:43 crc kubenswrapper[4778]: I0312 13:34:43.055652 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx6nx\" (UniqueName: \"kubernetes.io/projected/9eb26444-57b1-444a-ab45-586a64cd8857-kube-api-access-qx6nx\") pod \"nova-metadata-0\" (UID: \"9eb26444-57b1-444a-ab45-586a64cd8857\") " pod="openstack/nova-metadata-0" Mar 12 13:34:43 crc kubenswrapper[4778]: I0312 13:34:43.104958 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-qqx6r"] Mar 12 13:34:43 crc kubenswrapper[4778]: I0312 13:34:43.176307 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 13:34:43 crc kubenswrapper[4778]: I0312 13:34:43.197123 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 13:34:43 crc kubenswrapper[4778]: I0312 13:34:43.203658 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f38c0efe-db9f-4afc-8693-0743c558d74f-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-xlfr7\" (UID: \"f38c0efe-db9f-4afc-8693-0743c558d74f\") " pod="openstack/dnsmasq-dns-757b4f8459-xlfr7" Mar 12 13:34:43 crc kubenswrapper[4778]: I0312 13:34:43.203697 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fvsq\" (UniqueName: \"kubernetes.io/projected/f38c0efe-db9f-4afc-8693-0743c558d74f-kube-api-access-6fvsq\") pod \"dnsmasq-dns-757b4f8459-xlfr7\" (UID: \"f38c0efe-db9f-4afc-8693-0743c558d74f\") " pod="openstack/dnsmasq-dns-757b4f8459-xlfr7" Mar 12 13:34:43 crc kubenswrapper[4778]: I0312 13:34:43.203716 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f38c0efe-db9f-4afc-8693-0743c558d74f-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-xlfr7\" (UID: \"f38c0efe-db9f-4afc-8693-0743c558d74f\") " pod="openstack/dnsmasq-dns-757b4f8459-xlfr7" Mar 12 13:34:43 crc kubenswrapper[4778]: I0312 13:34:43.203826 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f38c0efe-db9f-4afc-8693-0743c558d74f-config\") pod \"dnsmasq-dns-757b4f8459-xlfr7\" (UID: \"f38c0efe-db9f-4afc-8693-0743c558d74f\") " pod="openstack/dnsmasq-dns-757b4f8459-xlfr7" Mar 12 13:34:43 crc kubenswrapper[4778]: I0312 13:34:43.203851 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f38c0efe-db9f-4afc-8693-0743c558d74f-dns-swift-storage-0\") pod 
\"dnsmasq-dns-757b4f8459-xlfr7\" (UID: \"f38c0efe-db9f-4afc-8693-0743c558d74f\") " pod="openstack/dnsmasq-dns-757b4f8459-xlfr7" Mar 12 13:34:43 crc kubenswrapper[4778]: I0312 13:34:43.203898 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f38c0efe-db9f-4afc-8693-0743c558d74f-dns-svc\") pod \"dnsmasq-dns-757b4f8459-xlfr7\" (UID: \"f38c0efe-db9f-4afc-8693-0743c558d74f\") " pod="openstack/dnsmasq-dns-757b4f8459-xlfr7" Mar 12 13:34:43 crc kubenswrapper[4778]: I0312 13:34:43.250431 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qqx6r" event={"ID":"98a74774-1415-43d1-b278-bead87ab4385","Type":"ContainerStarted","Data":"94415432161b66df8eaee31803fa6f28ac270cbac75c8be9dbd68e3fe9cda71c"} Mar 12 13:34:43 crc kubenswrapper[4778]: I0312 13:34:43.309435 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f38c0efe-db9f-4afc-8693-0743c558d74f-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-xlfr7\" (UID: \"f38c0efe-db9f-4afc-8693-0743c558d74f\") " pod="openstack/dnsmasq-dns-757b4f8459-xlfr7" Mar 12 13:34:43 crc kubenswrapper[4778]: I0312 13:34:43.309491 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fvsq\" (UniqueName: \"kubernetes.io/projected/f38c0efe-db9f-4afc-8693-0743c558d74f-kube-api-access-6fvsq\") pod \"dnsmasq-dns-757b4f8459-xlfr7\" (UID: \"f38c0efe-db9f-4afc-8693-0743c558d74f\") " pod="openstack/dnsmasq-dns-757b4f8459-xlfr7" Mar 12 13:34:43 crc kubenswrapper[4778]: I0312 13:34:43.309511 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f38c0efe-db9f-4afc-8693-0743c558d74f-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-xlfr7\" (UID: \"f38c0efe-db9f-4afc-8693-0743c558d74f\") " 
pod="openstack/dnsmasq-dns-757b4f8459-xlfr7" Mar 12 13:34:43 crc kubenswrapper[4778]: I0312 13:34:43.309578 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f38c0efe-db9f-4afc-8693-0743c558d74f-config\") pod \"dnsmasq-dns-757b4f8459-xlfr7\" (UID: \"f38c0efe-db9f-4afc-8693-0743c558d74f\") " pod="openstack/dnsmasq-dns-757b4f8459-xlfr7" Mar 12 13:34:43 crc kubenswrapper[4778]: I0312 13:34:43.309610 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f38c0efe-db9f-4afc-8693-0743c558d74f-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-xlfr7\" (UID: \"f38c0efe-db9f-4afc-8693-0743c558d74f\") " pod="openstack/dnsmasq-dns-757b4f8459-xlfr7" Mar 12 13:34:43 crc kubenswrapper[4778]: I0312 13:34:43.309663 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f38c0efe-db9f-4afc-8693-0743c558d74f-dns-svc\") pod \"dnsmasq-dns-757b4f8459-xlfr7\" (UID: \"f38c0efe-db9f-4afc-8693-0743c558d74f\") " pod="openstack/dnsmasq-dns-757b4f8459-xlfr7" Mar 12 13:34:43 crc kubenswrapper[4778]: I0312 13:34:43.310863 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f38c0efe-db9f-4afc-8693-0743c558d74f-dns-svc\") pod \"dnsmasq-dns-757b4f8459-xlfr7\" (UID: \"f38c0efe-db9f-4afc-8693-0743c558d74f\") " pod="openstack/dnsmasq-dns-757b4f8459-xlfr7" Mar 12 13:34:43 crc kubenswrapper[4778]: I0312 13:34:43.310870 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f38c0efe-db9f-4afc-8693-0743c558d74f-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-xlfr7\" (UID: \"f38c0efe-db9f-4afc-8693-0743c558d74f\") " pod="openstack/dnsmasq-dns-757b4f8459-xlfr7" Mar 12 13:34:43 crc kubenswrapper[4778]: I0312 
13:34:43.311447 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f38c0efe-db9f-4afc-8693-0743c558d74f-config\") pod \"dnsmasq-dns-757b4f8459-xlfr7\" (UID: \"f38c0efe-db9f-4afc-8693-0743c558d74f\") " pod="openstack/dnsmasq-dns-757b4f8459-xlfr7" Mar 12 13:34:43 crc kubenswrapper[4778]: I0312 13:34:43.312014 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f38c0efe-db9f-4afc-8693-0743c558d74f-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-xlfr7\" (UID: \"f38c0efe-db9f-4afc-8693-0743c558d74f\") " pod="openstack/dnsmasq-dns-757b4f8459-xlfr7" Mar 12 13:34:43 crc kubenswrapper[4778]: I0312 13:34:43.312150 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f38c0efe-db9f-4afc-8693-0743c558d74f-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-xlfr7\" (UID: \"f38c0efe-db9f-4afc-8693-0743c558d74f\") " pod="openstack/dnsmasq-dns-757b4f8459-xlfr7" Mar 12 13:34:43 crc kubenswrapper[4778]: I0312 13:34:43.333235 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fvsq\" (UniqueName: \"kubernetes.io/projected/f38c0efe-db9f-4afc-8693-0743c558d74f-kube-api-access-6fvsq\") pod \"dnsmasq-dns-757b4f8459-xlfr7\" (UID: \"f38c0efe-db9f-4afc-8693-0743c558d74f\") " pod="openstack/dnsmasq-dns-757b4f8459-xlfr7" Mar 12 13:34:43 crc kubenswrapper[4778]: I0312 13:34:43.361808 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-xlfr7" Mar 12 13:34:43 crc kubenswrapper[4778]: I0312 13:34:43.542101 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 12 13:34:43 crc kubenswrapper[4778]: I0312 13:34:43.643872 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jfzqk" Mar 12 13:34:43 crc kubenswrapper[4778]: I0312 13:34:43.644751 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jfzqk" Mar 12 13:34:43 crc kubenswrapper[4778]: I0312 13:34:43.653703 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 12 13:34:43 crc kubenswrapper[4778]: I0312 13:34:43.777562 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 13:34:43 crc kubenswrapper[4778]: I0312 13:34:43.824078 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7dlt6"] Mar 12 13:34:43 crc kubenswrapper[4778]: I0312 13:34:43.825420 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7dlt6" Mar 12 13:34:43 crc kubenswrapper[4778]: I0312 13:34:43.828212 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Mar 12 13:34:43 crc kubenswrapper[4778]: I0312 13:34:43.829170 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Mar 12 13:34:43 crc kubenswrapper[4778]: I0312 13:34:43.837046 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7dlt6"] Mar 12 13:34:43 crc kubenswrapper[4778]: I0312 13:34:43.932365 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqndb\" (UniqueName: \"kubernetes.io/projected/58dfb2fb-928e-46de-90dd-481c91a7727c-kube-api-access-gqndb\") pod \"nova-cell1-conductor-db-sync-7dlt6\" (UID: \"58dfb2fb-928e-46de-90dd-481c91a7727c\") " pod="openstack/nova-cell1-conductor-db-sync-7dlt6" Mar 12 13:34:43 crc kubenswrapper[4778]: I0312 13:34:43.932410 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58dfb2fb-928e-46de-90dd-481c91a7727c-config-data\") pod \"nova-cell1-conductor-db-sync-7dlt6\" (UID: \"58dfb2fb-928e-46de-90dd-481c91a7727c\") " pod="openstack/nova-cell1-conductor-db-sync-7dlt6" Mar 12 13:34:43 crc kubenswrapper[4778]: I0312 13:34:43.932445 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58dfb2fb-928e-46de-90dd-481c91a7727c-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-7dlt6\" (UID: \"58dfb2fb-928e-46de-90dd-481c91a7727c\") " pod="openstack/nova-cell1-conductor-db-sync-7dlt6" Mar 12 13:34:43 crc kubenswrapper[4778]: I0312 13:34:43.932495 4778 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58dfb2fb-928e-46de-90dd-481c91a7727c-scripts\") pod \"nova-cell1-conductor-db-sync-7dlt6\" (UID: \"58dfb2fb-928e-46de-90dd-481c91a7727c\") " pod="openstack/nova-cell1-conductor-db-sync-7dlt6" Mar 12 13:34:44 crc kubenswrapper[4778]: I0312 13:34:44.033990 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqndb\" (UniqueName: \"kubernetes.io/projected/58dfb2fb-928e-46de-90dd-481c91a7727c-kube-api-access-gqndb\") pod \"nova-cell1-conductor-db-sync-7dlt6\" (UID: \"58dfb2fb-928e-46de-90dd-481c91a7727c\") " pod="openstack/nova-cell1-conductor-db-sync-7dlt6" Mar 12 13:34:44 crc kubenswrapper[4778]: I0312 13:34:44.034038 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58dfb2fb-928e-46de-90dd-481c91a7727c-config-data\") pod \"nova-cell1-conductor-db-sync-7dlt6\" (UID: \"58dfb2fb-928e-46de-90dd-481c91a7727c\") " pod="openstack/nova-cell1-conductor-db-sync-7dlt6" Mar 12 13:34:44 crc kubenswrapper[4778]: I0312 13:34:44.034071 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58dfb2fb-928e-46de-90dd-481c91a7727c-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-7dlt6\" (UID: \"58dfb2fb-928e-46de-90dd-481c91a7727c\") " pod="openstack/nova-cell1-conductor-db-sync-7dlt6" Mar 12 13:34:44 crc kubenswrapper[4778]: I0312 13:34:44.034122 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58dfb2fb-928e-46de-90dd-481c91a7727c-scripts\") pod \"nova-cell1-conductor-db-sync-7dlt6\" (UID: \"58dfb2fb-928e-46de-90dd-481c91a7727c\") " pod="openstack/nova-cell1-conductor-db-sync-7dlt6" Mar 12 13:34:44 crc kubenswrapper[4778]: I0312 13:34:44.040554 4778 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58dfb2fb-928e-46de-90dd-481c91a7727c-config-data\") pod \"nova-cell1-conductor-db-sync-7dlt6\" (UID: \"58dfb2fb-928e-46de-90dd-481c91a7727c\") " pod="openstack/nova-cell1-conductor-db-sync-7dlt6" Mar 12 13:34:44 crc kubenswrapper[4778]: I0312 13:34:44.092124 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58dfb2fb-928e-46de-90dd-481c91a7727c-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-7dlt6\" (UID: \"58dfb2fb-928e-46de-90dd-481c91a7727c\") " pod="openstack/nova-cell1-conductor-db-sync-7dlt6" Mar 12 13:34:44 crc kubenswrapper[4778]: I0312 13:34:44.093599 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58dfb2fb-928e-46de-90dd-481c91a7727c-scripts\") pod \"nova-cell1-conductor-db-sync-7dlt6\" (UID: \"58dfb2fb-928e-46de-90dd-481c91a7727c\") " pod="openstack/nova-cell1-conductor-db-sync-7dlt6" Mar 12 13:34:44 crc kubenswrapper[4778]: I0312 13:34:44.095308 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqndb\" (UniqueName: \"kubernetes.io/projected/58dfb2fb-928e-46de-90dd-481c91a7727c-kube-api-access-gqndb\") pod \"nova-cell1-conductor-db-sync-7dlt6\" (UID: \"58dfb2fb-928e-46de-90dd-481c91a7727c\") " pod="openstack/nova-cell1-conductor-db-sync-7dlt6" Mar 12 13:34:44 crc kubenswrapper[4778]: I0312 13:34:44.142238 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 13:34:44 crc kubenswrapper[4778]: I0312 13:34:44.160017 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7dlt6" Mar 12 13:34:44 crc kubenswrapper[4778]: I0312 13:34:44.161112 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-xlfr7"] Mar 12 13:34:44 crc kubenswrapper[4778]: I0312 13:34:44.282796 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-xlfr7" event={"ID":"f38c0efe-db9f-4afc-8693-0743c558d74f","Type":"ContainerStarted","Data":"a01a33797f0031a4928ccc3b84c316e6cab0e859fc2dd6c0bc9cf5a06332acbb"} Mar 12 13:34:44 crc kubenswrapper[4778]: I0312 13:34:44.290959 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d42d33e8-c530-4272-90a4-f0ef9b061927","Type":"ContainerStarted","Data":"190ba154912f1afd6c8afdd589f19abb7d2fb48d3910a0516eb35d087148f5e4"} Mar 12 13:34:44 crc kubenswrapper[4778]: I0312 13:34:44.298615 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qqx6r" event={"ID":"98a74774-1415-43d1-b278-bead87ab4385","Type":"ContainerStarted","Data":"638395848d77320f6f4d74ca6334a62beda4c18b92408c089881a124597a1418"} Mar 12 13:34:44 crc kubenswrapper[4778]: I0312 13:34:44.305731 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1","Type":"ContainerStarted","Data":"9d69b802526d361f0ba3ab145439034eaadd80c18ca540ff35313a518907cc83"} Mar 12 13:34:44 crc kubenswrapper[4778]: I0312 13:34:44.308421 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9eb26444-57b1-444a-ab45-586a64cd8857","Type":"ContainerStarted","Data":"a723cfd0bedc0eb390903faec62e31e74919acbbe92cf204fd8b296e7d06b3bb"} Mar 12 13:34:44 crc kubenswrapper[4778]: I0312 13:34:44.315538 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"65ad5500-0148-42d4-a597-53e265081516","Type":"ContainerStarted","Data":"53204891ba93c9dcc714e4cf6732ebcf66cfe563b2c5b0d6b993dd7bb498dfcd"} Mar 12 13:34:44 crc kubenswrapper[4778]: I0312 13:34:44.330348 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-qqx6r" podStartSLOduration=2.330324731 podStartE2EDuration="2.330324731s" podCreationTimestamp="2026-03-12 13:34:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:34:44.315569773 +0000 UTC m=+1502.764265159" watchObservedRunningTime="2026-03-12 13:34:44.330324731 +0000 UTC m=+1502.779020127" Mar 12 13:34:44 crc kubenswrapper[4778]: I0312 13:34:44.707662 4778 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jfzqk" podUID="1b7d48c4-04cd-481a-976d-19e57a28a1d9" containerName="registry-server" probeResult="failure" output=< Mar 12 13:34:44 crc kubenswrapper[4778]: timeout: failed to connect service ":50051" within 1s Mar 12 13:34:44 crc kubenswrapper[4778]: > Mar 12 13:34:44 crc kubenswrapper[4778]: I0312 13:34:44.798297 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7dlt6"] Mar 12 13:34:45 crc kubenswrapper[4778]: I0312 13:34:45.346997 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7dlt6" event={"ID":"58dfb2fb-928e-46de-90dd-481c91a7727c","Type":"ContainerStarted","Data":"a7c208f5185dc692f0ec8df98f6bb0b7b464e0a056d454057e864768b033e299"} Mar 12 13:34:45 crc kubenswrapper[4778]: I0312 13:34:45.347295 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7dlt6" event={"ID":"58dfb2fb-928e-46de-90dd-481c91a7727c","Type":"ContainerStarted","Data":"6616a6464fcc9dedf4bc63acdc82e9b9e7114af17dd2023df257ae235015b89a"} Mar 12 13:34:45 crc kubenswrapper[4778]: I0312 
13:34:45.355657 4778 generic.go:334] "Generic (PLEG): container finished" podID="f38c0efe-db9f-4afc-8693-0743c558d74f" containerID="1e4e2a2aac1ba95c2fc03d3ae5822d197e179d60f0dbd976d4f6143a68eb2c2a" exitCode=0 Mar 12 13:34:45 crc kubenswrapper[4778]: I0312 13:34:45.355743 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-xlfr7" event={"ID":"f38c0efe-db9f-4afc-8693-0743c558d74f","Type":"ContainerDied","Data":"1e4e2a2aac1ba95c2fc03d3ae5822d197e179d60f0dbd976d4f6143a68eb2c2a"} Mar 12 13:34:45 crc kubenswrapper[4778]: I0312 13:34:45.401486 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-7dlt6" podStartSLOduration=2.401465444 podStartE2EDuration="2.401465444s" podCreationTimestamp="2026-03-12 13:34:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:34:45.378866503 +0000 UTC m=+1503.827561899" watchObservedRunningTime="2026-03-12 13:34:45.401465444 +0000 UTC m=+1503.850160840" Mar 12 13:34:46 crc kubenswrapper[4778]: I0312 13:34:46.681048 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 12 13:34:46 crc kubenswrapper[4778]: I0312 13:34:46.694571 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 13:34:47 crc kubenswrapper[4778]: I0312 13:34:47.871701 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 12 13:34:47 crc kubenswrapper[4778]: I0312 13:34:47.872985 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="66ed2760-88a0-4731-a0d1-52cb6cffa2b1" containerName="kube-state-metrics" containerID="cri-o://6addcbc9f6e1bd0c36c2127749a9343943bce9503688868083bfb8596a8eda94" gracePeriod=30 Mar 12 13:34:48 crc kubenswrapper[4778]: I0312 13:34:48.403215 4778 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-xlfr7" event={"ID":"f38c0efe-db9f-4afc-8693-0743c558d74f","Type":"ContainerStarted","Data":"e6738e925b347d28a1e722ea04cdc7d88018005b75c56a3dec09b214b5752ae1"} Mar 12 13:34:48 crc kubenswrapper[4778]: I0312 13:34:48.404136 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757b4f8459-xlfr7" Mar 12 13:34:48 crc kubenswrapper[4778]: I0312 13:34:48.409711 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d42d33e8-c530-4272-90a4-f0ef9b061927","Type":"ContainerStarted","Data":"2daa8ef0c43b0a0e16322a9531b1ccfd1b86a58c1ab4dbd58ffa5e731b6266af"} Mar 12 13:34:48 crc kubenswrapper[4778]: I0312 13:34:48.410150 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="d42d33e8-c530-4272-90a4-f0ef9b061927" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://2daa8ef0c43b0a0e16322a9531b1ccfd1b86a58c1ab4dbd58ffa5e731b6266af" gracePeriod=30 Mar 12 13:34:48 crc kubenswrapper[4778]: I0312 13:34:48.418283 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1","Type":"ContainerStarted","Data":"e10968e0aa6d0184f80649b4d85f94854a9e9ed4e143833199a6895350db927e"} Mar 12 13:34:48 crc kubenswrapper[4778]: I0312 13:34:48.418378 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1","Type":"ContainerStarted","Data":"8ff6ebf3b7b0b27c32ef14b9af9d9ad2cb5eb0cd0fcf6c338e931544b524d41d"} Mar 12 13:34:48 crc kubenswrapper[4778]: I0312 13:34:48.425152 4778 generic.go:334] "Generic (PLEG): container finished" podID="66ed2760-88a0-4731-a0d1-52cb6cffa2b1" containerID="6addcbc9f6e1bd0c36c2127749a9343943bce9503688868083bfb8596a8eda94" exitCode=2 Mar 12 13:34:48 crc kubenswrapper[4778]: 
I0312 13:34:48.425240 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"66ed2760-88a0-4731-a0d1-52cb6cffa2b1","Type":"ContainerDied","Data":"6addcbc9f6e1bd0c36c2127749a9343943bce9503688868083bfb8596a8eda94"} Mar 12 13:34:48 crc kubenswrapper[4778]: I0312 13:34:48.425263 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"66ed2760-88a0-4731-a0d1-52cb6cffa2b1","Type":"ContainerDied","Data":"4e9e8b87b4e8662cb5ee7f6527d7533b6383b322442ecf5f3470e33d6bb4be86"} Mar 12 13:34:48 crc kubenswrapper[4778]: I0312 13:34:48.425293 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e9e8b87b4e8662cb5ee7f6527d7533b6383b322442ecf5f3470e33d6bb4be86" Mar 12 13:34:48 crc kubenswrapper[4778]: I0312 13:34:48.429861 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9eb26444-57b1-444a-ab45-586a64cd8857","Type":"ContainerStarted","Data":"38fa7ed3342fe5fa41b70791a1955f377980bfa175801ba31d13d4c89b9c16d9"} Mar 12 13:34:48 crc kubenswrapper[4778]: I0312 13:34:48.429907 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9eb26444-57b1-444a-ab45-586a64cd8857","Type":"ContainerStarted","Data":"f9211b5ee91843df422010d0e1b0d25d76aa11968301fe1adb494610b728184a"} Mar 12 13:34:48 crc kubenswrapper[4778]: I0312 13:34:48.430079 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9eb26444-57b1-444a-ab45-586a64cd8857" containerName="nova-metadata-log" containerID="cri-o://f9211b5ee91843df422010d0e1b0d25d76aa11968301fe1adb494610b728184a" gracePeriod=30 Mar 12 13:34:48 crc kubenswrapper[4778]: I0312 13:34:48.430562 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9eb26444-57b1-444a-ab45-586a64cd8857" containerName="nova-metadata-metadata" 
containerID="cri-o://38fa7ed3342fe5fa41b70791a1955f377980bfa175801ba31d13d4c89b9c16d9" gracePeriod=30 Mar 12 13:34:48 crc kubenswrapper[4778]: I0312 13:34:48.441797 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"65ad5500-0148-42d4-a597-53e265081516","Type":"ContainerStarted","Data":"4a58f69bc959200337168fc6797ed9ced69f8a04dbe14d36ec0e69b2498fb5e1"} Mar 12 13:34:48 crc kubenswrapper[4778]: I0312 13:34:48.451427 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757b4f8459-xlfr7" podStartSLOduration=6.451407979 podStartE2EDuration="6.451407979s" podCreationTimestamp="2026-03-12 13:34:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:34:48.430133286 +0000 UTC m=+1506.878828682" watchObservedRunningTime="2026-03-12 13:34:48.451407979 +0000 UTC m=+1506.900103375" Mar 12 13:34:48 crc kubenswrapper[4778]: I0312 13:34:48.464635 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 12 13:34:48 crc kubenswrapper[4778]: I0312 13:34:48.477991 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.989076648 podStartE2EDuration="6.477968033s" podCreationTimestamp="2026-03-12 13:34:42 +0000 UTC" firstStartedPulling="2026-03-12 13:34:43.666146141 +0000 UTC m=+1502.114841537" lastFinishedPulling="2026-03-12 13:34:47.155037526 +0000 UTC m=+1505.603732922" observedRunningTime="2026-03-12 13:34:48.454076075 +0000 UTC m=+1506.902771471" watchObservedRunningTime="2026-03-12 13:34:48.477968033 +0000 UTC m=+1506.926663429" Mar 12 13:34:48 crc kubenswrapper[4778]: I0312 13:34:48.483907 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.121818192 podStartE2EDuration="6.48388871s" podCreationTimestamp="2026-03-12 13:34:42 +0000 UTC" firstStartedPulling="2026-03-12 13:34:43.79795361 +0000 UTC m=+1502.246648996" lastFinishedPulling="2026-03-12 13:34:47.160024118 +0000 UTC m=+1505.608719514" observedRunningTime="2026-03-12 13:34:48.474052212 +0000 UTC m=+1506.922747608" watchObservedRunningTime="2026-03-12 13:34:48.48388871 +0000 UTC m=+1506.932584106" Mar 12 13:34:48 crc kubenswrapper[4778]: I0312 13:34:48.500358 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.904665092 podStartE2EDuration="6.500336247s" podCreationTimestamp="2026-03-12 13:34:42 +0000 UTC" firstStartedPulling="2026-03-12 13:34:43.560546755 +0000 UTC m=+1502.009242141" lastFinishedPulling="2026-03-12 13:34:47.1562179 +0000 UTC m=+1505.604913296" observedRunningTime="2026-03-12 13:34:48.494250254 +0000 UTC m=+1506.942945650" watchObservedRunningTime="2026-03-12 13:34:48.500336247 +0000 UTC m=+1506.949031643" Mar 12 13:34:48 crc kubenswrapper[4778]: I0312 13:34:48.576557 4778 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-m8s49\" (UniqueName: \"kubernetes.io/projected/66ed2760-88a0-4731-a0d1-52cb6cffa2b1-kube-api-access-m8s49\") pod \"66ed2760-88a0-4731-a0d1-52cb6cffa2b1\" (UID: \"66ed2760-88a0-4731-a0d1-52cb6cffa2b1\") " Mar 12 13:34:48 crc kubenswrapper[4778]: I0312 13:34:48.593130 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66ed2760-88a0-4731-a0d1-52cb6cffa2b1-kube-api-access-m8s49" (OuterVolumeSpecName: "kube-api-access-m8s49") pod "66ed2760-88a0-4731-a0d1-52cb6cffa2b1" (UID: "66ed2760-88a0-4731-a0d1-52cb6cffa2b1"). InnerVolumeSpecName "kube-api-access-m8s49". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:34:48 crc kubenswrapper[4778]: I0312 13:34:48.680489 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8s49\" (UniqueName: \"kubernetes.io/projected/66ed2760-88a0-4731-a0d1-52cb6cffa2b1-kube-api-access-m8s49\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.026321 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.051989 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=4.033122702 podStartE2EDuration="7.051969014s" podCreationTimestamp="2026-03-12 13:34:42 +0000 UTC" firstStartedPulling="2026-03-12 13:34:44.142954386 +0000 UTC m=+1502.591649782" lastFinishedPulling="2026-03-12 13:34:47.161800698 +0000 UTC m=+1505.610496094" observedRunningTime="2026-03-12 13:34:48.534511456 +0000 UTC m=+1506.983206852" watchObservedRunningTime="2026-03-12 13:34:49.051969014 +0000 UTC m=+1507.500664410" Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.190409 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eb26444-57b1-444a-ab45-586a64cd8857-combined-ca-bundle\") pod \"9eb26444-57b1-444a-ab45-586a64cd8857\" (UID: \"9eb26444-57b1-444a-ab45-586a64cd8857\") " Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.191096 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9eb26444-57b1-444a-ab45-586a64cd8857-logs\") pod \"9eb26444-57b1-444a-ab45-586a64cd8857\" (UID: \"9eb26444-57b1-444a-ab45-586a64cd8857\") " Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.191206 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qx6nx\" (UniqueName: \"kubernetes.io/projected/9eb26444-57b1-444a-ab45-586a64cd8857-kube-api-access-qx6nx\") pod \"9eb26444-57b1-444a-ab45-586a64cd8857\" (UID: \"9eb26444-57b1-444a-ab45-586a64cd8857\") " Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.191358 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eb26444-57b1-444a-ab45-586a64cd8857-config-data\") pod 
\"9eb26444-57b1-444a-ab45-586a64cd8857\" (UID: \"9eb26444-57b1-444a-ab45-586a64cd8857\") " Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.191482 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9eb26444-57b1-444a-ab45-586a64cd8857-logs" (OuterVolumeSpecName: "logs") pod "9eb26444-57b1-444a-ab45-586a64cd8857" (UID: "9eb26444-57b1-444a-ab45-586a64cd8857"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.192080 4778 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9eb26444-57b1-444a-ab45-586a64cd8857-logs\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.197825 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9eb26444-57b1-444a-ab45-586a64cd8857-kube-api-access-qx6nx" (OuterVolumeSpecName: "kube-api-access-qx6nx") pod "9eb26444-57b1-444a-ab45-586a64cd8857" (UID: "9eb26444-57b1-444a-ab45-586a64cd8857"). InnerVolumeSpecName "kube-api-access-qx6nx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.222329 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eb26444-57b1-444a-ab45-586a64cd8857-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9eb26444-57b1-444a-ab45-586a64cd8857" (UID: "9eb26444-57b1-444a-ab45-586a64cd8857"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.238474 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eb26444-57b1-444a-ab45-586a64cd8857-config-data" (OuterVolumeSpecName: "config-data") pod "9eb26444-57b1-444a-ab45-586a64cd8857" (UID: "9eb26444-57b1-444a-ab45-586a64cd8857"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.294001 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qx6nx\" (UniqueName: \"kubernetes.io/projected/9eb26444-57b1-444a-ab45-586a64cd8857-kube-api-access-qx6nx\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.294042 4778 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eb26444-57b1-444a-ab45-586a64cd8857-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.294058 4778 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eb26444-57b1-444a-ab45-586a64cd8857-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.458061 4778 generic.go:334] "Generic (PLEG): container finished" podID="9eb26444-57b1-444a-ab45-586a64cd8857" containerID="38fa7ed3342fe5fa41b70791a1955f377980bfa175801ba31d13d4c89b9c16d9" exitCode=0 Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.458100 4778 generic.go:334] "Generic (PLEG): container finished" podID="9eb26444-57b1-444a-ab45-586a64cd8857" containerID="f9211b5ee91843df422010d0e1b0d25d76aa11968301fe1adb494610b728184a" exitCode=143 Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.458167 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.458349 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.458398 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9eb26444-57b1-444a-ab45-586a64cd8857","Type":"ContainerDied","Data":"38fa7ed3342fe5fa41b70791a1955f377980bfa175801ba31d13d4c89b9c16d9"} Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.458458 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9eb26444-57b1-444a-ab45-586a64cd8857","Type":"ContainerDied","Data":"f9211b5ee91843df422010d0e1b0d25d76aa11968301fe1adb494610b728184a"} Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.458473 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9eb26444-57b1-444a-ab45-586a64cd8857","Type":"ContainerDied","Data":"a723cfd0bedc0eb390903faec62e31e74919acbbe92cf204fd8b296e7d06b3bb"} Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.458495 4778 scope.go:117] "RemoveContainer" containerID="38fa7ed3342fe5fa41b70791a1955f377980bfa175801ba31d13d4c89b9c16d9" Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.502541 4778 scope.go:117] "RemoveContainer" containerID="f9211b5ee91843df422010d0e1b0d25d76aa11968301fe1adb494610b728184a" Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.525168 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.531948 4778 scope.go:117] "RemoveContainer" containerID="38fa7ed3342fe5fa41b70791a1955f377980bfa175801ba31d13d4c89b9c16d9" Mar 12 13:34:49 crc kubenswrapper[4778]: E0312 13:34:49.538419 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could 
not find container \"38fa7ed3342fe5fa41b70791a1955f377980bfa175801ba31d13d4c89b9c16d9\": container with ID starting with 38fa7ed3342fe5fa41b70791a1955f377980bfa175801ba31d13d4c89b9c16d9 not found: ID does not exist" containerID="38fa7ed3342fe5fa41b70791a1955f377980bfa175801ba31d13d4c89b9c16d9" Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.538480 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38fa7ed3342fe5fa41b70791a1955f377980bfa175801ba31d13d4c89b9c16d9"} err="failed to get container status \"38fa7ed3342fe5fa41b70791a1955f377980bfa175801ba31d13d4c89b9c16d9\": rpc error: code = NotFound desc = could not find container \"38fa7ed3342fe5fa41b70791a1955f377980bfa175801ba31d13d4c89b9c16d9\": container with ID starting with 38fa7ed3342fe5fa41b70791a1955f377980bfa175801ba31d13d4c89b9c16d9 not found: ID does not exist" Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.538518 4778 scope.go:117] "RemoveContainer" containerID="f9211b5ee91843df422010d0e1b0d25d76aa11968301fe1adb494610b728184a" Mar 12 13:34:49 crc kubenswrapper[4778]: E0312 13:34:49.544717 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9211b5ee91843df422010d0e1b0d25d76aa11968301fe1adb494610b728184a\": container with ID starting with f9211b5ee91843df422010d0e1b0d25d76aa11968301fe1adb494610b728184a not found: ID does not exist" containerID="f9211b5ee91843df422010d0e1b0d25d76aa11968301fe1adb494610b728184a" Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.544783 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9211b5ee91843df422010d0e1b0d25d76aa11968301fe1adb494610b728184a"} err="failed to get container status \"f9211b5ee91843df422010d0e1b0d25d76aa11968301fe1adb494610b728184a\": rpc error: code = NotFound desc = could not find container \"f9211b5ee91843df422010d0e1b0d25d76aa11968301fe1adb494610b728184a\": 
container with ID starting with f9211b5ee91843df422010d0e1b0d25d76aa11968301fe1adb494610b728184a not found: ID does not exist" Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.544820 4778 scope.go:117] "RemoveContainer" containerID="38fa7ed3342fe5fa41b70791a1955f377980bfa175801ba31d13d4c89b9c16d9" Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.545234 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38fa7ed3342fe5fa41b70791a1955f377980bfa175801ba31d13d4c89b9c16d9"} err="failed to get container status \"38fa7ed3342fe5fa41b70791a1955f377980bfa175801ba31d13d4c89b9c16d9\": rpc error: code = NotFound desc = could not find container \"38fa7ed3342fe5fa41b70791a1955f377980bfa175801ba31d13d4c89b9c16d9\": container with ID starting with 38fa7ed3342fe5fa41b70791a1955f377980bfa175801ba31d13d4c89b9c16d9 not found: ID does not exist" Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.545256 4778 scope.go:117] "RemoveContainer" containerID="f9211b5ee91843df422010d0e1b0d25d76aa11968301fe1adb494610b728184a" Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.545501 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9211b5ee91843df422010d0e1b0d25d76aa11968301fe1adb494610b728184a"} err="failed to get container status \"f9211b5ee91843df422010d0e1b0d25d76aa11968301fe1adb494610b728184a\": rpc error: code = NotFound desc = could not find container \"f9211b5ee91843df422010d0e1b0d25d76aa11968301fe1adb494610b728184a\": container with ID starting with f9211b5ee91843df422010d0e1b0d25d76aa11968301fe1adb494610b728184a not found: ID does not exist" Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.550552 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.570208 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 13:34:49 crc 
kubenswrapper[4778]: I0312 13:34:49.586321 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.593692 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Mar 12 13:34:49 crc kubenswrapper[4778]: E0312 13:34:49.594205 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66ed2760-88a0-4731-a0d1-52cb6cffa2b1" containerName="kube-state-metrics" Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.594226 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="66ed2760-88a0-4731-a0d1-52cb6cffa2b1" containerName="kube-state-metrics" Mar 12 13:34:49 crc kubenswrapper[4778]: E0312 13:34:49.594239 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9eb26444-57b1-444a-ab45-586a64cd8857" containerName="nova-metadata-log" Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.594247 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="9eb26444-57b1-444a-ab45-586a64cd8857" containerName="nova-metadata-log" Mar 12 13:34:49 crc kubenswrapper[4778]: E0312 13:34:49.594264 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9eb26444-57b1-444a-ab45-586a64cd8857" containerName="nova-metadata-metadata" Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.594270 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="9eb26444-57b1-444a-ab45-586a64cd8857" containerName="nova-metadata-metadata" Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.594501 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="9eb26444-57b1-444a-ab45-586a64cd8857" containerName="nova-metadata-log" Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.594525 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="9eb26444-57b1-444a-ab45-586a64cd8857" containerName="nova-metadata-metadata" Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.594550 4778 
memory_manager.go:354] "RemoveStaleState removing state" podUID="66ed2760-88a0-4731-a0d1-52cb6cffa2b1" containerName="kube-state-metrics" Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.595420 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.598683 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.598712 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.602777 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.605302 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.610150 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.610660 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.618453 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.631860 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.702542 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/51f24fcd-aff5-4785-abf7-4936180cee78-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: 
\"51f24fcd-aff5-4785-abf7-4936180cee78\") " pod="openstack/kube-state-metrics-0" Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.702606 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gppmh\" (UniqueName: \"kubernetes.io/projected/51f24fcd-aff5-4785-abf7-4936180cee78-kube-api-access-gppmh\") pod \"kube-state-metrics-0\" (UID: \"51f24fcd-aff5-4785-abf7-4936180cee78\") " pod="openstack/kube-state-metrics-0" Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.702630 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/51f24fcd-aff5-4785-abf7-4936180cee78-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"51f24fcd-aff5-4785-abf7-4936180cee78\") " pod="openstack/kube-state-metrics-0" Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.702663 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf8c2c79-f773-4580-ad43-3dcbfced2f86-logs\") pod \"nova-metadata-0\" (UID: \"cf8c2c79-f773-4580-ad43-3dcbfced2f86\") " pod="openstack/nova-metadata-0" Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.702712 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf8c2c79-f773-4580-ad43-3dcbfced2f86-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cf8c2c79-f773-4580-ad43-3dcbfced2f86\") " pod="openstack/nova-metadata-0" Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.702730 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf8c2c79-f773-4580-ad43-3dcbfced2f86-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: 
\"cf8c2c79-f773-4580-ad43-3dcbfced2f86\") " pod="openstack/nova-metadata-0" Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.702756 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f24fcd-aff5-4785-abf7-4936180cee78-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"51f24fcd-aff5-4785-abf7-4936180cee78\") " pod="openstack/kube-state-metrics-0" Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.702785 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf8c2c79-f773-4580-ad43-3dcbfced2f86-config-data\") pod \"nova-metadata-0\" (UID: \"cf8c2c79-f773-4580-ad43-3dcbfced2f86\") " pod="openstack/nova-metadata-0" Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.702813 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmpvk\" (UniqueName: \"kubernetes.io/projected/cf8c2c79-f773-4580-ad43-3dcbfced2f86-kube-api-access-pmpvk\") pod \"nova-metadata-0\" (UID: \"cf8c2c79-f773-4580-ad43-3dcbfced2f86\") " pod="openstack/nova-metadata-0" Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.806395 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f24fcd-aff5-4785-abf7-4936180cee78-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"51f24fcd-aff5-4785-abf7-4936180cee78\") " pod="openstack/kube-state-metrics-0" Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.806796 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf8c2c79-f773-4580-ad43-3dcbfced2f86-config-data\") pod \"nova-metadata-0\" (UID: \"cf8c2c79-f773-4580-ad43-3dcbfced2f86\") " pod="openstack/nova-metadata-0" Mar 12 13:34:49 crc 
kubenswrapper[4778]: I0312 13:34:49.806859 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmpvk\" (UniqueName: \"kubernetes.io/projected/cf8c2c79-f773-4580-ad43-3dcbfced2f86-kube-api-access-pmpvk\") pod \"nova-metadata-0\" (UID: \"cf8c2c79-f773-4580-ad43-3dcbfced2f86\") " pod="openstack/nova-metadata-0" Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.806945 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/51f24fcd-aff5-4785-abf7-4936180cee78-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"51f24fcd-aff5-4785-abf7-4936180cee78\") " pod="openstack/kube-state-metrics-0" Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.806991 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gppmh\" (UniqueName: \"kubernetes.io/projected/51f24fcd-aff5-4785-abf7-4936180cee78-kube-api-access-gppmh\") pod \"kube-state-metrics-0\" (UID: \"51f24fcd-aff5-4785-abf7-4936180cee78\") " pod="openstack/kube-state-metrics-0" Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.807020 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/51f24fcd-aff5-4785-abf7-4936180cee78-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"51f24fcd-aff5-4785-abf7-4936180cee78\") " pod="openstack/kube-state-metrics-0" Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.807068 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf8c2c79-f773-4580-ad43-3dcbfced2f86-logs\") pod \"nova-metadata-0\" (UID: \"cf8c2c79-f773-4580-ad43-3dcbfced2f86\") " pod="openstack/nova-metadata-0" Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.807139 4778 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf8c2c79-f773-4580-ad43-3dcbfced2f86-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cf8c2c79-f773-4580-ad43-3dcbfced2f86\") " pod="openstack/nova-metadata-0" Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.807165 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf8c2c79-f773-4580-ad43-3dcbfced2f86-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cf8c2c79-f773-4580-ad43-3dcbfced2f86\") " pod="openstack/nova-metadata-0" Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.808755 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf8c2c79-f773-4580-ad43-3dcbfced2f86-logs\") pod \"nova-metadata-0\" (UID: \"cf8c2c79-f773-4580-ad43-3dcbfced2f86\") " pod="openstack/nova-metadata-0" Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.814599 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf8c2c79-f773-4580-ad43-3dcbfced2f86-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cf8c2c79-f773-4580-ad43-3dcbfced2f86\") " pod="openstack/nova-metadata-0" Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.815745 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf8c2c79-f773-4580-ad43-3dcbfced2f86-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cf8c2c79-f773-4580-ad43-3dcbfced2f86\") " pod="openstack/nova-metadata-0" Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.816121 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf8c2c79-f773-4580-ad43-3dcbfced2f86-config-data\") pod \"nova-metadata-0\" (UID: 
\"cf8c2c79-f773-4580-ad43-3dcbfced2f86\") " pod="openstack/nova-metadata-0" Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.820828 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/51f24fcd-aff5-4785-abf7-4936180cee78-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"51f24fcd-aff5-4785-abf7-4936180cee78\") " pod="openstack/kube-state-metrics-0" Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.837538 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/51f24fcd-aff5-4785-abf7-4936180cee78-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"51f24fcd-aff5-4785-abf7-4936180cee78\") " pod="openstack/kube-state-metrics-0" Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.837968 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f24fcd-aff5-4785-abf7-4936180cee78-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"51f24fcd-aff5-4785-abf7-4936180cee78\") " pod="openstack/kube-state-metrics-0" Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.842111 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gppmh\" (UniqueName: \"kubernetes.io/projected/51f24fcd-aff5-4785-abf7-4936180cee78-kube-api-access-gppmh\") pod \"kube-state-metrics-0\" (UID: \"51f24fcd-aff5-4785-abf7-4936180cee78\") " pod="openstack/kube-state-metrics-0" Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.842654 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmpvk\" (UniqueName: \"kubernetes.io/projected/cf8c2c79-f773-4580-ad43-3dcbfced2f86-kube-api-access-pmpvk\") pod \"nova-metadata-0\" (UID: \"cf8c2c79-f773-4580-ad43-3dcbfced2f86\") " pod="openstack/nova-metadata-0" Mar 12 13:34:49 crc 
kubenswrapper[4778]: I0312 13:34:49.925139 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 12 13:34:49 crc kubenswrapper[4778]: I0312 13:34:49.937305 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 13:34:50 crc kubenswrapper[4778]: I0312 13:34:50.270419 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66ed2760-88a0-4731-a0d1-52cb6cffa2b1" path="/var/lib/kubelet/pods/66ed2760-88a0-4731-a0d1-52cb6cffa2b1/volumes" Mar 12 13:34:50 crc kubenswrapper[4778]: I0312 13:34:50.271662 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9eb26444-57b1-444a-ab45-586a64cd8857" path="/var/lib/kubelet/pods/9eb26444-57b1-444a-ab45-586a64cd8857/volumes" Mar 12 13:34:50 crc kubenswrapper[4778]: I0312 13:34:50.392018 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 13:34:50 crc kubenswrapper[4778]: I0312 13:34:50.392360 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f7a2b0f7-9321-4f29-aa01-0acbc528f757" containerName="ceilometer-central-agent" containerID="cri-o://07b0c82b03265af2d0340c98ef0951f73004b6ed91b6e9f4e0518b57f5492a67" gracePeriod=30 Mar 12 13:34:50 crc kubenswrapper[4778]: I0312 13:34:50.392483 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f7a2b0f7-9321-4f29-aa01-0acbc528f757" containerName="sg-core" containerID="cri-o://90e25ee50c06dc750c5cd92c70ddda8f57bcd2a0439070e9a3f541bb4ee1e11c" gracePeriod=30 Mar 12 13:34:50 crc kubenswrapper[4778]: I0312 13:34:50.392547 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f7a2b0f7-9321-4f29-aa01-0acbc528f757" containerName="proxy-httpd" containerID="cri-o://8982aa7ca4022874e570ac6c59742be94301e2efa10c45e382ecf26ed4330ecf" 
gracePeriod=30 Mar 12 13:34:50 crc kubenswrapper[4778]: I0312 13:34:50.392581 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f7a2b0f7-9321-4f29-aa01-0acbc528f757" containerName="ceilometer-notification-agent" containerID="cri-o://5a062ded3dfdf7e0b93cfe2d1cae5ba57a787eba6993d1798acc3431826d3e6d" gracePeriod=30 Mar 12 13:34:50 crc kubenswrapper[4778]: I0312 13:34:50.435526 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 12 13:34:50 crc kubenswrapper[4778]: I0312 13:34:50.470609 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"51f24fcd-aff5-4785-abf7-4936180cee78","Type":"ContainerStarted","Data":"430dedef3bc6e9b116b7b20f0e7104500525c4e510ad2304e79eb76ac3410d4f"} Mar 12 13:34:50 crc kubenswrapper[4778]: I0312 13:34:50.512151 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 13:34:51 crc kubenswrapper[4778]: I0312 13:34:51.484489 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cf8c2c79-f773-4580-ad43-3dcbfced2f86","Type":"ContainerStarted","Data":"5f8a4b137c2d402a6e035d8c3ee7d11f7df1ef398c865204e7eeb1039c06313e"} Mar 12 13:34:51 crc kubenswrapper[4778]: I0312 13:34:51.485003 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cf8c2c79-f773-4580-ad43-3dcbfced2f86","Type":"ContainerStarted","Data":"bef232eba49477a7b76b95769657b7c70c9d288b1dd88486202bf1d8cbd9a8db"} Mar 12 13:34:51 crc kubenswrapper[4778]: I0312 13:34:51.485023 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cf8c2c79-f773-4580-ad43-3dcbfced2f86","Type":"ContainerStarted","Data":"4ec39d8d5b0fe3157074d759d6c8d58bd3fe2afde569fbe34129a0aeb9260cbc"} Mar 12 13:34:51 crc kubenswrapper[4778]: I0312 13:34:51.487915 4778 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"51f24fcd-aff5-4785-abf7-4936180cee78","Type":"ContainerStarted","Data":"52f48406d3459ce07c9d5861f0254b9fe6e02c6ebb107cd5294d3822b58e596f"} Mar 12 13:34:51 crc kubenswrapper[4778]: I0312 13:34:51.488089 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Mar 12 13:34:51 crc kubenswrapper[4778]: I0312 13:34:51.494702 4778 generic.go:334] "Generic (PLEG): container finished" podID="f7a2b0f7-9321-4f29-aa01-0acbc528f757" containerID="8982aa7ca4022874e570ac6c59742be94301e2efa10c45e382ecf26ed4330ecf" exitCode=0 Mar 12 13:34:51 crc kubenswrapper[4778]: I0312 13:34:51.494734 4778 generic.go:334] "Generic (PLEG): container finished" podID="f7a2b0f7-9321-4f29-aa01-0acbc528f757" containerID="90e25ee50c06dc750c5cd92c70ddda8f57bcd2a0439070e9a3f541bb4ee1e11c" exitCode=2 Mar 12 13:34:51 crc kubenswrapper[4778]: I0312 13:34:51.494743 4778 generic.go:334] "Generic (PLEG): container finished" podID="f7a2b0f7-9321-4f29-aa01-0acbc528f757" containerID="07b0c82b03265af2d0340c98ef0951f73004b6ed91b6e9f4e0518b57f5492a67" exitCode=0 Mar 12 13:34:51 crc kubenswrapper[4778]: I0312 13:34:51.494767 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7a2b0f7-9321-4f29-aa01-0acbc528f757","Type":"ContainerDied","Data":"8982aa7ca4022874e570ac6c59742be94301e2efa10c45e382ecf26ed4330ecf"} Mar 12 13:34:51 crc kubenswrapper[4778]: I0312 13:34:51.494792 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7a2b0f7-9321-4f29-aa01-0acbc528f757","Type":"ContainerDied","Data":"90e25ee50c06dc750c5cd92c70ddda8f57bcd2a0439070e9a3f541bb4ee1e11c"} Mar 12 13:34:51 crc kubenswrapper[4778]: I0312 13:34:51.494802 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"f7a2b0f7-9321-4f29-aa01-0acbc528f757","Type":"ContainerDied","Data":"07b0c82b03265af2d0340c98ef0951f73004b6ed91b6e9f4e0518b57f5492a67"} Mar 12 13:34:51 crc kubenswrapper[4778]: I0312 13:34:51.513305 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.513288372 podStartE2EDuration="2.513288372s" podCreationTimestamp="2026-03-12 13:34:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:34:51.504875763 +0000 UTC m=+1509.953571159" watchObservedRunningTime="2026-03-12 13:34:51.513288372 +0000 UTC m=+1509.961983768" Mar 12 13:34:52 crc kubenswrapper[4778]: I0312 13:34:52.503958 4778 generic.go:334] "Generic (PLEG): container finished" podID="98a74774-1415-43d1-b278-bead87ab4385" containerID="638395848d77320f6f4d74ca6334a62beda4c18b92408c089881a124597a1418" exitCode=0 Mar 12 13:34:52 crc kubenswrapper[4778]: I0312 13:34:52.504034 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qqx6r" event={"ID":"98a74774-1415-43d1-b278-bead87ab4385","Type":"ContainerDied","Data":"638395848d77320f6f4d74ca6334a62beda4c18b92408c089881a124597a1418"} Mar 12 13:34:52 crc kubenswrapper[4778]: I0312 13:34:52.529080 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.94125185 podStartE2EDuration="3.529054314s" podCreationTimestamp="2026-03-12 13:34:49 +0000 UTC" firstStartedPulling="2026-03-12 13:34:50.454529029 +0000 UTC m=+1508.903224425" lastFinishedPulling="2026-03-12 13:34:51.042331493 +0000 UTC m=+1509.491026889" observedRunningTime="2026-03-12 13:34:51.533773063 +0000 UTC m=+1509.982468459" watchObservedRunningTime="2026-03-12 13:34:52.529054314 +0000 UTC m=+1510.977749710" Mar 12 13:34:52 crc kubenswrapper[4778]: I0312 13:34:52.891497 4778 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Mar 12 13:34:53 crc kubenswrapper[4778]: I0312 13:34:53.000864 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 12 13:34:53 crc kubenswrapper[4778]: I0312 13:34:53.000921 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 12 13:34:53 crc kubenswrapper[4778]: I0312 13:34:53.176446 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 12 13:34:53 crc kubenswrapper[4778]: I0312 13:34:53.176493 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 12 13:34:53 crc kubenswrapper[4778]: I0312 13:34:53.213626 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 12 13:34:53 crc kubenswrapper[4778]: I0312 13:34:53.364417 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757b4f8459-xlfr7" Mar 12 13:34:53 crc kubenswrapper[4778]: I0312 13:34:53.434173 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-246x7"] Mar 12 13:34:53 crc kubenswrapper[4778]: I0312 13:34:53.434438 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-246x7" podUID="43eb6e2e-19ca-402f-a4fa-3b567ef9aef3" containerName="dnsmasq-dns" containerID="cri-o://3acaaf246e65843557136744d8e37d230106fc2f8c2711770c3619615eeab549" gracePeriod=10 Mar 12 13:34:53 crc kubenswrapper[4778]: I0312 13:34:53.519589 4778 generic.go:334] "Generic (PLEG): container finished" podID="58dfb2fb-928e-46de-90dd-481c91a7727c" containerID="a7c208f5185dc692f0ec8df98f6bb0b7b464e0a056d454057e864768b033e299" exitCode=0 Mar 12 13:34:53 crc kubenswrapper[4778]: I0312 13:34:53.519784 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-conductor-db-sync-7dlt6" event={"ID":"58dfb2fb-928e-46de-90dd-481c91a7727c","Type":"ContainerDied","Data":"a7c208f5185dc692f0ec8df98f6bb0b7b464e0a056d454057e864768b033e299"} Mar 12 13:34:53 crc kubenswrapper[4778]: I0312 13:34:53.559614 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.085385 4778 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.192:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.085407 4778 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.192:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.103170 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-246x7" Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.231252 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qqx6r" Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.274051 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/43eb6e2e-19ca-402f-a4fa-3b567ef9aef3-ovsdbserver-sb\") pod \"43eb6e2e-19ca-402f-a4fa-3b567ef9aef3\" (UID: \"43eb6e2e-19ca-402f-a4fa-3b567ef9aef3\") " Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.274117 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/43eb6e2e-19ca-402f-a4fa-3b567ef9aef3-dns-swift-storage-0\") pod \"43eb6e2e-19ca-402f-a4fa-3b567ef9aef3\" (UID: \"43eb6e2e-19ca-402f-a4fa-3b567ef9aef3\") " Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.274200 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43eb6e2e-19ca-402f-a4fa-3b567ef9aef3-dns-svc\") pod \"43eb6e2e-19ca-402f-a4fa-3b567ef9aef3\" (UID: \"43eb6e2e-19ca-402f-a4fa-3b567ef9aef3\") " Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.274237 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/43eb6e2e-19ca-402f-a4fa-3b567ef9aef3-ovsdbserver-nb\") pod \"43eb6e2e-19ca-402f-a4fa-3b567ef9aef3\" (UID: \"43eb6e2e-19ca-402f-a4fa-3b567ef9aef3\") " Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.274272 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43eb6e2e-19ca-402f-a4fa-3b567ef9aef3-config\") pod \"43eb6e2e-19ca-402f-a4fa-3b567ef9aef3\" (UID: \"43eb6e2e-19ca-402f-a4fa-3b567ef9aef3\") " Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.274306 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m62md\" 
(UniqueName: \"kubernetes.io/projected/43eb6e2e-19ca-402f-a4fa-3b567ef9aef3-kube-api-access-m62md\") pod \"43eb6e2e-19ca-402f-a4fa-3b567ef9aef3\" (UID: \"43eb6e2e-19ca-402f-a4fa-3b567ef9aef3\") " Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.326631 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43eb6e2e-19ca-402f-a4fa-3b567ef9aef3-kube-api-access-m62md" (OuterVolumeSpecName: "kube-api-access-m62md") pod "43eb6e2e-19ca-402f-a4fa-3b567ef9aef3" (UID: "43eb6e2e-19ca-402f-a4fa-3b567ef9aef3"). InnerVolumeSpecName "kube-api-access-m62md". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.369454 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43eb6e2e-19ca-402f-a4fa-3b567ef9aef3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "43eb6e2e-19ca-402f-a4fa-3b567ef9aef3" (UID: "43eb6e2e-19ca-402f-a4fa-3b567ef9aef3"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.384150 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpk2x\" (UniqueName: \"kubernetes.io/projected/98a74774-1415-43d1-b278-bead87ab4385-kube-api-access-zpk2x\") pod \"98a74774-1415-43d1-b278-bead87ab4385\" (UID: \"98a74774-1415-43d1-b278-bead87ab4385\") " Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.384369 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98a74774-1415-43d1-b278-bead87ab4385-config-data\") pod \"98a74774-1415-43d1-b278-bead87ab4385\" (UID: \"98a74774-1415-43d1-b278-bead87ab4385\") " Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.384833 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98a74774-1415-43d1-b278-bead87ab4385-scripts\") pod \"98a74774-1415-43d1-b278-bead87ab4385\" (UID: \"98a74774-1415-43d1-b278-bead87ab4385\") " Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.385009 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98a74774-1415-43d1-b278-bead87ab4385-combined-ca-bundle\") pod \"98a74774-1415-43d1-b278-bead87ab4385\" (UID: \"98a74774-1415-43d1-b278-bead87ab4385\") " Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.385873 4778 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/43eb6e2e-19ca-402f-a4fa-3b567ef9aef3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.385894 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m62md\" (UniqueName: \"kubernetes.io/projected/43eb6e2e-19ca-402f-a4fa-3b567ef9aef3-kube-api-access-m62md\") on node \"crc\" 
DevicePath \"\"" Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.392806 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98a74774-1415-43d1-b278-bead87ab4385-kube-api-access-zpk2x" (OuterVolumeSpecName: "kube-api-access-zpk2x") pod "98a74774-1415-43d1-b278-bead87ab4385" (UID: "98a74774-1415-43d1-b278-bead87ab4385"). InnerVolumeSpecName "kube-api-access-zpk2x". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.407387 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98a74774-1415-43d1-b278-bead87ab4385-scripts" (OuterVolumeSpecName: "scripts") pod "98a74774-1415-43d1-b278-bead87ab4385" (UID: "98a74774-1415-43d1-b278-bead87ab4385"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.408811 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.432919 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43eb6e2e-19ca-402f-a4fa-3b567ef9aef3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "43eb6e2e-19ca-402f-a4fa-3b567ef9aef3" (UID: "43eb6e2e-19ca-402f-a4fa-3b567ef9aef3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.450175 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98a74774-1415-43d1-b278-bead87ab4385-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "98a74774-1415-43d1-b278-bead87ab4385" (UID: "98a74774-1415-43d1-b278-bead87ab4385"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.466903 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43eb6e2e-19ca-402f-a4fa-3b567ef9aef3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "43eb6e2e-19ca-402f-a4fa-3b567ef9aef3" (UID: "43eb6e2e-19ca-402f-a4fa-3b567ef9aef3"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.468018 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43eb6e2e-19ca-402f-a4fa-3b567ef9aef3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "43eb6e2e-19ca-402f-a4fa-3b567ef9aef3" (UID: "43eb6e2e-19ca-402f-a4fa-3b567ef9aef3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.469950 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43eb6e2e-19ca-402f-a4fa-3b567ef9aef3-config" (OuterVolumeSpecName: "config") pod "43eb6e2e-19ca-402f-a4fa-3b567ef9aef3" (UID: "43eb6e2e-19ca-402f-a4fa-3b567ef9aef3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.479665 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98a74774-1415-43d1-b278-bead87ab4385-config-data" (OuterVolumeSpecName: "config-data") pod "98a74774-1415-43d1-b278-bead87ab4385" (UID: "98a74774-1415-43d1-b278-bead87ab4385"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.489364 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ft9qb\" (UniqueName: \"kubernetes.io/projected/f7a2b0f7-9321-4f29-aa01-0acbc528f757-kube-api-access-ft9qb\") pod \"f7a2b0f7-9321-4f29-aa01-0acbc528f757\" (UID: \"f7a2b0f7-9321-4f29-aa01-0acbc528f757\") " Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.489475 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f7a2b0f7-9321-4f29-aa01-0acbc528f757-sg-core-conf-yaml\") pod \"f7a2b0f7-9321-4f29-aa01-0acbc528f757\" (UID: \"f7a2b0f7-9321-4f29-aa01-0acbc528f757\") " Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.489662 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7a2b0f7-9321-4f29-aa01-0acbc528f757-run-httpd\") pod \"f7a2b0f7-9321-4f29-aa01-0acbc528f757\" (UID: \"f7a2b0f7-9321-4f29-aa01-0acbc528f757\") " Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.489772 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7a2b0f7-9321-4f29-aa01-0acbc528f757-config-data\") pod \"f7a2b0f7-9321-4f29-aa01-0acbc528f757\" (UID: \"f7a2b0f7-9321-4f29-aa01-0acbc528f757\") " Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.489980 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7a2b0f7-9321-4f29-aa01-0acbc528f757-log-httpd\") pod \"f7a2b0f7-9321-4f29-aa01-0acbc528f757\" (UID: \"f7a2b0f7-9321-4f29-aa01-0acbc528f757\") " Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.490078 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f7a2b0f7-9321-4f29-aa01-0acbc528f757-combined-ca-bundle\") pod \"f7a2b0f7-9321-4f29-aa01-0acbc528f757\" (UID: \"f7a2b0f7-9321-4f29-aa01-0acbc528f757\") " Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.490144 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7a2b0f7-9321-4f29-aa01-0acbc528f757-scripts\") pod \"f7a2b0f7-9321-4f29-aa01-0acbc528f757\" (UID: \"f7a2b0f7-9321-4f29-aa01-0acbc528f757\") " Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.490351 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7a2b0f7-9321-4f29-aa01-0acbc528f757-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f7a2b0f7-9321-4f29-aa01-0acbc528f757" (UID: "f7a2b0f7-9321-4f29-aa01-0acbc528f757"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.490538 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7a2b0f7-9321-4f29-aa01-0acbc528f757-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f7a2b0f7-9321-4f29-aa01-0acbc528f757" (UID: "f7a2b0f7-9321-4f29-aa01-0acbc528f757"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.490888 4778 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/43eb6e2e-19ca-402f-a4fa-3b567ef9aef3-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.490919 4778 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7a2b0f7-9321-4f29-aa01-0acbc528f757-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.490932 4778 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43eb6e2e-19ca-402f-a4fa-3b567ef9aef3-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.490945 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpk2x\" (UniqueName: \"kubernetes.io/projected/98a74774-1415-43d1-b278-bead87ab4385-kube-api-access-zpk2x\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.490957 4778 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43eb6e2e-19ca-402f-a4fa-3b567ef9aef3-config\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.490970 4778 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98a74774-1415-43d1-b278-bead87ab4385-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.490979 4778 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7a2b0f7-9321-4f29-aa01-0acbc528f757-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.490987 4778 reconciler_common.go:293] "Volume 
detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98a74774-1415-43d1-b278-bead87ab4385-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.490998 4778 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98a74774-1415-43d1-b278-bead87ab4385-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.491011 4778 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/43eb6e2e-19ca-402f-a4fa-3b567ef9aef3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.494042 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7a2b0f7-9321-4f29-aa01-0acbc528f757-kube-api-access-ft9qb" (OuterVolumeSpecName: "kube-api-access-ft9qb") pod "f7a2b0f7-9321-4f29-aa01-0acbc528f757" (UID: "f7a2b0f7-9321-4f29-aa01-0acbc528f757"). InnerVolumeSpecName "kube-api-access-ft9qb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.497555 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7a2b0f7-9321-4f29-aa01-0acbc528f757-scripts" (OuterVolumeSpecName: "scripts") pod "f7a2b0f7-9321-4f29-aa01-0acbc528f757" (UID: "f7a2b0f7-9321-4f29-aa01-0acbc528f757"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.532052 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7a2b0f7-9321-4f29-aa01-0acbc528f757-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f7a2b0f7-9321-4f29-aa01-0acbc528f757" (UID: "f7a2b0f7-9321-4f29-aa01-0acbc528f757"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.545054 4778 generic.go:334] "Generic (PLEG): container finished" podID="43eb6e2e-19ca-402f-a4fa-3b567ef9aef3" containerID="3acaaf246e65843557136744d8e37d230106fc2f8c2711770c3619615eeab549" exitCode=0 Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.545401 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-246x7" event={"ID":"43eb6e2e-19ca-402f-a4fa-3b567ef9aef3","Type":"ContainerDied","Data":"3acaaf246e65843557136744d8e37d230106fc2f8c2711770c3619615eeab549"} Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.545431 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-246x7" event={"ID":"43eb6e2e-19ca-402f-a4fa-3b567ef9aef3","Type":"ContainerDied","Data":"60f6f77084cfe6904eb9dc78f60c8b66e7fa89e1a236dd4007f1375a76319d3b"} Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.545448 4778 scope.go:117] "RemoveContainer" containerID="3acaaf246e65843557136744d8e37d230106fc2f8c2711770c3619615eeab549" Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.545581 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-246x7" Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.574257 4778 generic.go:334] "Generic (PLEG): container finished" podID="f7a2b0f7-9321-4f29-aa01-0acbc528f757" containerID="5a062ded3dfdf7e0b93cfe2d1cae5ba57a787eba6993d1798acc3431826d3e6d" exitCode=0 Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.574336 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7a2b0f7-9321-4f29-aa01-0acbc528f757","Type":"ContainerDied","Data":"5a062ded3dfdf7e0b93cfe2d1cae5ba57a787eba6993d1798acc3431826d3e6d"} Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.574369 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7a2b0f7-9321-4f29-aa01-0acbc528f757","Type":"ContainerDied","Data":"b0892c0b95fcd75a23c8ced61d0a214155429e8763fde0aacd038b1f9445ab5d"} Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.574500 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.579582 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qqx6r" event={"ID":"98a74774-1415-43d1-b278-bead87ab4385","Type":"ContainerDied","Data":"94415432161b66df8eaee31803fa6f28ac270cbac75c8be9dbd68e3fe9cda71c"} Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.579641 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94415432161b66df8eaee31803fa6f28ac270cbac75c8be9dbd68e3fe9cda71c" Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.579852 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qqx6r" Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.593478 4778 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7a2b0f7-9321-4f29-aa01-0acbc528f757-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.593543 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ft9qb\" (UniqueName: \"kubernetes.io/projected/f7a2b0f7-9321-4f29-aa01-0acbc528f757-kube-api-access-ft9qb\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.593561 4778 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f7a2b0f7-9321-4f29-aa01-0acbc528f757-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.638208 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7a2b0f7-9321-4f29-aa01-0acbc528f757-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f7a2b0f7-9321-4f29-aa01-0acbc528f757" (UID: "f7a2b0f7-9321-4f29-aa01-0acbc528f757"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.657899 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7a2b0f7-9321-4f29-aa01-0acbc528f757-config-data" (OuterVolumeSpecName: "config-data") pod "f7a2b0f7-9321-4f29-aa01-0acbc528f757" (UID: "f7a2b0f7-9321-4f29-aa01-0acbc528f757"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.694834 4778 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7a2b0f7-9321-4f29-aa01-0acbc528f757-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.694886 4778 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7a2b0f7-9321-4f29-aa01-0acbc528f757-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.715769 4778 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jfzqk" podUID="1b7d48c4-04cd-481a-976d-19e57a28a1d9" containerName="registry-server" probeResult="failure" output=< Mar 12 13:34:54 crc kubenswrapper[4778]: timeout: failed to connect service ":50051" within 1s Mar 12 13:34:54 crc kubenswrapper[4778]: > Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.720091 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.720335 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1" containerName="nova-api-log" containerID="cri-o://8ff6ebf3b7b0b27c32ef14b9af9d9ad2cb5eb0cd0fcf6c338e931544b524d41d" gracePeriod=30 Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.720750 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1" containerName="nova-api-api" containerID="cri-o://e10968e0aa6d0184f80649b4d85f94854a9e9ed4e143833199a6895350db927e" gracePeriod=30 Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.771492 4778 scope.go:117] "RemoveContainer" 
containerID="3be056ef8a27b7c5eec8e8d97597ee2f4dfeb1235b2a01b4f17cb1cb7e9cfd31" Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.830935 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-246x7"] Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.860093 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-246x7"] Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.878988 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.879309 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="cf8c2c79-f773-4580-ad43-3dcbfced2f86" containerName="nova-metadata-log" containerID="cri-o://bef232eba49477a7b76b95769657b7c70c9d288b1dd88486202bf1d8cbd9a8db" gracePeriod=30 Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.879381 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="cf8c2c79-f773-4580-ad43-3dcbfced2f86" containerName="nova-metadata-metadata" containerID="cri-o://5f8a4b137c2d402a6e035d8c3ee7d11f7df1ef398c865204e7eeb1039c06313e" gracePeriod=30 Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.889572 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.938773 4778 scope.go:117] "RemoveContainer" containerID="3acaaf246e65843557136744d8e37d230106fc2f8c2711770c3619615eeab549" Mar 12 13:34:54 crc kubenswrapper[4778]: E0312 13:34:54.940815 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3acaaf246e65843557136744d8e37d230106fc2f8c2711770c3619615eeab549\": container with ID starting with 3acaaf246e65843557136744d8e37d230106fc2f8c2711770c3619615eeab549 not found: ID does not exist" 
containerID="3acaaf246e65843557136744d8e37d230106fc2f8c2711770c3619615eeab549" Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.940860 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3acaaf246e65843557136744d8e37d230106fc2f8c2711770c3619615eeab549"} err="failed to get container status \"3acaaf246e65843557136744d8e37d230106fc2f8c2711770c3619615eeab549\": rpc error: code = NotFound desc = could not find container \"3acaaf246e65843557136744d8e37d230106fc2f8c2711770c3619615eeab549\": container with ID starting with 3acaaf246e65843557136744d8e37d230106fc2f8c2711770c3619615eeab549 not found: ID does not exist" Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.940885 4778 scope.go:117] "RemoveContainer" containerID="3be056ef8a27b7c5eec8e8d97597ee2f4dfeb1235b2a01b4f17cb1cb7e9cfd31" Mar 12 13:34:54 crc kubenswrapper[4778]: E0312 13:34:54.946445 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3be056ef8a27b7c5eec8e8d97597ee2f4dfeb1235b2a01b4f17cb1cb7e9cfd31\": container with ID starting with 3be056ef8a27b7c5eec8e8d97597ee2f4dfeb1235b2a01b4f17cb1cb7e9cfd31 not found: ID does not exist" containerID="3be056ef8a27b7c5eec8e8d97597ee2f4dfeb1235b2a01b4f17cb1cb7e9cfd31" Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.946510 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3be056ef8a27b7c5eec8e8d97597ee2f4dfeb1235b2a01b4f17cb1cb7e9cfd31"} err="failed to get container status \"3be056ef8a27b7c5eec8e8d97597ee2f4dfeb1235b2a01b4f17cb1cb7e9cfd31\": rpc error: code = NotFound desc = could not find container \"3be056ef8a27b7c5eec8e8d97597ee2f4dfeb1235b2a01b4f17cb1cb7e9cfd31\": container with ID starting with 3be056ef8a27b7c5eec8e8d97597ee2f4dfeb1235b2a01b4f17cb1cb7e9cfd31 not found: ID does not exist" Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.946568 4778 scope.go:117] 
"RemoveContainer" containerID="8982aa7ca4022874e570ac6c59742be94301e2efa10c45e382ecf26ed4330ecf" Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.961592 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 13:34:54 crc kubenswrapper[4778]: I0312 13:34:54.974371 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.000942 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7dlt6" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.008211 4778 scope.go:117] "RemoveContainer" containerID="90e25ee50c06dc750c5cd92c70ddda8f57bcd2a0439070e9a3f541bb4ee1e11c" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.046364 4778 scope.go:117] "RemoveContainer" containerID="5a062ded3dfdf7e0b93cfe2d1cae5ba57a787eba6993d1798acc3431826d3e6d" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.055782 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 12 13:34:55 crc kubenswrapper[4778]: E0312 13:34:55.056554 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7a2b0f7-9321-4f29-aa01-0acbc528f757" containerName="sg-core" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.056653 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7a2b0f7-9321-4f29-aa01-0acbc528f757" containerName="sg-core" Mar 12 13:34:55 crc kubenswrapper[4778]: E0312 13:34:55.056993 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7a2b0f7-9321-4f29-aa01-0acbc528f757" containerName="ceilometer-central-agent" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.057109 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7a2b0f7-9321-4f29-aa01-0acbc528f757" containerName="ceilometer-central-agent" Mar 12 13:34:55 crc kubenswrapper[4778]: E0312 13:34:55.057325 4778 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="f7a2b0f7-9321-4f29-aa01-0acbc528f757" containerName="proxy-httpd" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.057431 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7a2b0f7-9321-4f29-aa01-0acbc528f757" containerName="proxy-httpd" Mar 12 13:34:55 crc kubenswrapper[4778]: E0312 13:34:55.057554 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98a74774-1415-43d1-b278-bead87ab4385" containerName="nova-manage" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.057635 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="98a74774-1415-43d1-b278-bead87ab4385" containerName="nova-manage" Mar 12 13:34:55 crc kubenswrapper[4778]: E0312 13:34:55.057714 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58dfb2fb-928e-46de-90dd-481c91a7727c" containerName="nova-cell1-conductor-db-sync" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.057818 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="58dfb2fb-928e-46de-90dd-481c91a7727c" containerName="nova-cell1-conductor-db-sync" Mar 12 13:34:55 crc kubenswrapper[4778]: E0312 13:34:55.057907 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43eb6e2e-19ca-402f-a4fa-3b567ef9aef3" containerName="init" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.058194 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="43eb6e2e-19ca-402f-a4fa-3b567ef9aef3" containerName="init" Mar 12 13:34:55 crc kubenswrapper[4778]: E0312 13:34:55.058847 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43eb6e2e-19ca-402f-a4fa-3b567ef9aef3" containerName="dnsmasq-dns" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.058936 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="43eb6e2e-19ca-402f-a4fa-3b567ef9aef3" containerName="dnsmasq-dns" Mar 12 13:34:55 crc kubenswrapper[4778]: E0312 13:34:55.059038 4778 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f7a2b0f7-9321-4f29-aa01-0acbc528f757" containerName="ceilometer-notification-agent" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.059582 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7a2b0f7-9321-4f29-aa01-0acbc528f757" containerName="ceilometer-notification-agent" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.060011 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7a2b0f7-9321-4f29-aa01-0acbc528f757" containerName="sg-core" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.060131 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="43eb6e2e-19ca-402f-a4fa-3b567ef9aef3" containerName="dnsmasq-dns" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.060241 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7a2b0f7-9321-4f29-aa01-0acbc528f757" containerName="proxy-httpd" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.060367 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7a2b0f7-9321-4f29-aa01-0acbc528f757" containerName="ceilometer-notification-agent" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.060461 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7a2b0f7-9321-4f29-aa01-0acbc528f757" containerName="ceilometer-central-agent" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.060555 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="98a74774-1415-43d1-b278-bead87ab4385" containerName="nova-manage" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.062175 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="58dfb2fb-928e-46de-90dd-481c91a7727c" containerName="nova-cell1-conductor-db-sync" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.065092 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.072644 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.074901 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.075096 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.083141 4778 scope.go:117] "RemoveContainer" containerID="07b0c82b03265af2d0340c98ef0951f73004b6ed91b6e9f4e0518b57f5492a67" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.084344 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.109331 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqndb\" (UniqueName: \"kubernetes.io/projected/58dfb2fb-928e-46de-90dd-481c91a7727c-kube-api-access-gqndb\") pod \"58dfb2fb-928e-46de-90dd-481c91a7727c\" (UID: \"58dfb2fb-928e-46de-90dd-481c91a7727c\") " Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.109384 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58dfb2fb-928e-46de-90dd-481c91a7727c-config-data\") pod \"58dfb2fb-928e-46de-90dd-481c91a7727c\" (UID: \"58dfb2fb-928e-46de-90dd-481c91a7727c\") " Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.109467 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58dfb2fb-928e-46de-90dd-481c91a7727c-combined-ca-bundle\") pod \"58dfb2fb-928e-46de-90dd-481c91a7727c\" (UID: \"58dfb2fb-928e-46de-90dd-481c91a7727c\") " Mar 12 13:34:55 crc kubenswrapper[4778]: 
I0312 13:34:55.109544 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58dfb2fb-928e-46de-90dd-481c91a7727c-scripts\") pod \"58dfb2fb-928e-46de-90dd-481c91a7727c\" (UID: \"58dfb2fb-928e-46de-90dd-481c91a7727c\") " Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.110077 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1488e83-3a44-41ad-aa96-de09b662c16e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e1488e83-3a44-41ad-aa96-de09b662c16e\") " pod="openstack/ceilometer-0" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.110152 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82xnp\" (UniqueName: \"kubernetes.io/projected/e1488e83-3a44-41ad-aa96-de09b662c16e-kube-api-access-82xnp\") pod \"ceilometer-0\" (UID: \"e1488e83-3a44-41ad-aa96-de09b662c16e\") " pod="openstack/ceilometer-0" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.110288 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1488e83-3a44-41ad-aa96-de09b662c16e-scripts\") pod \"ceilometer-0\" (UID: \"e1488e83-3a44-41ad-aa96-de09b662c16e\") " pod="openstack/ceilometer-0" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.110319 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1488e83-3a44-41ad-aa96-de09b662c16e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e1488e83-3a44-41ad-aa96-de09b662c16e\") " pod="openstack/ceilometer-0" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.110361 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" 
(UniqueName: \"kubernetes.io/secret/e1488e83-3a44-41ad-aa96-de09b662c16e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e1488e83-3a44-41ad-aa96-de09b662c16e\") " pod="openstack/ceilometer-0" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.110386 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e1488e83-3a44-41ad-aa96-de09b662c16e-run-httpd\") pod \"ceilometer-0\" (UID: \"e1488e83-3a44-41ad-aa96-de09b662c16e\") " pod="openstack/ceilometer-0" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.110432 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e1488e83-3a44-41ad-aa96-de09b662c16e-log-httpd\") pod \"ceilometer-0\" (UID: \"e1488e83-3a44-41ad-aa96-de09b662c16e\") " pod="openstack/ceilometer-0" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.110455 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1488e83-3a44-41ad-aa96-de09b662c16e-config-data\") pod \"ceilometer-0\" (UID: \"e1488e83-3a44-41ad-aa96-de09b662c16e\") " pod="openstack/ceilometer-0" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.116455 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58dfb2fb-928e-46de-90dd-481c91a7727c-kube-api-access-gqndb" (OuterVolumeSpecName: "kube-api-access-gqndb") pod "58dfb2fb-928e-46de-90dd-481c91a7727c" (UID: "58dfb2fb-928e-46de-90dd-481c91a7727c"). InnerVolumeSpecName "kube-api-access-gqndb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.118862 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58dfb2fb-928e-46de-90dd-481c91a7727c-scripts" (OuterVolumeSpecName: "scripts") pod "58dfb2fb-928e-46de-90dd-481c91a7727c" (UID: "58dfb2fb-928e-46de-90dd-481c91a7727c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.145325 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58dfb2fb-928e-46de-90dd-481c91a7727c-config-data" (OuterVolumeSpecName: "config-data") pod "58dfb2fb-928e-46de-90dd-481c91a7727c" (UID: "58dfb2fb-928e-46de-90dd-481c91a7727c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.148432 4778 scope.go:117] "RemoveContainer" containerID="8982aa7ca4022874e570ac6c59742be94301e2efa10c45e382ecf26ed4330ecf" Mar 12 13:34:55 crc kubenswrapper[4778]: E0312 13:34:55.148924 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8982aa7ca4022874e570ac6c59742be94301e2efa10c45e382ecf26ed4330ecf\": container with ID starting with 8982aa7ca4022874e570ac6c59742be94301e2efa10c45e382ecf26ed4330ecf not found: ID does not exist" containerID="8982aa7ca4022874e570ac6c59742be94301e2efa10c45e382ecf26ed4330ecf" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.148968 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8982aa7ca4022874e570ac6c59742be94301e2efa10c45e382ecf26ed4330ecf"} err="failed to get container status \"8982aa7ca4022874e570ac6c59742be94301e2efa10c45e382ecf26ed4330ecf\": rpc error: code = NotFound desc = could not find container 
\"8982aa7ca4022874e570ac6c59742be94301e2efa10c45e382ecf26ed4330ecf\": container with ID starting with 8982aa7ca4022874e570ac6c59742be94301e2efa10c45e382ecf26ed4330ecf not found: ID does not exist" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.148999 4778 scope.go:117] "RemoveContainer" containerID="90e25ee50c06dc750c5cd92c70ddda8f57bcd2a0439070e9a3f541bb4ee1e11c" Mar 12 13:34:55 crc kubenswrapper[4778]: E0312 13:34:55.149396 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90e25ee50c06dc750c5cd92c70ddda8f57bcd2a0439070e9a3f541bb4ee1e11c\": container with ID starting with 90e25ee50c06dc750c5cd92c70ddda8f57bcd2a0439070e9a3f541bb4ee1e11c not found: ID does not exist" containerID="90e25ee50c06dc750c5cd92c70ddda8f57bcd2a0439070e9a3f541bb4ee1e11c" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.149495 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90e25ee50c06dc750c5cd92c70ddda8f57bcd2a0439070e9a3f541bb4ee1e11c"} err="failed to get container status \"90e25ee50c06dc750c5cd92c70ddda8f57bcd2a0439070e9a3f541bb4ee1e11c\": rpc error: code = NotFound desc = could not find container \"90e25ee50c06dc750c5cd92c70ddda8f57bcd2a0439070e9a3f541bb4ee1e11c\": container with ID starting with 90e25ee50c06dc750c5cd92c70ddda8f57bcd2a0439070e9a3f541bb4ee1e11c not found: ID does not exist" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.149585 4778 scope.go:117] "RemoveContainer" containerID="5a062ded3dfdf7e0b93cfe2d1cae5ba57a787eba6993d1798acc3431826d3e6d" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.149794 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58dfb2fb-928e-46de-90dd-481c91a7727c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "58dfb2fb-928e-46de-90dd-481c91a7727c" (UID: "58dfb2fb-928e-46de-90dd-481c91a7727c"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:34:55 crc kubenswrapper[4778]: E0312 13:34:55.149914 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a062ded3dfdf7e0b93cfe2d1cae5ba57a787eba6993d1798acc3431826d3e6d\": container with ID starting with 5a062ded3dfdf7e0b93cfe2d1cae5ba57a787eba6993d1798acc3431826d3e6d not found: ID does not exist" containerID="5a062ded3dfdf7e0b93cfe2d1cae5ba57a787eba6993d1798acc3431826d3e6d" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.149941 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a062ded3dfdf7e0b93cfe2d1cae5ba57a787eba6993d1798acc3431826d3e6d"} err="failed to get container status \"5a062ded3dfdf7e0b93cfe2d1cae5ba57a787eba6993d1798acc3431826d3e6d\": rpc error: code = NotFound desc = could not find container \"5a062ded3dfdf7e0b93cfe2d1cae5ba57a787eba6993d1798acc3431826d3e6d\": container with ID starting with 5a062ded3dfdf7e0b93cfe2d1cae5ba57a787eba6993d1798acc3431826d3e6d not found: ID does not exist" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.149962 4778 scope.go:117] "RemoveContainer" containerID="07b0c82b03265af2d0340c98ef0951f73004b6ed91b6e9f4e0518b57f5492a67" Mar 12 13:34:55 crc kubenswrapper[4778]: E0312 13:34:55.150363 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07b0c82b03265af2d0340c98ef0951f73004b6ed91b6e9f4e0518b57f5492a67\": container with ID starting with 07b0c82b03265af2d0340c98ef0951f73004b6ed91b6e9f4e0518b57f5492a67 not found: ID does not exist" containerID="07b0c82b03265af2d0340c98ef0951f73004b6ed91b6e9f4e0518b57f5492a67" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.150431 4778 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"07b0c82b03265af2d0340c98ef0951f73004b6ed91b6e9f4e0518b57f5492a67"} err="failed to get container status \"07b0c82b03265af2d0340c98ef0951f73004b6ed91b6e9f4e0518b57f5492a67\": rpc error: code = NotFound desc = could not find container \"07b0c82b03265af2d0340c98ef0951f73004b6ed91b6e9f4e0518b57f5492a67\": container with ID starting with 07b0c82b03265af2d0340c98ef0951f73004b6ed91b6e9f4e0518b57f5492a67 not found: ID does not exist" Mar 12 13:34:55 crc kubenswrapper[4778]: E0312 13:34:55.171593 4778 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4697dd3e_2fbd_4855_819e_bdd6f0d9cfe1.slice/crio-8ff6ebf3b7b0b27c32ef14b9af9d9ad2cb5eb0cd0fcf6c338e931544b524d41d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4697dd3e_2fbd_4855_819e_bdd6f0d9cfe1.slice/crio-conmon-8ff6ebf3b7b0b27c32ef14b9af9d9ad2cb5eb0cd0fcf6c338e931544b524d41d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43eb6e2e_19ca_402f_a4fa_3b567ef9aef3.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98a74774_1415_43d1_b278_bead87ab4385.slice/crio-94415432161b66df8eaee31803fa6f28ac270cbac75c8be9dbd68e3fe9cda71c\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43eb6e2e_19ca_402f_a4fa_3b567ef9aef3.slice/crio-60f6f77084cfe6904eb9dc78f60c8b66e7fa89e1a236dd4007f1375a76319d3b\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98a74774_1415_43d1_b278_bead87ab4385.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf8c2c79_f773_4580_ad43_3dcbfced2f86.slice/crio-bef232eba49477a7b76b95769657b7c70c9d288b1dd88486202bf1d8cbd9a8db.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf8c2c79_f773_4580_ad43_3dcbfced2f86.slice/crio-5f8a4b137c2d402a6e035d8c3ee7d11f7df1ef398c865204e7eeb1039c06313e.scope\": RecentStats: unable to find data in memory cache]" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.214540 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1488e83-3a44-41ad-aa96-de09b662c16e-scripts\") pod \"ceilometer-0\" (UID: \"e1488e83-3a44-41ad-aa96-de09b662c16e\") " pod="openstack/ceilometer-0" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.214608 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1488e83-3a44-41ad-aa96-de09b662c16e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e1488e83-3a44-41ad-aa96-de09b662c16e\") " pod="openstack/ceilometer-0" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.214660 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e1488e83-3a44-41ad-aa96-de09b662c16e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e1488e83-3a44-41ad-aa96-de09b662c16e\") " pod="openstack/ceilometer-0" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.214681 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e1488e83-3a44-41ad-aa96-de09b662c16e-run-httpd\") pod \"ceilometer-0\" (UID: \"e1488e83-3a44-41ad-aa96-de09b662c16e\") " pod="openstack/ceilometer-0" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.214721 4778 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e1488e83-3a44-41ad-aa96-de09b662c16e-log-httpd\") pod \"ceilometer-0\" (UID: \"e1488e83-3a44-41ad-aa96-de09b662c16e\") " pod="openstack/ceilometer-0" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.214740 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1488e83-3a44-41ad-aa96-de09b662c16e-config-data\") pod \"ceilometer-0\" (UID: \"e1488e83-3a44-41ad-aa96-de09b662c16e\") " pod="openstack/ceilometer-0" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.214775 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1488e83-3a44-41ad-aa96-de09b662c16e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e1488e83-3a44-41ad-aa96-de09b662c16e\") " pod="openstack/ceilometer-0" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.214824 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82xnp\" (UniqueName: \"kubernetes.io/projected/e1488e83-3a44-41ad-aa96-de09b662c16e-kube-api-access-82xnp\") pod \"ceilometer-0\" (UID: \"e1488e83-3a44-41ad-aa96-de09b662c16e\") " pod="openstack/ceilometer-0" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.214905 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqndb\" (UniqueName: \"kubernetes.io/projected/58dfb2fb-928e-46de-90dd-481c91a7727c-kube-api-access-gqndb\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.214918 4778 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58dfb2fb-928e-46de-90dd-481c91a7727c-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.214931 4778 reconciler_common.go:293] "Volume detached for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58dfb2fb-928e-46de-90dd-481c91a7727c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.214941 4778 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58dfb2fb-928e-46de-90dd-481c91a7727c-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.215751 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e1488e83-3a44-41ad-aa96-de09b662c16e-run-httpd\") pod \"ceilometer-0\" (UID: \"e1488e83-3a44-41ad-aa96-de09b662c16e\") " pod="openstack/ceilometer-0" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.215921 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e1488e83-3a44-41ad-aa96-de09b662c16e-log-httpd\") pod \"ceilometer-0\" (UID: \"e1488e83-3a44-41ad-aa96-de09b662c16e\") " pod="openstack/ceilometer-0" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.221417 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1488e83-3a44-41ad-aa96-de09b662c16e-scripts\") pod \"ceilometer-0\" (UID: \"e1488e83-3a44-41ad-aa96-de09b662c16e\") " pod="openstack/ceilometer-0" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.221599 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1488e83-3a44-41ad-aa96-de09b662c16e-config-data\") pod \"ceilometer-0\" (UID: \"e1488e83-3a44-41ad-aa96-de09b662c16e\") " pod="openstack/ceilometer-0" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.225741 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e1488e83-3a44-41ad-aa96-de09b662c16e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e1488e83-3a44-41ad-aa96-de09b662c16e\") " pod="openstack/ceilometer-0" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.227346 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1488e83-3a44-41ad-aa96-de09b662c16e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e1488e83-3a44-41ad-aa96-de09b662c16e\") " pod="openstack/ceilometer-0" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.227955 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e1488e83-3a44-41ad-aa96-de09b662c16e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e1488e83-3a44-41ad-aa96-de09b662c16e\") " pod="openstack/ceilometer-0" Mar 12 13:34:55 crc kubenswrapper[4778]: I0312 13:34:55.229656 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82xnp\" (UniqueName: \"kubernetes.io/projected/e1488e83-3a44-41ad-aa96-de09b662c16e-kube-api-access-82xnp\") pod \"ceilometer-0\" (UID: \"e1488e83-3a44-41ad-aa96-de09b662c16e\") " pod="openstack/ceilometer-0" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.383058 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.447240 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.523169 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf8c2c79-f773-4580-ad43-3dcbfced2f86-config-data\") pod \"cf8c2c79-f773-4580-ad43-3dcbfced2f86\" (UID: \"cf8c2c79-f773-4580-ad43-3dcbfced2f86\") " Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.523338 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmpvk\" (UniqueName: \"kubernetes.io/projected/cf8c2c79-f773-4580-ad43-3dcbfced2f86-kube-api-access-pmpvk\") pod \"cf8c2c79-f773-4580-ad43-3dcbfced2f86\" (UID: \"cf8c2c79-f773-4580-ad43-3dcbfced2f86\") " Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.523409 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf8c2c79-f773-4580-ad43-3dcbfced2f86-combined-ca-bundle\") pod \"cf8c2c79-f773-4580-ad43-3dcbfced2f86\" (UID: \"cf8c2c79-f773-4580-ad43-3dcbfced2f86\") " Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.523489 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf8c2c79-f773-4580-ad43-3dcbfced2f86-logs\") pod \"cf8c2c79-f773-4580-ad43-3dcbfced2f86\" (UID: \"cf8c2c79-f773-4580-ad43-3dcbfced2f86\") " Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.523527 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf8c2c79-f773-4580-ad43-3dcbfced2f86-nova-metadata-tls-certs\") pod \"cf8c2c79-f773-4580-ad43-3dcbfced2f86\" (UID: \"cf8c2c79-f773-4580-ad43-3dcbfced2f86\") " Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.524592 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/cf8c2c79-f773-4580-ad43-3dcbfced2f86-logs" (OuterVolumeSpecName: "logs") pod "cf8c2c79-f773-4580-ad43-3dcbfced2f86" (UID: "cf8c2c79-f773-4580-ad43-3dcbfced2f86"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.530092 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf8c2c79-f773-4580-ad43-3dcbfced2f86-kube-api-access-pmpvk" (OuterVolumeSpecName: "kube-api-access-pmpvk") pod "cf8c2c79-f773-4580-ad43-3dcbfced2f86" (UID: "cf8c2c79-f773-4580-ad43-3dcbfced2f86"). InnerVolumeSpecName "kube-api-access-pmpvk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.571331 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf8c2c79-f773-4580-ad43-3dcbfced2f86-config-data" (OuterVolumeSpecName: "config-data") pod "cf8c2c79-f773-4580-ad43-3dcbfced2f86" (UID: "cf8c2c79-f773-4580-ad43-3dcbfced2f86"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.607122 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf8c2c79-f773-4580-ad43-3dcbfced2f86-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cf8c2c79-f773-4580-ad43-3dcbfced2f86" (UID: "cf8c2c79-f773-4580-ad43-3dcbfced2f86"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.640566 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 12 13:34:56 crc kubenswrapper[4778]: E0312 13:34:55.641080 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf8c2c79-f773-4580-ad43-3dcbfced2f86" containerName="nova-metadata-metadata" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.641098 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf8c2c79-f773-4580-ad43-3dcbfced2f86" containerName="nova-metadata-metadata" Mar 12 13:34:56 crc kubenswrapper[4778]: E0312 13:34:55.641126 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf8c2c79-f773-4580-ad43-3dcbfced2f86" containerName="nova-metadata-log" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.641135 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf8c2c79-f773-4580-ad43-3dcbfced2f86" containerName="nova-metadata-log" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.641345 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf8c2c79-f773-4580-ad43-3dcbfced2f86" containerName="nova-metadata-metadata" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.641367 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf8c2c79-f773-4580-ad43-3dcbfced2f86" containerName="nova-metadata-log" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.642740 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.648032 4778 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf8c2c79-f773-4580-ad43-3dcbfced2f86-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.648063 4778 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf8c2c79-f773-4580-ad43-3dcbfced2f86-logs\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.648073 4778 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf8c2c79-f773-4580-ad43-3dcbfced2f86-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.648082 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmpvk\" (UniqueName: \"kubernetes.io/projected/cf8c2c79-f773-4580-ad43-3dcbfced2f86-kube-api-access-pmpvk\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.666497 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.691693 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7dlt6" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.692071 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7dlt6" event={"ID":"58dfb2fb-928e-46de-90dd-481c91a7727c","Type":"ContainerDied","Data":"6616a6464fcc9dedf4bc63acdc82e9b9e7114af17dd2023df257ae235015b89a"} Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.692376 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6616a6464fcc9dedf4bc63acdc82e9b9e7114af17dd2023df257ae235015b89a" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.692389 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf8c2c79-f773-4580-ad43-3dcbfced2f86-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "cf8c2c79-f773-4580-ad43-3dcbfced2f86" (UID: "cf8c2c79-f773-4580-ad43-3dcbfced2f86"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.729798 4778 generic.go:334] "Generic (PLEG): container finished" podID="cf8c2c79-f773-4580-ad43-3dcbfced2f86" containerID="5f8a4b137c2d402a6e035d8c3ee7d11f7df1ef398c865204e7eeb1039c06313e" exitCode=0 Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.729836 4778 generic.go:334] "Generic (PLEG): container finished" podID="cf8c2c79-f773-4580-ad43-3dcbfced2f86" containerID="bef232eba49477a7b76b95769657b7c70c9d288b1dd88486202bf1d8cbd9a8db" exitCode=143 Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.729903 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cf8c2c79-f773-4580-ad43-3dcbfced2f86","Type":"ContainerDied","Data":"5f8a4b137c2d402a6e035d8c3ee7d11f7df1ef398c865204e7eeb1039c06313e"} Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.729930 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cf8c2c79-f773-4580-ad43-3dcbfced2f86","Type":"ContainerDied","Data":"bef232eba49477a7b76b95769657b7c70c9d288b1dd88486202bf1d8cbd9a8db"} Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.729940 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cf8c2c79-f773-4580-ad43-3dcbfced2f86","Type":"ContainerDied","Data":"4ec39d8d5b0fe3157074d759d6c8d58bd3fe2afde569fbe34129a0aeb9260cbc"} Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.729954 4778 scope.go:117] "RemoveContainer" containerID="5f8a4b137c2d402a6e035d8c3ee7d11f7df1ef398c865204e7eeb1039c06313e" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.730093 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.754471 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srnlz\" (UniqueName: \"kubernetes.io/projected/e28e8bc2-4b60-447e-b78e-99f53f0559e9-kube-api-access-srnlz\") pod \"nova-cell1-conductor-0\" (UID: \"e28e8bc2-4b60-447e-b78e-99f53f0559e9\") " pod="openstack/nova-cell1-conductor-0" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.754984 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e28e8bc2-4b60-447e-b78e-99f53f0559e9-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e28e8bc2-4b60-447e-b78e-99f53f0559e9\") " pod="openstack/nova-cell1-conductor-0" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.755368 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e28e8bc2-4b60-447e-b78e-99f53f0559e9-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e28e8bc2-4b60-447e-b78e-99f53f0559e9\") " pod="openstack/nova-cell1-conductor-0" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.755795 4778 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf8c2c79-f773-4580-ad43-3dcbfced2f86-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.756674 4778 generic.go:334] "Generic (PLEG): container finished" podID="4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1" containerID="8ff6ebf3b7b0b27c32ef14b9af9d9ad2cb5eb0cd0fcf6c338e931544b524d41d" exitCode=143 Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.756752 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1","Type":"ContainerDied","Data":"8ff6ebf3b7b0b27c32ef14b9af9d9ad2cb5eb0cd0fcf6c338e931544b524d41d"} Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.758087 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="65ad5500-0148-42d4-a597-53e265081516" containerName="nova-scheduler-scheduler" containerID="cri-o://4a58f69bc959200337168fc6797ed9ced69f8a04dbe14d36ec0e69b2498fb5e1" gracePeriod=30 Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.792659 4778 scope.go:117] "RemoveContainer" containerID="bef232eba49477a7b76b95769657b7c70c9d288b1dd88486202bf1d8cbd9a8db" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.820703 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.828569 4778 scope.go:117] "RemoveContainer" containerID="5f8a4b137c2d402a6e035d8c3ee7d11f7df1ef398c865204e7eeb1039c06313e" Mar 12 13:34:56 crc kubenswrapper[4778]: E0312 13:34:55.839971 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f8a4b137c2d402a6e035d8c3ee7d11f7df1ef398c865204e7eeb1039c06313e\": container with ID starting with 5f8a4b137c2d402a6e035d8c3ee7d11f7df1ef398c865204e7eeb1039c06313e not found: ID does not exist" containerID="5f8a4b137c2d402a6e035d8c3ee7d11f7df1ef398c865204e7eeb1039c06313e" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.840018 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f8a4b137c2d402a6e035d8c3ee7d11f7df1ef398c865204e7eeb1039c06313e"} err="failed to get container status \"5f8a4b137c2d402a6e035d8c3ee7d11f7df1ef398c865204e7eeb1039c06313e\": rpc error: code = NotFound desc = could not find container \"5f8a4b137c2d402a6e035d8c3ee7d11f7df1ef398c865204e7eeb1039c06313e\": container with ID starting with 
5f8a4b137c2d402a6e035d8c3ee7d11f7df1ef398c865204e7eeb1039c06313e not found: ID does not exist" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.840043 4778 scope.go:117] "RemoveContainer" containerID="bef232eba49477a7b76b95769657b7c70c9d288b1dd88486202bf1d8cbd9a8db" Mar 12 13:34:56 crc kubenswrapper[4778]: E0312 13:34:55.843228 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bef232eba49477a7b76b95769657b7c70c9d288b1dd88486202bf1d8cbd9a8db\": container with ID starting with bef232eba49477a7b76b95769657b7c70c9d288b1dd88486202bf1d8cbd9a8db not found: ID does not exist" containerID="bef232eba49477a7b76b95769657b7c70c9d288b1dd88486202bf1d8cbd9a8db" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.843250 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bef232eba49477a7b76b95769657b7c70c9d288b1dd88486202bf1d8cbd9a8db"} err="failed to get container status \"bef232eba49477a7b76b95769657b7c70c9d288b1dd88486202bf1d8cbd9a8db\": rpc error: code = NotFound desc = could not find container \"bef232eba49477a7b76b95769657b7c70c9d288b1dd88486202bf1d8cbd9a8db\": container with ID starting with bef232eba49477a7b76b95769657b7c70c9d288b1dd88486202bf1d8cbd9a8db not found: ID does not exist" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.843266 4778 scope.go:117] "RemoveContainer" containerID="5f8a4b137c2d402a6e035d8c3ee7d11f7df1ef398c865204e7eeb1039c06313e" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.843656 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f8a4b137c2d402a6e035d8c3ee7d11f7df1ef398c865204e7eeb1039c06313e"} err="failed to get container status \"5f8a4b137c2d402a6e035d8c3ee7d11f7df1ef398c865204e7eeb1039c06313e\": rpc error: code = NotFound desc = could not find container \"5f8a4b137c2d402a6e035d8c3ee7d11f7df1ef398c865204e7eeb1039c06313e\": container with ID 
starting with 5f8a4b137c2d402a6e035d8c3ee7d11f7df1ef398c865204e7eeb1039c06313e not found: ID does not exist" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.843690 4778 scope.go:117] "RemoveContainer" containerID="bef232eba49477a7b76b95769657b7c70c9d288b1dd88486202bf1d8cbd9a8db" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.843952 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bef232eba49477a7b76b95769657b7c70c9d288b1dd88486202bf1d8cbd9a8db"} err="failed to get container status \"bef232eba49477a7b76b95769657b7c70c9d288b1dd88486202bf1d8cbd9a8db\": rpc error: code = NotFound desc = could not find container \"bef232eba49477a7b76b95769657b7c70c9d288b1dd88486202bf1d8cbd9a8db\": container with ID starting with bef232eba49477a7b76b95769657b7c70c9d288b1dd88486202bf1d8cbd9a8db not found: ID does not exist" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.857217 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srnlz\" (UniqueName: \"kubernetes.io/projected/e28e8bc2-4b60-447e-b78e-99f53f0559e9-kube-api-access-srnlz\") pod \"nova-cell1-conductor-0\" (UID: \"e28e8bc2-4b60-447e-b78e-99f53f0559e9\") " pod="openstack/nova-cell1-conductor-0" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.857255 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e28e8bc2-4b60-447e-b78e-99f53f0559e9-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e28e8bc2-4b60-447e-b78e-99f53f0559e9\") " pod="openstack/nova-cell1-conductor-0" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.857354 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e28e8bc2-4b60-447e-b78e-99f53f0559e9-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e28e8bc2-4b60-447e-b78e-99f53f0559e9\") " 
pod="openstack/nova-cell1-conductor-0" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.869051 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e28e8bc2-4b60-447e-b78e-99f53f0559e9-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e28e8bc2-4b60-447e-b78e-99f53f0559e9\") " pod="openstack/nova-cell1-conductor-0" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.872778 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.881052 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e28e8bc2-4b60-447e-b78e-99f53f0559e9-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e28e8bc2-4b60-447e-b78e-99f53f0559e9\") " pod="openstack/nova-cell1-conductor-0" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.885824 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srnlz\" (UniqueName: \"kubernetes.io/projected/e28e8bc2-4b60-447e-b78e-99f53f0559e9-kube-api-access-srnlz\") pod \"nova-cell1-conductor-0\" (UID: \"e28e8bc2-4b60-447e-b78e-99f53f0559e9\") " pod="openstack/nova-cell1-conductor-0" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.891915 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.895340 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.899693 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.901576 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.919709 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.960733 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2700355-e048-4458-b430-8d149a08d624-config-data\") pod \"nova-metadata-0\" (UID: \"c2700355-e048-4458-b430-8d149a08d624\") " pod="openstack/nova-metadata-0" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.960959 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2700355-e048-4458-b430-8d149a08d624-logs\") pod \"nova-metadata-0\" (UID: \"c2700355-e048-4458-b430-8d149a08d624\") " pod="openstack/nova-metadata-0" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.961040 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2700355-e048-4458-b430-8d149a08d624-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c2700355-e048-4458-b430-8d149a08d624\") " pod="openstack/nova-metadata-0" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.961292 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2700355-e048-4458-b430-8d149a08d624-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: 
\"c2700355-e048-4458-b430-8d149a08d624\") " pod="openstack/nova-metadata-0" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:55.961389 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbjl2\" (UniqueName: \"kubernetes.io/projected/c2700355-e048-4458-b430-8d149a08d624-kube-api-access-jbjl2\") pod \"nova-metadata-0\" (UID: \"c2700355-e048-4458-b430-8d149a08d624\") " pod="openstack/nova-metadata-0" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:56.062927 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2700355-e048-4458-b430-8d149a08d624-logs\") pod \"nova-metadata-0\" (UID: \"c2700355-e048-4458-b430-8d149a08d624\") " pod="openstack/nova-metadata-0" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:56.062977 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2700355-e048-4458-b430-8d149a08d624-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c2700355-e048-4458-b430-8d149a08d624\") " pod="openstack/nova-metadata-0" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:56.063039 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2700355-e048-4458-b430-8d149a08d624-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c2700355-e048-4458-b430-8d149a08d624\") " pod="openstack/nova-metadata-0" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:56.063072 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbjl2\" (UniqueName: \"kubernetes.io/projected/c2700355-e048-4458-b430-8d149a08d624-kube-api-access-jbjl2\") pod \"nova-metadata-0\" (UID: \"c2700355-e048-4458-b430-8d149a08d624\") " pod="openstack/nova-metadata-0" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 
13:34:56.063117 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2700355-e048-4458-b430-8d149a08d624-config-data\") pod \"nova-metadata-0\" (UID: \"c2700355-e048-4458-b430-8d149a08d624\") " pod="openstack/nova-metadata-0" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:56.063537 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2700355-e048-4458-b430-8d149a08d624-logs\") pod \"nova-metadata-0\" (UID: \"c2700355-e048-4458-b430-8d149a08d624\") " pod="openstack/nova-metadata-0" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:56.066847 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2700355-e048-4458-b430-8d149a08d624-config-data\") pod \"nova-metadata-0\" (UID: \"c2700355-e048-4458-b430-8d149a08d624\") " pod="openstack/nova-metadata-0" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:56.069243 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2700355-e048-4458-b430-8d149a08d624-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c2700355-e048-4458-b430-8d149a08d624\") " pod="openstack/nova-metadata-0" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:56.069984 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2700355-e048-4458-b430-8d149a08d624-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c2700355-e048-4458-b430-8d149a08d624\") " pod="openstack/nova-metadata-0" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:56.080812 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbjl2\" (UniqueName: \"kubernetes.io/projected/c2700355-e048-4458-b430-8d149a08d624-kube-api-access-jbjl2\") pod 
\"nova-metadata-0\" (UID: \"c2700355-e048-4458-b430-8d149a08d624\") " pod="openstack/nova-metadata-0" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:56.088904 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:56.232498 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:56.275682 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43eb6e2e-19ca-402f-a4fa-3b567ef9aef3" path="/var/lib/kubelet/pods/43eb6e2e-19ca-402f-a4fa-3b567ef9aef3/volumes" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:56.276686 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf8c2c79-f773-4580-ad43-3dcbfced2f86" path="/var/lib/kubelet/pods/cf8c2c79-f773-4580-ad43-3dcbfced2f86/volumes" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:56.277395 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7a2b0f7-9321-4f29-aa01-0acbc528f757" path="/var/lib/kubelet/pods/f7a2b0f7-9321-4f29-aa01-0acbc528f757/volumes" Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:56.552021 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:56.633494 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:56.740614 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:56.791311 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c2700355-e048-4458-b430-8d149a08d624","Type":"ContainerStarted","Data":"4cdf23596db0a1e92716b41df3f9c56dd37f21ba73a9653782acae39684ee3dd"} Mar 12 13:34:56 crc 
kubenswrapper[4778]: I0312 13:34:56.792902 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e1488e83-3a44-41ad-aa96-de09b662c16e","Type":"ContainerStarted","Data":"9b203bf5890b3e4b6703e78b53a4c6b888b8bd4da20a4a2f1d502507cc246b88"} Mar 12 13:34:56 crc kubenswrapper[4778]: I0312 13:34:56.795408 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e28e8bc2-4b60-447e-b78e-99f53f0559e9","Type":"ContainerStarted","Data":"0b4aff5eb3ee6cc75fbeaaa57c05dff4153b4e03f714593a99c2f4d9aa7da572"} Mar 12 13:34:57 crc kubenswrapper[4778]: I0312 13:34:57.807990 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c2700355-e048-4458-b430-8d149a08d624","Type":"ContainerStarted","Data":"18451788e6f6468b69f6150e59f0635d08ad6db357c610ae673d149c136dfeb2"} Mar 12 13:34:57 crc kubenswrapper[4778]: I0312 13:34:57.808472 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c2700355-e048-4458-b430-8d149a08d624","Type":"ContainerStarted","Data":"ca24f0adae0376e480d75f053859a82c49878f39def1a1831162119084f0dc4d"} Mar 12 13:34:57 crc kubenswrapper[4778]: I0312 13:34:57.812622 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e1488e83-3a44-41ad-aa96-de09b662c16e","Type":"ContainerStarted","Data":"61456076e8380a1062d0214a24bdfe0fa640e7ee4451d17b11add3187cfaf9ad"} Mar 12 13:34:57 crc kubenswrapper[4778]: I0312 13:34:57.816623 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e28e8bc2-4b60-447e-b78e-99f53f0559e9","Type":"ContainerStarted","Data":"17f6ecc58bfeead13bd408fa3389fcd5b9ea0127020d364f507d2277de0d4c6f"} Mar 12 13:34:57 crc kubenswrapper[4778]: I0312 13:34:57.817440 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Mar 12 13:34:57 crc 
kubenswrapper[4778]: I0312 13:34:57.830766 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.830744751 podStartE2EDuration="2.830744751s" podCreationTimestamp="2026-03-12 13:34:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:34:57.825465861 +0000 UTC m=+1516.274161277" watchObservedRunningTime="2026-03-12 13:34:57.830744751 +0000 UTC m=+1516.279440147" Mar 12 13:34:57 crc kubenswrapper[4778]: I0312 13:34:57.860564 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.860544857 podStartE2EDuration="2.860544857s" podCreationTimestamp="2026-03-12 13:34:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:34:57.851308065 +0000 UTC m=+1516.300003481" watchObservedRunningTime="2026-03-12 13:34:57.860544857 +0000 UTC m=+1516.309240253" Mar 12 13:34:58 crc kubenswrapper[4778]: E0312 13:34:58.178523 4778 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4a58f69bc959200337168fc6797ed9ced69f8a04dbe14d36ec0e69b2498fb5e1" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 12 13:34:58 crc kubenswrapper[4778]: E0312 13:34:58.180265 4778 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4a58f69bc959200337168fc6797ed9ced69f8a04dbe14d36ec0e69b2498fb5e1" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 12 13:34:58 crc kubenswrapper[4778]: E0312 13:34:58.183624 4778 log.go:32] "ExecSync cmd from runtime service 
failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4a58f69bc959200337168fc6797ed9ced69f8a04dbe14d36ec0e69b2498fb5e1" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 12 13:34:58 crc kubenswrapper[4778]: E0312 13:34:58.183684 4778 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="65ad5500-0148-42d4-a597-53e265081516" containerName="nova-scheduler-scheduler" Mar 12 13:34:58 crc kubenswrapper[4778]: I0312 13:34:58.830117 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e1488e83-3a44-41ad-aa96-de09b662c16e","Type":"ContainerStarted","Data":"825e28bca3cf084ec7f1951f758972b6df54d50fc49463a251a39ebce8dc6ce1"} Mar 12 13:34:59 crc kubenswrapper[4778]: I0312 13:34:59.840073 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e1488e83-3a44-41ad-aa96-de09b662c16e","Type":"ContainerStarted","Data":"5fb44ccb1d5cc41dbcf7c6e5acea797394b81866acaa080b282103d25f4131bf"} Mar 12 13:34:59 crc kubenswrapper[4778]: I0312 13:34:59.935433 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Mar 12 13:35:00 crc kubenswrapper[4778]: I0312 13:35:00.465554 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 13:35:00 crc kubenswrapper[4778]: I0312 13:35:00.565828 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65ad5500-0148-42d4-a597-53e265081516-combined-ca-bundle\") pod \"65ad5500-0148-42d4-a597-53e265081516\" (UID: \"65ad5500-0148-42d4-a597-53e265081516\") " Mar 12 13:35:00 crc kubenswrapper[4778]: I0312 13:35:00.565891 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ps6cq\" (UniqueName: \"kubernetes.io/projected/65ad5500-0148-42d4-a597-53e265081516-kube-api-access-ps6cq\") pod \"65ad5500-0148-42d4-a597-53e265081516\" (UID: \"65ad5500-0148-42d4-a597-53e265081516\") " Mar 12 13:35:00 crc kubenswrapper[4778]: I0312 13:35:00.566028 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65ad5500-0148-42d4-a597-53e265081516-config-data\") pod \"65ad5500-0148-42d4-a597-53e265081516\" (UID: \"65ad5500-0148-42d4-a597-53e265081516\") " Mar 12 13:35:00 crc kubenswrapper[4778]: I0312 13:35:00.571780 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65ad5500-0148-42d4-a597-53e265081516-kube-api-access-ps6cq" (OuterVolumeSpecName: "kube-api-access-ps6cq") pod "65ad5500-0148-42d4-a597-53e265081516" (UID: "65ad5500-0148-42d4-a597-53e265081516"). InnerVolumeSpecName "kube-api-access-ps6cq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:35:00 crc kubenswrapper[4778]: I0312 13:35:00.606481 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65ad5500-0148-42d4-a597-53e265081516-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "65ad5500-0148-42d4-a597-53e265081516" (UID: "65ad5500-0148-42d4-a597-53e265081516"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:35:00 crc kubenswrapper[4778]: I0312 13:35:00.629403 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65ad5500-0148-42d4-a597-53e265081516-config-data" (OuterVolumeSpecName: "config-data") pod "65ad5500-0148-42d4-a597-53e265081516" (UID: "65ad5500-0148-42d4-a597-53e265081516"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:35:00 crc kubenswrapper[4778]: I0312 13:35:00.630471 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 12 13:35:00 crc kubenswrapper[4778]: I0312 13:35:00.667200 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1-logs\") pod \"4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1\" (UID: \"4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1\") " Mar 12 13:35:00 crc kubenswrapper[4778]: I0312 13:35:00.667308 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsj8q\" (UniqueName: \"kubernetes.io/projected/4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1-kube-api-access-qsj8q\") pod \"4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1\" (UID: \"4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1\") " Mar 12 13:35:00 crc kubenswrapper[4778]: I0312 13:35:00.667443 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1-config-data\") pod \"4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1\" (UID: \"4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1\") " Mar 12 13:35:00 crc kubenswrapper[4778]: I0312 13:35:00.667534 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1-combined-ca-bundle\") pod 
\"4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1\" (UID: \"4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1\") " Mar 12 13:35:00 crc kubenswrapper[4778]: I0312 13:35:00.667849 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1-logs" (OuterVolumeSpecName: "logs") pod "4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1" (UID: "4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:35:00 crc kubenswrapper[4778]: I0312 13:35:00.668245 4778 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65ad5500-0148-42d4-a597-53e265081516-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:35:00 crc kubenswrapper[4778]: I0312 13:35:00.668272 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ps6cq\" (UniqueName: \"kubernetes.io/projected/65ad5500-0148-42d4-a597-53e265081516-kube-api-access-ps6cq\") on node \"crc\" DevicePath \"\"" Mar 12 13:35:00 crc kubenswrapper[4778]: I0312 13:35:00.668284 4778 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1-logs\") on node \"crc\" DevicePath \"\"" Mar 12 13:35:00 crc kubenswrapper[4778]: I0312 13:35:00.668293 4778 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65ad5500-0148-42d4-a597-53e265081516-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 13:35:00 crc kubenswrapper[4778]: I0312 13:35:00.671000 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1-kube-api-access-qsj8q" (OuterVolumeSpecName: "kube-api-access-qsj8q") pod "4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1" (UID: "4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1"). InnerVolumeSpecName "kube-api-access-qsj8q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:35:00 crc kubenswrapper[4778]: I0312 13:35:00.703554 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1" (UID: "4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:35:00 crc kubenswrapper[4778]: I0312 13:35:00.712305 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1-config-data" (OuterVolumeSpecName: "config-data") pod "4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1" (UID: "4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:35:00 crc kubenswrapper[4778]: I0312 13:35:00.769903 4778 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 13:35:00 crc kubenswrapper[4778]: I0312 13:35:00.770255 4778 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:35:00 crc kubenswrapper[4778]: I0312 13:35:00.770274 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qsj8q\" (UniqueName: \"kubernetes.io/projected/4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1-kube-api-access-qsj8q\") on node \"crc\" DevicePath \"\"" Mar 12 13:35:00 crc kubenswrapper[4778]: I0312 13:35:00.849889 4778 generic.go:334] "Generic (PLEG): container finished" podID="65ad5500-0148-42d4-a597-53e265081516" containerID="4a58f69bc959200337168fc6797ed9ced69f8a04dbe14d36ec0e69b2498fb5e1" 
exitCode=0 Mar 12 13:35:00 crc kubenswrapper[4778]: I0312 13:35:00.849953 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 13:35:00 crc kubenswrapper[4778]: I0312 13:35:00.849962 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"65ad5500-0148-42d4-a597-53e265081516","Type":"ContainerDied","Data":"4a58f69bc959200337168fc6797ed9ced69f8a04dbe14d36ec0e69b2498fb5e1"} Mar 12 13:35:00 crc kubenswrapper[4778]: I0312 13:35:00.850854 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"65ad5500-0148-42d4-a597-53e265081516","Type":"ContainerDied","Data":"53204891ba93c9dcc714e4cf6732ebcf66cfe563b2c5b0d6b993dd7bb498dfcd"} Mar 12 13:35:00 crc kubenswrapper[4778]: I0312 13:35:00.850884 4778 scope.go:117] "RemoveContainer" containerID="4a58f69bc959200337168fc6797ed9ced69f8a04dbe14d36ec0e69b2498fb5e1" Mar 12 13:35:00 crc kubenswrapper[4778]: I0312 13:35:00.860267 4778 generic.go:334] "Generic (PLEG): container finished" podID="4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1" containerID="e10968e0aa6d0184f80649b4d85f94854a9e9ed4e143833199a6895350db927e" exitCode=0 Mar 12 13:35:00 crc kubenswrapper[4778]: I0312 13:35:00.860313 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1","Type":"ContainerDied","Data":"e10968e0aa6d0184f80649b4d85f94854a9e9ed4e143833199a6895350db927e"} Mar 12 13:35:00 crc kubenswrapper[4778]: I0312 13:35:00.860342 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1","Type":"ContainerDied","Data":"9d69b802526d361f0ba3ab145439034eaadd80c18ca540ff35313a518907cc83"} Mar 12 13:35:00 crc kubenswrapper[4778]: I0312 13:35:00.860411 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 12 13:35:00 crc kubenswrapper[4778]: I0312 13:35:00.895352 4778 scope.go:117] "RemoveContainer" containerID="4a58f69bc959200337168fc6797ed9ced69f8a04dbe14d36ec0e69b2498fb5e1" Mar 12 13:35:00 crc kubenswrapper[4778]: E0312 13:35:00.909310 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a58f69bc959200337168fc6797ed9ced69f8a04dbe14d36ec0e69b2498fb5e1\": container with ID starting with 4a58f69bc959200337168fc6797ed9ced69f8a04dbe14d36ec0e69b2498fb5e1 not found: ID does not exist" containerID="4a58f69bc959200337168fc6797ed9ced69f8a04dbe14d36ec0e69b2498fb5e1" Mar 12 13:35:00 crc kubenswrapper[4778]: I0312 13:35:00.909387 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a58f69bc959200337168fc6797ed9ced69f8a04dbe14d36ec0e69b2498fb5e1"} err="failed to get container status \"4a58f69bc959200337168fc6797ed9ced69f8a04dbe14d36ec0e69b2498fb5e1\": rpc error: code = NotFound desc = could not find container \"4a58f69bc959200337168fc6797ed9ced69f8a04dbe14d36ec0e69b2498fb5e1\": container with ID starting with 4a58f69bc959200337168fc6797ed9ced69f8a04dbe14d36ec0e69b2498fb5e1 not found: ID does not exist" Mar 12 13:35:00 crc kubenswrapper[4778]: I0312 13:35:00.909416 4778 scope.go:117] "RemoveContainer" containerID="e10968e0aa6d0184f80649b4d85f94854a9e9ed4e143833199a6895350db927e" Mar 12 13:35:01 crc kubenswrapper[4778]: I0312 13:35:01.017261 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 13:35:01 crc kubenswrapper[4778]: I0312 13:35:01.026585 4778 scope.go:117] "RemoveContainer" containerID="8ff6ebf3b7b0b27c32ef14b9af9d9ad2cb5eb0cd0fcf6c338e931544b524d41d" Mar 12 13:35:01 crc kubenswrapper[4778]: I0312 13:35:01.042630 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 13:35:01 crc kubenswrapper[4778]: I0312 
13:35:01.061254 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 12 13:35:01 crc kubenswrapper[4778]: I0312 13:35:01.079377 4778 scope.go:117] "RemoveContainer" containerID="e10968e0aa6d0184f80649b4d85f94854a9e9ed4e143833199a6895350db927e" Mar 12 13:35:01 crc kubenswrapper[4778]: E0312 13:35:01.079836 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e10968e0aa6d0184f80649b4d85f94854a9e9ed4e143833199a6895350db927e\": container with ID starting with e10968e0aa6d0184f80649b4d85f94854a9e9ed4e143833199a6895350db927e not found: ID does not exist" containerID="e10968e0aa6d0184f80649b4d85f94854a9e9ed4e143833199a6895350db927e" Mar 12 13:35:01 crc kubenswrapper[4778]: I0312 13:35:01.079872 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e10968e0aa6d0184f80649b4d85f94854a9e9ed4e143833199a6895350db927e"} err="failed to get container status \"e10968e0aa6d0184f80649b4d85f94854a9e9ed4e143833199a6895350db927e\": rpc error: code = NotFound desc = could not find container \"e10968e0aa6d0184f80649b4d85f94854a9e9ed4e143833199a6895350db927e\": container with ID starting with e10968e0aa6d0184f80649b4d85f94854a9e9ed4e143833199a6895350db927e not found: ID does not exist" Mar 12 13:35:01 crc kubenswrapper[4778]: I0312 13:35:01.079893 4778 scope.go:117] "RemoveContainer" containerID="8ff6ebf3b7b0b27c32ef14b9af9d9ad2cb5eb0cd0fcf6c338e931544b524d41d" Mar 12 13:35:01 crc kubenswrapper[4778]: E0312 13:35:01.081240 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ff6ebf3b7b0b27c32ef14b9af9d9ad2cb5eb0cd0fcf6c338e931544b524d41d\": container with ID starting with 8ff6ebf3b7b0b27c32ef14b9af9d9ad2cb5eb0cd0fcf6c338e931544b524d41d not found: ID does not exist" containerID="8ff6ebf3b7b0b27c32ef14b9af9d9ad2cb5eb0cd0fcf6c338e931544b524d41d" Mar 12 13:35:01 crc 
kubenswrapper[4778]: I0312 13:35:01.081262 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ff6ebf3b7b0b27c32ef14b9af9d9ad2cb5eb0cd0fcf6c338e931544b524d41d"} err="failed to get container status \"8ff6ebf3b7b0b27c32ef14b9af9d9ad2cb5eb0cd0fcf6c338e931544b524d41d\": rpc error: code = NotFound desc = could not find container \"8ff6ebf3b7b0b27c32ef14b9af9d9ad2cb5eb0cd0fcf6c338e931544b524d41d\": container with ID starting with 8ff6ebf3b7b0b27c32ef14b9af9d9ad2cb5eb0cd0fcf6c338e931544b524d41d not found: ID does not exist" Mar 12 13:35:01 crc kubenswrapper[4778]: I0312 13:35:01.091537 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 12 13:35:01 crc kubenswrapper[4778]: I0312 13:35:01.112301 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 13:35:01 crc kubenswrapper[4778]: E0312 13:35:01.112823 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1" containerName="nova-api-api" Mar 12 13:35:01 crc kubenswrapper[4778]: I0312 13:35:01.112848 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1" containerName="nova-api-api" Mar 12 13:35:01 crc kubenswrapper[4778]: E0312 13:35:01.112892 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1" containerName="nova-api-log" Mar 12 13:35:01 crc kubenswrapper[4778]: I0312 13:35:01.112902 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1" containerName="nova-api-log" Mar 12 13:35:01 crc kubenswrapper[4778]: E0312 13:35:01.112920 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65ad5500-0148-42d4-a597-53e265081516" containerName="nova-scheduler-scheduler" Mar 12 13:35:01 crc kubenswrapper[4778]: I0312 13:35:01.112929 4778 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="65ad5500-0148-42d4-a597-53e265081516" containerName="nova-scheduler-scheduler" Mar 12 13:35:01 crc kubenswrapper[4778]: I0312 13:35:01.113158 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="65ad5500-0148-42d4-a597-53e265081516" containerName="nova-scheduler-scheduler" Mar 12 13:35:01 crc kubenswrapper[4778]: I0312 13:35:01.113209 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1" containerName="nova-api-api" Mar 12 13:35:01 crc kubenswrapper[4778]: I0312 13:35:01.113229 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1" containerName="nova-api-log" Mar 12 13:35:01 crc kubenswrapper[4778]: I0312 13:35:01.114026 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 13:35:01 crc kubenswrapper[4778]: I0312 13:35:01.116051 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 12 13:35:01 crc kubenswrapper[4778]: I0312 13:35:01.127824 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 13:35:01 crc kubenswrapper[4778]: I0312 13:35:01.133052 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Mar 12 13:35:01 crc kubenswrapper[4778]: I0312 13:35:01.141576 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 12 13:35:01 crc kubenswrapper[4778]: I0312 13:35:01.143550 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 12 13:35:01 crc kubenswrapper[4778]: I0312 13:35:01.145472 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 12 13:35:01 crc kubenswrapper[4778]: I0312 13:35:01.153641 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 12 13:35:01 crc kubenswrapper[4778]: I0312 13:35:01.284143 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/045050c5-d52b-4532-baa1-e7fad66cba96-logs\") pod \"nova-api-0\" (UID: \"045050c5-d52b-4532-baa1-e7fad66cba96\") " pod="openstack/nova-api-0" Mar 12 13:35:01 crc kubenswrapper[4778]: I0312 13:35:01.284201 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xshlg\" (UniqueName: \"kubernetes.io/projected/5bc6f909-0ff5-4f18-a480-fd8e6cda5e53-kube-api-access-xshlg\") pod \"nova-scheduler-0\" (UID: \"5bc6f909-0ff5-4f18-a480-fd8e6cda5e53\") " pod="openstack/nova-scheduler-0" Mar 12 13:35:01 crc kubenswrapper[4778]: I0312 13:35:01.284284 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/045050c5-d52b-4532-baa1-e7fad66cba96-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"045050c5-d52b-4532-baa1-e7fad66cba96\") " pod="openstack/nova-api-0" Mar 12 13:35:01 crc kubenswrapper[4778]: I0312 13:35:01.284311 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/045050c5-d52b-4532-baa1-e7fad66cba96-config-data\") pod \"nova-api-0\" (UID: \"045050c5-d52b-4532-baa1-e7fad66cba96\") " pod="openstack/nova-api-0" Mar 12 13:35:01 crc kubenswrapper[4778]: I0312 13:35:01.284331 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bc6f909-0ff5-4f18-a480-fd8e6cda5e53-config-data\") pod \"nova-scheduler-0\" (UID: \"5bc6f909-0ff5-4f18-a480-fd8e6cda5e53\") " pod="openstack/nova-scheduler-0" Mar 12 13:35:01 crc kubenswrapper[4778]: I0312 13:35:01.284424 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bc6f909-0ff5-4f18-a480-fd8e6cda5e53-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5bc6f909-0ff5-4f18-a480-fd8e6cda5e53\") " pod="openstack/nova-scheduler-0" Mar 12 13:35:01 crc kubenswrapper[4778]: I0312 13:35:01.284462 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5p7z\" (UniqueName: \"kubernetes.io/projected/045050c5-d52b-4532-baa1-e7fad66cba96-kube-api-access-r5p7z\") pod \"nova-api-0\" (UID: \"045050c5-d52b-4532-baa1-e7fad66cba96\") " pod="openstack/nova-api-0" Mar 12 13:35:01 crc kubenswrapper[4778]: I0312 13:35:01.386436 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/045050c5-d52b-4532-baa1-e7fad66cba96-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"045050c5-d52b-4532-baa1-e7fad66cba96\") " pod="openstack/nova-api-0" Mar 12 13:35:01 crc kubenswrapper[4778]: I0312 13:35:01.386491 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/045050c5-d52b-4532-baa1-e7fad66cba96-config-data\") pod \"nova-api-0\" (UID: \"045050c5-d52b-4532-baa1-e7fad66cba96\") " pod="openstack/nova-api-0" Mar 12 13:35:01 crc kubenswrapper[4778]: I0312 13:35:01.386534 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bc6f909-0ff5-4f18-a480-fd8e6cda5e53-config-data\") pod \"nova-scheduler-0\" (UID: 
\"5bc6f909-0ff5-4f18-a480-fd8e6cda5e53\") " pod="openstack/nova-scheduler-0" Mar 12 13:35:01 crc kubenswrapper[4778]: I0312 13:35:01.386651 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bc6f909-0ff5-4f18-a480-fd8e6cda5e53-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5bc6f909-0ff5-4f18-a480-fd8e6cda5e53\") " pod="openstack/nova-scheduler-0" Mar 12 13:35:01 crc kubenswrapper[4778]: I0312 13:35:01.386991 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5p7z\" (UniqueName: \"kubernetes.io/projected/045050c5-d52b-4532-baa1-e7fad66cba96-kube-api-access-r5p7z\") pod \"nova-api-0\" (UID: \"045050c5-d52b-4532-baa1-e7fad66cba96\") " pod="openstack/nova-api-0" Mar 12 13:35:01 crc kubenswrapper[4778]: I0312 13:35:01.387506 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/045050c5-d52b-4532-baa1-e7fad66cba96-logs\") pod \"nova-api-0\" (UID: \"045050c5-d52b-4532-baa1-e7fad66cba96\") " pod="openstack/nova-api-0" Mar 12 13:35:01 crc kubenswrapper[4778]: I0312 13:35:01.387536 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xshlg\" (UniqueName: \"kubernetes.io/projected/5bc6f909-0ff5-4f18-a480-fd8e6cda5e53-kube-api-access-xshlg\") pod \"nova-scheduler-0\" (UID: \"5bc6f909-0ff5-4f18-a480-fd8e6cda5e53\") " pod="openstack/nova-scheduler-0" Mar 12 13:35:01 crc kubenswrapper[4778]: I0312 13:35:01.388120 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/045050c5-d52b-4532-baa1-e7fad66cba96-logs\") pod \"nova-api-0\" (UID: \"045050c5-d52b-4532-baa1-e7fad66cba96\") " pod="openstack/nova-api-0" Mar 12 13:35:01 crc kubenswrapper[4778]: I0312 13:35:01.392988 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/045050c5-d52b-4532-baa1-e7fad66cba96-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"045050c5-d52b-4532-baa1-e7fad66cba96\") " pod="openstack/nova-api-0" Mar 12 13:35:01 crc kubenswrapper[4778]: I0312 13:35:01.393470 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bc6f909-0ff5-4f18-a480-fd8e6cda5e53-config-data\") pod \"nova-scheduler-0\" (UID: \"5bc6f909-0ff5-4f18-a480-fd8e6cda5e53\") " pod="openstack/nova-scheduler-0" Mar 12 13:35:01 crc kubenswrapper[4778]: I0312 13:35:01.405125 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/045050c5-d52b-4532-baa1-e7fad66cba96-config-data\") pod \"nova-api-0\" (UID: \"045050c5-d52b-4532-baa1-e7fad66cba96\") " pod="openstack/nova-api-0" Mar 12 13:35:01 crc kubenswrapper[4778]: I0312 13:35:01.409803 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bc6f909-0ff5-4f18-a480-fd8e6cda5e53-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5bc6f909-0ff5-4f18-a480-fd8e6cda5e53\") " pod="openstack/nova-scheduler-0" Mar 12 13:35:01 crc kubenswrapper[4778]: I0312 13:35:01.410655 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5p7z\" (UniqueName: \"kubernetes.io/projected/045050c5-d52b-4532-baa1-e7fad66cba96-kube-api-access-r5p7z\") pod \"nova-api-0\" (UID: \"045050c5-d52b-4532-baa1-e7fad66cba96\") " pod="openstack/nova-api-0" Mar 12 13:35:01 crc kubenswrapper[4778]: I0312 13:35:01.419026 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xshlg\" (UniqueName: \"kubernetes.io/projected/5bc6f909-0ff5-4f18-a480-fd8e6cda5e53-kube-api-access-xshlg\") pod \"nova-scheduler-0\" (UID: \"5bc6f909-0ff5-4f18-a480-fd8e6cda5e53\") " pod="openstack/nova-scheduler-0" Mar 12 
13:35:01 crc kubenswrapper[4778]: I0312 13:35:01.568785 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 13:35:01 crc kubenswrapper[4778]: I0312 13:35:01.594165 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 12 13:35:01 crc kubenswrapper[4778]: I0312 13:35:01.877832 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e1488e83-3a44-41ad-aa96-de09b662c16e","Type":"ContainerStarted","Data":"819e42fd8accff60f320def9e9ec88d7d0b64eac8391a4dca82bd182d50ec648"} Mar 12 13:35:01 crc kubenswrapper[4778]: I0312 13:35:01.878127 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 12 13:35:01 crc kubenswrapper[4778]: I0312 13:35:01.901719 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.37174239 podStartE2EDuration="7.901704677s" podCreationTimestamp="2026-03-12 13:34:54 +0000 UTC" firstStartedPulling="2026-03-12 13:34:56.56441608 +0000 UTC m=+1515.013111476" lastFinishedPulling="2026-03-12 13:35:01.094378357 +0000 UTC m=+1519.543073763" observedRunningTime="2026-03-12 13:35:01.898144036 +0000 UTC m=+1520.346839432" watchObservedRunningTime="2026-03-12 13:35:01.901704677 +0000 UTC m=+1520.350400073" Mar 12 13:35:02 crc kubenswrapper[4778]: I0312 13:35:02.077870 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 13:35:02 crc kubenswrapper[4778]: W0312 13:35:02.084928 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5bc6f909_0ff5_4f18_a480_fd8e6cda5e53.slice/crio-65fd579a6354c4ba6b71d144e33926dc6f3bb53ede6b497f49fd5c5be99e7ee4 WatchSource:0}: Error finding container 65fd579a6354c4ba6b71d144e33926dc6f3bb53ede6b497f49fd5c5be99e7ee4: Status 404 returned error can't 
find the container with id 65fd579a6354c4ba6b71d144e33926dc6f3bb53ede6b497f49fd5c5be99e7ee4 Mar 12 13:35:02 crc kubenswrapper[4778]: I0312 13:35:02.186866 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 12 13:35:02 crc kubenswrapper[4778]: I0312 13:35:02.267685 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1" path="/var/lib/kubelet/pods/4697dd3e-2fbd-4855-819e-bdd6f0d9cfe1/volumes" Mar 12 13:35:02 crc kubenswrapper[4778]: I0312 13:35:02.268737 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65ad5500-0148-42d4-a597-53e265081516" path="/var/lib/kubelet/pods/65ad5500-0148-42d4-a597-53e265081516/volumes" Mar 12 13:35:02 crc kubenswrapper[4778]: I0312 13:35:02.887512 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"045050c5-d52b-4532-baa1-e7fad66cba96","Type":"ContainerStarted","Data":"cdf213319669e5763e5a6be5e5f3c8d41efefcfd15e81ad3ad34fb03c9028e40"} Mar 12 13:35:02 crc kubenswrapper[4778]: I0312 13:35:02.888132 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"045050c5-d52b-4532-baa1-e7fad66cba96","Type":"ContainerStarted","Data":"e423b9df02af65c8d19318720fc217bb660a4ff96461ba94e43c2bb5658deb5e"} Mar 12 13:35:02 crc kubenswrapper[4778]: I0312 13:35:02.888160 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"045050c5-d52b-4532-baa1-e7fad66cba96","Type":"ContainerStarted","Data":"56c1ee62c0c52d6bc024a53ffbb320a0eb1a79762a1537bb2caf3aafa91e73ce"} Mar 12 13:35:02 crc kubenswrapper[4778]: I0312 13:35:02.889854 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5bc6f909-0ff5-4f18-a480-fd8e6cda5e53","Type":"ContainerStarted","Data":"dae634b315afa4ad3533dd1e5963155a0f891be96e620c50199777eae097db0a"} Mar 12 13:35:02 crc kubenswrapper[4778]: I0312 13:35:02.889907 4778 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5bc6f909-0ff5-4f18-a480-fd8e6cda5e53","Type":"ContainerStarted","Data":"65fd579a6354c4ba6b71d144e33926dc6f3bb53ede6b497f49fd5c5be99e7ee4"} Mar 12 13:35:02 crc kubenswrapper[4778]: I0312 13:35:02.915418 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.915399052 podStartE2EDuration="2.915399052s" podCreationTimestamp="2026-03-12 13:35:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:35:02.912299594 +0000 UTC m=+1521.360995020" watchObservedRunningTime="2026-03-12 13:35:02.915399052 +0000 UTC m=+1521.364094448" Mar 12 13:35:02 crc kubenswrapper[4778]: I0312 13:35:02.940252 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.940227076 podStartE2EDuration="2.940227076s" podCreationTimestamp="2026-03-12 13:35:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:35:02.930388307 +0000 UTC m=+1521.379083703" watchObservedRunningTime="2026-03-12 13:35:02.940227076 +0000 UTC m=+1521.388922472" Mar 12 13:35:04 crc kubenswrapper[4778]: I0312 13:35:04.698516 4778 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jfzqk" podUID="1b7d48c4-04cd-481a-976d-19e57a28a1d9" containerName="registry-server" probeResult="failure" output=< Mar 12 13:35:04 crc kubenswrapper[4778]: timeout: failed to connect service ":50051" within 1s Mar 12 13:35:04 crc kubenswrapper[4778]: > Mar 12 13:35:06 crc kubenswrapper[4778]: I0312 13:35:06.233777 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 12 13:35:06 crc kubenswrapper[4778]: I0312 13:35:06.235393 4778 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Mar 12 13:35:06 crc kubenswrapper[4778]: I0312 13:35:06.569901 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Mar 12 13:35:07 crc kubenswrapper[4778]: I0312 13:35:07.251150 4778 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c2700355-e048-4458-b430-8d149a08d624" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.200:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 12 13:35:07 crc kubenswrapper[4778]: I0312 13:35:07.251206 4778 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c2700355-e048-4458-b430-8d149a08d624" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.200:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 12 13:35:11 crc kubenswrapper[4778]: I0312 13:35:11.569268 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Mar 12 13:35:11 crc kubenswrapper[4778]: I0312 13:35:11.594488 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Mar 12 13:35:11 crc kubenswrapper[4778]: I0312 13:35:11.594585 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Mar 12 13:35:11 crc kubenswrapper[4778]: I0312 13:35:11.617260 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Mar 12 13:35:12 crc kubenswrapper[4778]: I0312 13:35:12.005012 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Mar 12 13:35:12 crc kubenswrapper[4778]: I0312 13:35:12.636508 4778 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="045050c5-d52b-4532-baa1-e7fad66cba96" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.202:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 12 13:35:12 crc kubenswrapper[4778]: I0312 13:35:12.636606 4778 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="045050c5-d52b-4532-baa1-e7fad66cba96" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.202:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 12 13:35:14 crc kubenswrapper[4778]: I0312 13:35:14.232584 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Mar 12 13:35:14 crc kubenswrapper[4778]: I0312 13:35:14.232635 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Mar 12 13:35:14 crc kubenswrapper[4778]: I0312 13:35:14.691967 4778 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jfzqk" podUID="1b7d48c4-04cd-481a-976d-19e57a28a1d9" containerName="registry-server" probeResult="failure" output=<
Mar 12 13:35:14 crc kubenswrapper[4778]: timeout: failed to connect service ":50051" within 1s
Mar 12 13:35:14 crc kubenswrapper[4778]: >
Mar 12 13:35:15 crc kubenswrapper[4778]: I0312 13:35:15.506006 4778 scope.go:117] "RemoveContainer" containerID="c06e4e1b6c58e04407e154a6eb32ce96d2dfbf0e7e2f81409f2e784cc2f29542"
Mar 12 13:35:16 crc kubenswrapper[4778]: I0312 13:35:16.237902 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Mar 12 13:35:16 crc kubenswrapper[4778]: I0312 13:35:16.240133 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Mar 12 13:35:16 crc kubenswrapper[4778]: I0312 13:35:16.244325 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Mar 12 13:35:17 crc kubenswrapper[4778]: I0312 13:35:17.021919 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Mar 12 13:35:18 crc kubenswrapper[4778]: I0312 13:35:18.818073 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Mar 12 13:35:18 crc kubenswrapper[4778]: I0312 13:35:18.933204 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d42d33e8-c530-4272-90a4-f0ef9b061927-config-data\") pod \"d42d33e8-c530-4272-90a4-f0ef9b061927\" (UID: \"d42d33e8-c530-4272-90a4-f0ef9b061927\") "
Mar 12 13:35:18 crc kubenswrapper[4778]: I0312 13:35:18.933308 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hv5rr\" (UniqueName: \"kubernetes.io/projected/d42d33e8-c530-4272-90a4-f0ef9b061927-kube-api-access-hv5rr\") pod \"d42d33e8-c530-4272-90a4-f0ef9b061927\" (UID: \"d42d33e8-c530-4272-90a4-f0ef9b061927\") "
Mar 12 13:35:18 crc kubenswrapper[4778]: I0312 13:35:18.933453 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d42d33e8-c530-4272-90a4-f0ef9b061927-combined-ca-bundle\") pod \"d42d33e8-c530-4272-90a4-f0ef9b061927\" (UID: \"d42d33e8-c530-4272-90a4-f0ef9b061927\") "
Mar 12 13:35:18 crc kubenswrapper[4778]: I0312 13:35:18.946339 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d42d33e8-c530-4272-90a4-f0ef9b061927-kube-api-access-hv5rr" (OuterVolumeSpecName: "kube-api-access-hv5rr") pod "d42d33e8-c530-4272-90a4-f0ef9b061927" (UID: "d42d33e8-c530-4272-90a4-f0ef9b061927"). InnerVolumeSpecName "kube-api-access-hv5rr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 13:35:18 crc kubenswrapper[4778]: I0312 13:35:18.960588 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d42d33e8-c530-4272-90a4-f0ef9b061927-config-data" (OuterVolumeSpecName: "config-data") pod "d42d33e8-c530-4272-90a4-f0ef9b061927" (UID: "d42d33e8-c530-4272-90a4-f0ef9b061927"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 13:35:18 crc kubenswrapper[4778]: I0312 13:35:18.963110 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d42d33e8-c530-4272-90a4-f0ef9b061927-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d42d33e8-c530-4272-90a4-f0ef9b061927" (UID: "d42d33e8-c530-4272-90a4-f0ef9b061927"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 12 13:35:19 crc kubenswrapper[4778]: I0312 13:35:19.035330 4778 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d42d33e8-c530-4272-90a4-f0ef9b061927-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 12 13:35:19 crc kubenswrapper[4778]: I0312 13:35:19.035366 4778 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d42d33e8-c530-4272-90a4-f0ef9b061927-config-data\") on node \"crc\" DevicePath \"\""
Mar 12 13:35:19 crc kubenswrapper[4778]: I0312 13:35:19.035377 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hv5rr\" (UniqueName: \"kubernetes.io/projected/d42d33e8-c530-4272-90a4-f0ef9b061927-kube-api-access-hv5rr\") on node \"crc\" DevicePath \"\""
Mar 12 13:35:19 crc kubenswrapper[4778]: I0312 13:35:19.037589 4778 generic.go:334] "Generic (PLEG): container finished" podID="d42d33e8-c530-4272-90a4-f0ef9b061927" containerID="2daa8ef0c43b0a0e16322a9531b1ccfd1b86a58c1ab4dbd58ffa5e731b6266af" exitCode=137
Mar 12 13:35:19 crc kubenswrapper[4778]: I0312 13:35:19.037621 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Mar 12 13:35:19 crc kubenswrapper[4778]: I0312 13:35:19.037652 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d42d33e8-c530-4272-90a4-f0ef9b061927","Type":"ContainerDied","Data":"2daa8ef0c43b0a0e16322a9531b1ccfd1b86a58c1ab4dbd58ffa5e731b6266af"}
Mar 12 13:35:19 crc kubenswrapper[4778]: I0312 13:35:19.037686 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d42d33e8-c530-4272-90a4-f0ef9b061927","Type":"ContainerDied","Data":"190ba154912f1afd6c8afdd589f19abb7d2fb48d3910a0516eb35d087148f5e4"}
Mar 12 13:35:19 crc kubenswrapper[4778]: I0312 13:35:19.037703 4778 scope.go:117] "RemoveContainer" containerID="2daa8ef0c43b0a0e16322a9531b1ccfd1b86a58c1ab4dbd58ffa5e731b6266af"
Mar 12 13:35:19 crc kubenswrapper[4778]: I0312 13:35:19.059594 4778 scope.go:117] "RemoveContainer" containerID="2daa8ef0c43b0a0e16322a9531b1ccfd1b86a58c1ab4dbd58ffa5e731b6266af"
Mar 12 13:35:19 crc kubenswrapper[4778]: E0312 13:35:19.059920 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2daa8ef0c43b0a0e16322a9531b1ccfd1b86a58c1ab4dbd58ffa5e731b6266af\": container with ID starting with 2daa8ef0c43b0a0e16322a9531b1ccfd1b86a58c1ab4dbd58ffa5e731b6266af not found: ID does not exist" containerID="2daa8ef0c43b0a0e16322a9531b1ccfd1b86a58c1ab4dbd58ffa5e731b6266af"
Mar 12 13:35:19 crc kubenswrapper[4778]: I0312 13:35:19.060026 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2daa8ef0c43b0a0e16322a9531b1ccfd1b86a58c1ab4dbd58ffa5e731b6266af"} err="failed to get container status \"2daa8ef0c43b0a0e16322a9531b1ccfd1b86a58c1ab4dbd58ffa5e731b6266af\": rpc error: code = NotFound desc = could not find container \"2daa8ef0c43b0a0e16322a9531b1ccfd1b86a58c1ab4dbd58ffa5e731b6266af\": container with ID starting with 2daa8ef0c43b0a0e16322a9531b1ccfd1b86a58c1ab4dbd58ffa5e731b6266af not found: ID does not exist"
Mar 12 13:35:19 crc kubenswrapper[4778]: I0312 13:35:19.075934 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Mar 12 13:35:19 crc kubenswrapper[4778]: I0312 13:35:19.085972 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Mar 12 13:35:19 crc kubenswrapper[4778]: I0312 13:35:19.107491 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Mar 12 13:35:19 crc kubenswrapper[4778]: E0312 13:35:19.107939 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d42d33e8-c530-4272-90a4-f0ef9b061927" containerName="nova-cell1-novncproxy-novncproxy"
Mar 12 13:35:19 crc kubenswrapper[4778]: I0312 13:35:19.107956 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="d42d33e8-c530-4272-90a4-f0ef9b061927" containerName="nova-cell1-novncproxy-novncproxy"
Mar 12 13:35:19 crc kubenswrapper[4778]: I0312 13:35:19.108141 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="d42d33e8-c530-4272-90a4-f0ef9b061927" containerName="nova-cell1-novncproxy-novncproxy"
Mar 12 13:35:19 crc kubenswrapper[4778]: I0312 13:35:19.108846 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Mar 12 13:35:19 crc kubenswrapper[4778]: I0312 13:35:19.113823 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Mar 12 13:35:19 crc kubenswrapper[4778]: I0312 13:35:19.114067 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Mar 12 13:35:19 crc kubenswrapper[4778]: I0312 13:35:19.114323 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Mar 12 13:35:19 crc kubenswrapper[4778]: I0312 13:35:19.118802 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Mar 12 13:35:19 crc kubenswrapper[4778]: I0312 13:35:19.238097 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b43a8b1-b8bc-4ab5-af66-674fa7ff47d7-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"2b43a8b1-b8bc-4ab5-af66-674fa7ff47d7\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 13:35:19 crc kubenswrapper[4778]: I0312 13:35:19.238177 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmqmm\" (UniqueName: \"kubernetes.io/projected/2b43a8b1-b8bc-4ab5-af66-674fa7ff47d7-kube-api-access-fmqmm\") pod \"nova-cell1-novncproxy-0\" (UID: \"2b43a8b1-b8bc-4ab5-af66-674fa7ff47d7\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 13:35:19 crc kubenswrapper[4778]: I0312 13:35:19.238225 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b43a8b1-b8bc-4ab5-af66-674fa7ff47d7-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"2b43a8b1-b8bc-4ab5-af66-674fa7ff47d7\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 13:35:19 crc kubenswrapper[4778]: I0312 13:35:19.238279 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b43a8b1-b8bc-4ab5-af66-674fa7ff47d7-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2b43a8b1-b8bc-4ab5-af66-674fa7ff47d7\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 13:35:19 crc kubenswrapper[4778]: I0312 13:35:19.238303 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b43a8b1-b8bc-4ab5-af66-674fa7ff47d7-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2b43a8b1-b8bc-4ab5-af66-674fa7ff47d7\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 13:35:19 crc kubenswrapper[4778]: I0312 13:35:19.340172 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmqmm\" (UniqueName: \"kubernetes.io/projected/2b43a8b1-b8bc-4ab5-af66-674fa7ff47d7-kube-api-access-fmqmm\") pod \"nova-cell1-novncproxy-0\" (UID: \"2b43a8b1-b8bc-4ab5-af66-674fa7ff47d7\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 13:35:19 crc kubenswrapper[4778]: I0312 13:35:19.340268 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b43a8b1-b8bc-4ab5-af66-674fa7ff47d7-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"2b43a8b1-b8bc-4ab5-af66-674fa7ff47d7\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 13:35:19 crc kubenswrapper[4778]: I0312 13:35:19.340372 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b43a8b1-b8bc-4ab5-af66-674fa7ff47d7-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2b43a8b1-b8bc-4ab5-af66-674fa7ff47d7\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 13:35:19 crc kubenswrapper[4778]: I0312 13:35:19.340403 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b43a8b1-b8bc-4ab5-af66-674fa7ff47d7-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2b43a8b1-b8bc-4ab5-af66-674fa7ff47d7\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 13:35:19 crc kubenswrapper[4778]: I0312 13:35:19.340500 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b43a8b1-b8bc-4ab5-af66-674fa7ff47d7-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"2b43a8b1-b8bc-4ab5-af66-674fa7ff47d7\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 13:35:19 crc kubenswrapper[4778]: I0312 13:35:19.344149 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b43a8b1-b8bc-4ab5-af66-674fa7ff47d7-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"2b43a8b1-b8bc-4ab5-af66-674fa7ff47d7\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 13:35:19 crc kubenswrapper[4778]: I0312 13:35:19.344626 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b43a8b1-b8bc-4ab5-af66-674fa7ff47d7-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2b43a8b1-b8bc-4ab5-af66-674fa7ff47d7\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 13:35:19 crc kubenswrapper[4778]: I0312 13:35:19.344889 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b43a8b1-b8bc-4ab5-af66-674fa7ff47d7-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"2b43a8b1-b8bc-4ab5-af66-674fa7ff47d7\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 13:35:19 crc kubenswrapper[4778]: I0312 13:35:19.345058 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b43a8b1-b8bc-4ab5-af66-674fa7ff47d7-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2b43a8b1-b8bc-4ab5-af66-674fa7ff47d7\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 13:35:19 crc kubenswrapper[4778]: I0312 13:35:19.356163 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmqmm\" (UniqueName: \"kubernetes.io/projected/2b43a8b1-b8bc-4ab5-af66-674fa7ff47d7-kube-api-access-fmqmm\") pod \"nova-cell1-novncproxy-0\" (UID: \"2b43a8b1-b8bc-4ab5-af66-674fa7ff47d7\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 12 13:35:19 crc kubenswrapper[4778]: I0312 13:35:19.432284 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Mar 12 13:35:19 crc kubenswrapper[4778]: I0312 13:35:19.594626 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Mar 12 13:35:19 crc kubenswrapper[4778]: I0312 13:35:19.595370 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Mar 12 13:35:19 crc kubenswrapper[4778]: I0312 13:35:19.850368 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Mar 12 13:35:19 crc kubenswrapper[4778]: W0312 13:35:19.856816 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b43a8b1_b8bc_4ab5_af66_674fa7ff47d7.slice/crio-fb2cd569d81e6acd2dd3f3c077e284749a8ed5785ff148caac8ecbc2aef7b69c WatchSource:0}: Error finding container fb2cd569d81e6acd2dd3f3c077e284749a8ed5785ff148caac8ecbc2aef7b69c: Status 404 returned error can't find the container with id fb2cd569d81e6acd2dd3f3c077e284749a8ed5785ff148caac8ecbc2aef7b69c
Mar 12 13:35:20 crc kubenswrapper[4778]: I0312 13:35:20.050166 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2b43a8b1-b8bc-4ab5-af66-674fa7ff47d7","Type":"ContainerStarted","Data":"fb2cd569d81e6acd2dd3f3c077e284749a8ed5785ff148caac8ecbc2aef7b69c"}
Mar 12 13:35:20 crc kubenswrapper[4778]: I0312 13:35:20.265590 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d42d33e8-c530-4272-90a4-f0ef9b061927" path="/var/lib/kubelet/pods/d42d33e8-c530-4272-90a4-f0ef9b061927/volumes"
Mar 12 13:35:21 crc kubenswrapper[4778]: I0312 13:35:21.062158 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2b43a8b1-b8bc-4ab5-af66-674fa7ff47d7","Type":"ContainerStarted","Data":"190d97198e753f9e9071d5fc2d25934da1a1861cf6d3a0b1fe6071cce7b119aa"}
Mar 12 13:35:21 crc kubenswrapper[4778]: I0312 13:35:21.083756 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.083735891 podStartE2EDuration="2.083735891s" podCreationTimestamp="2026-03-12 13:35:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:35:21.076171136 +0000 UTC m=+1539.524866542" watchObservedRunningTime="2026-03-12 13:35:21.083735891 +0000 UTC m=+1539.532431307"
Mar 12 13:35:21 crc kubenswrapper[4778]: I0312 13:35:21.599257 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Mar 12 13:35:21 crc kubenswrapper[4778]: I0312 13:35:21.599738 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Mar 12 13:35:21 crc kubenswrapper[4778]: I0312 13:35:21.602491 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Mar 12 13:35:22 crc kubenswrapper[4778]: I0312 13:35:22.074960 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Mar 12 13:35:22 crc kubenswrapper[4778]: I0312 13:35:22.332557 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-vbzn5"]
Mar 12 13:35:22 crc kubenswrapper[4778]: I0312 13:35:22.356506 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-vbzn5"]
Mar 12 13:35:22 crc kubenswrapper[4778]: I0312 13:35:22.356630 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-vbzn5"
Mar 12 13:35:22 crc kubenswrapper[4778]: I0312 13:35:22.498892 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d621990b-b3fb-457c-a7b8-0726fa89a5e6-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-vbzn5\" (UID: \"d621990b-b3fb-457c-a7b8-0726fa89a5e6\") " pod="openstack/dnsmasq-dns-89c5cd4d5-vbzn5"
Mar 12 13:35:22 crc kubenswrapper[4778]: I0312 13:35:22.499049 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d621990b-b3fb-457c-a7b8-0726fa89a5e6-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-vbzn5\" (UID: \"d621990b-b3fb-457c-a7b8-0726fa89a5e6\") " pod="openstack/dnsmasq-dns-89c5cd4d5-vbzn5"
Mar 12 13:35:22 crc kubenswrapper[4778]: I0312 13:35:22.499122 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d621990b-b3fb-457c-a7b8-0726fa89a5e6-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-vbzn5\" (UID: \"d621990b-b3fb-457c-a7b8-0726fa89a5e6\") " pod="openstack/dnsmasq-dns-89c5cd4d5-vbzn5"
Mar 12 13:35:22 crc kubenswrapper[4778]: I0312 13:35:22.499279 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d621990b-b3fb-457c-a7b8-0726fa89a5e6-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-vbzn5\" (UID: \"d621990b-b3fb-457c-a7b8-0726fa89a5e6\") " pod="openstack/dnsmasq-dns-89c5cd4d5-vbzn5"
Mar 12 13:35:22 crc kubenswrapper[4778]: I0312 13:35:22.499330 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5ccj\" (UniqueName: \"kubernetes.io/projected/d621990b-b3fb-457c-a7b8-0726fa89a5e6-kube-api-access-b5ccj\") pod \"dnsmasq-dns-89c5cd4d5-vbzn5\" (UID: \"d621990b-b3fb-457c-a7b8-0726fa89a5e6\") " pod="openstack/dnsmasq-dns-89c5cd4d5-vbzn5"
Mar 12 13:35:22 crc kubenswrapper[4778]: I0312 13:35:22.499499 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d621990b-b3fb-457c-a7b8-0726fa89a5e6-config\") pod \"dnsmasq-dns-89c5cd4d5-vbzn5\" (UID: \"d621990b-b3fb-457c-a7b8-0726fa89a5e6\") " pod="openstack/dnsmasq-dns-89c5cd4d5-vbzn5"
Mar 12 13:35:22 crc kubenswrapper[4778]: I0312 13:35:22.601252 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d621990b-b3fb-457c-a7b8-0726fa89a5e6-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-vbzn5\" (UID: \"d621990b-b3fb-457c-a7b8-0726fa89a5e6\") " pod="openstack/dnsmasq-dns-89c5cd4d5-vbzn5"
Mar 12 13:35:22 crc kubenswrapper[4778]: I0312 13:35:22.601382 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d621990b-b3fb-457c-a7b8-0726fa89a5e6-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-vbzn5\" (UID: \"d621990b-b3fb-457c-a7b8-0726fa89a5e6\") " pod="openstack/dnsmasq-dns-89c5cd4d5-vbzn5"
Mar 12 13:35:22 crc kubenswrapper[4778]: I0312 13:35:22.601426 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d621990b-b3fb-457c-a7b8-0726fa89a5e6-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-vbzn5\" (UID: \"d621990b-b3fb-457c-a7b8-0726fa89a5e6\") " pod="openstack/dnsmasq-dns-89c5cd4d5-vbzn5"
Mar 12 13:35:22 crc kubenswrapper[4778]: I0312 13:35:22.601451 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5ccj\" (UniqueName: \"kubernetes.io/projected/d621990b-b3fb-457c-a7b8-0726fa89a5e6-kube-api-access-b5ccj\") pod \"dnsmasq-dns-89c5cd4d5-vbzn5\" (UID: \"d621990b-b3fb-457c-a7b8-0726fa89a5e6\") " pod="openstack/dnsmasq-dns-89c5cd4d5-vbzn5"
Mar 12 13:35:22 crc kubenswrapper[4778]: I0312 13:35:22.601497 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d621990b-b3fb-457c-a7b8-0726fa89a5e6-config\") pod \"dnsmasq-dns-89c5cd4d5-vbzn5\" (UID: \"d621990b-b3fb-457c-a7b8-0726fa89a5e6\") " pod="openstack/dnsmasq-dns-89c5cd4d5-vbzn5"
Mar 12 13:35:22 crc kubenswrapper[4778]: I0312 13:35:22.601543 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d621990b-b3fb-457c-a7b8-0726fa89a5e6-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-vbzn5\" (UID: \"d621990b-b3fb-457c-a7b8-0726fa89a5e6\") " pod="openstack/dnsmasq-dns-89c5cd4d5-vbzn5"
Mar 12 13:35:22 crc kubenswrapper[4778]: I0312 13:35:22.602784 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d621990b-b3fb-457c-a7b8-0726fa89a5e6-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-vbzn5\" (UID: \"d621990b-b3fb-457c-a7b8-0726fa89a5e6\") " pod="openstack/dnsmasq-dns-89c5cd4d5-vbzn5"
Mar 12 13:35:22 crc kubenswrapper[4778]: I0312 13:35:22.602813 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d621990b-b3fb-457c-a7b8-0726fa89a5e6-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-vbzn5\" (UID: \"d621990b-b3fb-457c-a7b8-0726fa89a5e6\") " pod="openstack/dnsmasq-dns-89c5cd4d5-vbzn5"
Mar 12 13:35:22 crc kubenswrapper[4778]: I0312 13:35:22.603460 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d621990b-b3fb-457c-a7b8-0726fa89a5e6-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-vbzn5\" (UID: \"d621990b-b3fb-457c-a7b8-0726fa89a5e6\") " pod="openstack/dnsmasq-dns-89c5cd4d5-vbzn5"
Mar 12 13:35:22 crc kubenswrapper[4778]: I0312 13:35:22.604120 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d621990b-b3fb-457c-a7b8-0726fa89a5e6-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-vbzn5\" (UID: \"d621990b-b3fb-457c-a7b8-0726fa89a5e6\") " pod="openstack/dnsmasq-dns-89c5cd4d5-vbzn5"
Mar 12 13:35:22 crc kubenswrapper[4778]: I0312 13:35:22.605294 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d621990b-b3fb-457c-a7b8-0726fa89a5e6-config\") pod \"dnsmasq-dns-89c5cd4d5-vbzn5\" (UID: \"d621990b-b3fb-457c-a7b8-0726fa89a5e6\") " pod="openstack/dnsmasq-dns-89c5cd4d5-vbzn5"
Mar 12 13:35:22 crc kubenswrapper[4778]: I0312 13:35:22.626491 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5ccj\" (UniqueName: \"kubernetes.io/projected/d621990b-b3fb-457c-a7b8-0726fa89a5e6-kube-api-access-b5ccj\") pod \"dnsmasq-dns-89c5cd4d5-vbzn5\" (UID: \"d621990b-b3fb-457c-a7b8-0726fa89a5e6\") " pod="openstack/dnsmasq-dns-89c5cd4d5-vbzn5"
Mar 12 13:35:22 crc kubenswrapper[4778]: I0312 13:35:22.692538 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-vbzn5"
Mar 12 13:35:23 crc kubenswrapper[4778]: I0312 13:35:23.188216 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-vbzn5"]
Mar 12 13:35:23 crc kubenswrapper[4778]: I0312 13:35:23.695602 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jfzqk"
Mar 12 13:35:23 crc kubenswrapper[4778]: I0312 13:35:23.773582 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jfzqk"
Mar 12 13:35:23 crc kubenswrapper[4778]: I0312 13:35:23.946710 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jfzqk"]
Mar 12 13:35:24 crc kubenswrapper[4778]: I0312 13:35:24.088383 4778 generic.go:334] "Generic (PLEG): container finished" podID="d621990b-b3fb-457c-a7b8-0726fa89a5e6" containerID="f768634e6581a58404932d5b274b7e499ff8a446926b77d44c652d5c4c0bad66" exitCode=0
Mar 12 13:35:24 crc kubenswrapper[4778]: I0312 13:35:24.088428 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-vbzn5" event={"ID":"d621990b-b3fb-457c-a7b8-0726fa89a5e6","Type":"ContainerDied","Data":"f768634e6581a58404932d5b274b7e499ff8a446926b77d44c652d5c4c0bad66"}
Mar 12 13:35:24 crc kubenswrapper[4778]: I0312 13:35:24.088491 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-vbzn5" event={"ID":"d621990b-b3fb-457c-a7b8-0726fa89a5e6","Type":"ContainerStarted","Data":"7c046518ad4ee249311d20eb84f556ea55869944e1e9d121bc2b448648522cec"}
Mar 12 13:35:24 crc kubenswrapper[4778]: I0312 13:35:24.336652 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Mar 12 13:35:24 crc kubenswrapper[4778]: I0312 13:35:24.336892 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e1488e83-3a44-41ad-aa96-de09b662c16e" containerName="ceilometer-central-agent" containerID="cri-o://61456076e8380a1062d0214a24bdfe0fa640e7ee4451d17b11add3187cfaf9ad" gracePeriod=30
Mar 12 13:35:24 crc kubenswrapper[4778]: I0312 13:35:24.337760 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e1488e83-3a44-41ad-aa96-de09b662c16e" containerName="proxy-httpd" containerID="cri-o://819e42fd8accff60f320def9e9ec88d7d0b64eac8391a4dca82bd182d50ec648" gracePeriod=30
Mar 12 13:35:24 crc kubenswrapper[4778]: I0312 13:35:24.337922 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e1488e83-3a44-41ad-aa96-de09b662c16e" containerName="ceilometer-notification-agent" containerID="cri-o://825e28bca3cf084ec7f1951f758972b6df54d50fc49463a251a39ebce8dc6ce1" gracePeriod=30
Mar 12 13:35:24 crc kubenswrapper[4778]: I0312 13:35:24.337927 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e1488e83-3a44-41ad-aa96-de09b662c16e" containerName="sg-core" containerID="cri-o://5fb44ccb1d5cc41dbcf7c6e5acea797394b81866acaa080b282103d25f4131bf" gracePeriod=30
Mar 12 13:35:24 crc kubenswrapper[4778]: I0312 13:35:24.358291 4778 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="e1488e83-3a44-41ad-aa96-de09b662c16e" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.198:3000/\": EOF"
Mar 12 13:35:24 crc kubenswrapper[4778]: I0312 13:35:24.433161 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Mar 12 13:35:24 crc kubenswrapper[4778]: I0312 13:35:24.659020 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Mar 12 13:35:25 crc kubenswrapper[4778]: I0312 13:35:25.101556 4778 generic.go:334] "Generic (PLEG): container finished" podID="e1488e83-3a44-41ad-aa96-de09b662c16e" containerID="819e42fd8accff60f320def9e9ec88d7d0b64eac8391a4dca82bd182d50ec648" exitCode=0
Mar 12 13:35:25 crc kubenswrapper[4778]: I0312 13:35:25.101595 4778 generic.go:334] "Generic (PLEG): container finished" podID="e1488e83-3a44-41ad-aa96-de09b662c16e" containerID="5fb44ccb1d5cc41dbcf7c6e5acea797394b81866acaa080b282103d25f4131bf" exitCode=2
Mar 12 13:35:25 crc kubenswrapper[4778]: I0312 13:35:25.101606 4778 generic.go:334] "Generic (PLEG): container finished" podID="e1488e83-3a44-41ad-aa96-de09b662c16e" containerID="825e28bca3cf084ec7f1951f758972b6df54d50fc49463a251a39ebce8dc6ce1" exitCode=0
Mar 12 13:35:25 crc kubenswrapper[4778]: I0312 13:35:25.101616 4778 generic.go:334] "Generic (PLEG): container finished" podID="e1488e83-3a44-41ad-aa96-de09b662c16e" containerID="61456076e8380a1062d0214a24bdfe0fa640e7ee4451d17b11add3187cfaf9ad" exitCode=0
Mar 12 13:35:25 crc kubenswrapper[4778]: I0312 13:35:25.101612 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e1488e83-3a44-41ad-aa96-de09b662c16e","Type":"ContainerDied","Data":"819e42fd8accff60f320def9e9ec88d7d0b64eac8391a4dca82bd182d50ec648"}
Mar 12 13:35:25 crc kubenswrapper[4778]: I0312 13:35:25.101729 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e1488e83-3a44-41ad-aa96-de09b662c16e","Type":"ContainerDied","Data":"5fb44ccb1d5cc41dbcf7c6e5acea797394b81866acaa080b282103d25f4131bf"}
Mar 12 13:35:25 crc kubenswrapper[4778]: I0312 13:35:25.101742 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e1488e83-3a44-41ad-aa96-de09b662c16e","Type":"ContainerDied","Data":"825e28bca3cf084ec7f1951f758972b6df54d50fc49463a251a39ebce8dc6ce1"}
Mar 12 13:35:25 crc kubenswrapper[4778]: I0312 13:35:25.101753 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e1488e83-3a44-41ad-aa96-de09b662c16e","Type":"ContainerDied","Data":"61456076e8380a1062d0214a24bdfe0fa640e7ee4451d17b11add3187cfaf9ad"}
Mar 12 13:35:25 crc kubenswrapper[4778]: I0312 13:35:25.105295 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-vbzn5" event={"ID":"d621990b-b3fb-457c-a7b8-0726fa89a5e6","Type":"ContainerStarted","Data":"9226d052c31f98b5c3da17ce19bbc81e718b949c212eab5fa79f7c540fdf830a"}
Mar 12 13:35:25 crc kubenswrapper[4778]: I0312 13:35:25.105497 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jfzqk" podUID="1b7d48c4-04cd-481a-976d-19e57a28a1d9" containerName="registry-server" containerID="cri-o://8dce37445b314b16965ae024d78bbfd9bf5998d5da6305572acf12733671bc3d" gracePeriod=2
Mar 12 13:35:25 crc kubenswrapper[4778]: I0312 13:35:25.105738 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-89c5cd4d5-vbzn5"
Mar 12 13:35:25 crc kubenswrapper[4778]: I0312 13:35:25.105846 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="045050c5-d52b-4532-baa1-e7fad66cba96" containerName="nova-api-log" containerID="cri-o://e423b9df02af65c8d19318720fc217bb660a4ff96461ba94e43c2bb5658deb5e" gracePeriod=30
Mar 12 13:35:25 crc kubenswrapper[4778]: I0312 13:35:25.105918 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="045050c5-d52b-4532-baa1-e7fad66cba96" containerName="nova-api-api" containerID="cri-o://cdf213319669e5763e5a6be5e5f3c8d41efefcfd15e81ad3ad34fb03c9028e40" gracePeriod=30
Mar 12 13:35:25 crc kubenswrapper[4778]: I0312 13:35:25.177734 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-89c5cd4d5-vbzn5" podStartSLOduration=3.17771608 podStartE2EDuration="3.17771608s" podCreationTimestamp="2026-03-12 13:35:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:35:25.17667391 +0000 UTC m=+1543.625369306" watchObservedRunningTime="2026-03-12 13:35:25.17771608 +0000 UTC m=+1543.626411476"
Mar 12 13:35:25 crc kubenswrapper[4778]: I0312 13:35:25.545994 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 12 13:35:25 crc kubenswrapper[4778]: I0312 13:35:25.563236 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1488e83-3a44-41ad-aa96-de09b662c16e-combined-ca-bundle\") pod \"e1488e83-3a44-41ad-aa96-de09b662c16e\" (UID: \"e1488e83-3a44-41ad-aa96-de09b662c16e\") "
Mar 12 13:35:25 crc kubenswrapper[4778]: I0312 13:35:25.563323 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1488e83-3a44-41ad-aa96-de09b662c16e-config-data\") pod \"e1488e83-3a44-41ad-aa96-de09b662c16e\" (UID: \"e1488e83-3a44-41ad-aa96-de09b662c16e\") "
Mar 12 13:35:25 crc kubenswrapper[4778]: I0312 13:35:25.563370 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82xnp\" (UniqueName: \"kubernetes.io/projected/e1488e83-3a44-41ad-aa96-de09b662c16e-kube-api-access-82xnp\") pod \"e1488e83-3a44-41ad-aa96-de09b662c16e\" (UID: \"e1488e83-3a44-41ad-aa96-de09b662c16e\") "
Mar 12 13:35:25 crc kubenswrapper[4778]: I0312 13:35:25.563435 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e1488e83-3a44-41ad-aa96-de09b662c16e-log-httpd\") pod \"e1488e83-3a44-41ad-aa96-de09b662c16e\" (UID: \"e1488e83-3a44-41ad-aa96-de09b662c16e\") "
Mar 12 13:35:25 crc kubenswrapper[4778]: I0312 13:35:25.563467 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e1488e83-3a44-41ad-aa96-de09b662c16e-sg-core-conf-yaml\") pod \"e1488e83-3a44-41ad-aa96-de09b662c16e\" (UID: \"e1488e83-3a44-41ad-aa96-de09b662c16e\") "
Mar 12 13:35:25 crc kubenswrapper[4778]: I0312 13:35:25.563483 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1488e83-3a44-41ad-aa96-de09b662c16e-scripts\") pod \"e1488e83-3a44-41ad-aa96-de09b662c16e\" (UID: \"e1488e83-3a44-41ad-aa96-de09b662c16e\") "
Mar 12 13:35:25 crc kubenswrapper[4778]: I0312 13:35:25.564011 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e1488e83-3a44-41ad-aa96-de09b662c16e-run-httpd\") pod \"e1488e83-3a44-41ad-aa96-de09b662c16e\" (UID: \"e1488e83-3a44-41ad-aa96-de09b662c16e\") "
Mar 12 13:35:25 crc kubenswrapper[4778]: I0312 13:35:25.564195 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1488e83-3a44-41ad-aa96-de09b662c16e-ceilometer-tls-certs\") pod \"e1488e83-3a44-41ad-aa96-de09b662c16e\" (UID: \"e1488e83-3a44-41ad-aa96-de09b662c16e\") "
Mar 12 13:35:25 crc kubenswrapper[4778]: I0312 13:35:25.564626 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1488e83-3a44-41ad-aa96-de09b662c16e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e1488e83-3a44-41ad-aa96-de09b662c16e" (UID: "e1488e83-3a44-41ad-aa96-de09b662c16e"). InnerVolumeSpecName "run-httpd".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:35:25 crc kubenswrapper[4778]: I0312 13:35:25.565156 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1488e83-3a44-41ad-aa96-de09b662c16e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e1488e83-3a44-41ad-aa96-de09b662c16e" (UID: "e1488e83-3a44-41ad-aa96-de09b662c16e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:35:25 crc kubenswrapper[4778]: I0312 13:35:25.565368 4778 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e1488e83-3a44-41ad-aa96-de09b662c16e-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 13:35:25 crc kubenswrapper[4778]: I0312 13:35:25.565443 4778 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e1488e83-3a44-41ad-aa96-de09b662c16e-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 13:35:25 crc kubenswrapper[4778]: I0312 13:35:25.570400 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1488e83-3a44-41ad-aa96-de09b662c16e-kube-api-access-82xnp" (OuterVolumeSpecName: "kube-api-access-82xnp") pod "e1488e83-3a44-41ad-aa96-de09b662c16e" (UID: "e1488e83-3a44-41ad-aa96-de09b662c16e"). InnerVolumeSpecName "kube-api-access-82xnp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:35:25 crc kubenswrapper[4778]: I0312 13:35:25.571694 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1488e83-3a44-41ad-aa96-de09b662c16e-scripts" (OuterVolumeSpecName: "scripts") pod "e1488e83-3a44-41ad-aa96-de09b662c16e" (UID: "e1488e83-3a44-41ad-aa96-de09b662c16e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:35:25 crc kubenswrapper[4778]: I0312 13:35:25.624367 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1488e83-3a44-41ad-aa96-de09b662c16e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e1488e83-3a44-41ad-aa96-de09b662c16e" (UID: "e1488e83-3a44-41ad-aa96-de09b662c16e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:35:25 crc kubenswrapper[4778]: I0312 13:35:25.634695 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1488e83-3a44-41ad-aa96-de09b662c16e-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "e1488e83-3a44-41ad-aa96-de09b662c16e" (UID: "e1488e83-3a44-41ad-aa96-de09b662c16e"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:35:25 crc kubenswrapper[4778]: I0312 13:35:25.667317 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82xnp\" (UniqueName: \"kubernetes.io/projected/e1488e83-3a44-41ad-aa96-de09b662c16e-kube-api-access-82xnp\") on node \"crc\" DevicePath \"\"" Mar 12 13:35:25 crc kubenswrapper[4778]: I0312 13:35:25.667578 4778 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e1488e83-3a44-41ad-aa96-de09b662c16e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 12 13:35:25 crc kubenswrapper[4778]: I0312 13:35:25.667589 4778 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1488e83-3a44-41ad-aa96-de09b662c16e-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 13:35:25 crc kubenswrapper[4778]: I0312 13:35:25.667599 4778 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1488e83-3a44-41ad-aa96-de09b662c16e-ceilometer-tls-certs\") 
on node \"crc\" DevicePath \"\"" Mar 12 13:35:25 crc kubenswrapper[4778]: I0312 13:35:25.667767 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1488e83-3a44-41ad-aa96-de09b662c16e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e1488e83-3a44-41ad-aa96-de09b662c16e" (UID: "e1488e83-3a44-41ad-aa96-de09b662c16e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:35:25 crc kubenswrapper[4778]: I0312 13:35:25.696099 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1488e83-3a44-41ad-aa96-de09b662c16e-config-data" (OuterVolumeSpecName: "config-data") pod "e1488e83-3a44-41ad-aa96-de09b662c16e" (UID: "e1488e83-3a44-41ad-aa96-de09b662c16e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:35:25 crc kubenswrapper[4778]: I0312 13:35:25.768850 4778 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1488e83-3a44-41ad-aa96-de09b662c16e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:35:25 crc kubenswrapper[4778]: I0312 13:35:25.768889 4778 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1488e83-3a44-41ad-aa96-de09b662c16e-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.127938 4778 generic.go:334] "Generic (PLEG): container finished" podID="1b7d48c4-04cd-481a-976d-19e57a28a1d9" containerID="8dce37445b314b16965ae024d78bbfd9bf5998d5da6305572acf12733671bc3d" exitCode=0 Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.128124 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jfzqk" 
event={"ID":"1b7d48c4-04cd-481a-976d-19e57a28a1d9","Type":"ContainerDied","Data":"8dce37445b314b16965ae024d78bbfd9bf5998d5da6305572acf12733671bc3d"} Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.128332 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jfzqk" event={"ID":"1b7d48c4-04cd-481a-976d-19e57a28a1d9","Type":"ContainerDied","Data":"68397e437c2fb3791ad659ab6abc466e5cb77e5b97a5ba4bc1bb524e525fb6c3"} Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.128350 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68397e437c2fb3791ad659ab6abc466e5cb77e5b97a5ba4bc1bb524e525fb6c3" Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.129855 4778 generic.go:334] "Generic (PLEG): container finished" podID="045050c5-d52b-4532-baa1-e7fad66cba96" containerID="e423b9df02af65c8d19318720fc217bb660a4ff96461ba94e43c2bb5658deb5e" exitCode=143 Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.129900 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"045050c5-d52b-4532-baa1-e7fad66cba96","Type":"ContainerDied","Data":"e423b9df02af65c8d19318720fc217bb660a4ff96461ba94e43c2bb5658deb5e"} Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.132759 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.132876 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e1488e83-3a44-41ad-aa96-de09b662c16e","Type":"ContainerDied","Data":"9b203bf5890b3e4b6703e78b53a4c6b888b8bd4da20a4a2f1d502507cc246b88"} Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.132905 4778 scope.go:117] "RemoveContainer" containerID="819e42fd8accff60f320def9e9ec88d7d0b64eac8391a4dca82bd182d50ec648" Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.181265 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jfzqk" Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.184518 4778 scope.go:117] "RemoveContainer" containerID="5fb44ccb1d5cc41dbcf7c6e5acea797394b81866acaa080b282103d25f4131bf" Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.209225 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.231596 4778 scope.go:117] "RemoveContainer" containerID="825e28bca3cf084ec7f1951f758972b6df54d50fc49463a251a39ebce8dc6ce1" Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.232010 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.248068 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 12 13:35:26 crc kubenswrapper[4778]: E0312 13:35:26.248554 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1488e83-3a44-41ad-aa96-de09b662c16e" containerName="proxy-httpd" Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.248576 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1488e83-3a44-41ad-aa96-de09b662c16e" containerName="proxy-httpd" Mar 12 13:35:26 crc kubenswrapper[4778]: E0312 13:35:26.248591 4778 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b7d48c4-04cd-481a-976d-19e57a28a1d9" containerName="registry-server" Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.248599 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b7d48c4-04cd-481a-976d-19e57a28a1d9" containerName="registry-server" Mar 12 13:35:26 crc kubenswrapper[4778]: E0312 13:35:26.248611 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b7d48c4-04cd-481a-976d-19e57a28a1d9" containerName="extract-content" Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.249180 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b7d48c4-04cd-481a-976d-19e57a28a1d9" containerName="extract-content" Mar 12 13:35:26 crc kubenswrapper[4778]: E0312 13:35:26.249222 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1488e83-3a44-41ad-aa96-de09b662c16e" containerName="sg-core" Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.249233 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1488e83-3a44-41ad-aa96-de09b662c16e" containerName="sg-core" Mar 12 13:35:26 crc kubenswrapper[4778]: E0312 13:35:26.249245 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1488e83-3a44-41ad-aa96-de09b662c16e" containerName="ceilometer-notification-agent" Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.249253 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1488e83-3a44-41ad-aa96-de09b662c16e" containerName="ceilometer-notification-agent" Mar 12 13:35:26 crc kubenswrapper[4778]: E0312 13:35:26.249288 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1488e83-3a44-41ad-aa96-de09b662c16e" containerName="ceilometer-central-agent" Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.249296 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1488e83-3a44-41ad-aa96-de09b662c16e" containerName="ceilometer-central-agent" Mar 12 13:35:26 crc kubenswrapper[4778]: E0312 
13:35:26.249317 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b7d48c4-04cd-481a-976d-19e57a28a1d9" containerName="extract-utilities" Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.249325 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b7d48c4-04cd-481a-976d-19e57a28a1d9" containerName="extract-utilities" Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.249569 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1488e83-3a44-41ad-aa96-de09b662c16e" containerName="proxy-httpd" Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.249600 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1488e83-3a44-41ad-aa96-de09b662c16e" containerName="ceilometer-notification-agent" Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.249614 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1488e83-3a44-41ad-aa96-de09b662c16e" containerName="sg-core" Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.249629 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1488e83-3a44-41ad-aa96-de09b662c16e" containerName="ceilometer-central-agent" Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.249644 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b7d48c4-04cd-481a-976d-19e57a28a1d9" containerName="registry-server" Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.251401 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.264789 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.264978 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.266802 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.271480 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1488e83-3a44-41ad-aa96-de09b662c16e" path="/var/lib/kubelet/pods/e1488e83-3a44-41ad-aa96-de09b662c16e/volumes" Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.287570 4778 scope.go:117] "RemoveContainer" containerID="61456076e8380a1062d0214a24bdfe0fa640e7ee4451d17b11add3187cfaf9ad" Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.294548 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b7d48c4-04cd-481a-976d-19e57a28a1d9-utilities\") pod \"1b7d48c4-04cd-481a-976d-19e57a28a1d9\" (UID: \"1b7d48c4-04cd-481a-976d-19e57a28a1d9\") " Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.294747 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b7d48c4-04cd-481a-976d-19e57a28a1d9-catalog-content\") pod \"1b7d48c4-04cd-481a-976d-19e57a28a1d9\" (UID: \"1b7d48c4-04cd-481a-976d-19e57a28a1d9\") " Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.294878 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frrb7\" (UniqueName: \"kubernetes.io/projected/1b7d48c4-04cd-481a-976d-19e57a28a1d9-kube-api-access-frrb7\") pod 
\"1b7d48c4-04cd-481a-976d-19e57a28a1d9\" (UID: \"1b7d48c4-04cd-481a-976d-19e57a28a1d9\") " Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.295229 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbcdf243-9822-4089-9cae-4a46417b6dc0-scripts\") pod \"ceilometer-0\" (UID: \"bbcdf243-9822-4089-9cae-4a46417b6dc0\") " pod="openstack/ceilometer-0" Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.295260 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bbcdf243-9822-4089-9cae-4a46417b6dc0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bbcdf243-9822-4089-9cae-4a46417b6dc0\") " pod="openstack/ceilometer-0" Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.295286 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbcdf243-9822-4089-9cae-4a46417b6dc0-log-httpd\") pod \"ceilometer-0\" (UID: \"bbcdf243-9822-4089-9cae-4a46417b6dc0\") " pod="openstack/ceilometer-0" Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.295318 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbcdf243-9822-4089-9cae-4a46417b6dc0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bbcdf243-9822-4089-9cae-4a46417b6dc0\") " pod="openstack/ceilometer-0" Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.295364 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b7d48c4-04cd-481a-976d-19e57a28a1d9-utilities" (OuterVolumeSpecName: "utilities") pod "1b7d48c4-04cd-481a-976d-19e57a28a1d9" (UID: "1b7d48c4-04cd-481a-976d-19e57a28a1d9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.295549 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbcdf243-9822-4089-9cae-4a46417b6dc0-config-data\") pod \"ceilometer-0\" (UID: \"bbcdf243-9822-4089-9cae-4a46417b6dc0\") " pod="openstack/ceilometer-0" Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.295689 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbcdf243-9822-4089-9cae-4a46417b6dc0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bbcdf243-9822-4089-9cae-4a46417b6dc0\") " pod="openstack/ceilometer-0" Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.295774 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fjcm\" (UniqueName: \"kubernetes.io/projected/bbcdf243-9822-4089-9cae-4a46417b6dc0-kube-api-access-2fjcm\") pod \"ceilometer-0\" (UID: \"bbcdf243-9822-4089-9cae-4a46417b6dc0\") " pod="openstack/ceilometer-0" Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.295882 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbcdf243-9822-4089-9cae-4a46417b6dc0-run-httpd\") pod \"ceilometer-0\" (UID: \"bbcdf243-9822-4089-9cae-4a46417b6dc0\") " pod="openstack/ceilometer-0" Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.296067 4778 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b7d48c4-04cd-481a-976d-19e57a28a1d9-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.298672 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 13:35:26 crc 
kubenswrapper[4778]: I0312 13:35:26.304407 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b7d48c4-04cd-481a-976d-19e57a28a1d9-kube-api-access-frrb7" (OuterVolumeSpecName: "kube-api-access-frrb7") pod "1b7d48c4-04cd-481a-976d-19e57a28a1d9" (UID: "1b7d48c4-04cd-481a-976d-19e57a28a1d9"). InnerVolumeSpecName "kube-api-access-frrb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.397732 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bbcdf243-9822-4089-9cae-4a46417b6dc0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bbcdf243-9822-4089-9cae-4a46417b6dc0\") " pod="openstack/ceilometer-0" Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.397783 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbcdf243-9822-4089-9cae-4a46417b6dc0-log-httpd\") pod \"ceilometer-0\" (UID: \"bbcdf243-9822-4089-9cae-4a46417b6dc0\") " pod="openstack/ceilometer-0" Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.397810 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbcdf243-9822-4089-9cae-4a46417b6dc0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bbcdf243-9822-4089-9cae-4a46417b6dc0\") " pod="openstack/ceilometer-0" Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.397894 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbcdf243-9822-4089-9cae-4a46417b6dc0-config-data\") pod \"ceilometer-0\" (UID: \"bbcdf243-9822-4089-9cae-4a46417b6dc0\") " pod="openstack/ceilometer-0" Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.397932 4778 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbcdf243-9822-4089-9cae-4a46417b6dc0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bbcdf243-9822-4089-9cae-4a46417b6dc0\") " pod="openstack/ceilometer-0" Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.397959 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fjcm\" (UniqueName: \"kubernetes.io/projected/bbcdf243-9822-4089-9cae-4a46417b6dc0-kube-api-access-2fjcm\") pod \"ceilometer-0\" (UID: \"bbcdf243-9822-4089-9cae-4a46417b6dc0\") " pod="openstack/ceilometer-0" Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.398000 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbcdf243-9822-4089-9cae-4a46417b6dc0-run-httpd\") pod \"ceilometer-0\" (UID: \"bbcdf243-9822-4089-9cae-4a46417b6dc0\") " pod="openstack/ceilometer-0" Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.398035 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbcdf243-9822-4089-9cae-4a46417b6dc0-scripts\") pod \"ceilometer-0\" (UID: \"bbcdf243-9822-4089-9cae-4a46417b6dc0\") " pod="openstack/ceilometer-0" Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.398094 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frrb7\" (UniqueName: \"kubernetes.io/projected/1b7d48c4-04cd-481a-976d-19e57a28a1d9-kube-api-access-frrb7\") on node \"crc\" DevicePath \"\"" Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.398515 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbcdf243-9822-4089-9cae-4a46417b6dc0-log-httpd\") pod \"ceilometer-0\" (UID: \"bbcdf243-9822-4089-9cae-4a46417b6dc0\") " pod="openstack/ceilometer-0" Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.398815 4778 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbcdf243-9822-4089-9cae-4a46417b6dc0-run-httpd\") pod \"ceilometer-0\" (UID: \"bbcdf243-9822-4089-9cae-4a46417b6dc0\") " pod="openstack/ceilometer-0" Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.402612 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbcdf243-9822-4089-9cae-4a46417b6dc0-scripts\") pod \"ceilometer-0\" (UID: \"bbcdf243-9822-4089-9cae-4a46417b6dc0\") " pod="openstack/ceilometer-0" Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.403074 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbcdf243-9822-4089-9cae-4a46417b6dc0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bbcdf243-9822-4089-9cae-4a46417b6dc0\") " pod="openstack/ceilometer-0" Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.403639 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbcdf243-9822-4089-9cae-4a46417b6dc0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bbcdf243-9822-4089-9cae-4a46417b6dc0\") " pod="openstack/ceilometer-0" Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.403992 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bbcdf243-9822-4089-9cae-4a46417b6dc0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bbcdf243-9822-4089-9cae-4a46417b6dc0\") " pod="openstack/ceilometer-0" Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.409384 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbcdf243-9822-4089-9cae-4a46417b6dc0-config-data\") pod \"ceilometer-0\" (UID: \"bbcdf243-9822-4089-9cae-4a46417b6dc0\") " 
pod="openstack/ceilometer-0" Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.416992 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fjcm\" (UniqueName: \"kubernetes.io/projected/bbcdf243-9822-4089-9cae-4a46417b6dc0-kube-api-access-2fjcm\") pod \"ceilometer-0\" (UID: \"bbcdf243-9822-4089-9cae-4a46417b6dc0\") " pod="openstack/ceilometer-0" Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.425531 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b7d48c4-04cd-481a-976d-19e57a28a1d9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1b7d48c4-04cd-481a-976d-19e57a28a1d9" (UID: "1b7d48c4-04cd-481a-976d-19e57a28a1d9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.500501 4778 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b7d48c4-04cd-481a-976d-19e57a28a1d9-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.588823 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 13:35:26 crc kubenswrapper[4778]: I0312 13:35:26.589971 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 13:35:27 crc kubenswrapper[4778]: I0312 13:35:27.012721 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 13:35:27 crc kubenswrapper[4778]: I0312 13:35:27.145482 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbcdf243-9822-4089-9cae-4a46417b6dc0","Type":"ContainerStarted","Data":"e349198afaff0969683d3154a99e49c5908b20bc0714e59a3832484e545b97dc"} Mar 12 13:35:27 crc kubenswrapper[4778]: I0312 13:35:27.145527 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jfzqk" Mar 12 13:35:27 crc kubenswrapper[4778]: I0312 13:35:27.206864 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jfzqk"] Mar 12 13:35:27 crc kubenswrapper[4778]: I0312 13:35:27.218782 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jfzqk"] Mar 12 13:35:28 crc kubenswrapper[4778]: I0312 13:35:28.158693 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbcdf243-9822-4089-9cae-4a46417b6dc0","Type":"ContainerStarted","Data":"f6f2930d9f70388763ddc5deccd561746f3634cc538e9cb6c56ef8628fd4e069"} Mar 12 13:35:28 crc kubenswrapper[4778]: I0312 13:35:28.270827 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b7d48c4-04cd-481a-976d-19e57a28a1d9" path="/var/lib/kubelet/pods/1b7d48c4-04cd-481a-976d-19e57a28a1d9/volumes" Mar 12 13:35:28 crc kubenswrapper[4778]: I0312 13:35:28.557750 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 13:35:28 crc kubenswrapper[4778]: I0312 13:35:28.558073 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 13:35:29 crc kubenswrapper[4778]: I0312 13:35:29.102376 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 12 13:35:29 crc kubenswrapper[4778]: I0312 13:35:29.150583 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/045050c5-d52b-4532-baa1-e7fad66cba96-logs\") pod \"045050c5-d52b-4532-baa1-e7fad66cba96\" (UID: \"045050c5-d52b-4532-baa1-e7fad66cba96\") " Mar 12 13:35:29 crc kubenswrapper[4778]: I0312 13:35:29.151067 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/045050c5-d52b-4532-baa1-e7fad66cba96-config-data\") pod \"045050c5-d52b-4532-baa1-e7fad66cba96\" (UID: \"045050c5-d52b-4532-baa1-e7fad66cba96\") " Mar 12 13:35:29 crc kubenswrapper[4778]: I0312 13:35:29.151472 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5p7z\" (UniqueName: \"kubernetes.io/projected/045050c5-d52b-4532-baa1-e7fad66cba96-kube-api-access-r5p7z\") pod \"045050c5-d52b-4532-baa1-e7fad66cba96\" (UID: \"045050c5-d52b-4532-baa1-e7fad66cba96\") " Mar 12 13:35:29 crc kubenswrapper[4778]: I0312 13:35:29.152001 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/045050c5-d52b-4532-baa1-e7fad66cba96-logs" (OuterVolumeSpecName: "logs") pod "045050c5-d52b-4532-baa1-e7fad66cba96" (UID: "045050c5-d52b-4532-baa1-e7fad66cba96"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:35:29 crc kubenswrapper[4778]: I0312 13:35:29.156427 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/045050c5-d52b-4532-baa1-e7fad66cba96-kube-api-access-r5p7z" (OuterVolumeSpecName: "kube-api-access-r5p7z") pod "045050c5-d52b-4532-baa1-e7fad66cba96" (UID: "045050c5-d52b-4532-baa1-e7fad66cba96"). InnerVolumeSpecName "kube-api-access-r5p7z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:35:29 crc kubenswrapper[4778]: I0312 13:35:29.175335 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/045050c5-d52b-4532-baa1-e7fad66cba96-combined-ca-bundle\") pod \"045050c5-d52b-4532-baa1-e7fad66cba96\" (UID: \"045050c5-d52b-4532-baa1-e7fad66cba96\") " Mar 12 13:35:29 crc kubenswrapper[4778]: I0312 13:35:29.176384 4778 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/045050c5-d52b-4532-baa1-e7fad66cba96-logs\") on node \"crc\" DevicePath \"\"" Mar 12 13:35:29 crc kubenswrapper[4778]: I0312 13:35:29.176407 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5p7z\" (UniqueName: \"kubernetes.io/projected/045050c5-d52b-4532-baa1-e7fad66cba96-kube-api-access-r5p7z\") on node \"crc\" DevicePath \"\"" Mar 12 13:35:29 crc kubenswrapper[4778]: I0312 13:35:29.212735 4778 generic.go:334] "Generic (PLEG): container finished" podID="045050c5-d52b-4532-baa1-e7fad66cba96" containerID="cdf213319669e5763e5a6be5e5f3c8d41efefcfd15e81ad3ad34fb03c9028e40" exitCode=0 Mar 12 13:35:29 crc kubenswrapper[4778]: I0312 13:35:29.212839 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"045050c5-d52b-4532-baa1-e7fad66cba96","Type":"ContainerDied","Data":"cdf213319669e5763e5a6be5e5f3c8d41efefcfd15e81ad3ad34fb03c9028e40"} Mar 12 13:35:29 crc kubenswrapper[4778]: I0312 13:35:29.212868 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"045050c5-d52b-4532-baa1-e7fad66cba96","Type":"ContainerDied","Data":"56c1ee62c0c52d6bc024a53ffbb320a0eb1a79762a1537bb2caf3aafa91e73ce"} Mar 12 13:35:29 crc kubenswrapper[4778]: I0312 13:35:29.212888 4778 scope.go:117] "RemoveContainer" containerID="cdf213319669e5763e5a6be5e5f3c8d41efefcfd15e81ad3ad34fb03c9028e40" Mar 12 13:35:29 crc 
kubenswrapper[4778]: I0312 13:35:29.213023 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 12 13:35:29 crc kubenswrapper[4778]: I0312 13:35:29.250112 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbcdf243-9822-4089-9cae-4a46417b6dc0","Type":"ContainerStarted","Data":"7114be6621c79d0d604f29d2e6499dffdde39edd34fd34dad40202ec3b0b6eef"} Mar 12 13:35:29 crc kubenswrapper[4778]: I0312 13:35:29.298893 4778 scope.go:117] "RemoveContainer" containerID="e423b9df02af65c8d19318720fc217bb660a4ff96461ba94e43c2bb5658deb5e" Mar 12 13:35:29 crc kubenswrapper[4778]: I0312 13:35:29.372526 4778 scope.go:117] "RemoveContainer" containerID="cdf213319669e5763e5a6be5e5f3c8d41efefcfd15e81ad3ad34fb03c9028e40" Mar 12 13:35:29 crc kubenswrapper[4778]: E0312 13:35:29.381724 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cdf213319669e5763e5a6be5e5f3c8d41efefcfd15e81ad3ad34fb03c9028e40\": container with ID starting with cdf213319669e5763e5a6be5e5f3c8d41efefcfd15e81ad3ad34fb03c9028e40 not found: ID does not exist" containerID="cdf213319669e5763e5a6be5e5f3c8d41efefcfd15e81ad3ad34fb03c9028e40" Mar 12 13:35:29 crc kubenswrapper[4778]: I0312 13:35:29.381775 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdf213319669e5763e5a6be5e5f3c8d41efefcfd15e81ad3ad34fb03c9028e40"} err="failed to get container status \"cdf213319669e5763e5a6be5e5f3c8d41efefcfd15e81ad3ad34fb03c9028e40\": rpc error: code = NotFound desc = could not find container \"cdf213319669e5763e5a6be5e5f3c8d41efefcfd15e81ad3ad34fb03c9028e40\": container with ID starting with cdf213319669e5763e5a6be5e5f3c8d41efefcfd15e81ad3ad34fb03c9028e40 not found: ID does not exist" Mar 12 13:35:29 crc kubenswrapper[4778]: I0312 13:35:29.381804 4778 scope.go:117] "RemoveContainer" 
containerID="e423b9df02af65c8d19318720fc217bb660a4ff96461ba94e43c2bb5658deb5e" Mar 12 13:35:29 crc kubenswrapper[4778]: E0312 13:35:29.386996 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e423b9df02af65c8d19318720fc217bb660a4ff96461ba94e43c2bb5658deb5e\": container with ID starting with e423b9df02af65c8d19318720fc217bb660a4ff96461ba94e43c2bb5658deb5e not found: ID does not exist" containerID="e423b9df02af65c8d19318720fc217bb660a4ff96461ba94e43c2bb5658deb5e" Mar 12 13:35:29 crc kubenswrapper[4778]: I0312 13:35:29.387044 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e423b9df02af65c8d19318720fc217bb660a4ff96461ba94e43c2bb5658deb5e"} err="failed to get container status \"e423b9df02af65c8d19318720fc217bb660a4ff96461ba94e43c2bb5658deb5e\": rpc error: code = NotFound desc = could not find container \"e423b9df02af65c8d19318720fc217bb660a4ff96461ba94e43c2bb5658deb5e\": container with ID starting with e423b9df02af65c8d19318720fc217bb660a4ff96461ba94e43c2bb5658deb5e not found: ID does not exist" Mar 12 13:35:29 crc kubenswrapper[4778]: I0312 13:35:29.433441 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/045050c5-d52b-4532-baa1-e7fad66cba96-config-data" (OuterVolumeSpecName: "config-data") pod "045050c5-d52b-4532-baa1-e7fad66cba96" (UID: "045050c5-d52b-4532-baa1-e7fad66cba96"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:35:29 crc kubenswrapper[4778]: I0312 13:35:29.434170 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Mar 12 13:35:29 crc kubenswrapper[4778]: I0312 13:35:29.441308 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/045050c5-d52b-4532-baa1-e7fad66cba96-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "045050c5-d52b-4532-baa1-e7fad66cba96" (UID: "045050c5-d52b-4532-baa1-e7fad66cba96"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:35:29 crc kubenswrapper[4778]: I0312 13:35:29.467943 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Mar 12 13:35:29 crc kubenswrapper[4778]: I0312 13:35:29.485517 4778 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/045050c5-d52b-4532-baa1-e7fad66cba96-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 13:35:29 crc kubenswrapper[4778]: I0312 13:35:29.485561 4778 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/045050c5-d52b-4532-baa1-e7fad66cba96-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:35:29 crc kubenswrapper[4778]: I0312 13:35:29.546220 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 12 13:35:29 crc kubenswrapper[4778]: I0312 13:35:29.557927 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 12 13:35:29 crc kubenswrapper[4778]: I0312 13:35:29.576166 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 12 13:35:29 crc kubenswrapper[4778]: E0312 13:35:29.576692 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="045050c5-d52b-4532-baa1-e7fad66cba96" 
containerName="nova-api-api" Mar 12 13:35:29 crc kubenswrapper[4778]: I0312 13:35:29.576715 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="045050c5-d52b-4532-baa1-e7fad66cba96" containerName="nova-api-api" Mar 12 13:35:29 crc kubenswrapper[4778]: E0312 13:35:29.576763 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="045050c5-d52b-4532-baa1-e7fad66cba96" containerName="nova-api-log" Mar 12 13:35:29 crc kubenswrapper[4778]: I0312 13:35:29.576770 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="045050c5-d52b-4532-baa1-e7fad66cba96" containerName="nova-api-log" Mar 12 13:35:29 crc kubenswrapper[4778]: I0312 13:35:29.577015 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="045050c5-d52b-4532-baa1-e7fad66cba96" containerName="nova-api-api" Mar 12 13:35:29 crc kubenswrapper[4778]: I0312 13:35:29.577042 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="045050c5-d52b-4532-baa1-e7fad66cba96" containerName="nova-api-log" Mar 12 13:35:29 crc kubenswrapper[4778]: I0312 13:35:29.578360 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 12 13:35:29 crc kubenswrapper[4778]: I0312 13:35:29.582583 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Mar 12 13:35:29 crc kubenswrapper[4778]: I0312 13:35:29.582818 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 12 13:35:29 crc kubenswrapper[4778]: I0312 13:35:29.582969 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Mar 12 13:35:29 crc kubenswrapper[4778]: I0312 13:35:29.585837 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 12 13:35:29 crc kubenswrapper[4778]: I0312 13:35:29.688588 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a186b68c-e472-4507-abc7-0b90ca321ded-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a186b68c-e472-4507-abc7-0b90ca321ded\") " pod="openstack/nova-api-0" Mar 12 13:35:29 crc kubenswrapper[4778]: I0312 13:35:29.688704 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a186b68c-e472-4507-abc7-0b90ca321ded-public-tls-certs\") pod \"nova-api-0\" (UID: \"a186b68c-e472-4507-abc7-0b90ca321ded\") " pod="openstack/nova-api-0" Mar 12 13:35:29 crc kubenswrapper[4778]: I0312 13:35:29.688751 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a186b68c-e472-4507-abc7-0b90ca321ded-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a186b68c-e472-4507-abc7-0b90ca321ded\") " pod="openstack/nova-api-0" Mar 12 13:35:29 crc kubenswrapper[4778]: I0312 13:35:29.688796 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/a186b68c-e472-4507-abc7-0b90ca321ded-logs\") pod \"nova-api-0\" (UID: \"a186b68c-e472-4507-abc7-0b90ca321ded\") " pod="openstack/nova-api-0" Mar 12 13:35:29 crc kubenswrapper[4778]: I0312 13:35:29.688947 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a186b68c-e472-4507-abc7-0b90ca321ded-config-data\") pod \"nova-api-0\" (UID: \"a186b68c-e472-4507-abc7-0b90ca321ded\") " pod="openstack/nova-api-0" Mar 12 13:35:29 crc kubenswrapper[4778]: I0312 13:35:29.689300 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27cs2\" (UniqueName: \"kubernetes.io/projected/a186b68c-e472-4507-abc7-0b90ca321ded-kube-api-access-27cs2\") pod \"nova-api-0\" (UID: \"a186b68c-e472-4507-abc7-0b90ca321ded\") " pod="openstack/nova-api-0" Mar 12 13:35:29 crc kubenswrapper[4778]: I0312 13:35:29.791242 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a186b68c-e472-4507-abc7-0b90ca321ded-logs\") pod \"nova-api-0\" (UID: \"a186b68c-e472-4507-abc7-0b90ca321ded\") " pod="openstack/nova-api-0" Mar 12 13:35:29 crc kubenswrapper[4778]: I0312 13:35:29.791361 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a186b68c-e472-4507-abc7-0b90ca321ded-config-data\") pod \"nova-api-0\" (UID: \"a186b68c-e472-4507-abc7-0b90ca321ded\") " pod="openstack/nova-api-0" Mar 12 13:35:29 crc kubenswrapper[4778]: I0312 13:35:29.791414 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27cs2\" (UniqueName: \"kubernetes.io/projected/a186b68c-e472-4507-abc7-0b90ca321ded-kube-api-access-27cs2\") pod \"nova-api-0\" (UID: \"a186b68c-e472-4507-abc7-0b90ca321ded\") " pod="openstack/nova-api-0" Mar 12 13:35:29 crc 
kubenswrapper[4778]: I0312 13:35:29.791450 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a186b68c-e472-4507-abc7-0b90ca321ded-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a186b68c-e472-4507-abc7-0b90ca321ded\") " pod="openstack/nova-api-0" Mar 12 13:35:29 crc kubenswrapper[4778]: I0312 13:35:29.791503 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a186b68c-e472-4507-abc7-0b90ca321ded-public-tls-certs\") pod \"nova-api-0\" (UID: \"a186b68c-e472-4507-abc7-0b90ca321ded\") " pod="openstack/nova-api-0" Mar 12 13:35:29 crc kubenswrapper[4778]: I0312 13:35:29.792390 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a186b68c-e472-4507-abc7-0b90ca321ded-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a186b68c-e472-4507-abc7-0b90ca321ded\") " pod="openstack/nova-api-0" Mar 12 13:35:29 crc kubenswrapper[4778]: I0312 13:35:29.795245 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a186b68c-e472-4507-abc7-0b90ca321ded-logs\") pod \"nova-api-0\" (UID: \"a186b68c-e472-4507-abc7-0b90ca321ded\") " pod="openstack/nova-api-0" Mar 12 13:35:29 crc kubenswrapper[4778]: I0312 13:35:29.797314 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a186b68c-e472-4507-abc7-0b90ca321ded-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a186b68c-e472-4507-abc7-0b90ca321ded\") " pod="openstack/nova-api-0" Mar 12 13:35:29 crc kubenswrapper[4778]: I0312 13:35:29.798533 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a186b68c-e472-4507-abc7-0b90ca321ded-combined-ca-bundle\") pod \"nova-api-0\" 
(UID: \"a186b68c-e472-4507-abc7-0b90ca321ded\") " pod="openstack/nova-api-0" Mar 12 13:35:29 crc kubenswrapper[4778]: I0312 13:35:29.798749 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a186b68c-e472-4507-abc7-0b90ca321ded-public-tls-certs\") pod \"nova-api-0\" (UID: \"a186b68c-e472-4507-abc7-0b90ca321ded\") " pod="openstack/nova-api-0" Mar 12 13:35:29 crc kubenswrapper[4778]: I0312 13:35:29.799879 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a186b68c-e472-4507-abc7-0b90ca321ded-config-data\") pod \"nova-api-0\" (UID: \"a186b68c-e472-4507-abc7-0b90ca321ded\") " pod="openstack/nova-api-0" Mar 12 13:35:29 crc kubenswrapper[4778]: I0312 13:35:29.809222 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27cs2\" (UniqueName: \"kubernetes.io/projected/a186b68c-e472-4507-abc7-0b90ca321ded-kube-api-access-27cs2\") pod \"nova-api-0\" (UID: \"a186b68c-e472-4507-abc7-0b90ca321ded\") " pod="openstack/nova-api-0" Mar 12 13:35:29 crc kubenswrapper[4778]: I0312 13:35:29.905043 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 12 13:35:30 crc kubenswrapper[4778]: I0312 13:35:30.289451 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="045050c5-d52b-4532-baa1-e7fad66cba96" path="/var/lib/kubelet/pods/045050c5-d52b-4532-baa1-e7fad66cba96/volumes" Mar 12 13:35:30 crc kubenswrapper[4778]: I0312 13:35:30.293723 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbcdf243-9822-4089-9cae-4a46417b6dc0","Type":"ContainerStarted","Data":"8683a8e7549e2bde381c989208b511414ec56e8f866bf125984b6c4530f4d727"} Mar 12 13:35:30 crc kubenswrapper[4778]: I0312 13:35:30.309047 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Mar 12 13:35:30 crc kubenswrapper[4778]: I0312 13:35:30.424402 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 12 13:35:30 crc kubenswrapper[4778]: W0312 13:35:30.426400 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda186b68c_e472_4507_abc7_0b90ca321ded.slice/crio-f5dd18bc2fa3f41fb9117d4f1e4c37d3b6b9987574f509d03e4a076b1981eb8c WatchSource:0}: Error finding container f5dd18bc2fa3f41fb9117d4f1e4c37d3b6b9987574f509d03e4a076b1981eb8c: Status 404 returned error can't find the container with id f5dd18bc2fa3f41fb9117d4f1e4c37d3b6b9987574f509d03e4a076b1981eb8c Mar 12 13:35:30 crc kubenswrapper[4778]: I0312 13:35:30.541329 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-9xw6b"] Mar 12 13:35:30 crc kubenswrapper[4778]: I0312 13:35:30.543038 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-9xw6b" Mar 12 13:35:30 crc kubenswrapper[4778]: I0312 13:35:30.547024 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Mar 12 13:35:30 crc kubenswrapper[4778]: I0312 13:35:30.547347 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Mar 12 13:35:30 crc kubenswrapper[4778]: I0312 13:35:30.549096 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-9xw6b"] Mar 12 13:35:30 crc kubenswrapper[4778]: I0312 13:35:30.606518 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsjt8\" (UniqueName: \"kubernetes.io/projected/eaa0985c-3171-40c5-8e5c-ab82a9fa6fc3-kube-api-access-zsjt8\") pod \"nova-cell1-cell-mapping-9xw6b\" (UID: \"eaa0985c-3171-40c5-8e5c-ab82a9fa6fc3\") " pod="openstack/nova-cell1-cell-mapping-9xw6b" Mar 12 13:35:30 crc kubenswrapper[4778]: I0312 13:35:30.606647 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eaa0985c-3171-40c5-8e5c-ab82a9fa6fc3-scripts\") pod \"nova-cell1-cell-mapping-9xw6b\" (UID: \"eaa0985c-3171-40c5-8e5c-ab82a9fa6fc3\") " pod="openstack/nova-cell1-cell-mapping-9xw6b" Mar 12 13:35:30 crc kubenswrapper[4778]: I0312 13:35:30.606900 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaa0985c-3171-40c5-8e5c-ab82a9fa6fc3-config-data\") pod \"nova-cell1-cell-mapping-9xw6b\" (UID: \"eaa0985c-3171-40c5-8e5c-ab82a9fa6fc3\") " pod="openstack/nova-cell1-cell-mapping-9xw6b" Mar 12 13:35:30 crc kubenswrapper[4778]: I0312 13:35:30.606967 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/eaa0985c-3171-40c5-8e5c-ab82a9fa6fc3-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-9xw6b\" (UID: \"eaa0985c-3171-40c5-8e5c-ab82a9fa6fc3\") " pod="openstack/nova-cell1-cell-mapping-9xw6b" Mar 12 13:35:30 crc kubenswrapper[4778]: I0312 13:35:30.708835 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaa0985c-3171-40c5-8e5c-ab82a9fa6fc3-config-data\") pod \"nova-cell1-cell-mapping-9xw6b\" (UID: \"eaa0985c-3171-40c5-8e5c-ab82a9fa6fc3\") " pod="openstack/nova-cell1-cell-mapping-9xw6b" Mar 12 13:35:30 crc kubenswrapper[4778]: I0312 13:35:30.709099 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaa0985c-3171-40c5-8e5c-ab82a9fa6fc3-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-9xw6b\" (UID: \"eaa0985c-3171-40c5-8e5c-ab82a9fa6fc3\") " pod="openstack/nova-cell1-cell-mapping-9xw6b" Mar 12 13:35:30 crc kubenswrapper[4778]: I0312 13:35:30.709249 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsjt8\" (UniqueName: \"kubernetes.io/projected/eaa0985c-3171-40c5-8e5c-ab82a9fa6fc3-kube-api-access-zsjt8\") pod \"nova-cell1-cell-mapping-9xw6b\" (UID: \"eaa0985c-3171-40c5-8e5c-ab82a9fa6fc3\") " pod="openstack/nova-cell1-cell-mapping-9xw6b" Mar 12 13:35:30 crc kubenswrapper[4778]: I0312 13:35:30.709489 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eaa0985c-3171-40c5-8e5c-ab82a9fa6fc3-scripts\") pod \"nova-cell1-cell-mapping-9xw6b\" (UID: \"eaa0985c-3171-40c5-8e5c-ab82a9fa6fc3\") " pod="openstack/nova-cell1-cell-mapping-9xw6b" Mar 12 13:35:30 crc kubenswrapper[4778]: I0312 13:35:30.712543 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/eaa0985c-3171-40c5-8e5c-ab82a9fa6fc3-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-9xw6b\" (UID: \"eaa0985c-3171-40c5-8e5c-ab82a9fa6fc3\") " pod="openstack/nova-cell1-cell-mapping-9xw6b" Mar 12 13:35:30 crc kubenswrapper[4778]: I0312 13:35:30.714367 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eaa0985c-3171-40c5-8e5c-ab82a9fa6fc3-scripts\") pod \"nova-cell1-cell-mapping-9xw6b\" (UID: \"eaa0985c-3171-40c5-8e5c-ab82a9fa6fc3\") " pod="openstack/nova-cell1-cell-mapping-9xw6b" Mar 12 13:35:30 crc kubenswrapper[4778]: I0312 13:35:30.714677 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaa0985c-3171-40c5-8e5c-ab82a9fa6fc3-config-data\") pod \"nova-cell1-cell-mapping-9xw6b\" (UID: \"eaa0985c-3171-40c5-8e5c-ab82a9fa6fc3\") " pod="openstack/nova-cell1-cell-mapping-9xw6b" Mar 12 13:35:30 crc kubenswrapper[4778]: I0312 13:35:30.725701 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsjt8\" (UniqueName: \"kubernetes.io/projected/eaa0985c-3171-40c5-8e5c-ab82a9fa6fc3-kube-api-access-zsjt8\") pod \"nova-cell1-cell-mapping-9xw6b\" (UID: \"eaa0985c-3171-40c5-8e5c-ab82a9fa6fc3\") " pod="openstack/nova-cell1-cell-mapping-9xw6b" Mar 12 13:35:30 crc kubenswrapper[4778]: I0312 13:35:30.868143 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-9xw6b" Mar 12 13:35:31 crc kubenswrapper[4778]: I0312 13:35:31.308912 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a186b68c-e472-4507-abc7-0b90ca321ded","Type":"ContainerStarted","Data":"eaf94cdd79eea972e02bd1682954aa51e1069e729c7b0ae6e70982f28a03bc11"} Mar 12 13:35:31 crc kubenswrapper[4778]: I0312 13:35:31.309256 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a186b68c-e472-4507-abc7-0b90ca321ded","Type":"ContainerStarted","Data":"7a2feca3aab730eaaa00a7eae47b95ddd0e61bb831e7003c0a078d8f2460d397"} Mar 12 13:35:31 crc kubenswrapper[4778]: I0312 13:35:31.309269 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a186b68c-e472-4507-abc7-0b90ca321ded","Type":"ContainerStarted","Data":"f5dd18bc2fa3f41fb9117d4f1e4c37d3b6b9987574f509d03e4a076b1981eb8c"} Mar 12 13:35:31 crc kubenswrapper[4778]: I0312 13:35:31.329063 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-9xw6b"] Mar 12 13:35:31 crc kubenswrapper[4778]: I0312 13:35:31.342257 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.3422359520000002 podStartE2EDuration="2.342235952s" podCreationTimestamp="2026-03-12 13:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:35:31.334097801 +0000 UTC m=+1549.782793197" watchObservedRunningTime="2026-03-12 13:35:31.342235952 +0000 UTC m=+1549.790931348" Mar 12 13:35:32 crc kubenswrapper[4778]: I0312 13:35:32.320259 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbcdf243-9822-4089-9cae-4a46417b6dc0","Type":"ContainerStarted","Data":"e8d17472cef396ced990a10ecac98a4762149d480cd7b1355d84ce3ecdbcf8ad"} Mar 12 
13:35:32 crc kubenswrapper[4778]: I0312 13:35:32.320852 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 12 13:35:32 crc kubenswrapper[4778]: I0312 13:35:32.320529 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bbcdf243-9822-4089-9cae-4a46417b6dc0" containerName="proxy-httpd" containerID="cri-o://e8d17472cef396ced990a10ecac98a4762149d480cd7b1355d84ce3ecdbcf8ad" gracePeriod=30 Mar 12 13:35:32 crc kubenswrapper[4778]: I0312 13:35:32.320322 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bbcdf243-9822-4089-9cae-4a46417b6dc0" containerName="ceilometer-central-agent" containerID="cri-o://f6f2930d9f70388763ddc5deccd561746f3634cc538e9cb6c56ef8628fd4e069" gracePeriod=30 Mar 12 13:35:32 crc kubenswrapper[4778]: I0312 13:35:32.320555 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bbcdf243-9822-4089-9cae-4a46417b6dc0" containerName="ceilometer-notification-agent" containerID="cri-o://7114be6621c79d0d604f29d2e6499dffdde39edd34fd34dad40202ec3b0b6eef" gracePeriod=30 Mar 12 13:35:32 crc kubenswrapper[4778]: I0312 13:35:32.320543 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bbcdf243-9822-4089-9cae-4a46417b6dc0" containerName="sg-core" containerID="cri-o://8683a8e7549e2bde381c989208b511414ec56e8f866bf125984b6c4530f4d727" gracePeriod=30 Mar 12 13:35:32 crc kubenswrapper[4778]: I0312 13:35:32.325413 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-9xw6b" event={"ID":"eaa0985c-3171-40c5-8e5c-ab82a9fa6fc3","Type":"ContainerStarted","Data":"a3547232ddc46df5ded5cc24fff2ec3e7c8bb4fb4c52277d66e27c319ec41995"} Mar 12 13:35:32 crc kubenswrapper[4778]: I0312 13:35:32.325461 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-cell-mapping-9xw6b" event={"ID":"eaa0985c-3171-40c5-8e5c-ab82a9fa6fc3","Type":"ContainerStarted","Data":"fa86e251def50fe26c7890455a492370653fec6579cf29dd2f2d83fb340958c7"} Mar 12 13:35:32 crc kubenswrapper[4778]: I0312 13:35:32.348971 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.639212143 podStartE2EDuration="6.348951328s" podCreationTimestamp="2026-03-12 13:35:26 +0000 UTC" firstStartedPulling="2026-03-12 13:35:27.018234598 +0000 UTC m=+1545.466929994" lastFinishedPulling="2026-03-12 13:35:31.727973783 +0000 UTC m=+1550.176669179" observedRunningTime="2026-03-12 13:35:32.343333068 +0000 UTC m=+1550.792028464" watchObservedRunningTime="2026-03-12 13:35:32.348951328 +0000 UTC m=+1550.797646724" Mar 12 13:35:32 crc kubenswrapper[4778]: I0312 13:35:32.369402 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-9xw6b" podStartSLOduration=2.369382017 podStartE2EDuration="2.369382017s" podCreationTimestamp="2026-03-12 13:35:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:35:32.359378594 +0000 UTC m=+1550.808074000" watchObservedRunningTime="2026-03-12 13:35:32.369382017 +0000 UTC m=+1550.818077413" Mar 12 13:35:32 crc kubenswrapper[4778]: I0312 13:35:32.694406 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-89c5cd4d5-vbzn5" Mar 12 13:35:32 crc kubenswrapper[4778]: I0312 13:35:32.782327 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-xlfr7"] Mar 12 13:35:32 crc kubenswrapper[4778]: I0312 13:35:32.782842 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757b4f8459-xlfr7" podUID="f38c0efe-db9f-4afc-8693-0743c558d74f" containerName="dnsmasq-dns" 
containerID="cri-o://e6738e925b347d28a1e722ea04cdc7d88018005b75c56a3dec09b214b5752ae1" gracePeriod=10 Mar 12 13:35:33 crc kubenswrapper[4778]: I0312 13:35:33.339744 4778 generic.go:334] "Generic (PLEG): container finished" podID="f38c0efe-db9f-4afc-8693-0743c558d74f" containerID="e6738e925b347d28a1e722ea04cdc7d88018005b75c56a3dec09b214b5752ae1" exitCode=0 Mar 12 13:35:33 crc kubenswrapper[4778]: I0312 13:35:33.339831 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-xlfr7" event={"ID":"f38c0efe-db9f-4afc-8693-0743c558d74f","Type":"ContainerDied","Data":"e6738e925b347d28a1e722ea04cdc7d88018005b75c56a3dec09b214b5752ae1"} Mar 12 13:35:33 crc kubenswrapper[4778]: I0312 13:35:33.340326 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-xlfr7" event={"ID":"f38c0efe-db9f-4afc-8693-0743c558d74f","Type":"ContainerDied","Data":"a01a33797f0031a4928ccc3b84c316e6cab0e859fc2dd6c0bc9cf5a06332acbb"} Mar 12 13:35:33 crc kubenswrapper[4778]: I0312 13:35:33.340344 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a01a33797f0031a4928ccc3b84c316e6cab0e859fc2dd6c0bc9cf5a06332acbb" Mar 12 13:35:33 crc kubenswrapper[4778]: I0312 13:35:33.353587 4778 generic.go:334] "Generic (PLEG): container finished" podID="bbcdf243-9822-4089-9cae-4a46417b6dc0" containerID="e8d17472cef396ced990a10ecac98a4762149d480cd7b1355d84ce3ecdbcf8ad" exitCode=0 Mar 12 13:35:33 crc kubenswrapper[4778]: I0312 13:35:33.353651 4778 generic.go:334] "Generic (PLEG): container finished" podID="bbcdf243-9822-4089-9cae-4a46417b6dc0" containerID="8683a8e7549e2bde381c989208b511414ec56e8f866bf125984b6c4530f4d727" exitCode=2 Mar 12 13:35:33 crc kubenswrapper[4778]: I0312 13:35:33.353666 4778 generic.go:334] "Generic (PLEG): container finished" podID="bbcdf243-9822-4089-9cae-4a46417b6dc0" containerID="7114be6621c79d0d604f29d2e6499dffdde39edd34fd34dad40202ec3b0b6eef" exitCode=0 Mar 12 13:35:33 crc 
kubenswrapper[4778]: I0312 13:35:33.353672 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbcdf243-9822-4089-9cae-4a46417b6dc0","Type":"ContainerDied","Data":"e8d17472cef396ced990a10ecac98a4762149d480cd7b1355d84ce3ecdbcf8ad"} Mar 12 13:35:33 crc kubenswrapper[4778]: I0312 13:35:33.353725 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbcdf243-9822-4089-9cae-4a46417b6dc0","Type":"ContainerDied","Data":"8683a8e7549e2bde381c989208b511414ec56e8f866bf125984b6c4530f4d727"} Mar 12 13:35:33 crc kubenswrapper[4778]: I0312 13:35:33.353740 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbcdf243-9822-4089-9cae-4a46417b6dc0","Type":"ContainerDied","Data":"7114be6621c79d0d604f29d2e6499dffdde39edd34fd34dad40202ec3b0b6eef"} Mar 12 13:35:33 crc kubenswrapper[4778]: I0312 13:35:33.384651 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-xlfr7" Mar 12 13:35:33 crc kubenswrapper[4778]: I0312 13:35:33.459358 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f38c0efe-db9f-4afc-8693-0743c558d74f-dns-svc\") pod \"f38c0efe-db9f-4afc-8693-0743c558d74f\" (UID: \"f38c0efe-db9f-4afc-8693-0743c558d74f\") " Mar 12 13:35:33 crc kubenswrapper[4778]: I0312 13:35:33.459432 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f38c0efe-db9f-4afc-8693-0743c558d74f-dns-swift-storage-0\") pod \"f38c0efe-db9f-4afc-8693-0743c558d74f\" (UID: \"f38c0efe-db9f-4afc-8693-0743c558d74f\") " Mar 12 13:35:33 crc kubenswrapper[4778]: I0312 13:35:33.459549 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6fvsq\" (UniqueName: 
\"kubernetes.io/projected/f38c0efe-db9f-4afc-8693-0743c558d74f-kube-api-access-6fvsq\") pod \"f38c0efe-db9f-4afc-8693-0743c558d74f\" (UID: \"f38c0efe-db9f-4afc-8693-0743c558d74f\") " Mar 12 13:35:33 crc kubenswrapper[4778]: I0312 13:35:33.459609 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f38c0efe-db9f-4afc-8693-0743c558d74f-ovsdbserver-sb\") pod \"f38c0efe-db9f-4afc-8693-0743c558d74f\" (UID: \"f38c0efe-db9f-4afc-8693-0743c558d74f\") " Mar 12 13:35:33 crc kubenswrapper[4778]: I0312 13:35:33.459737 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f38c0efe-db9f-4afc-8693-0743c558d74f-config\") pod \"f38c0efe-db9f-4afc-8693-0743c558d74f\" (UID: \"f38c0efe-db9f-4afc-8693-0743c558d74f\") " Mar 12 13:35:33 crc kubenswrapper[4778]: I0312 13:35:33.459914 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f38c0efe-db9f-4afc-8693-0743c558d74f-ovsdbserver-nb\") pod \"f38c0efe-db9f-4afc-8693-0743c558d74f\" (UID: \"f38c0efe-db9f-4afc-8693-0743c558d74f\") " Mar 12 13:35:33 crc kubenswrapper[4778]: I0312 13:35:33.468486 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f38c0efe-db9f-4afc-8693-0743c558d74f-kube-api-access-6fvsq" (OuterVolumeSpecName: "kube-api-access-6fvsq") pod "f38c0efe-db9f-4afc-8693-0743c558d74f" (UID: "f38c0efe-db9f-4afc-8693-0743c558d74f"). InnerVolumeSpecName "kube-api-access-6fvsq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:35:33 crc kubenswrapper[4778]: I0312 13:35:33.517610 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f38c0efe-db9f-4afc-8693-0743c558d74f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f38c0efe-db9f-4afc-8693-0743c558d74f" (UID: "f38c0efe-db9f-4afc-8693-0743c558d74f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:35:33 crc kubenswrapper[4778]: I0312 13:35:33.519784 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f38c0efe-db9f-4afc-8693-0743c558d74f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f38c0efe-db9f-4afc-8693-0743c558d74f" (UID: "f38c0efe-db9f-4afc-8693-0743c558d74f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:35:33 crc kubenswrapper[4778]: I0312 13:35:33.531565 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f38c0efe-db9f-4afc-8693-0743c558d74f-config" (OuterVolumeSpecName: "config") pod "f38c0efe-db9f-4afc-8693-0743c558d74f" (UID: "f38c0efe-db9f-4afc-8693-0743c558d74f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:35:33 crc kubenswrapper[4778]: I0312 13:35:33.533663 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f38c0efe-db9f-4afc-8693-0743c558d74f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f38c0efe-db9f-4afc-8693-0743c558d74f" (UID: "f38c0efe-db9f-4afc-8693-0743c558d74f"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:35:33 crc kubenswrapper[4778]: I0312 13:35:33.542770 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f38c0efe-db9f-4afc-8693-0743c558d74f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f38c0efe-db9f-4afc-8693-0743c558d74f" (UID: "f38c0efe-db9f-4afc-8693-0743c558d74f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:35:33 crc kubenswrapper[4778]: I0312 13:35:33.562302 4778 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f38c0efe-db9f-4afc-8693-0743c558d74f-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 12 13:35:33 crc kubenswrapper[4778]: I0312 13:35:33.562526 4778 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f38c0efe-db9f-4afc-8693-0743c558d74f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 12 13:35:33 crc kubenswrapper[4778]: I0312 13:35:33.562587 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6fvsq\" (UniqueName: \"kubernetes.io/projected/f38c0efe-db9f-4afc-8693-0743c558d74f-kube-api-access-6fvsq\") on node \"crc\" DevicePath \"\"" Mar 12 13:35:33 crc kubenswrapper[4778]: I0312 13:35:33.562640 4778 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f38c0efe-db9f-4afc-8693-0743c558d74f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 12 13:35:33 crc kubenswrapper[4778]: I0312 13:35:33.562693 4778 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f38c0efe-db9f-4afc-8693-0743c558d74f-config\") on node \"crc\" DevicePath \"\"" Mar 12 13:35:33 crc kubenswrapper[4778]: I0312 13:35:33.562780 4778 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/f38c0efe-db9f-4afc-8693-0743c558d74f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 12 13:35:34 crc kubenswrapper[4778]: I0312 13:35:34.384384 4778 generic.go:334] "Generic (PLEG): container finished" podID="bbcdf243-9822-4089-9cae-4a46417b6dc0" containerID="f6f2930d9f70388763ddc5deccd561746f3634cc538e9cb6c56ef8628fd4e069" exitCode=0 Mar 12 13:35:34 crc kubenswrapper[4778]: I0312 13:35:34.384497 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-xlfr7" Mar 12 13:35:34 crc kubenswrapper[4778]: I0312 13:35:34.384532 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbcdf243-9822-4089-9cae-4a46417b6dc0","Type":"ContainerDied","Data":"f6f2930d9f70388763ddc5deccd561746f3634cc538e9cb6c56ef8628fd4e069"} Mar 12 13:35:34 crc kubenswrapper[4778]: I0312 13:35:34.421543 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-xlfr7"] Mar 12 13:35:34 crc kubenswrapper[4778]: I0312 13:35:34.430948 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-xlfr7"] Mar 12 13:35:34 crc kubenswrapper[4778]: I0312 13:35:34.672118 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 13:35:34 crc kubenswrapper[4778]: I0312 13:35:34.680276 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbcdf243-9822-4089-9cae-4a46417b6dc0-combined-ca-bundle\") pod \"bbcdf243-9822-4089-9cae-4a46417b6dc0\" (UID: \"bbcdf243-9822-4089-9cae-4a46417b6dc0\") " Mar 12 13:35:34 crc kubenswrapper[4778]: I0312 13:35:34.680355 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbcdf243-9822-4089-9cae-4a46417b6dc0-run-httpd\") pod \"bbcdf243-9822-4089-9cae-4a46417b6dc0\" (UID: \"bbcdf243-9822-4089-9cae-4a46417b6dc0\") " Mar 12 13:35:34 crc kubenswrapper[4778]: I0312 13:35:34.680392 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbcdf243-9822-4089-9cae-4a46417b6dc0-config-data\") pod \"bbcdf243-9822-4089-9cae-4a46417b6dc0\" (UID: \"bbcdf243-9822-4089-9cae-4a46417b6dc0\") " Mar 12 13:35:34 crc kubenswrapper[4778]: I0312 13:35:34.680412 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbcdf243-9822-4089-9cae-4a46417b6dc0-ceilometer-tls-certs\") pod \"bbcdf243-9822-4089-9cae-4a46417b6dc0\" (UID: \"bbcdf243-9822-4089-9cae-4a46417b6dc0\") " Mar 12 13:35:34 crc kubenswrapper[4778]: I0312 13:35:34.680472 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bbcdf243-9822-4089-9cae-4a46417b6dc0-sg-core-conf-yaml\") pod \"bbcdf243-9822-4089-9cae-4a46417b6dc0\" (UID: \"bbcdf243-9822-4089-9cae-4a46417b6dc0\") " Mar 12 13:35:34 crc kubenswrapper[4778]: I0312 13:35:34.680505 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/bbcdf243-9822-4089-9cae-4a46417b6dc0-log-httpd\") pod \"bbcdf243-9822-4089-9cae-4a46417b6dc0\" (UID: \"bbcdf243-9822-4089-9cae-4a46417b6dc0\") " Mar 12 13:35:34 crc kubenswrapper[4778]: I0312 13:35:34.680536 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbcdf243-9822-4089-9cae-4a46417b6dc0-scripts\") pod \"bbcdf243-9822-4089-9cae-4a46417b6dc0\" (UID: \"bbcdf243-9822-4089-9cae-4a46417b6dc0\") " Mar 12 13:35:34 crc kubenswrapper[4778]: I0312 13:35:34.680560 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fjcm\" (UniqueName: \"kubernetes.io/projected/bbcdf243-9822-4089-9cae-4a46417b6dc0-kube-api-access-2fjcm\") pod \"bbcdf243-9822-4089-9cae-4a46417b6dc0\" (UID: \"bbcdf243-9822-4089-9cae-4a46417b6dc0\") " Mar 12 13:35:34 crc kubenswrapper[4778]: I0312 13:35:34.681142 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbcdf243-9822-4089-9cae-4a46417b6dc0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bbcdf243-9822-4089-9cae-4a46417b6dc0" (UID: "bbcdf243-9822-4089-9cae-4a46417b6dc0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:35:34 crc kubenswrapper[4778]: I0312 13:35:34.681862 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbcdf243-9822-4089-9cae-4a46417b6dc0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bbcdf243-9822-4089-9cae-4a46417b6dc0" (UID: "bbcdf243-9822-4089-9cae-4a46417b6dc0"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:35:34 crc kubenswrapper[4778]: I0312 13:35:34.685463 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbcdf243-9822-4089-9cae-4a46417b6dc0-scripts" (OuterVolumeSpecName: "scripts") pod "bbcdf243-9822-4089-9cae-4a46417b6dc0" (UID: "bbcdf243-9822-4089-9cae-4a46417b6dc0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:35:34 crc kubenswrapper[4778]: I0312 13:35:34.685559 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbcdf243-9822-4089-9cae-4a46417b6dc0-kube-api-access-2fjcm" (OuterVolumeSpecName: "kube-api-access-2fjcm") pod "bbcdf243-9822-4089-9cae-4a46417b6dc0" (UID: "bbcdf243-9822-4089-9cae-4a46417b6dc0"). InnerVolumeSpecName "kube-api-access-2fjcm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:35:34 crc kubenswrapper[4778]: I0312 13:35:34.737381 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbcdf243-9822-4089-9cae-4a46417b6dc0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bbcdf243-9822-4089-9cae-4a46417b6dc0" (UID: "bbcdf243-9822-4089-9cae-4a46417b6dc0"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:35:34 crc kubenswrapper[4778]: I0312 13:35:34.744472 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbcdf243-9822-4089-9cae-4a46417b6dc0-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "bbcdf243-9822-4089-9cae-4a46417b6dc0" (UID: "bbcdf243-9822-4089-9cae-4a46417b6dc0"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:35:34 crc kubenswrapper[4778]: I0312 13:35:34.784994 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fjcm\" (UniqueName: \"kubernetes.io/projected/bbcdf243-9822-4089-9cae-4a46417b6dc0-kube-api-access-2fjcm\") on node \"crc\" DevicePath \"\"" Mar 12 13:35:34 crc kubenswrapper[4778]: I0312 13:35:34.785032 4778 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbcdf243-9822-4089-9cae-4a46417b6dc0-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 13:35:34 crc kubenswrapper[4778]: I0312 13:35:34.785045 4778 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbcdf243-9822-4089-9cae-4a46417b6dc0-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 13:35:34 crc kubenswrapper[4778]: I0312 13:35:34.785055 4778 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bbcdf243-9822-4089-9cae-4a46417b6dc0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 12 13:35:34 crc kubenswrapper[4778]: I0312 13:35:34.785066 4778 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bbcdf243-9822-4089-9cae-4a46417b6dc0-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 12 13:35:34 crc kubenswrapper[4778]: I0312 13:35:34.785077 4778 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbcdf243-9822-4089-9cae-4a46417b6dc0-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 13:35:34 crc kubenswrapper[4778]: I0312 13:35:34.786784 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbcdf243-9822-4089-9cae-4a46417b6dc0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bbcdf243-9822-4089-9cae-4a46417b6dc0" (UID: 
"bbcdf243-9822-4089-9cae-4a46417b6dc0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:35:34 crc kubenswrapper[4778]: I0312 13:35:34.803393 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbcdf243-9822-4089-9cae-4a46417b6dc0-config-data" (OuterVolumeSpecName: "config-data") pod "bbcdf243-9822-4089-9cae-4a46417b6dc0" (UID: "bbcdf243-9822-4089-9cae-4a46417b6dc0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:35:34 crc kubenswrapper[4778]: I0312 13:35:34.886681 4778 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbcdf243-9822-4089-9cae-4a46417b6dc0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:35:34 crc kubenswrapper[4778]: I0312 13:35:34.886713 4778 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbcdf243-9822-4089-9cae-4a46417b6dc0-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 13:35:35 crc kubenswrapper[4778]: I0312 13:35:35.396829 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bbcdf243-9822-4089-9cae-4a46417b6dc0","Type":"ContainerDied","Data":"e349198afaff0969683d3154a99e49c5908b20bc0714e59a3832484e545b97dc"} Mar 12 13:35:35 crc kubenswrapper[4778]: I0312 13:35:35.396885 4778 scope.go:117] "RemoveContainer" containerID="e8d17472cef396ced990a10ecac98a4762149d480cd7b1355d84ce3ecdbcf8ad" Mar 12 13:35:35 crc kubenswrapper[4778]: I0312 13:35:35.397019 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 12 13:35:35 crc kubenswrapper[4778]: I0312 13:35:35.440652 4778 scope.go:117] "RemoveContainer" containerID="8683a8e7549e2bde381c989208b511414ec56e8f866bf125984b6c4530f4d727" Mar 12 13:35:35 crc kubenswrapper[4778]: I0312 13:35:35.467820 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 12 13:35:35 crc kubenswrapper[4778]: I0312 13:35:35.474376 4778 scope.go:117] "RemoveContainer" containerID="7114be6621c79d0d604f29d2e6499dffdde39edd34fd34dad40202ec3b0b6eef" Mar 12 13:35:35 crc kubenswrapper[4778]: I0312 13:35:35.481978 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 12 13:35:35 crc kubenswrapper[4778]: I0312 13:35:35.526726 4778 scope.go:117] "RemoveContainer" containerID="f6f2930d9f70388763ddc5deccd561746f3634cc538e9cb6c56ef8628fd4e069" Mar 12 13:35:35 crc kubenswrapper[4778]: I0312 13:35:35.539319 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 12 13:35:35 crc kubenswrapper[4778]: E0312 13:35:35.540324 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbcdf243-9822-4089-9cae-4a46417b6dc0" containerName="ceilometer-central-agent" Mar 12 13:35:35 crc kubenswrapper[4778]: I0312 13:35:35.540539 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbcdf243-9822-4089-9cae-4a46417b6dc0" containerName="ceilometer-central-agent" Mar 12 13:35:35 crc kubenswrapper[4778]: E0312 13:35:35.540568 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f38c0efe-db9f-4afc-8693-0743c558d74f" containerName="dnsmasq-dns" Mar 12 13:35:35 crc kubenswrapper[4778]: I0312 13:35:35.540580 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="f38c0efe-db9f-4afc-8693-0743c558d74f" containerName="dnsmasq-dns" Mar 12 13:35:35 crc kubenswrapper[4778]: E0312 13:35:35.540594 4778 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="bbcdf243-9822-4089-9cae-4a46417b6dc0" containerName="sg-core" Mar 12 13:35:35 crc kubenswrapper[4778]: I0312 13:35:35.540603 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbcdf243-9822-4089-9cae-4a46417b6dc0" containerName="sg-core" Mar 12 13:35:35 crc kubenswrapper[4778]: E0312 13:35:35.540622 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbcdf243-9822-4089-9cae-4a46417b6dc0" containerName="ceilometer-notification-agent" Mar 12 13:35:35 crc kubenswrapper[4778]: I0312 13:35:35.540630 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbcdf243-9822-4089-9cae-4a46417b6dc0" containerName="ceilometer-notification-agent" Mar 12 13:35:35 crc kubenswrapper[4778]: E0312 13:35:35.540785 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f38c0efe-db9f-4afc-8693-0743c558d74f" containerName="init" Mar 12 13:35:35 crc kubenswrapper[4778]: I0312 13:35:35.540846 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="f38c0efe-db9f-4afc-8693-0743c558d74f" containerName="init" Mar 12 13:35:35 crc kubenswrapper[4778]: E0312 13:35:35.540919 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbcdf243-9822-4089-9cae-4a46417b6dc0" containerName="proxy-httpd" Mar 12 13:35:35 crc kubenswrapper[4778]: I0312 13:35:35.540966 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbcdf243-9822-4089-9cae-4a46417b6dc0" containerName="proxy-httpd" Mar 12 13:35:35 crc kubenswrapper[4778]: I0312 13:35:35.541307 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbcdf243-9822-4089-9cae-4a46417b6dc0" containerName="ceilometer-central-agent" Mar 12 13:35:35 crc kubenswrapper[4778]: I0312 13:35:35.541338 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbcdf243-9822-4089-9cae-4a46417b6dc0" containerName="proxy-httpd" Mar 12 13:35:35 crc kubenswrapper[4778]: I0312 13:35:35.541380 4778 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="bbcdf243-9822-4089-9cae-4a46417b6dc0" containerName="ceilometer-notification-agent" Mar 12 13:35:35 crc kubenswrapper[4778]: I0312 13:35:35.541391 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="f38c0efe-db9f-4afc-8693-0743c558d74f" containerName="dnsmasq-dns" Mar 12 13:35:35 crc kubenswrapper[4778]: I0312 13:35:35.541407 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbcdf243-9822-4089-9cae-4a46417b6dc0" containerName="sg-core" Mar 12 13:35:35 crc kubenswrapper[4778]: I0312 13:35:35.545007 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 13:35:35 crc kubenswrapper[4778]: I0312 13:35:35.548425 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 12 13:35:35 crc kubenswrapper[4778]: I0312 13:35:35.548759 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 12 13:35:35 crc kubenswrapper[4778]: I0312 13:35:35.548721 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 12 13:35:35 crc kubenswrapper[4778]: I0312 13:35:35.553389 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 13:35:35 crc kubenswrapper[4778]: I0312 13:35:35.724257 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9f1d0355-a73a-4a93-94fb-b439436cf1b1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9f1d0355-a73a-4a93-94fb-b439436cf1b1\") " pod="openstack/ceilometer-0" Mar 12 13:35:35 crc kubenswrapper[4778]: I0312 13:35:35.724645 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f1d0355-a73a-4a93-94fb-b439436cf1b1-scripts\") pod \"ceilometer-0\" (UID: 
\"9f1d0355-a73a-4a93-94fb-b439436cf1b1\") " pod="openstack/ceilometer-0" Mar 12 13:35:35 crc kubenswrapper[4778]: I0312 13:35:35.724688 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f1d0355-a73a-4a93-94fb-b439436cf1b1-run-httpd\") pod \"ceilometer-0\" (UID: \"9f1d0355-a73a-4a93-94fb-b439436cf1b1\") " pod="openstack/ceilometer-0" Mar 12 13:35:35 crc kubenswrapper[4778]: I0312 13:35:35.724914 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvzzb\" (UniqueName: \"kubernetes.io/projected/9f1d0355-a73a-4a93-94fb-b439436cf1b1-kube-api-access-tvzzb\") pod \"ceilometer-0\" (UID: \"9f1d0355-a73a-4a93-94fb-b439436cf1b1\") " pod="openstack/ceilometer-0" Mar 12 13:35:35 crc kubenswrapper[4778]: I0312 13:35:35.725068 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f1d0355-a73a-4a93-94fb-b439436cf1b1-config-data\") pod \"ceilometer-0\" (UID: \"9f1d0355-a73a-4a93-94fb-b439436cf1b1\") " pod="openstack/ceilometer-0" Mar 12 13:35:35 crc kubenswrapper[4778]: I0312 13:35:35.725112 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f1d0355-a73a-4a93-94fb-b439436cf1b1-log-httpd\") pod \"ceilometer-0\" (UID: \"9f1d0355-a73a-4a93-94fb-b439436cf1b1\") " pod="openstack/ceilometer-0" Mar 12 13:35:35 crc kubenswrapper[4778]: I0312 13:35:35.725139 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f1d0355-a73a-4a93-94fb-b439436cf1b1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9f1d0355-a73a-4a93-94fb-b439436cf1b1\") " pod="openstack/ceilometer-0" Mar 12 13:35:35 crc kubenswrapper[4778]: I0312 
13:35:35.725205 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f1d0355-a73a-4a93-94fb-b439436cf1b1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9f1d0355-a73a-4a93-94fb-b439436cf1b1\") " pod="openstack/ceilometer-0" Mar 12 13:35:35 crc kubenswrapper[4778]: I0312 13:35:35.826897 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f1d0355-a73a-4a93-94fb-b439436cf1b1-run-httpd\") pod \"ceilometer-0\" (UID: \"9f1d0355-a73a-4a93-94fb-b439436cf1b1\") " pod="openstack/ceilometer-0" Mar 12 13:35:35 crc kubenswrapper[4778]: I0312 13:35:35.827021 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvzzb\" (UniqueName: \"kubernetes.io/projected/9f1d0355-a73a-4a93-94fb-b439436cf1b1-kube-api-access-tvzzb\") pod \"ceilometer-0\" (UID: \"9f1d0355-a73a-4a93-94fb-b439436cf1b1\") " pod="openstack/ceilometer-0" Mar 12 13:35:35 crc kubenswrapper[4778]: I0312 13:35:35.827097 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f1d0355-a73a-4a93-94fb-b439436cf1b1-config-data\") pod \"ceilometer-0\" (UID: \"9f1d0355-a73a-4a93-94fb-b439436cf1b1\") " pod="openstack/ceilometer-0" Mar 12 13:35:35 crc kubenswrapper[4778]: I0312 13:35:35.827132 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f1d0355-a73a-4a93-94fb-b439436cf1b1-log-httpd\") pod \"ceilometer-0\" (UID: \"9f1d0355-a73a-4a93-94fb-b439436cf1b1\") " pod="openstack/ceilometer-0" Mar 12 13:35:35 crc kubenswrapper[4778]: I0312 13:35:35.827157 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9f1d0355-a73a-4a93-94fb-b439436cf1b1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9f1d0355-a73a-4a93-94fb-b439436cf1b1\") " pod="openstack/ceilometer-0" Mar 12 13:35:35 crc kubenswrapper[4778]: I0312 13:35:35.827203 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f1d0355-a73a-4a93-94fb-b439436cf1b1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9f1d0355-a73a-4a93-94fb-b439436cf1b1\") " pod="openstack/ceilometer-0" Mar 12 13:35:35 crc kubenswrapper[4778]: I0312 13:35:35.827262 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9f1d0355-a73a-4a93-94fb-b439436cf1b1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9f1d0355-a73a-4a93-94fb-b439436cf1b1\") " pod="openstack/ceilometer-0" Mar 12 13:35:35 crc kubenswrapper[4778]: I0312 13:35:35.827306 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f1d0355-a73a-4a93-94fb-b439436cf1b1-scripts\") pod \"ceilometer-0\" (UID: \"9f1d0355-a73a-4a93-94fb-b439436cf1b1\") " pod="openstack/ceilometer-0" Mar 12 13:35:35 crc kubenswrapper[4778]: I0312 13:35:35.827368 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f1d0355-a73a-4a93-94fb-b439436cf1b1-run-httpd\") pod \"ceilometer-0\" (UID: \"9f1d0355-a73a-4a93-94fb-b439436cf1b1\") " pod="openstack/ceilometer-0" Mar 12 13:35:35 crc kubenswrapper[4778]: I0312 13:35:35.827731 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f1d0355-a73a-4a93-94fb-b439436cf1b1-log-httpd\") pod \"ceilometer-0\" (UID: \"9f1d0355-a73a-4a93-94fb-b439436cf1b1\") " pod="openstack/ceilometer-0" Mar 12 13:35:35 crc kubenswrapper[4778]: I0312 13:35:35.833983 4778 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9f1d0355-a73a-4a93-94fb-b439436cf1b1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9f1d0355-a73a-4a93-94fb-b439436cf1b1\") " pod="openstack/ceilometer-0" Mar 12 13:35:35 crc kubenswrapper[4778]: I0312 13:35:35.834275 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f1d0355-a73a-4a93-94fb-b439436cf1b1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9f1d0355-a73a-4a93-94fb-b439436cf1b1\") " pod="openstack/ceilometer-0" Mar 12 13:35:35 crc kubenswrapper[4778]: I0312 13:35:35.834377 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f1d0355-a73a-4a93-94fb-b439436cf1b1-scripts\") pod \"ceilometer-0\" (UID: \"9f1d0355-a73a-4a93-94fb-b439436cf1b1\") " pod="openstack/ceilometer-0" Mar 12 13:35:35 crc kubenswrapper[4778]: I0312 13:35:35.834966 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f1d0355-a73a-4a93-94fb-b439436cf1b1-config-data\") pod \"ceilometer-0\" (UID: \"9f1d0355-a73a-4a93-94fb-b439436cf1b1\") " pod="openstack/ceilometer-0" Mar 12 13:35:35 crc kubenswrapper[4778]: I0312 13:35:35.842998 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f1d0355-a73a-4a93-94fb-b439436cf1b1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9f1d0355-a73a-4a93-94fb-b439436cf1b1\") " pod="openstack/ceilometer-0" Mar 12 13:35:35 crc kubenswrapper[4778]: I0312 13:35:35.848570 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvzzb\" (UniqueName: \"kubernetes.io/projected/9f1d0355-a73a-4a93-94fb-b439436cf1b1-kube-api-access-tvzzb\") pod \"ceilometer-0\" (UID: \"9f1d0355-a73a-4a93-94fb-b439436cf1b1\") 
" pod="openstack/ceilometer-0" Mar 12 13:35:35 crc kubenswrapper[4778]: I0312 13:35:35.864787 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 12 13:35:36 crc kubenswrapper[4778]: I0312 13:35:36.270739 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbcdf243-9822-4089-9cae-4a46417b6dc0" path="/var/lib/kubelet/pods/bbcdf243-9822-4089-9cae-4a46417b6dc0/volumes" Mar 12 13:35:36 crc kubenswrapper[4778]: I0312 13:35:36.272407 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f38c0efe-db9f-4afc-8693-0743c558d74f" path="/var/lib/kubelet/pods/f38c0efe-db9f-4afc-8693-0743c558d74f/volumes" Mar 12 13:35:36 crc kubenswrapper[4778]: I0312 13:35:36.387346 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 12 13:35:36 crc kubenswrapper[4778]: I0312 13:35:36.429145 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f1d0355-a73a-4a93-94fb-b439436cf1b1","Type":"ContainerStarted","Data":"b98e3b9d38fb96804c09915afce6d36b14f7bb99c6ece9b42a84918c6e3c063e"} Mar 12 13:35:37 crc kubenswrapper[4778]: I0312 13:35:37.439465 4778 generic.go:334] "Generic (PLEG): container finished" podID="eaa0985c-3171-40c5-8e5c-ab82a9fa6fc3" containerID="a3547232ddc46df5ded5cc24fff2ec3e7c8bb4fb4c52277d66e27c319ec41995" exitCode=0 Mar 12 13:35:37 crc kubenswrapper[4778]: I0312 13:35:37.439550 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-9xw6b" event={"ID":"eaa0985c-3171-40c5-8e5c-ab82a9fa6fc3","Type":"ContainerDied","Data":"a3547232ddc46df5ded5cc24fff2ec3e7c8bb4fb4c52277d66e27c319ec41995"} Mar 12 13:35:37 crc kubenswrapper[4778]: I0312 13:35:37.442157 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"9f1d0355-a73a-4a93-94fb-b439436cf1b1","Type":"ContainerStarted","Data":"86f3f82c78baa256cc0c6678d7d29c169da5465adc45d999aa4364ff7af57e50"} Mar 12 13:35:38 crc kubenswrapper[4778]: I0312 13:35:38.363158 4778 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-757b4f8459-xlfr7" podUID="f38c0efe-db9f-4afc-8693-0743c558d74f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.194:5353: i/o timeout" Mar 12 13:35:38 crc kubenswrapper[4778]: I0312 13:35:38.452425 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f1d0355-a73a-4a93-94fb-b439436cf1b1","Type":"ContainerStarted","Data":"93ca9bbc7604492440314a22e964e6886cf6aa0937e7a02b01d0270505ad8f48"} Mar 12 13:35:38 crc kubenswrapper[4778]: I0312 13:35:38.804856 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-9xw6b" Mar 12 13:35:38 crc kubenswrapper[4778]: I0312 13:35:38.994326 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaa0985c-3171-40c5-8e5c-ab82a9fa6fc3-config-data\") pod \"eaa0985c-3171-40c5-8e5c-ab82a9fa6fc3\" (UID: \"eaa0985c-3171-40c5-8e5c-ab82a9fa6fc3\") " Mar 12 13:35:38 crc kubenswrapper[4778]: I0312 13:35:38.994437 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsjt8\" (UniqueName: \"kubernetes.io/projected/eaa0985c-3171-40c5-8e5c-ab82a9fa6fc3-kube-api-access-zsjt8\") pod \"eaa0985c-3171-40c5-8e5c-ab82a9fa6fc3\" (UID: \"eaa0985c-3171-40c5-8e5c-ab82a9fa6fc3\") " Mar 12 13:35:38 crc kubenswrapper[4778]: I0312 13:35:38.994480 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eaa0985c-3171-40c5-8e5c-ab82a9fa6fc3-scripts\") pod \"eaa0985c-3171-40c5-8e5c-ab82a9fa6fc3\" (UID: \"eaa0985c-3171-40c5-8e5c-ab82a9fa6fc3\") 
" Mar 12 13:35:38 crc kubenswrapper[4778]: I0312 13:35:38.994500 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaa0985c-3171-40c5-8e5c-ab82a9fa6fc3-combined-ca-bundle\") pod \"eaa0985c-3171-40c5-8e5c-ab82a9fa6fc3\" (UID: \"eaa0985c-3171-40c5-8e5c-ab82a9fa6fc3\") " Mar 12 13:35:38 crc kubenswrapper[4778]: I0312 13:35:38.998293 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaa0985c-3171-40c5-8e5c-ab82a9fa6fc3-scripts" (OuterVolumeSpecName: "scripts") pod "eaa0985c-3171-40c5-8e5c-ab82a9fa6fc3" (UID: "eaa0985c-3171-40c5-8e5c-ab82a9fa6fc3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:35:39 crc kubenswrapper[4778]: I0312 13:35:39.010714 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eaa0985c-3171-40c5-8e5c-ab82a9fa6fc3-kube-api-access-zsjt8" (OuterVolumeSpecName: "kube-api-access-zsjt8") pod "eaa0985c-3171-40c5-8e5c-ab82a9fa6fc3" (UID: "eaa0985c-3171-40c5-8e5c-ab82a9fa6fc3"). InnerVolumeSpecName "kube-api-access-zsjt8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:35:39 crc kubenswrapper[4778]: I0312 13:35:39.037364 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaa0985c-3171-40c5-8e5c-ab82a9fa6fc3-config-data" (OuterVolumeSpecName: "config-data") pod "eaa0985c-3171-40c5-8e5c-ab82a9fa6fc3" (UID: "eaa0985c-3171-40c5-8e5c-ab82a9fa6fc3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:35:39 crc kubenswrapper[4778]: I0312 13:35:39.046378 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaa0985c-3171-40c5-8e5c-ab82a9fa6fc3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eaa0985c-3171-40c5-8e5c-ab82a9fa6fc3" (UID: "eaa0985c-3171-40c5-8e5c-ab82a9fa6fc3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:35:39 crc kubenswrapper[4778]: I0312 13:35:39.096540 4778 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaa0985c-3171-40c5-8e5c-ab82a9fa6fc3-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 13:35:39 crc kubenswrapper[4778]: I0312 13:35:39.096587 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zsjt8\" (UniqueName: \"kubernetes.io/projected/eaa0985c-3171-40c5-8e5c-ab82a9fa6fc3-kube-api-access-zsjt8\") on node \"crc\" DevicePath \"\"" Mar 12 13:35:39 crc kubenswrapper[4778]: I0312 13:35:39.096598 4778 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eaa0985c-3171-40c5-8e5c-ab82a9fa6fc3-scripts\") on node \"crc\" DevicePath \"\"" Mar 12 13:35:39 crc kubenswrapper[4778]: I0312 13:35:39.096609 4778 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaa0985c-3171-40c5-8e5c-ab82a9fa6fc3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:35:39 crc kubenswrapper[4778]: I0312 13:35:39.462223 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-9xw6b" event={"ID":"eaa0985c-3171-40c5-8e5c-ab82a9fa6fc3","Type":"ContainerDied","Data":"fa86e251def50fe26c7890455a492370653fec6579cf29dd2f2d83fb340958c7"} Mar 12 13:35:39 crc kubenswrapper[4778]: I0312 13:35:39.462569 4778 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="fa86e251def50fe26c7890455a492370653fec6579cf29dd2f2d83fb340958c7" Mar 12 13:35:39 crc kubenswrapper[4778]: I0312 13:35:39.462383 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-9xw6b" Mar 12 13:35:39 crc kubenswrapper[4778]: I0312 13:35:39.465467 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f1d0355-a73a-4a93-94fb-b439436cf1b1","Type":"ContainerStarted","Data":"9f3f73eb3a2128645ff726985f9081f7cca012825c849d7c5d88697344bd1635"} Mar 12 13:35:39 crc kubenswrapper[4778]: I0312 13:35:39.643833 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 12 13:35:39 crc kubenswrapper[4778]: I0312 13:35:39.644424 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a186b68c-e472-4507-abc7-0b90ca321ded" containerName="nova-api-log" containerID="cri-o://7a2feca3aab730eaaa00a7eae47b95ddd0e61bb831e7003c0a078d8f2460d397" gracePeriod=30 Mar 12 13:35:39 crc kubenswrapper[4778]: I0312 13:35:39.645045 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a186b68c-e472-4507-abc7-0b90ca321ded" containerName="nova-api-api" containerID="cri-o://eaf94cdd79eea972e02bd1682954aa51e1069e729c7b0ae6e70982f28a03bc11" gracePeriod=30 Mar 12 13:35:39 crc kubenswrapper[4778]: I0312 13:35:39.664491 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 13:35:39 crc kubenswrapper[4778]: I0312 13:35:39.664765 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="5bc6f909-0ff5-4f18-a480-fd8e6cda5e53" containerName="nova-scheduler-scheduler" containerID="cri-o://dae634b315afa4ad3533dd1e5963155a0f891be96e620c50199777eae097db0a" gracePeriod=30 Mar 12 13:35:39 crc kubenswrapper[4778]: I0312 13:35:39.719856 4778 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 13:35:39 crc kubenswrapper[4778]: I0312 13:35:39.720276 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c2700355-e048-4458-b430-8d149a08d624" containerName="nova-metadata-log" containerID="cri-o://ca24f0adae0376e480d75f053859a82c49878f39def1a1831162119084f0dc4d" gracePeriod=30 Mar 12 13:35:39 crc kubenswrapper[4778]: I0312 13:35:39.720494 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c2700355-e048-4458-b430-8d149a08d624" containerName="nova-metadata-metadata" containerID="cri-o://18451788e6f6468b69f6150e59f0635d08ad6db357c610ae673d149c136dfeb2" gracePeriod=30 Mar 12 13:35:41 crc kubenswrapper[4778]: E0312 13:35:41.780436 4778 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dae634b315afa4ad3533dd1e5963155a0f891be96e620c50199777eae097db0a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 12 13:35:41 crc kubenswrapper[4778]: E0312 13:35:41.833374 4778 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dae634b315afa4ad3533dd1e5963155a0f891be96e620c50199777eae097db0a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 12 13:35:41 crc kubenswrapper[4778]: E0312 13:35:41.883458 4778 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dae634b315afa4ad3533dd1e5963155a0f891be96e620c50199777eae097db0a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 12 13:35:41 crc 
kubenswrapper[4778]: E0312 13:35:41.883535 4778 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="5bc6f909-0ff5-4f18-a480-fd8e6cda5e53" containerName="nova-scheduler-scheduler" Mar 12 13:35:41 crc kubenswrapper[4778]: I0312 13:35:41.891444 4778 generic.go:334] "Generic (PLEG): container finished" podID="c2700355-e048-4458-b430-8d149a08d624" containerID="ca24f0adae0376e480d75f053859a82c49878f39def1a1831162119084f0dc4d" exitCode=143 Mar 12 13:35:41 crc kubenswrapper[4778]: I0312 13:35:41.891538 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c2700355-e048-4458-b430-8d149a08d624","Type":"ContainerDied","Data":"ca24f0adae0376e480d75f053859a82c49878f39def1a1831162119084f0dc4d"} Mar 12 13:35:41 crc kubenswrapper[4778]: I0312 13:35:41.907696 4778 generic.go:334] "Generic (PLEG): container finished" podID="a186b68c-e472-4507-abc7-0b90ca321ded" containerID="7a2feca3aab730eaaa00a7eae47b95ddd0e61bb831e7003c0a078d8f2460d397" exitCode=143 Mar 12 13:35:41 crc kubenswrapper[4778]: I0312 13:35:41.907817 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a186b68c-e472-4507-abc7-0b90ca321ded","Type":"ContainerDied","Data":"7a2feca3aab730eaaa00a7eae47b95ddd0e61bb831e7003c0a078d8f2460d397"} Mar 12 13:35:42 crc kubenswrapper[4778]: I0312 13:35:42.599563 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 12 13:35:42 crc kubenswrapper[4778]: I0312 13:35:42.680074 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a186b68c-e472-4507-abc7-0b90ca321ded-combined-ca-bundle\") pod \"a186b68c-e472-4507-abc7-0b90ca321ded\" (UID: \"a186b68c-e472-4507-abc7-0b90ca321ded\") " Mar 12 13:35:42 crc kubenswrapper[4778]: I0312 13:35:42.680223 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a186b68c-e472-4507-abc7-0b90ca321ded-public-tls-certs\") pod \"a186b68c-e472-4507-abc7-0b90ca321ded\" (UID: \"a186b68c-e472-4507-abc7-0b90ca321ded\") " Mar 12 13:35:42 crc kubenswrapper[4778]: I0312 13:35:42.680270 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27cs2\" (UniqueName: \"kubernetes.io/projected/a186b68c-e472-4507-abc7-0b90ca321ded-kube-api-access-27cs2\") pod \"a186b68c-e472-4507-abc7-0b90ca321ded\" (UID: \"a186b68c-e472-4507-abc7-0b90ca321ded\") " Mar 12 13:35:42 crc kubenswrapper[4778]: I0312 13:35:42.680311 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a186b68c-e472-4507-abc7-0b90ca321ded-config-data\") pod \"a186b68c-e472-4507-abc7-0b90ca321ded\" (UID: \"a186b68c-e472-4507-abc7-0b90ca321ded\") " Mar 12 13:35:42 crc kubenswrapper[4778]: I0312 13:35:42.680389 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a186b68c-e472-4507-abc7-0b90ca321ded-internal-tls-certs\") pod \"a186b68c-e472-4507-abc7-0b90ca321ded\" (UID: \"a186b68c-e472-4507-abc7-0b90ca321ded\") " Mar 12 13:35:42 crc kubenswrapper[4778]: I0312 13:35:42.680412 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/a186b68c-e472-4507-abc7-0b90ca321ded-logs\") pod \"a186b68c-e472-4507-abc7-0b90ca321ded\" (UID: \"a186b68c-e472-4507-abc7-0b90ca321ded\") " Mar 12 13:35:42 crc kubenswrapper[4778]: I0312 13:35:42.681155 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a186b68c-e472-4507-abc7-0b90ca321ded-logs" (OuterVolumeSpecName: "logs") pod "a186b68c-e472-4507-abc7-0b90ca321ded" (UID: "a186b68c-e472-4507-abc7-0b90ca321ded"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:35:42 crc kubenswrapper[4778]: I0312 13:35:42.687244 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a186b68c-e472-4507-abc7-0b90ca321ded-kube-api-access-27cs2" (OuterVolumeSpecName: "kube-api-access-27cs2") pod "a186b68c-e472-4507-abc7-0b90ca321ded" (UID: "a186b68c-e472-4507-abc7-0b90ca321ded"). InnerVolumeSpecName "kube-api-access-27cs2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:35:42 crc kubenswrapper[4778]: I0312 13:35:42.752489 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a186b68c-e472-4507-abc7-0b90ca321ded-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a186b68c-e472-4507-abc7-0b90ca321ded" (UID: "a186b68c-e472-4507-abc7-0b90ca321ded"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:35:42 crc kubenswrapper[4778]: I0312 13:35:42.754878 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a186b68c-e472-4507-abc7-0b90ca321ded-config-data" (OuterVolumeSpecName: "config-data") pod "a186b68c-e472-4507-abc7-0b90ca321ded" (UID: "a186b68c-e472-4507-abc7-0b90ca321ded"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:35:42 crc kubenswrapper[4778]: I0312 13:35:42.758001 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a186b68c-e472-4507-abc7-0b90ca321ded-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a186b68c-e472-4507-abc7-0b90ca321ded" (UID: "a186b68c-e472-4507-abc7-0b90ca321ded"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:35:42 crc kubenswrapper[4778]: I0312 13:35:42.782775 4778 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a186b68c-e472-4507-abc7-0b90ca321ded-logs\") on node \"crc\" DevicePath \"\"" Mar 12 13:35:42 crc kubenswrapper[4778]: I0312 13:35:42.782817 4778 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a186b68c-e472-4507-abc7-0b90ca321ded-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:35:42 crc kubenswrapper[4778]: I0312 13:35:42.782828 4778 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a186b68c-e472-4507-abc7-0b90ca321ded-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 13:35:42 crc kubenswrapper[4778]: I0312 13:35:42.782837 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27cs2\" (UniqueName: \"kubernetes.io/projected/a186b68c-e472-4507-abc7-0b90ca321ded-kube-api-access-27cs2\") on node \"crc\" DevicePath \"\"" Mar 12 13:35:42 crc kubenswrapper[4778]: I0312 13:35:42.782845 4778 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a186b68c-e472-4507-abc7-0b90ca321ded-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 13:35:42 crc kubenswrapper[4778]: I0312 13:35:42.785558 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/a186b68c-e472-4507-abc7-0b90ca321ded-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a186b68c-e472-4507-abc7-0b90ca321ded" (UID: "a186b68c-e472-4507-abc7-0b90ca321ded"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:35:42 crc kubenswrapper[4778]: I0312 13:35:42.884767 4778 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a186b68c-e472-4507-abc7-0b90ca321ded-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 13:35:42 crc kubenswrapper[4778]: I0312 13:35:42.904906 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 13:35:42 crc kubenswrapper[4778]: I0312 13:35:42.921030 4778 generic.go:334] "Generic (PLEG): container finished" podID="5bc6f909-0ff5-4f18-a480-fd8e6cda5e53" containerID="dae634b315afa4ad3533dd1e5963155a0f891be96e620c50199777eae097db0a" exitCode=0 Mar 12 13:35:42 crc kubenswrapper[4778]: I0312 13:35:42.921090 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5bc6f909-0ff5-4f18-a480-fd8e6cda5e53","Type":"ContainerDied","Data":"dae634b315afa4ad3533dd1e5963155a0f891be96e620c50199777eae097db0a"} Mar 12 13:35:42 crc kubenswrapper[4778]: I0312 13:35:42.921115 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5bc6f909-0ff5-4f18-a480-fd8e6cda5e53","Type":"ContainerDied","Data":"65fd579a6354c4ba6b71d144e33926dc6f3bb53ede6b497f49fd5c5be99e7ee4"} Mar 12 13:35:42 crc kubenswrapper[4778]: I0312 13:35:42.921133 4778 scope.go:117] "RemoveContainer" containerID="dae634b315afa4ad3533dd1e5963155a0f891be96e620c50199777eae097db0a" Mar 12 13:35:42 crc kubenswrapper[4778]: I0312 13:35:42.921298 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 13:35:42 crc kubenswrapper[4778]: I0312 13:35:42.935963 4778 generic.go:334] "Generic (PLEG): container finished" podID="a186b68c-e472-4507-abc7-0b90ca321ded" containerID="eaf94cdd79eea972e02bd1682954aa51e1069e729c7b0ae6e70982f28a03bc11" exitCode=0 Mar 12 13:35:42 crc kubenswrapper[4778]: I0312 13:35:42.936040 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a186b68c-e472-4507-abc7-0b90ca321ded","Type":"ContainerDied","Data":"eaf94cdd79eea972e02bd1682954aa51e1069e729c7b0ae6e70982f28a03bc11"} Mar 12 13:35:42 crc kubenswrapper[4778]: I0312 13:35:42.936074 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a186b68c-e472-4507-abc7-0b90ca321ded","Type":"ContainerDied","Data":"f5dd18bc2fa3f41fb9117d4f1e4c37d3b6b9987574f509d03e4a076b1981eb8c"} Mar 12 13:35:42 crc kubenswrapper[4778]: I0312 13:35:42.936209 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 12 13:35:42 crc kubenswrapper[4778]: I0312 13:35:42.946852 4778 scope.go:117] "RemoveContainer" containerID="dae634b315afa4ad3533dd1e5963155a0f891be96e620c50199777eae097db0a" Mar 12 13:35:42 crc kubenswrapper[4778]: E0312 13:35:42.948008 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dae634b315afa4ad3533dd1e5963155a0f891be96e620c50199777eae097db0a\": container with ID starting with dae634b315afa4ad3533dd1e5963155a0f891be96e620c50199777eae097db0a not found: ID does not exist" containerID="dae634b315afa4ad3533dd1e5963155a0f891be96e620c50199777eae097db0a" Mar 12 13:35:42 crc kubenswrapper[4778]: I0312 13:35:42.948065 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dae634b315afa4ad3533dd1e5963155a0f891be96e620c50199777eae097db0a"} err="failed to get container status \"dae634b315afa4ad3533dd1e5963155a0f891be96e620c50199777eae097db0a\": rpc error: code = NotFound desc = could not find container \"dae634b315afa4ad3533dd1e5963155a0f891be96e620c50199777eae097db0a\": container with ID starting with dae634b315afa4ad3533dd1e5963155a0f891be96e620c50199777eae097db0a not found: ID does not exist" Mar 12 13:35:42 crc kubenswrapper[4778]: I0312 13:35:42.948099 4778 scope.go:117] "RemoveContainer" containerID="eaf94cdd79eea972e02bd1682954aa51e1069e729c7b0ae6e70982f28a03bc11" Mar 12 13:35:42 crc kubenswrapper[4778]: I0312 13:35:42.981278 4778 scope.go:117] "RemoveContainer" containerID="7a2feca3aab730eaaa00a7eae47b95ddd0e61bb831e7003c0a078d8f2460d397" Mar 12 13:35:42 crc kubenswrapper[4778]: I0312 13:35:42.986414 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bc6f909-0ff5-4f18-a480-fd8e6cda5e53-config-data\") pod \"5bc6f909-0ff5-4f18-a480-fd8e6cda5e53\" (UID: \"5bc6f909-0ff5-4f18-a480-fd8e6cda5e53\") " Mar 12 
13:35:42 crc kubenswrapper[4778]: I0312 13:35:42.986586 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bc6f909-0ff5-4f18-a480-fd8e6cda5e53-combined-ca-bundle\") pod \"5bc6f909-0ff5-4f18-a480-fd8e6cda5e53\" (UID: \"5bc6f909-0ff5-4f18-a480-fd8e6cda5e53\") " Mar 12 13:35:42 crc kubenswrapper[4778]: I0312 13:35:42.986796 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xshlg\" (UniqueName: \"kubernetes.io/projected/5bc6f909-0ff5-4f18-a480-fd8e6cda5e53-kube-api-access-xshlg\") pod \"5bc6f909-0ff5-4f18-a480-fd8e6cda5e53\" (UID: \"5bc6f909-0ff5-4f18-a480-fd8e6cda5e53\") " Mar 12 13:35:43 crc kubenswrapper[4778]: I0312 13:35:43.001240 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bc6f909-0ff5-4f18-a480-fd8e6cda5e53-kube-api-access-xshlg" (OuterVolumeSpecName: "kube-api-access-xshlg") pod "5bc6f909-0ff5-4f18-a480-fd8e6cda5e53" (UID: "5bc6f909-0ff5-4f18-a480-fd8e6cda5e53"). InnerVolumeSpecName "kube-api-access-xshlg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:35:43 crc kubenswrapper[4778]: I0312 13:35:43.039413 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bc6f909-0ff5-4f18-a480-fd8e6cda5e53-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5bc6f909-0ff5-4f18-a480-fd8e6cda5e53" (UID: "5bc6f909-0ff5-4f18-a480-fd8e6cda5e53"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:35:43 crc kubenswrapper[4778]: I0312 13:35:43.056632 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bc6f909-0ff5-4f18-a480-fd8e6cda5e53-config-data" (OuterVolumeSpecName: "config-data") pod "5bc6f909-0ff5-4f18-a480-fd8e6cda5e53" (UID: "5bc6f909-0ff5-4f18-a480-fd8e6cda5e53"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:35:43 crc kubenswrapper[4778]: I0312 13:35:43.106517 4778 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bc6f909-0ff5-4f18-a480-fd8e6cda5e53-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 13:35:43 crc kubenswrapper[4778]: I0312 13:35:43.106837 4778 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bc6f909-0ff5-4f18-a480-fd8e6cda5e53-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:35:43 crc kubenswrapper[4778]: I0312 13:35:43.106915 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xshlg\" (UniqueName: \"kubernetes.io/projected/5bc6f909-0ff5-4f18-a480-fd8e6cda5e53-kube-api-access-xshlg\") on node \"crc\" DevicePath \"\"" Mar 12 13:35:43 crc kubenswrapper[4778]: I0312 13:35:43.236514 4778 scope.go:117] "RemoveContainer" containerID="eaf94cdd79eea972e02bd1682954aa51e1069e729c7b0ae6e70982f28a03bc11" Mar 12 13:35:43 crc kubenswrapper[4778]: E0312 13:35:43.237101 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eaf94cdd79eea972e02bd1682954aa51e1069e729c7b0ae6e70982f28a03bc11\": container with ID starting with eaf94cdd79eea972e02bd1682954aa51e1069e729c7b0ae6e70982f28a03bc11 not found: ID does not exist" containerID="eaf94cdd79eea972e02bd1682954aa51e1069e729c7b0ae6e70982f28a03bc11" Mar 12 13:35:43 crc kubenswrapper[4778]: I0312 13:35:43.237130 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaf94cdd79eea972e02bd1682954aa51e1069e729c7b0ae6e70982f28a03bc11"} err="failed to get container status \"eaf94cdd79eea972e02bd1682954aa51e1069e729c7b0ae6e70982f28a03bc11\": rpc error: code = NotFound desc = could not find container 
\"eaf94cdd79eea972e02bd1682954aa51e1069e729c7b0ae6e70982f28a03bc11\": container with ID starting with eaf94cdd79eea972e02bd1682954aa51e1069e729c7b0ae6e70982f28a03bc11 not found: ID does not exist" Mar 12 13:35:43 crc kubenswrapper[4778]: I0312 13:35:43.237166 4778 scope.go:117] "RemoveContainer" containerID="7a2feca3aab730eaaa00a7eae47b95ddd0e61bb831e7003c0a078d8f2460d397" Mar 12 13:35:43 crc kubenswrapper[4778]: E0312 13:35:43.237474 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a2feca3aab730eaaa00a7eae47b95ddd0e61bb831e7003c0a078d8f2460d397\": container with ID starting with 7a2feca3aab730eaaa00a7eae47b95ddd0e61bb831e7003c0a078d8f2460d397 not found: ID does not exist" containerID="7a2feca3aab730eaaa00a7eae47b95ddd0e61bb831e7003c0a078d8f2460d397" Mar 12 13:35:43 crc kubenswrapper[4778]: I0312 13:35:43.237525 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a2feca3aab730eaaa00a7eae47b95ddd0e61bb831e7003c0a078d8f2460d397"} err="failed to get container status \"7a2feca3aab730eaaa00a7eae47b95ddd0e61bb831e7003c0a078d8f2460d397\": rpc error: code = NotFound desc = could not find container \"7a2feca3aab730eaaa00a7eae47b95ddd0e61bb831e7003c0a078d8f2460d397\": container with ID starting with 7a2feca3aab730eaaa00a7eae47b95ddd0e61bb831e7003c0a078d8f2460d397 not found: ID does not exist" Mar 12 13:35:43 crc kubenswrapper[4778]: I0312 13:35:43.245597 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 12 13:35:43 crc kubenswrapper[4778]: I0312 13:35:43.256641 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 12 13:35:43 crc kubenswrapper[4778]: I0312 13:35:43.281710 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 12 13:35:43 crc kubenswrapper[4778]: E0312 13:35:43.282281 4778 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="eaa0985c-3171-40c5-8e5c-ab82a9fa6fc3" containerName="nova-manage" Mar 12 13:35:43 crc kubenswrapper[4778]: I0312 13:35:43.282298 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaa0985c-3171-40c5-8e5c-ab82a9fa6fc3" containerName="nova-manage" Mar 12 13:35:43 crc kubenswrapper[4778]: E0312 13:35:43.282329 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a186b68c-e472-4507-abc7-0b90ca321ded" containerName="nova-api-api" Mar 12 13:35:43 crc kubenswrapper[4778]: I0312 13:35:43.282340 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="a186b68c-e472-4507-abc7-0b90ca321ded" containerName="nova-api-api" Mar 12 13:35:43 crc kubenswrapper[4778]: E0312 13:35:43.282364 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bc6f909-0ff5-4f18-a480-fd8e6cda5e53" containerName="nova-scheduler-scheduler" Mar 12 13:35:43 crc kubenswrapper[4778]: I0312 13:35:43.282372 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bc6f909-0ff5-4f18-a480-fd8e6cda5e53" containerName="nova-scheduler-scheduler" Mar 12 13:35:43 crc kubenswrapper[4778]: E0312 13:35:43.282386 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a186b68c-e472-4507-abc7-0b90ca321ded" containerName="nova-api-log" Mar 12 13:35:43 crc kubenswrapper[4778]: I0312 13:35:43.282394 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="a186b68c-e472-4507-abc7-0b90ca321ded" containerName="nova-api-log" Mar 12 13:35:43 crc kubenswrapper[4778]: I0312 13:35:43.282627 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="a186b68c-e472-4507-abc7-0b90ca321ded" containerName="nova-api-log" Mar 12 13:35:43 crc kubenswrapper[4778]: I0312 13:35:43.282642 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="a186b68c-e472-4507-abc7-0b90ca321ded" containerName="nova-api-api" Mar 12 13:35:43 crc kubenswrapper[4778]: I0312 13:35:43.282664 4778 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="eaa0985c-3171-40c5-8e5c-ab82a9fa6fc3" containerName="nova-manage" Mar 12 13:35:43 crc kubenswrapper[4778]: I0312 13:35:43.282678 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bc6f909-0ff5-4f18-a480-fd8e6cda5e53" containerName="nova-scheduler-scheduler" Mar 12 13:35:43 crc kubenswrapper[4778]: I0312 13:35:43.284248 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 12 13:35:43 crc kubenswrapper[4778]: I0312 13:35:43.289243 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Mar 12 13:35:43 crc kubenswrapper[4778]: I0312 13:35:43.289541 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 12 13:35:43 crc kubenswrapper[4778]: I0312 13:35:43.289672 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Mar 12 13:35:43 crc kubenswrapper[4778]: I0312 13:35:43.335498 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 12 13:35:43 crc kubenswrapper[4778]: I0312 13:35:43.357895 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 13:35:43 crc kubenswrapper[4778]: I0312 13:35:43.372106 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 13:35:43 crc kubenswrapper[4778]: I0312 13:35:43.387559 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 13:35:43 crc kubenswrapper[4778]: I0312 13:35:43.393559 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 13:35:43 crc kubenswrapper[4778]: I0312 13:35:43.395841 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 12 13:35:43 crc kubenswrapper[4778]: I0312 13:35:43.415783 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 13:35:43 crc kubenswrapper[4778]: I0312 13:35:43.425443 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/37d70066-6a42-4486-a487-e27b3ab3a61b-public-tls-certs\") pod \"nova-api-0\" (UID: \"37d70066-6a42-4486-a487-e27b3ab3a61b\") " pod="openstack/nova-api-0" Mar 12 13:35:43 crc kubenswrapper[4778]: I0312 13:35:43.425608 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37d70066-6a42-4486-a487-e27b3ab3a61b-logs\") pod \"nova-api-0\" (UID: \"37d70066-6a42-4486-a487-e27b3ab3a61b\") " pod="openstack/nova-api-0" Mar 12 13:35:43 crc kubenswrapper[4778]: I0312 13:35:43.425687 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/37d70066-6a42-4486-a487-e27b3ab3a61b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"37d70066-6a42-4486-a487-e27b3ab3a61b\") " pod="openstack/nova-api-0" Mar 12 13:35:43 crc kubenswrapper[4778]: I0312 13:35:43.427090 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37d70066-6a42-4486-a487-e27b3ab3a61b-config-data\") pod \"nova-api-0\" (UID: \"37d70066-6a42-4486-a487-e27b3ab3a61b\") " pod="openstack/nova-api-0" Mar 12 13:35:43 crc kubenswrapper[4778]: I0312 13:35:43.427183 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-rctsf\" (UniqueName: \"kubernetes.io/projected/37d70066-6a42-4486-a487-e27b3ab3a61b-kube-api-access-rctsf\") pod \"nova-api-0\" (UID: \"37d70066-6a42-4486-a487-e27b3ab3a61b\") " pod="openstack/nova-api-0" Mar 12 13:35:43 crc kubenswrapper[4778]: I0312 13:35:43.427246 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37d70066-6a42-4486-a487-e27b3ab3a61b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"37d70066-6a42-4486-a487-e27b3ab3a61b\") " pod="openstack/nova-api-0" Mar 12 13:35:43 crc kubenswrapper[4778]: I0312 13:35:43.528985 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/37d70066-6a42-4486-a487-e27b3ab3a61b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"37d70066-6a42-4486-a487-e27b3ab3a61b\") " pod="openstack/nova-api-0" Mar 12 13:35:43 crc kubenswrapper[4778]: I0312 13:35:43.530114 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37d70066-6a42-4486-a487-e27b3ab3a61b-config-data\") pod \"nova-api-0\" (UID: \"37d70066-6a42-4486-a487-e27b3ab3a61b\") " pod="openstack/nova-api-0" Mar 12 13:35:43 crc kubenswrapper[4778]: I0312 13:35:43.530165 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c07bd9a-becb-4422-a881-5de27a8e8e56-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9c07bd9a-becb-4422-a881-5de27a8e8e56\") " pod="openstack/nova-scheduler-0" Mar 12 13:35:43 crc kubenswrapper[4778]: I0312 13:35:43.530251 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rctsf\" (UniqueName: \"kubernetes.io/projected/37d70066-6a42-4486-a487-e27b3ab3a61b-kube-api-access-rctsf\") pod \"nova-api-0\" (UID: 
\"37d70066-6a42-4486-a487-e27b3ab3a61b\") " pod="openstack/nova-api-0" Mar 12 13:35:43 crc kubenswrapper[4778]: I0312 13:35:43.530314 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37d70066-6a42-4486-a487-e27b3ab3a61b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"37d70066-6a42-4486-a487-e27b3ab3a61b\") " pod="openstack/nova-api-0" Mar 12 13:35:43 crc kubenswrapper[4778]: I0312 13:35:43.530489 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxrx6\" (UniqueName: \"kubernetes.io/projected/9c07bd9a-becb-4422-a881-5de27a8e8e56-kube-api-access-nxrx6\") pod \"nova-scheduler-0\" (UID: \"9c07bd9a-becb-4422-a881-5de27a8e8e56\") " pod="openstack/nova-scheduler-0" Mar 12 13:35:43 crc kubenswrapper[4778]: I0312 13:35:43.530750 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/37d70066-6a42-4486-a487-e27b3ab3a61b-public-tls-certs\") pod \"nova-api-0\" (UID: \"37d70066-6a42-4486-a487-e27b3ab3a61b\") " pod="openstack/nova-api-0" Mar 12 13:35:43 crc kubenswrapper[4778]: I0312 13:35:43.530863 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c07bd9a-becb-4422-a881-5de27a8e8e56-config-data\") pod \"nova-scheduler-0\" (UID: \"9c07bd9a-becb-4422-a881-5de27a8e8e56\") " pod="openstack/nova-scheduler-0" Mar 12 13:35:43 crc kubenswrapper[4778]: I0312 13:35:43.531067 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37d70066-6a42-4486-a487-e27b3ab3a61b-logs\") pod \"nova-api-0\" (UID: \"37d70066-6a42-4486-a487-e27b3ab3a61b\") " pod="openstack/nova-api-0" Mar 12 13:35:43 crc kubenswrapper[4778]: I0312 13:35:43.531837 4778 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37d70066-6a42-4486-a487-e27b3ab3a61b-logs\") pod \"nova-api-0\" (UID: \"37d70066-6a42-4486-a487-e27b3ab3a61b\") " pod="openstack/nova-api-0" Mar 12 13:35:43 crc kubenswrapper[4778]: I0312 13:35:43.537771 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37d70066-6a42-4486-a487-e27b3ab3a61b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"37d70066-6a42-4486-a487-e27b3ab3a61b\") " pod="openstack/nova-api-0" Mar 12 13:35:43 crc kubenswrapper[4778]: I0312 13:35:43.539077 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/37d70066-6a42-4486-a487-e27b3ab3a61b-public-tls-certs\") pod \"nova-api-0\" (UID: \"37d70066-6a42-4486-a487-e27b3ab3a61b\") " pod="openstack/nova-api-0" Mar 12 13:35:43 crc kubenswrapper[4778]: I0312 13:35:43.540299 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/37d70066-6a42-4486-a487-e27b3ab3a61b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"37d70066-6a42-4486-a487-e27b3ab3a61b\") " pod="openstack/nova-api-0" Mar 12 13:35:43 crc kubenswrapper[4778]: I0312 13:35:43.550979 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37d70066-6a42-4486-a487-e27b3ab3a61b-config-data\") pod \"nova-api-0\" (UID: \"37d70066-6a42-4486-a487-e27b3ab3a61b\") " pod="openstack/nova-api-0" Mar 12 13:35:43 crc kubenswrapper[4778]: I0312 13:35:43.558810 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rctsf\" (UniqueName: \"kubernetes.io/projected/37d70066-6a42-4486-a487-e27b3ab3a61b-kube-api-access-rctsf\") pod \"nova-api-0\" (UID: \"37d70066-6a42-4486-a487-e27b3ab3a61b\") " pod="openstack/nova-api-0" Mar 12 13:35:43 crc 
kubenswrapper[4778]: I0312 13:35:43.612598 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 12 13:35:43 crc kubenswrapper[4778]: I0312 13:35:43.632572 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c07bd9a-becb-4422-a881-5de27a8e8e56-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9c07bd9a-becb-4422-a881-5de27a8e8e56\") " pod="openstack/nova-scheduler-0" Mar 12 13:35:43 crc kubenswrapper[4778]: I0312 13:35:43.632659 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxrx6\" (UniqueName: \"kubernetes.io/projected/9c07bd9a-becb-4422-a881-5de27a8e8e56-kube-api-access-nxrx6\") pod \"nova-scheduler-0\" (UID: \"9c07bd9a-becb-4422-a881-5de27a8e8e56\") " pod="openstack/nova-scheduler-0" Mar 12 13:35:43 crc kubenswrapper[4778]: I0312 13:35:43.632745 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c07bd9a-becb-4422-a881-5de27a8e8e56-config-data\") pod \"nova-scheduler-0\" (UID: \"9c07bd9a-becb-4422-a881-5de27a8e8e56\") " pod="openstack/nova-scheduler-0" Mar 12 13:35:43 crc kubenswrapper[4778]: I0312 13:35:43.637363 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c07bd9a-becb-4422-a881-5de27a8e8e56-config-data\") pod \"nova-scheduler-0\" (UID: \"9c07bd9a-becb-4422-a881-5de27a8e8e56\") " pod="openstack/nova-scheduler-0" Mar 12 13:35:43 crc kubenswrapper[4778]: I0312 13:35:43.639043 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c07bd9a-becb-4422-a881-5de27a8e8e56-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9c07bd9a-becb-4422-a881-5de27a8e8e56\") " pod="openstack/nova-scheduler-0" Mar 12 13:35:43 crc 
kubenswrapper[4778]: I0312 13:35:43.656218 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxrx6\" (UniqueName: \"kubernetes.io/projected/9c07bd9a-becb-4422-a881-5de27a8e8e56-kube-api-access-nxrx6\") pod \"nova-scheduler-0\" (UID: \"9c07bd9a-becb-4422-a881-5de27a8e8e56\") " pod="openstack/nova-scheduler-0" Mar 12 13:35:43 crc kubenswrapper[4778]: I0312 13:35:43.717736 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 13:35:43 crc kubenswrapper[4778]: I0312 13:35:43.952718 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f1d0355-a73a-4a93-94fb-b439436cf1b1","Type":"ContainerStarted","Data":"4bdb99a1b06b9bc7b476cd5b6e145e69cc5597c366ddaf40ed2afb2b51af7963"} Mar 12 13:35:43 crc kubenswrapper[4778]: I0312 13:35:43.954680 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 12 13:35:43 crc kubenswrapper[4778]: I0312 13:35:43.985817 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.324860715 podStartE2EDuration="8.985793387s" podCreationTimestamp="2026-03-12 13:35:35 +0000 UTC" firstStartedPulling="2026-03-12 13:35:36.415492889 +0000 UTC m=+1554.864188285" lastFinishedPulling="2026-03-12 13:35:43.076425561 +0000 UTC m=+1561.525120957" observedRunningTime="2026-03-12 13:35:43.980908228 +0000 UTC m=+1562.429603634" watchObservedRunningTime="2026-03-12 13:35:43.985793387 +0000 UTC m=+1562.434488783" Mar 12 13:35:44 crc kubenswrapper[4778]: I0312 13:35:44.096062 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 13:35:44 crc kubenswrapper[4778]: W0312 13:35:44.102828 4778 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c07bd9a_becb_4422_a881_5de27a8e8e56.slice/crio-7c66f4ab481a2ffe8aa1f978637b76237ffd5a0742d58abdd29d2a665dd400d8 WatchSource:0}: Error finding container 7c66f4ab481a2ffe8aa1f978637b76237ffd5a0742d58abdd29d2a665dd400d8: Status 404 returned error can't find the container with id 7c66f4ab481a2ffe8aa1f978637b76237ffd5a0742d58abdd29d2a665dd400d8 Mar 12 13:35:44 crc kubenswrapper[4778]: I0312 13:35:44.130287 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 12 13:35:44 crc kubenswrapper[4778]: I0312 13:35:44.270754 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bc6f909-0ff5-4f18-a480-fd8e6cda5e53" path="/var/lib/kubelet/pods/5bc6f909-0ff5-4f18-a480-fd8e6cda5e53/volumes" Mar 12 13:35:44 crc kubenswrapper[4778]: I0312 13:35:44.271909 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a186b68c-e472-4507-abc7-0b90ca321ded" path="/var/lib/kubelet/pods/a186b68c-e472-4507-abc7-0b90ca321ded/volumes" Mar 12 13:35:45 crc kubenswrapper[4778]: I0312 13:35:45.005424 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9c07bd9a-becb-4422-a881-5de27a8e8e56","Type":"ContainerStarted","Data":"76612676e8ec4eca2c187a9ab03eca1247f93e79ade8e00d7568f9fdf3aca549"} Mar 12 13:35:45 crc kubenswrapper[4778]: I0312 13:35:45.005739 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9c07bd9a-becb-4422-a881-5de27a8e8e56","Type":"ContainerStarted","Data":"7c66f4ab481a2ffe8aa1f978637b76237ffd5a0742d58abdd29d2a665dd400d8"} Mar 12 13:35:45 crc kubenswrapper[4778]: I0312 13:35:45.023517 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"37d70066-6a42-4486-a487-e27b3ab3a61b","Type":"ContainerStarted","Data":"6fb0ea63ecde6cfac6694eb778a0f0043874e52ed36561d1c373be870defe193"} Mar 12 13:35:45 crc 
kubenswrapper[4778]: I0312 13:35:45.023575 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"37d70066-6a42-4486-a487-e27b3ab3a61b","Type":"ContainerStarted","Data":"2b4aa266ad205b7e0a6d8899547a75fe40c64017eb43d076bced61bb7cc36c19"} Mar 12 13:35:45 crc kubenswrapper[4778]: I0312 13:35:45.023592 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"37d70066-6a42-4486-a487-e27b3ab3a61b","Type":"ContainerStarted","Data":"c412c8241a59192093777ad55c60d09316d57b1f207c8116b8342fac0e609d85"} Mar 12 13:35:45 crc kubenswrapper[4778]: I0312 13:35:45.062567 4778 generic.go:334] "Generic (PLEG): container finished" podID="c2700355-e048-4458-b430-8d149a08d624" containerID="18451788e6f6468b69f6150e59f0635d08ad6db357c610ae673d149c136dfeb2" exitCode=0 Mar 12 13:35:45 crc kubenswrapper[4778]: I0312 13:35:45.064103 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c2700355-e048-4458-b430-8d149a08d624","Type":"ContainerDied","Data":"18451788e6f6468b69f6150e59f0635d08ad6db357c610ae673d149c136dfeb2"} Mar 12 13:35:45 crc kubenswrapper[4778]: I0312 13:35:45.119535 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.119507566 podStartE2EDuration="2.119507566s" podCreationTimestamp="2026-03-12 13:35:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:35:45.074623843 +0000 UTC m=+1563.523319239" watchObservedRunningTime="2026-03-12 13:35:45.119507566 +0000 UTC m=+1563.568202962" Mar 12 13:35:45 crc kubenswrapper[4778]: I0312 13:35:45.119867 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.119856536 podStartE2EDuration="2.119856536s" podCreationTimestamp="2026-03-12 13:35:43 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:35:45.107376042 +0000 UTC m=+1563.556071438" watchObservedRunningTime="2026-03-12 13:35:45.119856536 +0000 UTC m=+1563.568551932" Mar 12 13:35:45 crc kubenswrapper[4778]: I0312 13:35:45.437094 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 13:35:45 crc kubenswrapper[4778]: I0312 13:35:45.607265 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2700355-e048-4458-b430-8d149a08d624-config-data\") pod \"c2700355-e048-4458-b430-8d149a08d624\" (UID: \"c2700355-e048-4458-b430-8d149a08d624\") " Mar 12 13:35:45 crc kubenswrapper[4778]: I0312 13:35:45.607715 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbjl2\" (UniqueName: \"kubernetes.io/projected/c2700355-e048-4458-b430-8d149a08d624-kube-api-access-jbjl2\") pod \"c2700355-e048-4458-b430-8d149a08d624\" (UID: \"c2700355-e048-4458-b430-8d149a08d624\") " Mar 12 13:35:45 crc kubenswrapper[4778]: I0312 13:35:45.608090 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2700355-e048-4458-b430-8d149a08d624-nova-metadata-tls-certs\") pod \"c2700355-e048-4458-b430-8d149a08d624\" (UID: \"c2700355-e048-4458-b430-8d149a08d624\") " Mar 12 13:35:45 crc kubenswrapper[4778]: I0312 13:35:45.608284 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2700355-e048-4458-b430-8d149a08d624-combined-ca-bundle\") pod \"c2700355-e048-4458-b430-8d149a08d624\" (UID: \"c2700355-e048-4458-b430-8d149a08d624\") " Mar 12 13:35:45 crc kubenswrapper[4778]: I0312 13:35:45.608375 4778 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2700355-e048-4458-b430-8d149a08d624-logs\") pod \"c2700355-e048-4458-b430-8d149a08d624\" (UID: \"c2700355-e048-4458-b430-8d149a08d624\") " Mar 12 13:35:45 crc kubenswrapper[4778]: I0312 13:35:45.609428 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2700355-e048-4458-b430-8d149a08d624-logs" (OuterVolumeSpecName: "logs") pod "c2700355-e048-4458-b430-8d149a08d624" (UID: "c2700355-e048-4458-b430-8d149a08d624"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:35:45 crc kubenswrapper[4778]: I0312 13:35:45.614374 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2700355-e048-4458-b430-8d149a08d624-kube-api-access-jbjl2" (OuterVolumeSpecName: "kube-api-access-jbjl2") pod "c2700355-e048-4458-b430-8d149a08d624" (UID: "c2700355-e048-4458-b430-8d149a08d624"). InnerVolumeSpecName "kube-api-access-jbjl2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:35:45 crc kubenswrapper[4778]: I0312 13:35:45.652438 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2700355-e048-4458-b430-8d149a08d624-config-data" (OuterVolumeSpecName: "config-data") pod "c2700355-e048-4458-b430-8d149a08d624" (UID: "c2700355-e048-4458-b430-8d149a08d624"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:35:45 crc kubenswrapper[4778]: I0312 13:35:45.663101 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2700355-e048-4458-b430-8d149a08d624-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c2700355-e048-4458-b430-8d149a08d624" (UID: "c2700355-e048-4458-b430-8d149a08d624"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:35:45 crc kubenswrapper[4778]: I0312 13:35:45.683375 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2700355-e048-4458-b430-8d149a08d624-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "c2700355-e048-4458-b430-8d149a08d624" (UID: "c2700355-e048-4458-b430-8d149a08d624"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:35:45 crc kubenswrapper[4778]: I0312 13:35:45.710836 4778 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2700355-e048-4458-b430-8d149a08d624-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 13:35:45 crc kubenswrapper[4778]: I0312 13:35:45.710873 4778 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2700355-e048-4458-b430-8d149a08d624-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:35:45 crc kubenswrapper[4778]: I0312 13:35:45.710882 4778 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2700355-e048-4458-b430-8d149a08d624-logs\") on node \"crc\" DevicePath \"\"" Mar 12 13:35:45 crc kubenswrapper[4778]: I0312 13:35:45.710893 4778 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2700355-e048-4458-b430-8d149a08d624-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 13:35:45 crc kubenswrapper[4778]: I0312 13:35:45.710901 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbjl2\" (UniqueName: \"kubernetes.io/projected/c2700355-e048-4458-b430-8d149a08d624-kube-api-access-jbjl2\") on node \"crc\" DevicePath \"\"" Mar 12 13:35:46 crc kubenswrapper[4778]: I0312 13:35:46.074984 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 13:35:46 crc kubenswrapper[4778]: I0312 13:35:46.077389 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c2700355-e048-4458-b430-8d149a08d624","Type":"ContainerDied","Data":"4cdf23596db0a1e92716b41df3f9c56dd37f21ba73a9653782acae39684ee3dd"} Mar 12 13:35:46 crc kubenswrapper[4778]: I0312 13:35:46.077463 4778 scope.go:117] "RemoveContainer" containerID="18451788e6f6468b69f6150e59f0635d08ad6db357c610ae673d149c136dfeb2" Mar 12 13:35:46 crc kubenswrapper[4778]: I0312 13:35:46.114637 4778 scope.go:117] "RemoveContainer" containerID="ca24f0adae0376e480d75f053859a82c49878f39def1a1831162119084f0dc4d" Mar 12 13:35:46 crc kubenswrapper[4778]: I0312 13:35:46.132515 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 13:35:46 crc kubenswrapper[4778]: I0312 13:35:46.146270 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 13:35:46 crc kubenswrapper[4778]: I0312 13:35:46.157435 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 12 13:35:46 crc kubenswrapper[4778]: E0312 13:35:46.157914 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2700355-e048-4458-b430-8d149a08d624" containerName="nova-metadata-metadata" Mar 12 13:35:46 crc kubenswrapper[4778]: I0312 13:35:46.157943 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2700355-e048-4458-b430-8d149a08d624" containerName="nova-metadata-metadata" Mar 12 13:35:46 crc kubenswrapper[4778]: E0312 13:35:46.157974 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2700355-e048-4458-b430-8d149a08d624" containerName="nova-metadata-log" Mar 12 13:35:46 crc kubenswrapper[4778]: I0312 13:35:46.157983 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2700355-e048-4458-b430-8d149a08d624" containerName="nova-metadata-log" Mar 12 13:35:46 crc 
kubenswrapper[4778]: I0312 13:35:46.158263 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2700355-e048-4458-b430-8d149a08d624" containerName="nova-metadata-metadata" Mar 12 13:35:46 crc kubenswrapper[4778]: I0312 13:35:46.158296 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2700355-e048-4458-b430-8d149a08d624" containerName="nova-metadata-log" Mar 12 13:35:46 crc kubenswrapper[4778]: I0312 13:35:46.159599 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 13:35:46 crc kubenswrapper[4778]: I0312 13:35:46.164316 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 12 13:35:46 crc kubenswrapper[4778]: I0312 13:35:46.164499 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 12 13:35:46 crc kubenswrapper[4778]: I0312 13:35:46.191418 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 13:35:46 crc kubenswrapper[4778]: I0312 13:35:46.265884 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2700355-e048-4458-b430-8d149a08d624" path="/var/lib/kubelet/pods/c2700355-e048-4458-b430-8d149a08d624/volumes" Mar 12 13:35:46 crc kubenswrapper[4778]: I0312 13:35:46.322127 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5dsf\" (UniqueName: \"kubernetes.io/projected/e5f2bac2-0571-44d8-ba4e-c006600506a5-kube-api-access-d5dsf\") pod \"nova-metadata-0\" (UID: \"e5f2bac2-0571-44d8-ba4e-c006600506a5\") " pod="openstack/nova-metadata-0" Mar 12 13:35:46 crc kubenswrapper[4778]: I0312 13:35:46.322187 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5f2bac2-0571-44d8-ba4e-c006600506a5-combined-ca-bundle\") pod 
\"nova-metadata-0\" (UID: \"e5f2bac2-0571-44d8-ba4e-c006600506a5\") " pod="openstack/nova-metadata-0" Mar 12 13:35:46 crc kubenswrapper[4778]: I0312 13:35:46.322350 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5f2bac2-0571-44d8-ba4e-c006600506a5-logs\") pod \"nova-metadata-0\" (UID: \"e5f2bac2-0571-44d8-ba4e-c006600506a5\") " pod="openstack/nova-metadata-0" Mar 12 13:35:46 crc kubenswrapper[4778]: I0312 13:35:46.322369 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5f2bac2-0571-44d8-ba4e-c006600506a5-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e5f2bac2-0571-44d8-ba4e-c006600506a5\") " pod="openstack/nova-metadata-0" Mar 12 13:35:46 crc kubenswrapper[4778]: I0312 13:35:46.322772 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5f2bac2-0571-44d8-ba4e-c006600506a5-config-data\") pod \"nova-metadata-0\" (UID: \"e5f2bac2-0571-44d8-ba4e-c006600506a5\") " pod="openstack/nova-metadata-0" Mar 12 13:35:46 crc kubenswrapper[4778]: I0312 13:35:46.424518 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5f2bac2-0571-44d8-ba4e-c006600506a5-config-data\") pod \"nova-metadata-0\" (UID: \"e5f2bac2-0571-44d8-ba4e-c006600506a5\") " pod="openstack/nova-metadata-0" Mar 12 13:35:46 crc kubenswrapper[4778]: I0312 13:35:46.424579 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5dsf\" (UniqueName: \"kubernetes.io/projected/e5f2bac2-0571-44d8-ba4e-c006600506a5-kube-api-access-d5dsf\") pod \"nova-metadata-0\" (UID: \"e5f2bac2-0571-44d8-ba4e-c006600506a5\") " pod="openstack/nova-metadata-0" Mar 12 13:35:46 crc 
kubenswrapper[4778]: I0312 13:35:46.424605 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5f2bac2-0571-44d8-ba4e-c006600506a5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e5f2bac2-0571-44d8-ba4e-c006600506a5\") " pod="openstack/nova-metadata-0" Mar 12 13:35:46 crc kubenswrapper[4778]: I0312 13:35:46.424678 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5f2bac2-0571-44d8-ba4e-c006600506a5-logs\") pod \"nova-metadata-0\" (UID: \"e5f2bac2-0571-44d8-ba4e-c006600506a5\") " pod="openstack/nova-metadata-0" Mar 12 13:35:46 crc kubenswrapper[4778]: I0312 13:35:46.424696 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5f2bac2-0571-44d8-ba4e-c006600506a5-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e5f2bac2-0571-44d8-ba4e-c006600506a5\") " pod="openstack/nova-metadata-0" Mar 12 13:35:46 crc kubenswrapper[4778]: I0312 13:35:46.425784 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5f2bac2-0571-44d8-ba4e-c006600506a5-logs\") pod \"nova-metadata-0\" (UID: \"e5f2bac2-0571-44d8-ba4e-c006600506a5\") " pod="openstack/nova-metadata-0" Mar 12 13:35:46 crc kubenswrapper[4778]: I0312 13:35:46.432746 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5f2bac2-0571-44d8-ba4e-c006600506a5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e5f2bac2-0571-44d8-ba4e-c006600506a5\") " pod="openstack/nova-metadata-0" Mar 12 13:35:46 crc kubenswrapper[4778]: I0312 13:35:46.433629 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e5f2bac2-0571-44d8-ba4e-c006600506a5-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e5f2bac2-0571-44d8-ba4e-c006600506a5\") " pod="openstack/nova-metadata-0" Mar 12 13:35:46 crc kubenswrapper[4778]: I0312 13:35:46.440282 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5f2bac2-0571-44d8-ba4e-c006600506a5-config-data\") pod \"nova-metadata-0\" (UID: \"e5f2bac2-0571-44d8-ba4e-c006600506a5\") " pod="openstack/nova-metadata-0" Mar 12 13:35:46 crc kubenswrapper[4778]: I0312 13:35:46.443922 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5dsf\" (UniqueName: \"kubernetes.io/projected/e5f2bac2-0571-44d8-ba4e-c006600506a5-kube-api-access-d5dsf\") pod \"nova-metadata-0\" (UID: \"e5f2bac2-0571-44d8-ba4e-c006600506a5\") " pod="openstack/nova-metadata-0" Mar 12 13:35:46 crc kubenswrapper[4778]: I0312 13:35:46.484480 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 13:35:46 crc kubenswrapper[4778]: I0312 13:35:46.929579 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 13:35:47 crc kubenswrapper[4778]: I0312 13:35:47.093857 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e5f2bac2-0571-44d8-ba4e-c006600506a5","Type":"ContainerStarted","Data":"63ddaf1ae0c88152b73f19a7bbf611a71857e4349ca72eb4c73d9d4e815e1b3c"} Mar 12 13:35:48 crc kubenswrapper[4778]: I0312 13:35:48.104896 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e5f2bac2-0571-44d8-ba4e-c006600506a5","Type":"ContainerStarted","Data":"1938dab200355bd40396968513a110679aaccecc8babb8b2a8c4c460989f58ed"} Mar 12 13:35:48 crc kubenswrapper[4778]: I0312 13:35:48.105060 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e5f2bac2-0571-44d8-ba4e-c006600506a5","Type":"ContainerStarted","Data":"3f33e9e86bb02fc44b869bbdad27f6457624a82e4496ad8d8db76de0c3d1fb4c"} Mar 12 13:35:48 crc kubenswrapper[4778]: I0312 13:35:48.137894 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.137869864 podStartE2EDuration="2.137869864s" podCreationTimestamp="2026-03-12 13:35:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:35:48.129789655 +0000 UTC m=+1566.578485061" watchObservedRunningTime="2026-03-12 13:35:48.137869864 +0000 UTC m=+1566.586565260" Mar 12 13:35:48 crc kubenswrapper[4778]: I0312 13:35:48.719357 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 12 13:35:53 crc kubenswrapper[4778]: I0312 13:35:53.613342 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-api-0" Mar 12 13:35:53 crc kubenswrapper[4778]: I0312 13:35:53.613968 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 12 13:35:53 crc kubenswrapper[4778]: I0312 13:35:53.718777 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 12 13:35:53 crc kubenswrapper[4778]: I0312 13:35:53.746174 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 12 13:35:54 crc kubenswrapper[4778]: I0312 13:35:54.202708 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 12 13:35:54 crc kubenswrapper[4778]: I0312 13:35:54.630456 4778 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="37d70066-6a42-4486-a487-e27b3ab3a61b" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.209:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 13:35:54 crc kubenswrapper[4778]: I0312 13:35:54.630479 4778 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="37d70066-6a42-4486-a487-e27b3ab3a61b" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.209:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 13:35:55 crc kubenswrapper[4778]: I0312 13:35:55.448377 4778 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="e1488e83-3a44-41ad-aa96-de09b662c16e" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.198:3000/\": dial tcp 10.217.0.198:3000: i/o timeout" Mar 12 13:35:56 crc kubenswrapper[4778]: I0312 13:35:56.485270 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 12 13:35:56 crc kubenswrapper[4778]: I0312 13:35:56.485595 4778 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 12 13:35:57 crc kubenswrapper[4778]: I0312 13:35:57.497377 4778 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e5f2bac2-0571-44d8-ba4e-c006600506a5" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.211:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 13:35:57 crc kubenswrapper[4778]: I0312 13:35:57.497381 4778 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e5f2bac2-0571-44d8-ba4e-c006600506a5" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.211:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 13:35:58 crc kubenswrapper[4778]: I0312 13:35:58.557913 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 13:35:58 crc kubenswrapper[4778]: I0312 13:35:58.557961 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 13:36:00 crc kubenswrapper[4778]: I0312 13:36:00.157016 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555376-2bdpv"] Mar 12 13:36:00 crc kubenswrapper[4778]: I0312 13:36:00.158690 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555376-2bdpv" Mar 12 13:36:00 crc kubenswrapper[4778]: I0312 13:36:00.162255 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 13:36:00 crc kubenswrapper[4778]: I0312 13:36:00.162560 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 13:36:00 crc kubenswrapper[4778]: I0312 13:36:00.163867 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 13:36:00 crc kubenswrapper[4778]: I0312 13:36:00.168434 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555376-2bdpv"] Mar 12 13:36:00 crc kubenswrapper[4778]: I0312 13:36:00.309408 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvvbv\" (UniqueName: \"kubernetes.io/projected/1f74db3c-fec4-452d-bfd6-8db9f766e0bc-kube-api-access-mvvbv\") pod \"auto-csr-approver-29555376-2bdpv\" (UID: \"1f74db3c-fec4-452d-bfd6-8db9f766e0bc\") " pod="openshift-infra/auto-csr-approver-29555376-2bdpv" Mar 12 13:36:00 crc kubenswrapper[4778]: I0312 13:36:00.411590 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvvbv\" (UniqueName: \"kubernetes.io/projected/1f74db3c-fec4-452d-bfd6-8db9f766e0bc-kube-api-access-mvvbv\") pod \"auto-csr-approver-29555376-2bdpv\" (UID: \"1f74db3c-fec4-452d-bfd6-8db9f766e0bc\") " pod="openshift-infra/auto-csr-approver-29555376-2bdpv" Mar 12 13:36:00 crc kubenswrapper[4778]: I0312 13:36:00.456541 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvvbv\" (UniqueName: \"kubernetes.io/projected/1f74db3c-fec4-452d-bfd6-8db9f766e0bc-kube-api-access-mvvbv\") pod \"auto-csr-approver-29555376-2bdpv\" (UID: \"1f74db3c-fec4-452d-bfd6-8db9f766e0bc\") " 
pod="openshift-infra/auto-csr-approver-29555376-2bdpv" Mar 12 13:36:00 crc kubenswrapper[4778]: I0312 13:36:00.478825 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555376-2bdpv" Mar 12 13:36:00 crc kubenswrapper[4778]: I0312 13:36:00.928567 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555376-2bdpv"] Mar 12 13:36:01 crc kubenswrapper[4778]: I0312 13:36:01.239360 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555376-2bdpv" event={"ID":"1f74db3c-fec4-452d-bfd6-8db9f766e0bc","Type":"ContainerStarted","Data":"211e7ee34a5dd3127d5e72bf488445d32209f3b42800d63aa54c4f05c30abbd1"} Mar 12 13:36:01 crc kubenswrapper[4778]: I0312 13:36:01.613148 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 12 13:36:01 crc kubenswrapper[4778]: I0312 13:36:01.614252 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 12 13:36:03 crc kubenswrapper[4778]: I0312 13:36:03.262309 4778 generic.go:334] "Generic (PLEG): container finished" podID="1f74db3c-fec4-452d-bfd6-8db9f766e0bc" containerID="0e8b3287f4617d49763a5e13085485c7f1faa35a7b545d67c3db4b7ac7a3c06b" exitCode=0 Mar 12 13:36:03 crc kubenswrapper[4778]: I0312 13:36:03.263293 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555376-2bdpv" event={"ID":"1f74db3c-fec4-452d-bfd6-8db9f766e0bc","Type":"ContainerDied","Data":"0e8b3287f4617d49763a5e13085485c7f1faa35a7b545d67c3db4b7ac7a3c06b"} Mar 12 13:36:03 crc kubenswrapper[4778]: I0312 13:36:03.620501 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 12 13:36:03 crc kubenswrapper[4778]: I0312 13:36:03.621218 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 12 13:36:03 crc 
kubenswrapper[4778]: I0312 13:36:03.635301 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 12 13:36:03 crc kubenswrapper[4778]: I0312 13:36:03.635638 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 12 13:36:04 crc kubenswrapper[4778]: I0312 13:36:04.485571 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 12 13:36:04 crc kubenswrapper[4778]: I0312 13:36:04.485631 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 12 13:36:04 crc kubenswrapper[4778]: I0312 13:36:04.632921 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555376-2bdpv" Mar 12 13:36:04 crc kubenswrapper[4778]: I0312 13:36:04.814653 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvvbv\" (UniqueName: \"kubernetes.io/projected/1f74db3c-fec4-452d-bfd6-8db9f766e0bc-kube-api-access-mvvbv\") pod \"1f74db3c-fec4-452d-bfd6-8db9f766e0bc\" (UID: \"1f74db3c-fec4-452d-bfd6-8db9f766e0bc\") " Mar 12 13:36:04 crc kubenswrapper[4778]: I0312 13:36:04.861568 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f74db3c-fec4-452d-bfd6-8db9f766e0bc-kube-api-access-mvvbv" (OuterVolumeSpecName: "kube-api-access-mvvbv") pod "1f74db3c-fec4-452d-bfd6-8db9f766e0bc" (UID: "1f74db3c-fec4-452d-bfd6-8db9f766e0bc"). InnerVolumeSpecName "kube-api-access-mvvbv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:36:04 crc kubenswrapper[4778]: I0312 13:36:04.916583 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvvbv\" (UniqueName: \"kubernetes.io/projected/1f74db3c-fec4-452d-bfd6-8db9f766e0bc-kube-api-access-mvvbv\") on node \"crc\" DevicePath \"\"" Mar 12 13:36:05 crc kubenswrapper[4778]: I0312 13:36:05.289841 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555376-2bdpv" Mar 12 13:36:05 crc kubenswrapper[4778]: I0312 13:36:05.290046 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555376-2bdpv" event={"ID":"1f74db3c-fec4-452d-bfd6-8db9f766e0bc","Type":"ContainerDied","Data":"211e7ee34a5dd3127d5e72bf488445d32209f3b42800d63aa54c4f05c30abbd1"} Mar 12 13:36:05 crc kubenswrapper[4778]: I0312 13:36:05.290381 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="211e7ee34a5dd3127d5e72bf488445d32209f3b42800d63aa54c4f05c30abbd1" Mar 12 13:36:05 crc kubenswrapper[4778]: I0312 13:36:05.716054 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555370-6zrgd"] Mar 12 13:36:05 crc kubenswrapper[4778]: I0312 13:36:05.732496 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555370-6zrgd"] Mar 12 13:36:05 crc kubenswrapper[4778]: I0312 13:36:05.890160 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Mar 12 13:36:06 crc kubenswrapper[4778]: I0312 13:36:06.262792 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c682acb-240b-44d4-a2be-0ea0cd913af1" path="/var/lib/kubelet/pods/1c682acb-240b-44d4-a2be-0ea0cd913af1/volumes" Mar 12 13:36:06 crc kubenswrapper[4778]: I0312 13:36:06.494543 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/nova-metadata-0" Mar 12 13:36:06 crc kubenswrapper[4778]: I0312 13:36:06.496863 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 12 13:36:06 crc kubenswrapper[4778]: I0312 13:36:06.500867 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 12 13:36:07 crc kubenswrapper[4778]: I0312 13:36:07.312828 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 12 13:36:15 crc kubenswrapper[4778]: I0312 13:36:15.711863 4778 scope.go:117] "RemoveContainer" containerID="8328194fef169053b3f39722ffd3e2d940869363b5142050b8e768ed01fab0c0" Mar 12 13:36:23 crc kubenswrapper[4778]: I0312 13:36:23.764543 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-69b6dc4885-z4h9m"] Mar 12 13:36:23 crc kubenswrapper[4778]: E0312 13:36:23.766082 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f74db3c-fec4-452d-bfd6-8db9f766e0bc" containerName="oc" Mar 12 13:36:23 crc kubenswrapper[4778]: I0312 13:36:23.766109 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f74db3c-fec4-452d-bfd6-8db9f766e0bc" containerName="oc" Mar 12 13:36:23 crc kubenswrapper[4778]: I0312 13:36:23.766424 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f74db3c-fec4-452d-bfd6-8db9f766e0bc" containerName="oc" Mar 12 13:36:23 crc kubenswrapper[4778]: I0312 13:36:23.767948 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-69b6dc4885-z4h9m" Mar 12 13:36:23 crc kubenswrapper[4778]: I0312 13:36:23.786979 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-69b6dc4885-z4h9m"] Mar 12 13:36:23 crc kubenswrapper[4778]: I0312 13:36:23.812195 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16dea17b-eaa4-4bbf-8895-c077b3e28d66-scripts\") pod \"keystone-69b6dc4885-z4h9m\" (UID: \"16dea17b-eaa4-4bbf-8895-c077b3e28d66\") " pod="openstack/keystone-69b6dc4885-z4h9m" Mar 12 13:36:23 crc kubenswrapper[4778]: I0312 13:36:23.812274 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/16dea17b-eaa4-4bbf-8895-c077b3e28d66-public-tls-certs\") pod \"keystone-69b6dc4885-z4h9m\" (UID: \"16dea17b-eaa4-4bbf-8895-c077b3e28d66\") " pod="openstack/keystone-69b6dc4885-z4h9m" Mar 12 13:36:23 crc kubenswrapper[4778]: I0312 13:36:23.812308 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/16dea17b-eaa4-4bbf-8895-c077b3e28d66-fernet-keys\") pod \"keystone-69b6dc4885-z4h9m\" (UID: \"16dea17b-eaa4-4bbf-8895-c077b3e28d66\") " pod="openstack/keystone-69b6dc4885-z4h9m" Mar 12 13:36:23 crc kubenswrapper[4778]: I0312 13:36:23.812335 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16dea17b-eaa4-4bbf-8895-c077b3e28d66-combined-ca-bundle\") pod \"keystone-69b6dc4885-z4h9m\" (UID: \"16dea17b-eaa4-4bbf-8895-c077b3e28d66\") " pod="openstack/keystone-69b6dc4885-z4h9m" Mar 12 13:36:23 crc kubenswrapper[4778]: I0312 13:36:23.812413 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" 
(UniqueName: \"kubernetes.io/secret/16dea17b-eaa4-4bbf-8895-c077b3e28d66-credential-keys\") pod \"keystone-69b6dc4885-z4h9m\" (UID: \"16dea17b-eaa4-4bbf-8895-c077b3e28d66\") " pod="openstack/keystone-69b6dc4885-z4h9m" Mar 12 13:36:23 crc kubenswrapper[4778]: I0312 13:36:23.812473 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9h9j\" (UniqueName: \"kubernetes.io/projected/16dea17b-eaa4-4bbf-8895-c077b3e28d66-kube-api-access-d9h9j\") pod \"keystone-69b6dc4885-z4h9m\" (UID: \"16dea17b-eaa4-4bbf-8895-c077b3e28d66\") " pod="openstack/keystone-69b6dc4885-z4h9m" Mar 12 13:36:23 crc kubenswrapper[4778]: I0312 13:36:23.812946 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16dea17b-eaa4-4bbf-8895-c077b3e28d66-internal-tls-certs\") pod \"keystone-69b6dc4885-z4h9m\" (UID: \"16dea17b-eaa4-4bbf-8895-c077b3e28d66\") " pod="openstack/keystone-69b6dc4885-z4h9m" Mar 12 13:36:23 crc kubenswrapper[4778]: I0312 13:36:23.812994 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16dea17b-eaa4-4bbf-8895-c077b3e28d66-config-data\") pod \"keystone-69b6dc4885-z4h9m\" (UID: \"16dea17b-eaa4-4bbf-8895-c077b3e28d66\") " pod="openstack/keystone-69b6dc4885-z4h9m" Mar 12 13:36:23 crc kubenswrapper[4778]: I0312 13:36:23.914982 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16dea17b-eaa4-4bbf-8895-c077b3e28d66-internal-tls-certs\") pod \"keystone-69b6dc4885-z4h9m\" (UID: \"16dea17b-eaa4-4bbf-8895-c077b3e28d66\") " pod="openstack/keystone-69b6dc4885-z4h9m" Mar 12 13:36:23 crc kubenswrapper[4778]: I0312 13:36:23.915034 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/16dea17b-eaa4-4bbf-8895-c077b3e28d66-config-data\") pod \"keystone-69b6dc4885-z4h9m\" (UID: \"16dea17b-eaa4-4bbf-8895-c077b3e28d66\") " pod="openstack/keystone-69b6dc4885-z4h9m" Mar 12 13:36:23 crc kubenswrapper[4778]: I0312 13:36:23.915070 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16dea17b-eaa4-4bbf-8895-c077b3e28d66-scripts\") pod \"keystone-69b6dc4885-z4h9m\" (UID: \"16dea17b-eaa4-4bbf-8895-c077b3e28d66\") " pod="openstack/keystone-69b6dc4885-z4h9m" Mar 12 13:36:23 crc kubenswrapper[4778]: I0312 13:36:23.915104 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/16dea17b-eaa4-4bbf-8895-c077b3e28d66-public-tls-certs\") pod \"keystone-69b6dc4885-z4h9m\" (UID: \"16dea17b-eaa4-4bbf-8895-c077b3e28d66\") " pod="openstack/keystone-69b6dc4885-z4h9m" Mar 12 13:36:23 crc kubenswrapper[4778]: I0312 13:36:23.915130 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/16dea17b-eaa4-4bbf-8895-c077b3e28d66-fernet-keys\") pod \"keystone-69b6dc4885-z4h9m\" (UID: \"16dea17b-eaa4-4bbf-8895-c077b3e28d66\") " pod="openstack/keystone-69b6dc4885-z4h9m" Mar 12 13:36:23 crc kubenswrapper[4778]: I0312 13:36:23.915150 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16dea17b-eaa4-4bbf-8895-c077b3e28d66-combined-ca-bundle\") pod \"keystone-69b6dc4885-z4h9m\" (UID: \"16dea17b-eaa4-4bbf-8895-c077b3e28d66\") " pod="openstack/keystone-69b6dc4885-z4h9m" Mar 12 13:36:23 crc kubenswrapper[4778]: I0312 13:36:23.915197 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/16dea17b-eaa4-4bbf-8895-c077b3e28d66-credential-keys\") pod 
\"keystone-69b6dc4885-z4h9m\" (UID: \"16dea17b-eaa4-4bbf-8895-c077b3e28d66\") " pod="openstack/keystone-69b6dc4885-z4h9m" Mar 12 13:36:23 crc kubenswrapper[4778]: I0312 13:36:23.915244 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9h9j\" (UniqueName: \"kubernetes.io/projected/16dea17b-eaa4-4bbf-8895-c077b3e28d66-kube-api-access-d9h9j\") pod \"keystone-69b6dc4885-z4h9m\" (UID: \"16dea17b-eaa4-4bbf-8895-c077b3e28d66\") " pod="openstack/keystone-69b6dc4885-z4h9m" Mar 12 13:36:23 crc kubenswrapper[4778]: I0312 13:36:23.921646 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16dea17b-eaa4-4bbf-8895-c077b3e28d66-scripts\") pod \"keystone-69b6dc4885-z4h9m\" (UID: \"16dea17b-eaa4-4bbf-8895-c077b3e28d66\") " pod="openstack/keystone-69b6dc4885-z4h9m" Mar 12 13:36:23 crc kubenswrapper[4778]: I0312 13:36:23.921749 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/16dea17b-eaa4-4bbf-8895-c077b3e28d66-fernet-keys\") pod \"keystone-69b6dc4885-z4h9m\" (UID: \"16dea17b-eaa4-4bbf-8895-c077b3e28d66\") " pod="openstack/keystone-69b6dc4885-z4h9m" Mar 12 13:36:23 crc kubenswrapper[4778]: I0312 13:36:23.922100 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/16dea17b-eaa4-4bbf-8895-c077b3e28d66-credential-keys\") pod \"keystone-69b6dc4885-z4h9m\" (UID: \"16dea17b-eaa4-4bbf-8895-c077b3e28d66\") " pod="openstack/keystone-69b6dc4885-z4h9m" Mar 12 13:36:23 crc kubenswrapper[4778]: I0312 13:36:23.922104 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16dea17b-eaa4-4bbf-8895-c077b3e28d66-combined-ca-bundle\") pod \"keystone-69b6dc4885-z4h9m\" (UID: \"16dea17b-eaa4-4bbf-8895-c077b3e28d66\") " pod="openstack/keystone-69b6dc4885-z4h9m" 
Mar 12 13:36:23 crc kubenswrapper[4778]: I0312 13:36:23.923219 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/16dea17b-eaa4-4bbf-8895-c077b3e28d66-public-tls-certs\") pod \"keystone-69b6dc4885-z4h9m\" (UID: \"16dea17b-eaa4-4bbf-8895-c077b3e28d66\") " pod="openstack/keystone-69b6dc4885-z4h9m" Mar 12 13:36:23 crc kubenswrapper[4778]: I0312 13:36:23.925615 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16dea17b-eaa4-4bbf-8895-c077b3e28d66-internal-tls-certs\") pod \"keystone-69b6dc4885-z4h9m\" (UID: \"16dea17b-eaa4-4bbf-8895-c077b3e28d66\") " pod="openstack/keystone-69b6dc4885-z4h9m" Mar 12 13:36:23 crc kubenswrapper[4778]: I0312 13:36:23.930475 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16dea17b-eaa4-4bbf-8895-c077b3e28d66-config-data\") pod \"keystone-69b6dc4885-z4h9m\" (UID: \"16dea17b-eaa4-4bbf-8895-c077b3e28d66\") " pod="openstack/keystone-69b6dc4885-z4h9m" Mar 12 13:36:23 crc kubenswrapper[4778]: I0312 13:36:23.934438 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9h9j\" (UniqueName: \"kubernetes.io/projected/16dea17b-eaa4-4bbf-8895-c077b3e28d66-kube-api-access-d9h9j\") pod \"keystone-69b6dc4885-z4h9m\" (UID: \"16dea17b-eaa4-4bbf-8895-c077b3e28d66\") " pod="openstack/keystone-69b6dc4885-z4h9m" Mar 12 13:36:24 crc kubenswrapper[4778]: I0312 13:36:24.103077 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-69b6dc4885-z4h9m" Mar 12 13:36:24 crc kubenswrapper[4778]: I0312 13:36:24.122789 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-769c65dfd5-frvxx"] Mar 12 13:36:24 crc kubenswrapper[4778]: I0312 13:36:24.132984 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-769c65dfd5-frvxx" Mar 12 13:36:24 crc kubenswrapper[4778]: I0312 13:36:24.165138 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-769c65dfd5-frvxx"] Mar 12 13:36:24 crc kubenswrapper[4778]: I0312 13:36:24.189969 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-566c4d5fc-zx97x"] Mar 12 13:36:24 crc kubenswrapper[4778]: I0312 13:36:24.192351 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-566c4d5fc-zx97x" Mar 12 13:36:24 crc kubenswrapper[4778]: I0312 13:36:24.222405 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kj5b\" (UniqueName: \"kubernetes.io/projected/8a67d4b7-d8eb-40f4-b51d-62e92c6042c1-kube-api-access-4kj5b\") pod \"neutron-566c4d5fc-zx97x\" (UID: \"8a67d4b7-d8eb-40f4-b51d-62e92c6042c1\") " pod="openstack/neutron-566c4d5fc-zx97x" Mar 12 13:36:24 crc kubenswrapper[4778]: I0312 13:36:24.222769 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a67d4b7-d8eb-40f4-b51d-62e92c6042c1-combined-ca-bundle\") pod \"neutron-566c4d5fc-zx97x\" (UID: \"8a67d4b7-d8eb-40f4-b51d-62e92c6042c1\") " pod="openstack/neutron-566c4d5fc-zx97x" Mar 12 13:36:24 crc kubenswrapper[4778]: I0312 13:36:24.222812 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8a67d4b7-d8eb-40f4-b51d-62e92c6042c1-httpd-config\") pod \"neutron-566c4d5fc-zx97x\" (UID: \"8a67d4b7-d8eb-40f4-b51d-62e92c6042c1\") " pod="openstack/neutron-566c4d5fc-zx97x" Mar 12 13:36:24 crc kubenswrapper[4778]: I0312 13:36:24.222838 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/8a67d4b7-d8eb-40f4-b51d-62e92c6042c1-ovndb-tls-certs\") pod \"neutron-566c4d5fc-zx97x\" (UID: \"8a67d4b7-d8eb-40f4-b51d-62e92c6042c1\") " pod="openstack/neutron-566c4d5fc-zx97x" Mar 12 13:36:24 crc kubenswrapper[4778]: I0312 13:36:24.222890 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e1d8894-7234-40d0-b42a-9d7ab1ce638a-internal-tls-certs\") pod \"neutron-769c65dfd5-frvxx\" (UID: \"2e1d8894-7234-40d0-b42a-9d7ab1ce638a\") " pod="openstack/neutron-769c65dfd5-frvxx" Mar 12 13:36:24 crc kubenswrapper[4778]: I0312 13:36:24.222986 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e1d8894-7234-40d0-b42a-9d7ab1ce638a-ovndb-tls-certs\") pod \"neutron-769c65dfd5-frvxx\" (UID: \"2e1d8894-7234-40d0-b42a-9d7ab1ce638a\") " pod="openstack/neutron-769c65dfd5-frvxx" Mar 12 13:36:24 crc kubenswrapper[4778]: I0312 13:36:24.223013 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e1d8894-7234-40d0-b42a-9d7ab1ce638a-combined-ca-bundle\") pod \"neutron-769c65dfd5-frvxx\" (UID: \"2e1d8894-7234-40d0-b42a-9d7ab1ce638a\") " pod="openstack/neutron-769c65dfd5-frvxx" Mar 12 13:36:24 crc kubenswrapper[4778]: I0312 13:36:24.223047 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqjss\" (UniqueName: \"kubernetes.io/projected/2e1d8894-7234-40d0-b42a-9d7ab1ce638a-kube-api-access-nqjss\") pod \"neutron-769c65dfd5-frvxx\" (UID: \"2e1d8894-7234-40d0-b42a-9d7ab1ce638a\") " pod="openstack/neutron-769c65dfd5-frvxx" Mar 12 13:36:24 crc kubenswrapper[4778]: I0312 13:36:24.223163 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/secret/2e1d8894-7234-40d0-b42a-9d7ab1ce638a-config\") pod \"neutron-769c65dfd5-frvxx\" (UID: \"2e1d8894-7234-40d0-b42a-9d7ab1ce638a\") " pod="openstack/neutron-769c65dfd5-frvxx" Mar 12 13:36:24 crc kubenswrapper[4778]: I0312 13:36:24.223251 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a67d4b7-d8eb-40f4-b51d-62e92c6042c1-internal-tls-certs\") pod \"neutron-566c4d5fc-zx97x\" (UID: \"8a67d4b7-d8eb-40f4-b51d-62e92c6042c1\") " pod="openstack/neutron-566c4d5fc-zx97x" Mar 12 13:36:24 crc kubenswrapper[4778]: I0312 13:36:24.223284 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a67d4b7-d8eb-40f4-b51d-62e92c6042c1-public-tls-certs\") pod \"neutron-566c4d5fc-zx97x\" (UID: \"8a67d4b7-d8eb-40f4-b51d-62e92c6042c1\") " pod="openstack/neutron-566c4d5fc-zx97x" Mar 12 13:36:24 crc kubenswrapper[4778]: I0312 13:36:24.223316 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8a67d4b7-d8eb-40f4-b51d-62e92c6042c1-config\") pod \"neutron-566c4d5fc-zx97x\" (UID: \"8a67d4b7-d8eb-40f4-b51d-62e92c6042c1\") " pod="openstack/neutron-566c4d5fc-zx97x" Mar 12 13:36:24 crc kubenswrapper[4778]: I0312 13:36:24.223354 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2e1d8894-7234-40d0-b42a-9d7ab1ce638a-httpd-config\") pod \"neutron-769c65dfd5-frvxx\" (UID: \"2e1d8894-7234-40d0-b42a-9d7ab1ce638a\") " pod="openstack/neutron-769c65dfd5-frvxx" Mar 12 13:36:24 crc kubenswrapper[4778]: I0312 13:36:24.223407 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/2e1d8894-7234-40d0-b42a-9d7ab1ce638a-public-tls-certs\") pod \"neutron-769c65dfd5-frvxx\" (UID: \"2e1d8894-7234-40d0-b42a-9d7ab1ce638a\") " pod="openstack/neutron-769c65dfd5-frvxx" Mar 12 13:36:24 crc kubenswrapper[4778]: I0312 13:36:24.234135 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-566c4d5fc-zx97x"] Mar 12 13:36:24 crc kubenswrapper[4778]: I0312 13:36:24.325432 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8a67d4b7-d8eb-40f4-b51d-62e92c6042c1-config\") pod \"neutron-566c4d5fc-zx97x\" (UID: \"8a67d4b7-d8eb-40f4-b51d-62e92c6042c1\") " pod="openstack/neutron-566c4d5fc-zx97x" Mar 12 13:36:24 crc kubenswrapper[4778]: I0312 13:36:24.325518 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2e1d8894-7234-40d0-b42a-9d7ab1ce638a-httpd-config\") pod \"neutron-769c65dfd5-frvxx\" (UID: \"2e1d8894-7234-40d0-b42a-9d7ab1ce638a\") " pod="openstack/neutron-769c65dfd5-frvxx" Mar 12 13:36:24 crc kubenswrapper[4778]: I0312 13:36:24.325566 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e1d8894-7234-40d0-b42a-9d7ab1ce638a-public-tls-certs\") pod \"neutron-769c65dfd5-frvxx\" (UID: \"2e1d8894-7234-40d0-b42a-9d7ab1ce638a\") " pod="openstack/neutron-769c65dfd5-frvxx" Mar 12 13:36:24 crc kubenswrapper[4778]: I0312 13:36:24.325600 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kj5b\" (UniqueName: \"kubernetes.io/projected/8a67d4b7-d8eb-40f4-b51d-62e92c6042c1-kube-api-access-4kj5b\") pod \"neutron-566c4d5fc-zx97x\" (UID: \"8a67d4b7-d8eb-40f4-b51d-62e92c6042c1\") " pod="openstack/neutron-566c4d5fc-zx97x" Mar 12 13:36:24 crc kubenswrapper[4778]: I0312 13:36:24.325628 4778 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a67d4b7-d8eb-40f4-b51d-62e92c6042c1-combined-ca-bundle\") pod \"neutron-566c4d5fc-zx97x\" (UID: \"8a67d4b7-d8eb-40f4-b51d-62e92c6042c1\") " pod="openstack/neutron-566c4d5fc-zx97x" Mar 12 13:36:24 crc kubenswrapper[4778]: I0312 13:36:24.325666 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a67d4b7-d8eb-40f4-b51d-62e92c6042c1-ovndb-tls-certs\") pod \"neutron-566c4d5fc-zx97x\" (UID: \"8a67d4b7-d8eb-40f4-b51d-62e92c6042c1\") " pod="openstack/neutron-566c4d5fc-zx97x" Mar 12 13:36:24 crc kubenswrapper[4778]: I0312 13:36:24.325693 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8a67d4b7-d8eb-40f4-b51d-62e92c6042c1-httpd-config\") pod \"neutron-566c4d5fc-zx97x\" (UID: \"8a67d4b7-d8eb-40f4-b51d-62e92c6042c1\") " pod="openstack/neutron-566c4d5fc-zx97x" Mar 12 13:36:24 crc kubenswrapper[4778]: I0312 13:36:24.325736 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e1d8894-7234-40d0-b42a-9d7ab1ce638a-internal-tls-certs\") pod \"neutron-769c65dfd5-frvxx\" (UID: \"2e1d8894-7234-40d0-b42a-9d7ab1ce638a\") " pod="openstack/neutron-769c65dfd5-frvxx" Mar 12 13:36:24 crc kubenswrapper[4778]: I0312 13:36:24.325825 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e1d8894-7234-40d0-b42a-9d7ab1ce638a-ovndb-tls-certs\") pod \"neutron-769c65dfd5-frvxx\" (UID: \"2e1d8894-7234-40d0-b42a-9d7ab1ce638a\") " pod="openstack/neutron-769c65dfd5-frvxx" Mar 12 13:36:24 crc kubenswrapper[4778]: I0312 13:36:24.325852 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2e1d8894-7234-40d0-b42a-9d7ab1ce638a-combined-ca-bundle\") pod \"neutron-769c65dfd5-frvxx\" (UID: \"2e1d8894-7234-40d0-b42a-9d7ab1ce638a\") " pod="openstack/neutron-769c65dfd5-frvxx"
Mar 12 13:36:24 crc kubenswrapper[4778]: I0312 13:36:24.325885 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqjss\" (UniqueName: \"kubernetes.io/projected/2e1d8894-7234-40d0-b42a-9d7ab1ce638a-kube-api-access-nqjss\") pod \"neutron-769c65dfd5-frvxx\" (UID: \"2e1d8894-7234-40d0-b42a-9d7ab1ce638a\") " pod="openstack/neutron-769c65dfd5-frvxx"
Mar 12 13:36:24 crc kubenswrapper[4778]: I0312 13:36:24.326033 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2e1d8894-7234-40d0-b42a-9d7ab1ce638a-config\") pod \"neutron-769c65dfd5-frvxx\" (UID: \"2e1d8894-7234-40d0-b42a-9d7ab1ce638a\") " pod="openstack/neutron-769c65dfd5-frvxx"
Mar 12 13:36:24 crc kubenswrapper[4778]: I0312 13:36:24.326148 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a67d4b7-d8eb-40f4-b51d-62e92c6042c1-internal-tls-certs\") pod \"neutron-566c4d5fc-zx97x\" (UID: \"8a67d4b7-d8eb-40f4-b51d-62e92c6042c1\") " pod="openstack/neutron-566c4d5fc-zx97x"
Mar 12 13:36:24 crc kubenswrapper[4778]: I0312 13:36:24.326203 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a67d4b7-d8eb-40f4-b51d-62e92c6042c1-public-tls-certs\") pod \"neutron-566c4d5fc-zx97x\" (UID: \"8a67d4b7-d8eb-40f4-b51d-62e92c6042c1\") " pod="openstack/neutron-566c4d5fc-zx97x"
Mar 12 13:36:24 crc kubenswrapper[4778]: I0312 13:36:24.332703 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8a67d4b7-d8eb-40f4-b51d-62e92c6042c1-config\") pod \"neutron-566c4d5fc-zx97x\" (UID: \"8a67d4b7-d8eb-40f4-b51d-62e92c6042c1\") " pod="openstack/neutron-566c4d5fc-zx97x"
Mar 12 13:36:24 crc kubenswrapper[4778]: I0312 13:36:24.334604 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e1d8894-7234-40d0-b42a-9d7ab1ce638a-ovndb-tls-certs\") pod \"neutron-769c65dfd5-frvxx\" (UID: \"2e1d8894-7234-40d0-b42a-9d7ab1ce638a\") " pod="openstack/neutron-769c65dfd5-frvxx"
Mar 12 13:36:24 crc kubenswrapper[4778]: I0312 13:36:24.342111 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2e1d8894-7234-40d0-b42a-9d7ab1ce638a-httpd-config\") pod \"neutron-769c65dfd5-frvxx\" (UID: \"2e1d8894-7234-40d0-b42a-9d7ab1ce638a\") " pod="openstack/neutron-769c65dfd5-frvxx"
Mar 12 13:36:24 crc kubenswrapper[4778]: I0312 13:36:24.343850 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e1d8894-7234-40d0-b42a-9d7ab1ce638a-public-tls-certs\") pod \"neutron-769c65dfd5-frvxx\" (UID: \"2e1d8894-7234-40d0-b42a-9d7ab1ce638a\") " pod="openstack/neutron-769c65dfd5-frvxx"
Mar 12 13:36:24 crc kubenswrapper[4778]: I0312 13:36:24.343877 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8a67d4b7-d8eb-40f4-b51d-62e92c6042c1-httpd-config\") pod \"neutron-566c4d5fc-zx97x\" (UID: \"8a67d4b7-d8eb-40f4-b51d-62e92c6042c1\") " pod="openstack/neutron-566c4d5fc-zx97x"
Mar 12 13:36:24 crc kubenswrapper[4778]: I0312 13:36:24.346518 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kj5b\" (UniqueName: \"kubernetes.io/projected/8a67d4b7-d8eb-40f4-b51d-62e92c6042c1-kube-api-access-4kj5b\") pod \"neutron-566c4d5fc-zx97x\" (UID: \"8a67d4b7-d8eb-40f4-b51d-62e92c6042c1\") " pod="openstack/neutron-566c4d5fc-zx97x"
Mar 12 13:36:24 crc kubenswrapper[4778]: I0312 13:36:24.346542 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a67d4b7-d8eb-40f4-b51d-62e92c6042c1-internal-tls-certs\") pod \"neutron-566c4d5fc-zx97x\" (UID: \"8a67d4b7-d8eb-40f4-b51d-62e92c6042c1\") " pod="openstack/neutron-566c4d5fc-zx97x"
Mar 12 13:36:24 crc kubenswrapper[4778]: I0312 13:36:24.347176 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e1d8894-7234-40d0-b42a-9d7ab1ce638a-combined-ca-bundle\") pod \"neutron-769c65dfd5-frvxx\" (UID: \"2e1d8894-7234-40d0-b42a-9d7ab1ce638a\") " pod="openstack/neutron-769c65dfd5-frvxx"
Mar 12 13:36:24 crc kubenswrapper[4778]: I0312 13:36:24.353466 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a67d4b7-d8eb-40f4-b51d-62e92c6042c1-ovndb-tls-certs\") pod \"neutron-566c4d5fc-zx97x\" (UID: \"8a67d4b7-d8eb-40f4-b51d-62e92c6042c1\") " pod="openstack/neutron-566c4d5fc-zx97x"
Mar 12 13:36:24 crc kubenswrapper[4778]: I0312 13:36:24.355493 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a67d4b7-d8eb-40f4-b51d-62e92c6042c1-public-tls-certs\") pod \"neutron-566c4d5fc-zx97x\" (UID: \"8a67d4b7-d8eb-40f4-b51d-62e92c6042c1\") " pod="openstack/neutron-566c4d5fc-zx97x"
Mar 12 13:36:24 crc kubenswrapper[4778]: I0312 13:36:24.361904 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqjss\" (UniqueName: \"kubernetes.io/projected/2e1d8894-7234-40d0-b42a-9d7ab1ce638a-kube-api-access-nqjss\") pod \"neutron-769c65dfd5-frvxx\" (UID: \"2e1d8894-7234-40d0-b42a-9d7ab1ce638a\") " pod="openstack/neutron-769c65dfd5-frvxx"
Mar 12 13:36:24 crc kubenswrapper[4778]: I0312 13:36:24.372923 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e1d8894-7234-40d0-b42a-9d7ab1ce638a-internal-tls-certs\") pod \"neutron-769c65dfd5-frvxx\" (UID: \"2e1d8894-7234-40d0-b42a-9d7ab1ce638a\") " pod="openstack/neutron-769c65dfd5-frvxx"
Mar 12 13:36:24 crc kubenswrapper[4778]: I0312 13:36:24.375072 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a67d4b7-d8eb-40f4-b51d-62e92c6042c1-combined-ca-bundle\") pod \"neutron-566c4d5fc-zx97x\" (UID: \"8a67d4b7-d8eb-40f4-b51d-62e92c6042c1\") " pod="openstack/neutron-566c4d5fc-zx97x"
Mar 12 13:36:24 crc kubenswrapper[4778]: I0312 13:36:24.400670 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2e1d8894-7234-40d0-b42a-9d7ab1ce638a-config\") pod \"neutron-769c65dfd5-frvxx\" (UID: \"2e1d8894-7234-40d0-b42a-9d7ab1ce638a\") " pod="openstack/neutron-769c65dfd5-frvxx"
Mar 12 13:36:24 crc kubenswrapper[4778]: I0312 13:36:24.497512 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Mar 12 13:36:24 crc kubenswrapper[4778]: I0312 13:36:24.497839 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="7733a48b-2bc4-4372-a222-37bb8ea04b6d" containerName="nova-cell0-conductor-conductor" containerID="cri-o://7873b03bdc080777c3f95848a3cb2368217a2ebb6bed5cf0ae4dec3d3c66d731" gracePeriod=30
Mar 12 13:36:24 crc kubenswrapper[4778]: I0312 13:36:24.519969 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 12 13:36:24 crc kubenswrapper[4778]: I0312 13:36:24.520380 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="9c07bd9a-becb-4422-a881-5de27a8e8e56" containerName="nova-scheduler-scheduler" containerID="cri-o://76612676e8ec4eca2c187a9ab03eca1247f93e79ade8e00d7568f9fdf3aca549" gracePeriod=30
Mar 12 13:36:24 crc kubenswrapper[4778]: I0312 13:36:24.569974 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Mar 12 13:36:24 crc kubenswrapper[4778]: I0312 13:36:24.570271 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="37d70066-6a42-4486-a487-e27b3ab3a61b" containerName="nova-api-log" containerID="cri-o://2b4aa266ad205b7e0a6d8899547a75fe40c64017eb43d076bced61bb7cc36c19" gracePeriod=30
Mar 12 13:36:24 crc kubenswrapper[4778]: I0312 13:36:24.570410 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="37d70066-6a42-4486-a487-e27b3ab3a61b" containerName="nova-api-api" containerID="cri-o://6fb0ea63ecde6cfac6694eb778a0f0043874e52ed36561d1c373be870defe193" gracePeriod=30
Mar 12 13:36:24 crc kubenswrapper[4778]: I0312 13:36:24.595847 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-769c65dfd5-frvxx"
Mar 12 13:36:24 crc kubenswrapper[4778]: I0312 13:36:24.597470 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-566c4d5fc-zx97x"
Mar 12 13:36:25 crc kubenswrapper[4778]: I0312 13:36:25.607775 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Mar 12 13:36:25 crc kubenswrapper[4778]: I0312 13:36:25.641978 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-69b6dc4885-z4h9m"]
Mar 12 13:36:25 crc kubenswrapper[4778]: I0312 13:36:25.734226 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Mar 12 13:36:25 crc kubenswrapper[4778]: I0312 13:36:25.745377 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e5f2bac2-0571-44d8-ba4e-c006600506a5" containerName="nova-metadata-log" containerID="cri-o://3f33e9e86bb02fc44b869bbdad27f6457624a82e4496ad8d8db76de0c3d1fb4c" gracePeriod=30
Mar 12 13:36:25 crc kubenswrapper[4778]: I0312 13:36:25.745858 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e5f2bac2-0571-44d8-ba4e-c006600506a5" containerName="nova-metadata-metadata" containerID="cri-o://1938dab200355bd40396968513a110679aaccecc8babb8b2a8c4c460989f58ed" gracePeriod=30
Mar 12 13:36:25 crc kubenswrapper[4778]: I0312 13:36:25.793323 4778 generic.go:334] "Generic (PLEG): container finished" podID="37d70066-6a42-4486-a487-e27b3ab3a61b" containerID="2b4aa266ad205b7e0a6d8899547a75fe40c64017eb43d076bced61bb7cc36c19" exitCode=143
Mar 12 13:36:25 crc kubenswrapper[4778]: I0312 13:36:25.793381 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"37d70066-6a42-4486-a487-e27b3ab3a61b","Type":"ContainerDied","Data":"2b4aa266ad205b7e0a6d8899547a75fe40c64017eb43d076bced61bb7cc36c19"}
Mar 12 13:36:25 crc kubenswrapper[4778]: I0312 13:36:25.803336 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-1"]
Mar 12 13:36:25 crc kubenswrapper[4778]: I0312 13:36:25.830591 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1"
Mar 12 13:36:25 crc kubenswrapper[4778]: I0312 13:36:25.892829 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-1"]
Mar 12 13:36:25 crc kubenswrapper[4778]: I0312 13:36:25.928164 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-metadata-0"]
Mar 12 13:36:25 crc kubenswrapper[4778]: I0312 13:36:25.936458 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-metadata-0"
Mar 12 13:36:25 crc kubenswrapper[4778]: I0312 13:36:25.940051 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-metadata-config-data"
Mar 12 13:36:25 crc kubenswrapper[4778]: I0312 13:36:25.968459 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5z5t\" (UniqueName: \"kubernetes.io/projected/f0341d80-4327-4c9e-bc11-0cddbc6eab66-kube-api-access-t5z5t\") pod \"nova-api-1\" (UID: \"f0341d80-4327-4c9e-bc11-0cddbc6eab66\") " pod="openstack/nova-api-1"
Mar 12 13:36:25 crc kubenswrapper[4778]: I0312 13:36:25.978241 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0341d80-4327-4c9e-bc11-0cddbc6eab66-internal-tls-certs\") pod \"nova-api-1\" (UID: \"f0341d80-4327-4c9e-bc11-0cddbc6eab66\") " pod="openstack/nova-api-1"
Mar 12 13:36:25 crc kubenswrapper[4778]: I0312 13:36:25.978350 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0341d80-4327-4c9e-bc11-0cddbc6eab66-logs\") pod \"nova-api-1\" (UID: \"f0341d80-4327-4c9e-bc11-0cddbc6eab66\") " pod="openstack/nova-api-1"
Mar 12 13:36:25 crc kubenswrapper[4778]: I0312 13:36:25.978410 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0341d80-4327-4c9e-bc11-0cddbc6eab66-public-tls-certs\") pod \"nova-api-1\" (UID: \"f0341d80-4327-4c9e-bc11-0cddbc6eab66\") " pod="openstack/nova-api-1"
Mar 12 13:36:25 crc kubenswrapper[4778]: I0312 13:36:25.978442 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0341d80-4327-4c9e-bc11-0cddbc6eab66-combined-ca-bundle\") pod \"nova-api-1\" (UID: \"f0341d80-4327-4c9e-bc11-0cddbc6eab66\") " pod="openstack/nova-api-1"
Mar 12 13:36:25 crc kubenswrapper[4778]: I0312 13:36:25.978679 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0341d80-4327-4c9e-bc11-0cddbc6eab66-config-data\") pod \"nova-api-1\" (UID: \"f0341d80-4327-4c9e-bc11-0cddbc6eab66\") " pod="openstack/nova-api-1"
Mar 12 13:36:25 crc kubenswrapper[4778]: I0312 13:36:25.985030 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-metadata-0"]
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.022198 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6796c46585-tk69s"]
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.081863 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0341d80-4327-4c9e-bc11-0cddbc6eab66-public-tls-certs\") pod \"nova-api-1\" (UID: \"f0341d80-4327-4c9e-bc11-0cddbc6eab66\") " pod="openstack/nova-api-1"
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.081927 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a29f7b0-d851-4967-802b-91e301ce82f2-logs\") pod \"nova-cell1-metadata-0\" (UID: \"5a29f7b0-d851-4967-802b-91e301ce82f2\") " pod="openstack/nova-cell1-metadata-0"
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.081962 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0341d80-4327-4c9e-bc11-0cddbc6eab66-combined-ca-bundle\") pod \"nova-api-1\" (UID: \"f0341d80-4327-4c9e-bc11-0cddbc6eab66\") " pod="openstack/nova-api-1"
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.082045 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a29f7b0-d851-4967-802b-91e301ce82f2-config-data\") pod \"nova-cell1-metadata-0\" (UID: \"5a29f7b0-d851-4967-802b-91e301ce82f2\") " pod="openstack/nova-cell1-metadata-0"
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.082091 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0341d80-4327-4c9e-bc11-0cddbc6eab66-config-data\") pod \"nova-api-1\" (UID: \"f0341d80-4327-4c9e-bc11-0cddbc6eab66\") " pod="openstack/nova-api-1"
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.082163 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5z5t\" (UniqueName: \"kubernetes.io/projected/f0341d80-4327-4c9e-bc11-0cddbc6eab66-kube-api-access-t5z5t\") pod \"nova-api-1\" (UID: \"f0341d80-4327-4c9e-bc11-0cddbc6eab66\") " pod="openstack/nova-api-1"
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.082397 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a29f7b0-d851-4967-802b-91e301ce82f2-combined-ca-bundle\") pod \"nova-cell1-metadata-0\" (UID: \"5a29f7b0-d851-4967-802b-91e301ce82f2\") " pod="openstack/nova-cell1-metadata-0"
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.082454 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0341d80-4327-4c9e-bc11-0cddbc6eab66-internal-tls-certs\") pod \"nova-api-1\" (UID: \"f0341d80-4327-4c9e-bc11-0cddbc6eab66\") " pod="openstack/nova-api-1"
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.082495 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4cqk\" (UniqueName: \"kubernetes.io/projected/5a29f7b0-d851-4967-802b-91e301ce82f2-kube-api-access-m4cqk\") pod \"nova-cell1-metadata-0\" (UID: \"5a29f7b0-d851-4967-802b-91e301ce82f2\") " pod="openstack/nova-cell1-metadata-0"
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.082547 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0341d80-4327-4c9e-bc11-0cddbc6eab66-logs\") pod \"nova-api-1\" (UID: \"f0341d80-4327-4c9e-bc11-0cddbc6eab66\") " pod="openstack/nova-api-1"
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.084500 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0341d80-4327-4c9e-bc11-0cddbc6eab66-logs\") pod \"nova-api-1\" (UID: \"f0341d80-4327-4c9e-bc11-0cddbc6eab66\") " pod="openstack/nova-api-1"
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.103020 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6796c46585-tk69s"
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.112807 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0341d80-4327-4c9e-bc11-0cddbc6eab66-internal-tls-certs\") pod \"nova-api-1\" (UID: \"f0341d80-4327-4c9e-bc11-0cddbc6eab66\") " pod="openstack/nova-api-1"
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.134564 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0341d80-4327-4c9e-bc11-0cddbc6eab66-config-data\") pod \"nova-api-1\" (UID: \"f0341d80-4327-4c9e-bc11-0cddbc6eab66\") " pod="openstack/nova-api-1"
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.134859 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0341d80-4327-4c9e-bc11-0cddbc6eab66-public-tls-certs\") pod \"nova-api-1\" (UID: \"f0341d80-4327-4c9e-bc11-0cddbc6eab66\") " pod="openstack/nova-api-1"
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.153779 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5z5t\" (UniqueName: \"kubernetes.io/projected/f0341d80-4327-4c9e-bc11-0cddbc6eab66-kube-api-access-t5z5t\") pod \"nova-api-1\" (UID: \"f0341d80-4327-4c9e-bc11-0cddbc6eab66\") " pod="openstack/nova-api-1"
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.188097 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0341d80-4327-4c9e-bc11-0cddbc6eab66-combined-ca-bundle\") pod \"nova-api-1\" (UID: \"f0341d80-4327-4c9e-bc11-0cddbc6eab66\") " pod="openstack/nova-api-1"
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.189426 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a29f7b0-d851-4967-802b-91e301ce82f2-combined-ca-bundle\") pod \"nova-cell1-metadata-0\" (UID: \"5a29f7b0-d851-4967-802b-91e301ce82f2\") " pod="openstack/nova-cell1-metadata-0"
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.192097 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7be5cd74-51aa-4be4-bee0-bcd4414e988c-config\") pod \"dnsmasq-dns-6796c46585-tk69s\" (UID: \"7be5cd74-51aa-4be4-bee0-bcd4414e988c\") " pod="openstack/dnsmasq-dns-6796c46585-tk69s"
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.192323 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4cqk\" (UniqueName: \"kubernetes.io/projected/5a29f7b0-d851-4967-802b-91e301ce82f2-kube-api-access-m4cqk\") pod \"nova-cell1-metadata-0\" (UID: \"5a29f7b0-d851-4967-802b-91e301ce82f2\") " pod="openstack/nova-cell1-metadata-0"
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.193369 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a29f7b0-d851-4967-802b-91e301ce82f2-logs\") pod \"nova-cell1-metadata-0\" (UID: \"5a29f7b0-d851-4967-802b-91e301ce82f2\") " pod="openstack/nova-cell1-metadata-0"
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.193489 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7be5cd74-51aa-4be4-bee0-bcd4414e988c-dns-svc\") pod \"dnsmasq-dns-6796c46585-tk69s\" (UID: \"7be5cd74-51aa-4be4-bee0-bcd4414e988c\") " pod="openstack/dnsmasq-dns-6796c46585-tk69s"
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.202126 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a29f7b0-d851-4967-802b-91e301ce82f2-logs\") pod \"nova-cell1-metadata-0\" (UID: \"5a29f7b0-d851-4967-802b-91e301ce82f2\") " pod="openstack/nova-cell1-metadata-0"
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.224765 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc9pp\" (UniqueName: \"kubernetes.io/projected/7be5cd74-51aa-4be4-bee0-bcd4414e988c-kube-api-access-mc9pp\") pod \"dnsmasq-dns-6796c46585-tk69s\" (UID: \"7be5cd74-51aa-4be4-bee0-bcd4414e988c\") " pod="openstack/dnsmasq-dns-6796c46585-tk69s"
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.225010 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a29f7b0-d851-4967-802b-91e301ce82f2-config-data\") pod \"nova-cell1-metadata-0\" (UID: \"5a29f7b0-d851-4967-802b-91e301ce82f2\") " pod="openstack/nova-cell1-metadata-0"
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.225299 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7be5cd74-51aa-4be4-bee0-bcd4414e988c-ovsdbserver-nb\") pod \"dnsmasq-dns-6796c46585-tk69s\" (UID: \"7be5cd74-51aa-4be4-bee0-bcd4414e988c\") " pod="openstack/dnsmasq-dns-6796c46585-tk69s"
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.225392 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7be5cd74-51aa-4be4-bee0-bcd4414e988c-ovsdbserver-sb\") pod \"dnsmasq-dns-6796c46585-tk69s\" (UID: \"7be5cd74-51aa-4be4-bee0-bcd4414e988c\") " pod="openstack/dnsmasq-dns-6796c46585-tk69s"
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.225515 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7be5cd74-51aa-4be4-bee0-bcd4414e988c-dns-swift-storage-0\") pod \"dnsmasq-dns-6796c46585-tk69s\" (UID: \"7be5cd74-51aa-4be4-bee0-bcd4414e988c\") " pod="openstack/dnsmasq-dns-6796c46585-tk69s"
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.246067 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4cqk\" (UniqueName: \"kubernetes.io/projected/5a29f7b0-d851-4967-802b-91e301ce82f2-kube-api-access-m4cqk\") pod \"nova-cell1-metadata-0\" (UID: \"5a29f7b0-d851-4967-802b-91e301ce82f2\") " pod="openstack/nova-cell1-metadata-0"
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.246896 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1"
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.254434 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a29f7b0-d851-4967-802b-91e301ce82f2-config-data\") pod \"nova-cell1-metadata-0\" (UID: \"5a29f7b0-d851-4967-802b-91e301ce82f2\") " pod="openstack/nova-cell1-metadata-0"
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.292431 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6796c46585-tk69s"]
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.306040 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a29f7b0-d851-4967-802b-91e301ce82f2-combined-ca-bundle\") pod \"nova-cell1-metadata-0\" (UID: \"5a29f7b0-d851-4967-802b-91e301ce82f2\") " pod="openstack/nova-cell1-metadata-0"
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.330926 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7be5cd74-51aa-4be4-bee0-bcd4414e988c-config\") pod \"dnsmasq-dns-6796c46585-tk69s\" (UID: \"7be5cd74-51aa-4be4-bee0-bcd4414e988c\") " pod="openstack/dnsmasq-dns-6796c46585-tk69s"
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.331371 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-metadata-0"
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.333115 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7be5cd74-51aa-4be4-bee0-bcd4414e988c-dns-svc\") pod \"dnsmasq-dns-6796c46585-tk69s\" (UID: \"7be5cd74-51aa-4be4-bee0-bcd4414e988c\") " pod="openstack/dnsmasq-dns-6796c46585-tk69s"
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.334031 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mc9pp\" (UniqueName: \"kubernetes.io/projected/7be5cd74-51aa-4be4-bee0-bcd4414e988c-kube-api-access-mc9pp\") pod \"dnsmasq-dns-6796c46585-tk69s\" (UID: \"7be5cd74-51aa-4be4-bee0-bcd4414e988c\") " pod="openstack/dnsmasq-dns-6796c46585-tk69s"
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.334520 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7be5cd74-51aa-4be4-bee0-bcd4414e988c-ovsdbserver-nb\") pod \"dnsmasq-dns-6796c46585-tk69s\" (UID: \"7be5cd74-51aa-4be4-bee0-bcd4414e988c\") " pod="openstack/dnsmasq-dns-6796c46585-tk69s"
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.334573 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7be5cd74-51aa-4be4-bee0-bcd4414e988c-ovsdbserver-sb\") pod \"dnsmasq-dns-6796c46585-tk69s\" (UID: \"7be5cd74-51aa-4be4-bee0-bcd4414e988c\") " pod="openstack/dnsmasq-dns-6796c46585-tk69s"
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.334621 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7be5cd74-51aa-4be4-bee0-bcd4414e988c-dns-swift-storage-0\") pod \"dnsmasq-dns-6796c46585-tk69s\" (UID: \"7be5cd74-51aa-4be4-bee0-bcd4414e988c\") " pod="openstack/dnsmasq-dns-6796c46585-tk69s"
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.336291 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7be5cd74-51aa-4be4-bee0-bcd4414e988c-dns-swift-storage-0\") pod \"dnsmasq-dns-6796c46585-tk69s\" (UID: \"7be5cd74-51aa-4be4-bee0-bcd4414e988c\") " pod="openstack/dnsmasq-dns-6796c46585-tk69s"
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.342801 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7be5cd74-51aa-4be4-bee0-bcd4414e988c-ovsdbserver-nb\") pod \"dnsmasq-dns-6796c46585-tk69s\" (UID: \"7be5cd74-51aa-4be4-bee0-bcd4414e988c\") " pod="openstack/dnsmasq-dns-6796c46585-tk69s"
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.343409 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7be5cd74-51aa-4be4-bee0-bcd4414e988c-ovsdbserver-sb\") pod \"dnsmasq-dns-6796c46585-tk69s\" (UID: \"7be5cd74-51aa-4be4-bee0-bcd4414e988c\") " pod="openstack/dnsmasq-dns-6796c46585-tk69s"
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.345019 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7be5cd74-51aa-4be4-bee0-bcd4414e988c-config\") pod \"dnsmasq-dns-6796c46585-tk69s\" (UID: \"7be5cd74-51aa-4be4-bee0-bcd4414e988c\") " pod="openstack/dnsmasq-dns-6796c46585-tk69s"
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.373028 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7be5cd74-51aa-4be4-bee0-bcd4414e988c-dns-svc\") pod \"dnsmasq-dns-6796c46585-tk69s\" (UID: \"7be5cd74-51aa-4be4-bee0-bcd4414e988c\") " pod="openstack/dnsmasq-dns-6796c46585-tk69s"
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.376929 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mc9pp\" (UniqueName: \"kubernetes.io/projected/7be5cd74-51aa-4be4-bee0-bcd4414e988c-kube-api-access-mc9pp\") pod \"dnsmasq-dns-6796c46585-tk69s\" (UID: \"7be5cd74-51aa-4be4-bee0-bcd4414e988c\") " pod="openstack/dnsmasq-dns-6796c46585-tk69s"
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.396237 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6796c46585-tk69s"]
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.396970 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6796c46585-tk69s"
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.439484 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f59c7d6f9-7f6bj"]
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.441336 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f59c7d6f9-7f6bj"
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.477123 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f59c7d6f9-7f6bj"]
Mar 12 13:36:26 crc kubenswrapper[4778]: E0312 13:36:26.534036 4778 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7873b03bdc080777c3f95848a3cb2368217a2ebb6bed5cf0ae4dec3d3c66d731" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Mar 12 13:36:26 crc kubenswrapper[4778]: E0312 13:36:26.537282 4778 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7873b03bdc080777c3f95848a3cb2368217a2ebb6bed5cf0ae4dec3d3c66d731" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.539494 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdn6k\" (UniqueName: \"kubernetes.io/projected/a677b5ba-f5d3-4310-ab6d-af0505e82a00-kube-api-access-tdn6k\") pod \"dnsmasq-dns-6f59c7d6f9-7f6bj\" (UID: \"a677b5ba-f5d3-4310-ab6d-af0505e82a00\") " pod="openstack/dnsmasq-dns-6f59c7d6f9-7f6bj"
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.539549 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a677b5ba-f5d3-4310-ab6d-af0505e82a00-ovsdbserver-nb\") pod \"dnsmasq-dns-6f59c7d6f9-7f6bj\" (UID: \"a677b5ba-f5d3-4310-ab6d-af0505e82a00\") " pod="openstack/dnsmasq-dns-6f59c7d6f9-7f6bj"
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.539593 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a677b5ba-f5d3-4310-ab6d-af0505e82a00-dns-swift-storage-0\") pod \"dnsmasq-dns-6f59c7d6f9-7f6bj\" (UID: \"a677b5ba-f5d3-4310-ab6d-af0505e82a00\") " pod="openstack/dnsmasq-dns-6f59c7d6f9-7f6bj"
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.539628 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a677b5ba-f5d3-4310-ab6d-af0505e82a00-ovsdbserver-sb\") pod \"dnsmasq-dns-6f59c7d6f9-7f6bj\" (UID: \"a677b5ba-f5d3-4310-ab6d-af0505e82a00\") " pod="openstack/dnsmasq-dns-6f59c7d6f9-7f6bj"
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.540049 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a677b5ba-f5d3-4310-ab6d-af0505e82a00-config\") pod \"dnsmasq-dns-6f59c7d6f9-7f6bj\" (UID: \"a677b5ba-f5d3-4310-ab6d-af0505e82a00\") " pod="openstack/dnsmasq-dns-6f59c7d6f9-7f6bj"
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.540224 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a677b5ba-f5d3-4310-ab6d-af0505e82a00-dns-svc\") pod \"dnsmasq-dns-6f59c7d6f9-7f6bj\" (UID: \"a677b5ba-f5d3-4310-ab6d-af0505e82a00\") " pod="openstack/dnsmasq-dns-6f59c7d6f9-7f6bj"
Mar 12 13:36:26 crc kubenswrapper[4778]: E0312 13:36:26.547662 4778 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7873b03bdc080777c3f95848a3cb2368217a2ebb6bed5cf0ae4dec3d3c66d731" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Mar 12 13:36:26 crc kubenswrapper[4778]: E0312 13:36:26.547750 4778 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="7733a48b-2bc4-4372-a222-37bb8ea04b6d" containerName="nova-cell0-conductor-conductor"
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.647990 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdn6k\" (UniqueName: \"kubernetes.io/projected/a677b5ba-f5d3-4310-ab6d-af0505e82a00-kube-api-access-tdn6k\") pod \"dnsmasq-dns-6f59c7d6f9-7f6bj\" (UID: \"a677b5ba-f5d3-4310-ab6d-af0505e82a00\") " pod="openstack/dnsmasq-dns-6f59c7d6f9-7f6bj"
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.648059 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a677b5ba-f5d3-4310-ab6d-af0505e82a00-ovsdbserver-nb\") pod \"dnsmasq-dns-6f59c7d6f9-7f6bj\" (UID: \"a677b5ba-f5d3-4310-ab6d-af0505e82a00\") " pod="openstack/dnsmasq-dns-6f59c7d6f9-7f6bj"
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.648119 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a677b5ba-f5d3-4310-ab6d-af0505e82a00-dns-swift-storage-0\") pod \"dnsmasq-dns-6f59c7d6f9-7f6bj\" (UID: \"a677b5ba-f5d3-4310-ab6d-af0505e82a00\") " pod="openstack/dnsmasq-dns-6f59c7d6f9-7f6bj"
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.648145 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a677b5ba-f5d3-4310-ab6d-af0505e82a00-ovsdbserver-sb\") pod \"dnsmasq-dns-6f59c7d6f9-7f6bj\" (UID: \"a677b5ba-f5d3-4310-ab6d-af0505e82a00\") " pod="openstack/dnsmasq-dns-6f59c7d6f9-7f6bj"
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.648243 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a677b5ba-f5d3-4310-ab6d-af0505e82a00-config\") pod \"dnsmasq-dns-6f59c7d6f9-7f6bj\" (UID: \"a677b5ba-f5d3-4310-ab6d-af0505e82a00\") " pod="openstack/dnsmasq-dns-6f59c7d6f9-7f6bj"
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.648298 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a677b5ba-f5d3-4310-ab6d-af0505e82a00-dns-svc\") pod \"dnsmasq-dns-6f59c7d6f9-7f6bj\" (UID: \"a677b5ba-f5d3-4310-ab6d-af0505e82a00\") " pod="openstack/dnsmasq-dns-6f59c7d6f9-7f6bj"
Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.650718 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a677b5ba-f5d3-4310-ab6d-af0505e82a00-ovsdbserver-nb\") pod \"dnsmasq-dns-6f59c7d6f9-7f6bj\" (UID: \"a677b5ba-f5d3-4310-ab6d-af0505e82a00\") "
pod="openstack/dnsmasq-dns-6f59c7d6f9-7f6bj" Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.650831 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a677b5ba-f5d3-4310-ab6d-af0505e82a00-ovsdbserver-sb\") pod \"dnsmasq-dns-6f59c7d6f9-7f6bj\" (UID: \"a677b5ba-f5d3-4310-ab6d-af0505e82a00\") " pod="openstack/dnsmasq-dns-6f59c7d6f9-7f6bj" Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.649442 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a677b5ba-f5d3-4310-ab6d-af0505e82a00-dns-svc\") pod \"dnsmasq-dns-6f59c7d6f9-7f6bj\" (UID: \"a677b5ba-f5d3-4310-ab6d-af0505e82a00\") " pod="openstack/dnsmasq-dns-6f59c7d6f9-7f6bj" Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.651031 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a677b5ba-f5d3-4310-ab6d-af0505e82a00-config\") pod \"dnsmasq-dns-6f59c7d6f9-7f6bj\" (UID: \"a677b5ba-f5d3-4310-ab6d-af0505e82a00\") " pod="openstack/dnsmasq-dns-6f59c7d6f9-7f6bj" Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.651148 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a677b5ba-f5d3-4310-ab6d-af0505e82a00-dns-swift-storage-0\") pod \"dnsmasq-dns-6f59c7d6f9-7f6bj\" (UID: \"a677b5ba-f5d3-4310-ab6d-af0505e82a00\") " pod="openstack/dnsmasq-dns-6f59c7d6f9-7f6bj" Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.684885 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdn6k\" (UniqueName: \"kubernetes.io/projected/a677b5ba-f5d3-4310-ab6d-af0505e82a00-kube-api-access-tdn6k\") pod \"dnsmasq-dns-6f59c7d6f9-7f6bj\" (UID: \"a677b5ba-f5d3-4310-ab6d-af0505e82a00\") " pod="openstack/dnsmasq-dns-6f59c7d6f9-7f6bj" Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 
13:36:26.689322 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-769c65dfd5-frvxx"] Mar 12 13:36:26 crc kubenswrapper[4778]: W0312 13:36:26.703067 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e1d8894_7234_40d0_b42a_9d7ab1ce638a.slice/crio-e2d775202948449d32b3e6f8c66299f17943aeca0f3f57c7b82f6f8283ff7095 WatchSource:0}: Error finding container e2d775202948449d32b3e6f8c66299f17943aeca0f3f57c7b82f6f8283ff7095: Status 404 returned error can't find the container with id e2d775202948449d32b3e6f8c66299f17943aeca0f3f57c7b82f6f8283ff7095 Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.764603 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f59c7d6f9-7f6bj" Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.792604 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-566c4d5fc-zx97x"] Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.836027 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-69b6dc4885-z4h9m" event={"ID":"16dea17b-eaa4-4bbf-8895-c077b3e28d66","Type":"ContainerStarted","Data":"6fcce949c75d6468b3b14633abe6749e40c68a0baec2473ee7c3eb866c1fae55"} Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.836408 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-69b6dc4885-z4h9m" event={"ID":"16dea17b-eaa4-4bbf-8895-c077b3e28d66","Type":"ContainerStarted","Data":"29a78f7251ac4b00bd393718f7a5b34295387492a9278080f3668886800c6546"} Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.839025 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-69b6dc4885-z4h9m" Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.844260 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-566c4d5fc-zx97x" 
event={"ID":"8a67d4b7-d8eb-40f4-b51d-62e92c6042c1","Type":"ContainerStarted","Data":"6a6275efc50e33030547560ccee86be3358a86873c23828c4bdef28e7387e6ec"} Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.854023 4778 generic.go:334] "Generic (PLEG): container finished" podID="e5f2bac2-0571-44d8-ba4e-c006600506a5" containerID="3f33e9e86bb02fc44b869bbdad27f6457624a82e4496ad8d8db76de0c3d1fb4c" exitCode=143 Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.854145 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e5f2bac2-0571-44d8-ba4e-c006600506a5","Type":"ContainerDied","Data":"3f33e9e86bb02fc44b869bbdad27f6457624a82e4496ad8d8db76de0c3d1fb4c"} Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.856196 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="e28e8bc2-4b60-447e-b78e-99f53f0559e9" containerName="nova-cell1-conductor-conductor" containerID="cri-o://17f6ecc58bfeead13bd408fa3389fcd5b9ea0127020d364f507d2277de0d4c6f" gracePeriod=30 Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.856814 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-769c65dfd5-frvxx" event={"ID":"2e1d8894-7234-40d0-b42a-9d7ab1ce638a","Type":"ContainerStarted","Data":"e2d775202948449d32b3e6f8c66299f17943aeca0f3f57c7b82f6f8283ff7095"} Mar 12 13:36:26 crc kubenswrapper[4778]: I0312 13:36:26.889925 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-69b6dc4885-z4h9m" podStartSLOduration=3.889903946 podStartE2EDuration="3.889903946s" podCreationTimestamp="2026-03-12 13:36:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:36:26.888546048 +0000 UTC m=+1605.337241454" watchObservedRunningTime="2026-03-12 13:36:26.889903946 +0000 UTC m=+1605.338599342" Mar 12 13:36:27 crc kubenswrapper[4778]: 
I0312 13:36:27.011147 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-1"] Mar 12 13:36:27 crc kubenswrapper[4778]: I0312 13:36:27.386831 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6796c46585-tk69s"] Mar 12 13:36:27 crc kubenswrapper[4778]: I0312 13:36:27.418856 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-metadata-0"] Mar 12 13:36:27 crc kubenswrapper[4778]: W0312 13:36:27.481332 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7be5cd74_51aa_4be4_bee0_bcd4414e988c.slice/crio-9611fdd9f50898df9bc39508d9d61ff6e00eab70329522bacb24a65cad5f58f2 WatchSource:0}: Error finding container 9611fdd9f50898df9bc39508d9d61ff6e00eab70329522bacb24a65cad5f58f2: Status 404 returned error can't find the container with id 9611fdd9f50898df9bc39508d9d61ff6e00eab70329522bacb24a65cad5f58f2 Mar 12 13:36:27 crc kubenswrapper[4778]: I0312 13:36:27.616419 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f59c7d6f9-7f6bj"] Mar 12 13:36:27 crc kubenswrapper[4778]: E0312 13:36:27.702471 4778 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7733a48b_2bc4_4372_a222_37bb8ea04b6d.slice/crio-7873b03bdc080777c3f95848a3cb2368217a2ebb6bed5cf0ae4dec3d3c66d731.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7733a48b_2bc4_4372_a222_37bb8ea04b6d.slice/crio-conmon-7873b03bdc080777c3f95848a3cb2368217a2ebb6bed5cf0ae4dec3d3c66d731.scope\": RecentStats: unable to find data in memory cache]" Mar 12 13:36:27 crc kubenswrapper[4778]: I0312 13:36:27.939374 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 12 13:36:27 crc kubenswrapper[4778]: I0312 13:36:27.940963 4778 generic.go:334] "Generic (PLEG): container finished" podID="37d70066-6a42-4486-a487-e27b3ab3a61b" containerID="6fb0ea63ecde6cfac6694eb778a0f0043874e52ed36561d1c373be870defe193" exitCode=0 Mar 12 13:36:27 crc kubenswrapper[4778]: I0312 13:36:27.941091 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"37d70066-6a42-4486-a487-e27b3ab3a61b","Type":"ContainerDied","Data":"6fb0ea63ecde6cfac6694eb778a0f0043874e52ed36561d1c373be870defe193"} Mar 12 13:36:27 crc kubenswrapper[4778]: I0312 13:36:27.974680 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6796c46585-tk69s" event={"ID":"7be5cd74-51aa-4be4-bee0-bcd4414e988c","Type":"ContainerStarted","Data":"9611fdd9f50898df9bc39508d9d61ff6e00eab70329522bacb24a65cad5f58f2"} Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.005907 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xht95\" (UniqueName: \"kubernetes.io/projected/7733a48b-2bc4-4372-a222-37bb8ea04b6d-kube-api-access-xht95\") pod \"7733a48b-2bc4-4372-a222-37bb8ea04b6d\" (UID: \"7733a48b-2bc4-4372-a222-37bb8ea04b6d\") " Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.005998 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7733a48b-2bc4-4372-a222-37bb8ea04b6d-combined-ca-bundle\") pod \"7733a48b-2bc4-4372-a222-37bb8ea04b6d\" (UID: \"7733a48b-2bc4-4372-a222-37bb8ea04b6d\") " Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.006203 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7733a48b-2bc4-4372-a222-37bb8ea04b6d-config-data\") pod \"7733a48b-2bc4-4372-a222-37bb8ea04b6d\" (UID: \"7733a48b-2bc4-4372-a222-37bb8ea04b6d\") 
" Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.024574 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7733a48b-2bc4-4372-a222-37bb8ea04b6d-kube-api-access-xht95" (OuterVolumeSpecName: "kube-api-access-xht95") pod "7733a48b-2bc4-4372-a222-37bb8ea04b6d" (UID: "7733a48b-2bc4-4372-a222-37bb8ea04b6d"). InnerVolumeSpecName "kube-api-access-xht95". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.052121 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-metadata-0" event={"ID":"5a29f7b0-d851-4967-802b-91e301ce82f2","Type":"ContainerStarted","Data":"7686b13a63fb7303c82944b69b0f74a27bb498a86b8a8900db3aa379dab6a697"} Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.081236 4778 generic.go:334] "Generic (PLEG): container finished" podID="7733a48b-2bc4-4372-a222-37bb8ea04b6d" containerID="7873b03bdc080777c3f95848a3cb2368217a2ebb6bed5cf0ae4dec3d3c66d731" exitCode=0 Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.081358 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7733a48b-2bc4-4372-a222-37bb8ea04b6d","Type":"ContainerDied","Data":"7873b03bdc080777c3f95848a3cb2368217a2ebb6bed5cf0ae4dec3d3c66d731"} Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.081453 4778 scope.go:117] "RemoveContainer" containerID="7873b03bdc080777c3f95848a3cb2368217a2ebb6bed5cf0ae4dec3d3c66d731" Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.081647 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.120385 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xht95\" (UniqueName: \"kubernetes.io/projected/7733a48b-2bc4-4372-a222-37bb8ea04b6d-kube-api-access-xht95\") on node \"crc\" DevicePath \"\"" Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.128667 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1" event={"ID":"f0341d80-4327-4c9e-bc11-0cddbc6eab66","Type":"ContainerStarted","Data":"f01be3f5a015795b5f40dbe57ae0105ae7b657a51492498554f4913e2d946654"} Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.128709 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1" event={"ID":"f0341d80-4327-4c9e-bc11-0cddbc6eab66","Type":"ContainerStarted","Data":"23b212b868d43a2b20b4b326a5d6e115ae1dce56fb344a9b0a15889b554a0452"} Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.143370 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7733a48b-2bc4-4372-a222-37bb8ea04b6d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7733a48b-2bc4-4372-a222-37bb8ea04b6d" (UID: "7733a48b-2bc4-4372-a222-37bb8ea04b6d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.167961 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-566c4d5fc-zx97x" event={"ID":"8a67d4b7-d8eb-40f4-b51d-62e92c6042c1","Type":"ContainerStarted","Data":"580e7c05bfc29763be4beca9483310913855b264cda84343e89b99ee6458bee4"} Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.178606 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7733a48b-2bc4-4372-a222-37bb8ea04b6d-config-data" (OuterVolumeSpecName: "config-data") pod "7733a48b-2bc4-4372-a222-37bb8ea04b6d" (UID: "7733a48b-2bc4-4372-a222-37bb8ea04b6d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.186527 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f59c7d6f9-7f6bj" event={"ID":"a677b5ba-f5d3-4310-ab6d-af0505e82a00","Type":"ContainerStarted","Data":"048876f254d8481a39bc4ba587f25ae5e4007ace7976831d743f8095461c0872"} Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.191839 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-769c65dfd5-frvxx" event={"ID":"2e1d8894-7234-40d0-b42a-9d7ab1ce638a","Type":"ContainerStarted","Data":"eb476b810cfe28d9d73622ddd41bf8c8fc415530e6ad67a1faffa32c9bd043ba"} Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.191914 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-769c65dfd5-frvxx" Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.233131 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-769c65dfd5-frvxx" podStartSLOduration=4.233116418 podStartE2EDuration="4.233116418s" podCreationTimestamp="2026-03-12 13:36:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-12 13:36:28.2328198 +0000 UTC m=+1606.681515196" watchObservedRunningTime="2026-03-12 13:36:28.233116418 +0000 UTC m=+1606.681811814" Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.233387 4778 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7733a48b-2bc4-4372-a222-37bb8ea04b6d-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.241680 4778 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7733a48b-2bc4-4372-a222-37bb8ea04b6d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.535462 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.562472 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.562587 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.562683 4778 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.563868 4778 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"fbdf0765f9c2ff5952a8a2a2b43d61ef771ac404cabeb86051f9ffe5a9fd882e"} pod="openshift-machine-config-operator/machine-config-daemon-2qx88" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.563956 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" containerID="cri-o://fbdf0765f9c2ff5952a8a2a2b43d61ef771ac404cabeb86051f9ffe5a9fd882e" gracePeriod=600 Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.588555 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.606452 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.635261 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 12 13:36:28 crc kubenswrapper[4778]: E0312 13:36:28.635978 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37d70066-6a42-4486-a487-e27b3ab3a61b" containerName="nova-api-log" Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.635992 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="37d70066-6a42-4486-a487-e27b3ab3a61b" containerName="nova-api-log" Mar 12 13:36:28 crc kubenswrapper[4778]: E0312 13:36:28.636007 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7733a48b-2bc4-4372-a222-37bb8ea04b6d" containerName="nova-cell0-conductor-conductor" Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.636014 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="7733a48b-2bc4-4372-a222-37bb8ea04b6d" containerName="nova-cell0-conductor-conductor" Mar 12 13:36:28 crc kubenswrapper[4778]: E0312 
13:36:28.636040 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37d70066-6a42-4486-a487-e27b3ab3a61b" containerName="nova-api-api" Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.636047 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="37d70066-6a42-4486-a487-e27b3ab3a61b" containerName="nova-api-api" Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.636229 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="37d70066-6a42-4486-a487-e27b3ab3a61b" containerName="nova-api-api" Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.636253 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="37d70066-6a42-4486-a487-e27b3ab3a61b" containerName="nova-api-log" Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.636384 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="7733a48b-2bc4-4372-a222-37bb8ea04b6d" containerName="nova-cell0-conductor-conductor" Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.637058 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.641095 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.651038 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.653770 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37d70066-6a42-4486-a487-e27b3ab3a61b-config-data\") pod \"37d70066-6a42-4486-a487-e27b3ab3a61b\" (UID: \"37d70066-6a42-4486-a487-e27b3ab3a61b\") " Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.653864 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37d70066-6a42-4486-a487-e27b3ab3a61b-logs\") pod \"37d70066-6a42-4486-a487-e27b3ab3a61b\" (UID: \"37d70066-6a42-4486-a487-e27b3ab3a61b\") " Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.653905 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rctsf\" (UniqueName: \"kubernetes.io/projected/37d70066-6a42-4486-a487-e27b3ab3a61b-kube-api-access-rctsf\") pod \"37d70066-6a42-4486-a487-e27b3ab3a61b\" (UID: \"37d70066-6a42-4486-a487-e27b3ab3a61b\") " Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.653950 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/37d70066-6a42-4486-a487-e27b3ab3a61b-internal-tls-certs\") pod \"37d70066-6a42-4486-a487-e27b3ab3a61b\" (UID: \"37d70066-6a42-4486-a487-e27b3ab3a61b\") " Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.654034 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/37d70066-6a42-4486-a487-e27b3ab3a61b-combined-ca-bundle\") pod \"37d70066-6a42-4486-a487-e27b3ab3a61b\" (UID: \"37d70066-6a42-4486-a487-e27b3ab3a61b\") " Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.654127 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/37d70066-6a42-4486-a487-e27b3ab3a61b-public-tls-certs\") pod \"37d70066-6a42-4486-a487-e27b3ab3a61b\" (UID: \"37d70066-6a42-4486-a487-e27b3ab3a61b\") " Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.672084 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37d70066-6a42-4486-a487-e27b3ab3a61b-logs" (OuterVolumeSpecName: "logs") pod "37d70066-6a42-4486-a487-e27b3ab3a61b" (UID: "37d70066-6a42-4486-a487-e27b3ab3a61b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.682151 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37d70066-6a42-4486-a487-e27b3ab3a61b-kube-api-access-rctsf" (OuterVolumeSpecName: "kube-api-access-rctsf") pod "37d70066-6a42-4486-a487-e27b3ab3a61b" (UID: "37d70066-6a42-4486-a487-e27b3ab3a61b"). InnerVolumeSpecName "kube-api-access-rctsf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:36:28 crc kubenswrapper[4778]: E0312 13:36:28.707830 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 13:36:28 crc kubenswrapper[4778]: E0312 13:36:28.719913 4778 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 76612676e8ec4eca2c187a9ab03eca1247f93e79ade8e00d7568f9fdf3aca549 is running failed: container process not found" containerID="76612676e8ec4eca2c187a9ab03eca1247f93e79ade8e00d7568f9fdf3aca549" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 12 13:36:28 crc kubenswrapper[4778]: E0312 13:36:28.720329 4778 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 76612676e8ec4eca2c187a9ab03eca1247f93e79ade8e00d7568f9fdf3aca549 is running failed: container process not found" containerID="76612676e8ec4eca2c187a9ab03eca1247f93e79ade8e00d7568f9fdf3aca549" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 12 13:36:28 crc kubenswrapper[4778]: E0312 13:36:28.723785 4778 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 76612676e8ec4eca2c187a9ab03eca1247f93e79ade8e00d7568f9fdf3aca549 is running failed: container process not found" containerID="76612676e8ec4eca2c187a9ab03eca1247f93e79ade8e00d7568f9fdf3aca549" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 12 13:36:28 crc kubenswrapper[4778]: E0312 13:36:28.723817 4778 
prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 76612676e8ec4eca2c187a9ab03eca1247f93e79ade8e00d7568f9fdf3aca549 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="9c07bd9a-becb-4422-a881-5de27a8e8e56" containerName="nova-scheduler-scheduler" Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.763438 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/929bb450-949d-4f4f-9c21-de6c3fe32927-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"929bb450-949d-4f4f-9c21-de6c3fe32927\") " pod="openstack/nova-cell0-conductor-0" Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.763594 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs2dn\" (UniqueName: \"kubernetes.io/projected/929bb450-949d-4f4f-9c21-de6c3fe32927-kube-api-access-zs2dn\") pod \"nova-cell0-conductor-0\" (UID: \"929bb450-949d-4f4f-9c21-de6c3fe32927\") " pod="openstack/nova-cell0-conductor-0" Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.763665 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/929bb450-949d-4f4f-9c21-de6c3fe32927-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"929bb450-949d-4f4f-9c21-de6c3fe32927\") " pod="openstack/nova-cell0-conductor-0" Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.763721 4778 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37d70066-6a42-4486-a487-e27b3ab3a61b-logs\") on node \"crc\" DevicePath \"\"" Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.763732 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rctsf\" (UniqueName: 
\"kubernetes.io/projected/37d70066-6a42-4486-a487-e27b3ab3a61b-kube-api-access-rctsf\") on node \"crc\" DevicePath \"\"" Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.793735 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.865866 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c07bd9a-becb-4422-a881-5de27a8e8e56-config-data\") pod \"9c07bd9a-becb-4422-a881-5de27a8e8e56\" (UID: \"9c07bd9a-becb-4422-a881-5de27a8e8e56\") " Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.866027 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c07bd9a-becb-4422-a881-5de27a8e8e56-combined-ca-bundle\") pod \"9c07bd9a-becb-4422-a881-5de27a8e8e56\" (UID: \"9c07bd9a-becb-4422-a881-5de27a8e8e56\") " Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.866173 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxrx6\" (UniqueName: \"kubernetes.io/projected/9c07bd9a-becb-4422-a881-5de27a8e8e56-kube-api-access-nxrx6\") pod \"9c07bd9a-becb-4422-a881-5de27a8e8e56\" (UID: \"9c07bd9a-becb-4422-a881-5de27a8e8e56\") " Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.866833 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/929bb450-949d-4f4f-9c21-de6c3fe32927-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"929bb450-949d-4f4f-9c21-de6c3fe32927\") " pod="openstack/nova-cell0-conductor-0" Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.866997 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zs2dn\" (UniqueName: 
\"kubernetes.io/projected/929bb450-949d-4f4f-9c21-de6c3fe32927-kube-api-access-zs2dn\") pod \"nova-cell0-conductor-0\" (UID: \"929bb450-949d-4f4f-9c21-de6c3fe32927\") " pod="openstack/nova-cell0-conductor-0" Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.867078 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/929bb450-949d-4f4f-9c21-de6c3fe32927-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"929bb450-949d-4f4f-9c21-de6c3fe32927\") " pod="openstack/nova-cell0-conductor-0" Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.875926 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/929bb450-949d-4f4f-9c21-de6c3fe32927-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"929bb450-949d-4f4f-9c21-de6c3fe32927\") " pod="openstack/nova-cell0-conductor-0" Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.900062 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/929bb450-949d-4f4f-9c21-de6c3fe32927-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"929bb450-949d-4f4f-9c21-de6c3fe32927\") " pod="openstack/nova-cell0-conductor-0" Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.901682 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zs2dn\" (UniqueName: \"kubernetes.io/projected/929bb450-949d-4f4f-9c21-de6c3fe32927-kube-api-access-zs2dn\") pod \"nova-cell0-conductor-0\" (UID: \"929bb450-949d-4f4f-9c21-de6c3fe32927\") " pod="openstack/nova-cell0-conductor-0" Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.903618 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c07bd9a-becb-4422-a881-5de27a8e8e56-kube-api-access-nxrx6" (OuterVolumeSpecName: "kube-api-access-nxrx6") pod 
"9c07bd9a-becb-4422-a881-5de27a8e8e56" (UID: "9c07bd9a-becb-4422-a881-5de27a8e8e56"). InnerVolumeSpecName "kube-api-access-nxrx6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.910014 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37d70066-6a42-4486-a487-e27b3ab3a61b-config-data" (OuterVolumeSpecName: "config-data") pod "37d70066-6a42-4486-a487-e27b3ab3a61b" (UID: "37d70066-6a42-4486-a487-e27b3ab3a61b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.910131 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37d70066-6a42-4486-a487-e27b3ab3a61b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "37d70066-6a42-4486-a487-e27b3ab3a61b" (UID: "37d70066-6a42-4486-a487-e27b3ab3a61b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.917816 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37d70066-6a42-4486-a487-e27b3ab3a61b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "37d70066-6a42-4486-a487-e27b3ab3a61b" (UID: "37d70066-6a42-4486-a487-e27b3ab3a61b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.918937 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c07bd9a-becb-4422-a881-5de27a8e8e56-config-data" (OuterVolumeSpecName: "config-data") pod "9c07bd9a-becb-4422-a881-5de27a8e8e56" (UID: "9c07bd9a-becb-4422-a881-5de27a8e8e56"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.920832 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37d70066-6a42-4486-a487-e27b3ab3a61b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "37d70066-6a42-4486-a487-e27b3ab3a61b" (UID: "37d70066-6a42-4486-a487-e27b3ab3a61b"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.946583 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c07bd9a-becb-4422-a881-5de27a8e8e56-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9c07bd9a-becb-4422-a881-5de27a8e8e56" (UID: "9c07bd9a-becb-4422-a881-5de27a8e8e56"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.970663 4778 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37d70066-6a42-4486-a487-e27b3ab3a61b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.970712 4778 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/37d70066-6a42-4486-a487-e27b3ab3a61b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.970727 4778 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c07bd9a-becb-4422-a881-5de27a8e8e56-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.970738 4778 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37d70066-6a42-4486-a487-e27b3ab3a61b-config-data\") on node \"crc\" 
DevicePath \"\"" Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.970749 4778 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c07bd9a-becb-4422-a881-5de27a8e8e56-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.970760 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxrx6\" (UniqueName: \"kubernetes.io/projected/9c07bd9a-becb-4422-a881-5de27a8e8e56-kube-api-access-nxrx6\") on node \"crc\" DevicePath \"\"" Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.970774 4778 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/37d70066-6a42-4486-a487-e27b3ab3a61b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 13:36:28 crc kubenswrapper[4778]: I0312 13:36:28.982976 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.204570 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"37d70066-6a42-4486-a487-e27b3ab3a61b","Type":"ContainerDied","Data":"c412c8241a59192093777ad55c60d09316d57b1f207c8116b8342fac0e609d85"} Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.204913 4778 scope.go:117] "RemoveContainer" containerID="6fb0ea63ecde6cfac6694eb778a0f0043874e52ed36561d1c373be870defe193" Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.204836 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.218282 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-metadata-0" event={"ID":"5a29f7b0-d851-4967-802b-91e301ce82f2","Type":"ContainerStarted","Data":"dd6bb746e6fc0c601d6ae2bcf58c264b68fb62b2faf56722bc914a19843b5961"} Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.218353 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-metadata-0" event={"ID":"5a29f7b0-d851-4967-802b-91e301ce82f2","Type":"ContainerStarted","Data":"417f17c877a12302e92668f628e55146a1d2400c3af2e99761c6921a618abcb0"} Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.222471 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1" event={"ID":"f0341d80-4327-4c9e-bc11-0cddbc6eab66","Type":"ContainerStarted","Data":"8c2261933274af98452e3406c6a1491fa7547083d2a42fd8ebde0430b36e07c4"} Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.225494 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-566c4d5fc-zx97x" event={"ID":"8a67d4b7-d8eb-40f4-b51d-62e92c6042c1","Type":"ContainerStarted","Data":"0a03291c72a2f101c8185bc5efeea52f3870fb1b63ef8d6feb2ae00287399781"} Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.226053 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-566c4d5fc-zx97x" Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.228266 4778 generic.go:334] "Generic (PLEG): container finished" podID="9c07bd9a-becb-4422-a881-5de27a8e8e56" containerID="76612676e8ec4eca2c187a9ab03eca1247f93e79ade8e00d7568f9fdf3aca549" exitCode=0 Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.228337 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"9c07bd9a-becb-4422-a881-5de27a8e8e56","Type":"ContainerDied","Data":"76612676e8ec4eca2c187a9ab03eca1247f93e79ade8e00d7568f9fdf3aca549"} Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.228361 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9c07bd9a-becb-4422-a881-5de27a8e8e56","Type":"ContainerDied","Data":"7c66f4ab481a2ffe8aa1f978637b76237ffd5a0742d58abdd29d2a665dd400d8"} Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.228429 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.235084 4778 generic.go:334] "Generic (PLEG): container finished" podID="a677b5ba-f5d3-4310-ab6d-af0505e82a00" containerID="0a63a4d47752d25e9f6c0d6aa9ed71121a4afe876250e6e10c1c1091bf2b8d8f" exitCode=0 Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.235352 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f59c7d6f9-7f6bj" event={"ID":"a677b5ba-f5d3-4310-ab6d-af0505e82a00","Type":"ContainerDied","Data":"0a63a4d47752d25e9f6c0d6aa9ed71121a4afe876250e6e10c1c1091bf2b8d8f"} Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.253392 4778 scope.go:117] "RemoveContainer" containerID="2b4aa266ad205b7e0a6d8899547a75fe40c64017eb43d076bced61bb7cc36c19" Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.256285 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-769c65dfd5-frvxx" event={"ID":"2e1d8894-7234-40d0-b42a-9d7ab1ce638a","Type":"ContainerStarted","Data":"1e1f7bdc1b1f277f7f73a02d00233e5689179238b97d5a12c4ec486b0a81ef94"} Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.262921 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-metadata-0" podStartSLOduration=4.262895318 podStartE2EDuration="4.262895318s" podCreationTimestamp="2026-03-12 13:36:25 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:36:29.244433244 +0000 UTC m=+1607.693128660" watchObservedRunningTime="2026-03-12 13:36:29.262895318 +0000 UTC m=+1607.711590734" Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.298730 4778 generic.go:334] "Generic (PLEG): container finished" podID="7be5cd74-51aa-4be4-bee0-bcd4414e988c" containerID="f7abc5ad094a49286dcbc9e9529cdeaa0cf757440a3faf4823fd72045e913f36" exitCode=0 Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.299278 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6796c46585-tk69s" event={"ID":"7be5cd74-51aa-4be4-bee0-bcd4414e988c","Type":"ContainerDied","Data":"f7abc5ad094a49286dcbc9e9529cdeaa0cf757440a3faf4823fd72045e913f36"} Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.336866 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.338458 4778 generic.go:334] "Generic (PLEG): container finished" podID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerID="fbdf0765f9c2ff5952a8a2a2b43d61ef771ac404cabeb86051f9ffe5a9fd882e" exitCode=0 Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.338564 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" event={"ID":"24438fc6-dab0-4a9e-8b97-2532da76d9cd","Type":"ContainerDied","Data":"fbdf0765f9c2ff5952a8a2a2b43d61ef771ac404cabeb86051f9ffe5a9fd882e"} Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.339177 4778 scope.go:117] "RemoveContainer" containerID="fbdf0765f9c2ff5952a8a2a2b43d61ef771ac404cabeb86051f9ffe5a9fd882e" Mar 12 13:36:29 crc kubenswrapper[4778]: E0312 13:36:29.339504 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.355764 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.402154 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-1" podStartSLOduration=4.402136247 podStartE2EDuration="4.402136247s" podCreationTimestamp="2026-03-12 13:36:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:36:29.322266142 +0000 UTC m=+1607.770961558" watchObservedRunningTime="2026-03-12 13:36:29.402136247 +0000 UTC m=+1607.850831643" Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.402277 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 12 13:36:29 crc kubenswrapper[4778]: E0312 13:36:29.402905 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c07bd9a-becb-4422-a881-5de27a8e8e56" containerName="nova-scheduler-scheduler" Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.402942 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c07bd9a-becb-4422-a881-5de27a8e8e56" containerName="nova-scheduler-scheduler" Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.403251 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c07bd9a-becb-4422-a881-5de27a8e8e56" containerName="nova-scheduler-scheduler" Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.404640 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.492956 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-566c4d5fc-zx97x" podStartSLOduration=5.492934343 podStartE2EDuration="5.492934343s" podCreationTimestamp="2026-03-12 13:36:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:36:29.353900979 +0000 UTC m=+1607.802596375" watchObservedRunningTime="2026-03-12 13:36:29.492934343 +0000 UTC m=+1607.941629729" Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.495318 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13b8e1df-5a8c-44de-b8e8-6c7efdb8bad4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"13b8e1df-5a8c-44de-b8e8-6c7efdb8bad4\") " pod="openstack/nova-api-0" Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.495960 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/13b8e1df-5a8c-44de-b8e8-6c7efdb8bad4-internal-tls-certs\") pod \"nova-api-0\" (UID: \"13b8e1df-5a8c-44de-b8e8-6c7efdb8bad4\") " pod="openstack/nova-api-0" Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.496155 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np6gx\" (UniqueName: \"kubernetes.io/projected/13b8e1df-5a8c-44de-b8e8-6c7efdb8bad4-kube-api-access-np6gx\") pod \"nova-api-0\" (UID: \"13b8e1df-5a8c-44de-b8e8-6c7efdb8bad4\") " pod="openstack/nova-api-0" Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.497299 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/13b8e1df-5a8c-44de-b8e8-6c7efdb8bad4-logs\") pod \"nova-api-0\" (UID: \"13b8e1df-5a8c-44de-b8e8-6c7efdb8bad4\") " pod="openstack/nova-api-0" Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.497483 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13b8e1df-5a8c-44de-b8e8-6c7efdb8bad4-config-data\") pod \"nova-api-0\" (UID: \"13b8e1df-5a8c-44de-b8e8-6c7efdb8bad4\") " pod="openstack/nova-api-0" Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.498056 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/13b8e1df-5a8c-44de-b8e8-6c7efdb8bad4-public-tls-certs\") pod \"nova-api-0\" (UID: \"13b8e1df-5a8c-44de-b8e8-6c7efdb8bad4\") " pod="openstack/nova-api-0" Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.504028 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.587453 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.601058 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/13b8e1df-5a8c-44de-b8e8-6c7efdb8bad4-public-tls-certs\") pod \"nova-api-0\" (UID: \"13b8e1df-5a8c-44de-b8e8-6c7efdb8bad4\") " pod="openstack/nova-api-0" Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.601135 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13b8e1df-5a8c-44de-b8e8-6c7efdb8bad4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"13b8e1df-5a8c-44de-b8e8-6c7efdb8bad4\") " pod="openstack/nova-api-0" Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.601173 4778 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/13b8e1df-5a8c-44de-b8e8-6c7efdb8bad4-internal-tls-certs\") pod \"nova-api-0\" (UID: \"13b8e1df-5a8c-44de-b8e8-6c7efdb8bad4\") " pod="openstack/nova-api-0" Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.601215 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-np6gx\" (UniqueName: \"kubernetes.io/projected/13b8e1df-5a8c-44de-b8e8-6c7efdb8bad4-kube-api-access-np6gx\") pod \"nova-api-0\" (UID: \"13b8e1df-5a8c-44de-b8e8-6c7efdb8bad4\") " pod="openstack/nova-api-0" Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.601324 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13b8e1df-5a8c-44de-b8e8-6c7efdb8bad4-logs\") pod \"nova-api-0\" (UID: \"13b8e1df-5a8c-44de-b8e8-6c7efdb8bad4\") " pod="openstack/nova-api-0" Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.601371 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13b8e1df-5a8c-44de-b8e8-6c7efdb8bad4-config-data\") pod \"nova-api-0\" (UID: \"13b8e1df-5a8c-44de-b8e8-6c7efdb8bad4\") " pod="openstack/nova-api-0" Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.608879 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13b8e1df-5a8c-44de-b8e8-6c7efdb8bad4-config-data\") pod \"nova-api-0\" (UID: \"13b8e1df-5a8c-44de-b8e8-6c7efdb8bad4\") " pod="openstack/nova-api-0" Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.610484 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13b8e1df-5a8c-44de-b8e8-6c7efdb8bad4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"13b8e1df-5a8c-44de-b8e8-6c7efdb8bad4\") " 
pod="openstack/nova-api-0" Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.615101 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/13b8e1df-5a8c-44de-b8e8-6c7efdb8bad4-internal-tls-certs\") pod \"nova-api-0\" (UID: \"13b8e1df-5a8c-44de-b8e8-6c7efdb8bad4\") " pod="openstack/nova-api-0" Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.616135 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13b8e1df-5a8c-44de-b8e8-6c7efdb8bad4-logs\") pod \"nova-api-0\" (UID: \"13b8e1df-5a8c-44de-b8e8-6c7efdb8bad4\") " pod="openstack/nova-api-0" Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.620771 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.624036 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/13b8e1df-5a8c-44de-b8e8-6c7efdb8bad4-public-tls-certs\") pod \"nova-api-0\" (UID: \"13b8e1df-5a8c-44de-b8e8-6c7efdb8bad4\") " pod="openstack/nova-api-0" Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.624828 4778 scope.go:117] "RemoveContainer" containerID="76612676e8ec4eca2c187a9ab03eca1247f93e79ade8e00d7568f9fdf3aca549" Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.640216 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-np6gx\" (UniqueName: \"kubernetes.io/projected/13b8e1df-5a8c-44de-b8e8-6c7efdb8bad4-kube-api-access-np6gx\") pod \"nova-api-0\" (UID: \"13b8e1df-5a8c-44de-b8e8-6c7efdb8bad4\") " pod="openstack/nova-api-0" Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.640721 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.643056 4778 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.646753 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.655509 4778 scope.go:117] "RemoveContainer" containerID="76612676e8ec4eca2c187a9ab03eca1247f93e79ade8e00d7568f9fdf3aca549" Mar 12 13:36:29 crc kubenswrapper[4778]: E0312 13:36:29.657110 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76612676e8ec4eca2c187a9ab03eca1247f93e79ade8e00d7568f9fdf3aca549\": container with ID starting with 76612676e8ec4eca2c187a9ab03eca1247f93e79ade8e00d7568f9fdf3aca549 not found: ID does not exist" containerID="76612676e8ec4eca2c187a9ab03eca1247f93e79ade8e00d7568f9fdf3aca549" Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.657147 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76612676e8ec4eca2c187a9ab03eca1247f93e79ade8e00d7568f9fdf3aca549"} err="failed to get container status \"76612676e8ec4eca2c187a9ab03eca1247f93e79ade8e00d7568f9fdf3aca549\": rpc error: code = NotFound desc = could not find container \"76612676e8ec4eca2c187a9ab03eca1247f93e79ade8e00d7568f9fdf3aca549\": container with ID starting with 76612676e8ec4eca2c187a9ab03eca1247f93e79ade8e00d7568f9fdf3aca549 not found: ID does not exist" Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.657167 4778 scope.go:117] "RemoveContainer" containerID="572aad6c3b1a3f7c9ef45b8b4feb0d367e7e7916d0ab8dd064e2b8ac87268c51" Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.703975 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f613745b-fe33-4918-9e0a-da2a59c55e33-config-data\") pod \"nova-scheduler-0\" (UID: \"f613745b-fe33-4918-9e0a-da2a59c55e33\") " 
pod="openstack/nova-scheduler-0" Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.704025 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f613745b-fe33-4918-9e0a-da2a59c55e33-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f613745b-fe33-4918-9e0a-da2a59c55e33\") " pod="openstack/nova-scheduler-0" Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.704046 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnx9t\" (UniqueName: \"kubernetes.io/projected/f613745b-fe33-4918-9e0a-da2a59c55e33-kube-api-access-fnx9t\") pod \"nova-scheduler-0\" (UID: \"f613745b-fe33-4918-9e0a-da2a59c55e33\") " pod="openstack/nova-scheduler-0" Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.713127 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.767779 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.786094 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-metadata-0"] Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.806391 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f613745b-fe33-4918-9e0a-da2a59c55e33-config-data\") pod \"nova-scheduler-0\" (UID: \"f613745b-fe33-4918-9e0a-da2a59c55e33\") " pod="openstack/nova-scheduler-0" Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.806449 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f613745b-fe33-4918-9e0a-da2a59c55e33-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f613745b-fe33-4918-9e0a-da2a59c55e33\") " 
pod="openstack/nova-scheduler-0" Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.806772 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnx9t\" (UniqueName: \"kubernetes.io/projected/f613745b-fe33-4918-9e0a-da2a59c55e33-kube-api-access-fnx9t\") pod \"nova-scheduler-0\" (UID: \"f613745b-fe33-4918-9e0a-da2a59c55e33\") " pod="openstack/nova-scheduler-0" Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.825758 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f613745b-fe33-4918-9e0a-da2a59c55e33-config-data\") pod \"nova-scheduler-0\" (UID: \"f613745b-fe33-4918-9e0a-da2a59c55e33\") " pod="openstack/nova-scheduler-0" Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.827147 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f613745b-fe33-4918-9e0a-da2a59c55e33-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f613745b-fe33-4918-9e0a-da2a59c55e33\") " pod="openstack/nova-scheduler-0" Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.835302 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnx9t\" (UniqueName: \"kubernetes.io/projected/f613745b-fe33-4918-9e0a-da2a59c55e33-kube-api-access-fnx9t\") pod \"nova-scheduler-0\" (UID: \"f613745b-fe33-4918-9e0a-da2a59c55e33\") " pod="openstack/nova-scheduler-0" Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.887301 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.953894 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6796c46585-tk69s" Mar 12 13:36:29 crc kubenswrapper[4778]: I0312 13:36:29.970206 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 12 13:36:30 crc kubenswrapper[4778]: I0312 13:36:30.025280 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7be5cd74-51aa-4be4-bee0-bcd4414e988c-config\") pod \"7be5cd74-51aa-4be4-bee0-bcd4414e988c\" (UID: \"7be5cd74-51aa-4be4-bee0-bcd4414e988c\") " Mar 12 13:36:30 crc kubenswrapper[4778]: I0312 13:36:30.025376 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mc9pp\" (UniqueName: \"kubernetes.io/projected/7be5cd74-51aa-4be4-bee0-bcd4414e988c-kube-api-access-mc9pp\") pod \"7be5cd74-51aa-4be4-bee0-bcd4414e988c\" (UID: \"7be5cd74-51aa-4be4-bee0-bcd4414e988c\") " Mar 12 13:36:30 crc kubenswrapper[4778]: I0312 13:36:30.025624 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7be5cd74-51aa-4be4-bee0-bcd4414e988c-dns-swift-storage-0\") pod \"7be5cd74-51aa-4be4-bee0-bcd4414e988c\" (UID: \"7be5cd74-51aa-4be4-bee0-bcd4414e988c\") " Mar 12 13:36:30 crc kubenswrapper[4778]: I0312 13:36:30.025807 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7be5cd74-51aa-4be4-bee0-bcd4414e988c-ovsdbserver-sb\") pod \"7be5cd74-51aa-4be4-bee0-bcd4414e988c\" (UID: \"7be5cd74-51aa-4be4-bee0-bcd4414e988c\") " Mar 12 13:36:30 crc kubenswrapper[4778]: I0312 13:36:30.025912 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7be5cd74-51aa-4be4-bee0-bcd4414e988c-ovsdbserver-nb\") pod \"7be5cd74-51aa-4be4-bee0-bcd4414e988c\" (UID: \"7be5cd74-51aa-4be4-bee0-bcd4414e988c\") " Mar 12 13:36:30 crc kubenswrapper[4778]: I0312 13:36:30.026071 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/7be5cd74-51aa-4be4-bee0-bcd4414e988c-dns-svc\") pod \"7be5cd74-51aa-4be4-bee0-bcd4414e988c\" (UID: \"7be5cd74-51aa-4be4-bee0-bcd4414e988c\") " Mar 12 13:36:30 crc kubenswrapper[4778]: I0312 13:36:30.045908 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7be5cd74-51aa-4be4-bee0-bcd4414e988c-kube-api-access-mc9pp" (OuterVolumeSpecName: "kube-api-access-mc9pp") pod "7be5cd74-51aa-4be4-bee0-bcd4414e988c" (UID: "7be5cd74-51aa-4be4-bee0-bcd4414e988c"). InnerVolumeSpecName "kube-api-access-mc9pp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:36:30 crc kubenswrapper[4778]: I0312 13:36:30.134596 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mc9pp\" (UniqueName: \"kubernetes.io/projected/7be5cd74-51aa-4be4-bee0-bcd4414e988c-kube-api-access-mc9pp\") on node \"crc\" DevicePath \"\"" Mar 12 13:36:30 crc kubenswrapper[4778]: I0312 13:36:30.232085 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7be5cd74-51aa-4be4-bee0-bcd4414e988c-config" (OuterVolumeSpecName: "config") pod "7be5cd74-51aa-4be4-bee0-bcd4414e988c" (UID: "7be5cd74-51aa-4be4-bee0-bcd4414e988c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:36:30 crc kubenswrapper[4778]: I0312 13:36:30.237139 4778 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7be5cd74-51aa-4be4-bee0-bcd4414e988c-config\") on node \"crc\" DevicePath \"\"" Mar 12 13:36:30 crc kubenswrapper[4778]: I0312 13:36:30.249351 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7be5cd74-51aa-4be4-bee0-bcd4414e988c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7be5cd74-51aa-4be4-bee0-bcd4414e988c" (UID: "7be5cd74-51aa-4be4-bee0-bcd4414e988c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:36:30 crc kubenswrapper[4778]: I0312 13:36:30.261053 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7be5cd74-51aa-4be4-bee0-bcd4414e988c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7be5cd74-51aa-4be4-bee0-bcd4414e988c" (UID: "7be5cd74-51aa-4be4-bee0-bcd4414e988c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:36:30 crc kubenswrapper[4778]: I0312 13:36:30.261092 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7be5cd74-51aa-4be4-bee0-bcd4414e988c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7be5cd74-51aa-4be4-bee0-bcd4414e988c" (UID: "7be5cd74-51aa-4be4-bee0-bcd4414e988c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:36:30 crc kubenswrapper[4778]: I0312 13:36:30.267157 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7be5cd74-51aa-4be4-bee0-bcd4414e988c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7be5cd74-51aa-4be4-bee0-bcd4414e988c" (UID: "7be5cd74-51aa-4be4-bee0-bcd4414e988c"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:36:30 crc kubenswrapper[4778]: I0312 13:36:30.296111 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37d70066-6a42-4486-a487-e27b3ab3a61b" path="/var/lib/kubelet/pods/37d70066-6a42-4486-a487-e27b3ab3a61b/volumes" Mar 12 13:36:30 crc kubenswrapper[4778]: I0312 13:36:30.297648 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7733a48b-2bc4-4372-a222-37bb8ea04b6d" path="/var/lib/kubelet/pods/7733a48b-2bc4-4372-a222-37bb8ea04b6d/volumes" Mar 12 13:36:30 crc kubenswrapper[4778]: I0312 13:36:30.298371 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c07bd9a-becb-4422-a881-5de27a8e8e56" path="/var/lib/kubelet/pods/9c07bd9a-becb-4422-a881-5de27a8e8e56/volumes" Mar 12 13:36:30 crc kubenswrapper[4778]: I0312 13:36:30.344209 4778 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7be5cd74-51aa-4be4-bee0-bcd4414e988c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 12 13:36:30 crc kubenswrapper[4778]: I0312 13:36:30.344248 4778 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7be5cd74-51aa-4be4-bee0-bcd4414e988c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 12 13:36:30 crc kubenswrapper[4778]: I0312 13:36:30.344262 4778 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7be5cd74-51aa-4be4-bee0-bcd4414e988c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 12 13:36:30 crc kubenswrapper[4778]: I0312 13:36:30.344273 4778 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7be5cd74-51aa-4be4-bee0-bcd4414e988c-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 12 13:36:30 crc kubenswrapper[4778]: I0312 13:36:30.365585 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 13:36:30 crc kubenswrapper[4778]: I0312 13:36:30.375699 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f59c7d6f9-7f6bj" event={"ID":"a677b5ba-f5d3-4310-ab6d-af0505e82a00","Type":"ContainerStarted","Data":"80b9a94e51ace133a39bb4f360454c37e2be50602309d428d0792de3b24d2efc"} Mar 12 13:36:30 crc kubenswrapper[4778]: I0312 13:36:30.377374 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f59c7d6f9-7f6bj" Mar 12 13:36:30 crc kubenswrapper[4778]: I0312 13:36:30.380017 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"929bb450-949d-4f4f-9c21-de6c3fe32927","Type":"ContainerStarted","Data":"52042b0769bcccf846e369aa04436e8cee2985c96f965a205aee13cc5d76845b"} Mar 12 13:36:30 crc kubenswrapper[4778]: I0312 13:36:30.380051 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"929bb450-949d-4f4f-9c21-de6c3fe32927","Type":"ContainerStarted","Data":"a745c0714a3158f1d8c67ba0596d6c8fcebec865205de5a0ab80dcef2a2db29a"} Mar 12 13:36:30 crc kubenswrapper[4778]: I0312 13:36:30.380990 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Mar 12 13:36:30 crc kubenswrapper[4778]: I0312 13:36:30.385377 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6796c46585-tk69s" event={"ID":"7be5cd74-51aa-4be4-bee0-bcd4414e988c","Type":"ContainerDied","Data":"9611fdd9f50898df9bc39508d9d61ff6e00eab70329522bacb24a65cad5f58f2"} Mar 12 13:36:30 crc kubenswrapper[4778]: I0312 13:36:30.385430 4778 scope.go:117] "RemoveContainer" containerID="f7abc5ad094a49286dcbc9e9529cdeaa0cf757440a3faf4823fd72045e913f36" Mar 12 13:36:30 crc kubenswrapper[4778]: I0312 13:36:30.385517 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6796c46585-tk69s" Mar 12 13:36:30 crc kubenswrapper[4778]: I0312 13:36:30.401469 4778 generic.go:334] "Generic (PLEG): container finished" podID="e5f2bac2-0571-44d8-ba4e-c006600506a5" containerID="1938dab200355bd40396968513a110679aaccecc8babb8b2a8c4c460989f58ed" exitCode=0 Mar 12 13:36:30 crc kubenswrapper[4778]: I0312 13:36:30.401831 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e5f2bac2-0571-44d8-ba4e-c006600506a5","Type":"ContainerDied","Data":"1938dab200355bd40396968513a110679aaccecc8babb8b2a8c4c460989f58ed"} Mar 12 13:36:30 crc kubenswrapper[4778]: I0312 13:36:30.401883 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e5f2bac2-0571-44d8-ba4e-c006600506a5","Type":"ContainerDied","Data":"63ddaf1ae0c88152b73f19a7bbf611a71857e4349ca72eb4c73d9d4e815e1b3c"} Mar 12 13:36:30 crc kubenswrapper[4778]: I0312 13:36:30.401976 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 12 13:36:30 crc kubenswrapper[4778]: I0312 13:36:30.416233 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f59c7d6f9-7f6bj" podStartSLOduration=4.416214803 podStartE2EDuration="4.416214803s" podCreationTimestamp="2026-03-12 13:36:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:36:30.407606679 +0000 UTC m=+1608.856302075" watchObservedRunningTime="2026-03-12 13:36:30.416214803 +0000 UTC m=+1608.864910199" Mar 12 13:36:30 crc kubenswrapper[4778]: I0312 13:36:30.446814 4778 scope.go:117] "RemoveContainer" containerID="1938dab200355bd40396968513a110679aaccecc8babb8b2a8c4c460989f58ed" Mar 12 13:36:30 crc kubenswrapper[4778]: I0312 13:36:30.447969 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5f2bac2-0571-44d8-ba4e-c006600506a5-config-data\") pod \"e5f2bac2-0571-44d8-ba4e-c006600506a5\" (UID: \"e5f2bac2-0571-44d8-ba4e-c006600506a5\") " Mar 12 13:36:30 crc kubenswrapper[4778]: I0312 13:36:30.448016 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5dsf\" (UniqueName: \"kubernetes.io/projected/e5f2bac2-0571-44d8-ba4e-c006600506a5-kube-api-access-d5dsf\") pod \"e5f2bac2-0571-44d8-ba4e-c006600506a5\" (UID: \"e5f2bac2-0571-44d8-ba4e-c006600506a5\") " Mar 12 13:36:30 crc kubenswrapper[4778]: I0312 13:36:30.448175 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5f2bac2-0571-44d8-ba4e-c006600506a5-logs\") pod \"e5f2bac2-0571-44d8-ba4e-c006600506a5\" (UID: \"e5f2bac2-0571-44d8-ba4e-c006600506a5\") " Mar 12 13:36:30 crc kubenswrapper[4778]: I0312 13:36:30.448221 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5f2bac2-0571-44d8-ba4e-c006600506a5-combined-ca-bundle\") pod \"e5f2bac2-0571-44d8-ba4e-c006600506a5\" (UID: \"e5f2bac2-0571-44d8-ba4e-c006600506a5\") " Mar 12 13:36:30 crc kubenswrapper[4778]: I0312 13:36:30.448307 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5f2bac2-0571-44d8-ba4e-c006600506a5-nova-metadata-tls-certs\") pod \"e5f2bac2-0571-44d8-ba4e-c006600506a5\" (UID: \"e5f2bac2-0571-44d8-ba4e-c006600506a5\") " Mar 12 13:36:30 crc kubenswrapper[4778]: I0312 13:36:30.448927 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5f2bac2-0571-44d8-ba4e-c006600506a5-logs" (OuterVolumeSpecName: "logs") pod "e5f2bac2-0571-44d8-ba4e-c006600506a5" (UID: "e5f2bac2-0571-44d8-ba4e-c006600506a5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:36:30 crc kubenswrapper[4778]: I0312 13:36:30.462018 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5f2bac2-0571-44d8-ba4e-c006600506a5-kube-api-access-d5dsf" (OuterVolumeSpecName: "kube-api-access-d5dsf") pod "e5f2bac2-0571-44d8-ba4e-c006600506a5" (UID: "e5f2bac2-0571-44d8-ba4e-c006600506a5"). InnerVolumeSpecName "kube-api-access-d5dsf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:36:30 crc kubenswrapper[4778]: I0312 13:36:30.483251 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.483223214 podStartE2EDuration="2.483223214s" podCreationTimestamp="2026-03-12 13:36:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:36:30.449404734 +0000 UTC m=+1608.898100130" watchObservedRunningTime="2026-03-12 13:36:30.483223214 +0000 UTC m=+1608.931918620" Mar 12 13:36:30 crc kubenswrapper[4778]: I0312 13:36:30.516605 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6796c46585-tk69s"] Mar 12 13:36:30 crc kubenswrapper[4778]: I0312 13:36:30.518899 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5f2bac2-0571-44d8-ba4e-c006600506a5-config-data" (OuterVolumeSpecName: "config-data") pod "e5f2bac2-0571-44d8-ba4e-c006600506a5" (UID: "e5f2bac2-0571-44d8-ba4e-c006600506a5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:36:30 crc kubenswrapper[4778]: I0312 13:36:30.527311 4778 scope.go:117] "RemoveContainer" containerID="3f33e9e86bb02fc44b869bbdad27f6457624a82e4496ad8d8db76de0c3d1fb4c" Mar 12 13:36:30 crc kubenswrapper[4778]: I0312 13:36:30.533845 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6796c46585-tk69s"] Mar 12 13:36:30 crc kubenswrapper[4778]: I0312 13:36:30.538441 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5f2bac2-0571-44d8-ba4e-c006600506a5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e5f2bac2-0571-44d8-ba4e-c006600506a5" (UID: "e5f2bac2-0571-44d8-ba4e-c006600506a5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:36:30 crc kubenswrapper[4778]: I0312 13:36:30.553731 4778 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5f2bac2-0571-44d8-ba4e-c006600506a5-logs\") on node \"crc\" DevicePath \"\"" Mar 12 13:36:30 crc kubenswrapper[4778]: I0312 13:36:30.553762 4778 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5f2bac2-0571-44d8-ba4e-c006600506a5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:36:30 crc kubenswrapper[4778]: I0312 13:36:30.553773 4778 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5f2bac2-0571-44d8-ba4e-c006600506a5-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 13:36:30 crc kubenswrapper[4778]: I0312 13:36:30.553783 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5dsf\" (UniqueName: \"kubernetes.io/projected/e5f2bac2-0571-44d8-ba4e-c006600506a5-kube-api-access-d5dsf\") on node \"crc\" DevicePath \"\"" Mar 12 13:36:30 crc kubenswrapper[4778]: I0312 13:36:30.568352 4778 scope.go:117] "RemoveContainer" containerID="1938dab200355bd40396968513a110679aaccecc8babb8b2a8c4c460989f58ed" Mar 12 13:36:30 crc kubenswrapper[4778]: I0312 13:36:30.568629 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5f2bac2-0571-44d8-ba4e-c006600506a5-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "e5f2bac2-0571-44d8-ba4e-c006600506a5" (UID: "e5f2bac2-0571-44d8-ba4e-c006600506a5"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:36:30 crc kubenswrapper[4778]: E0312 13:36:30.569744 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1938dab200355bd40396968513a110679aaccecc8babb8b2a8c4c460989f58ed\": container with ID starting with 1938dab200355bd40396968513a110679aaccecc8babb8b2a8c4c460989f58ed not found: ID does not exist" containerID="1938dab200355bd40396968513a110679aaccecc8babb8b2a8c4c460989f58ed" Mar 12 13:36:30 crc kubenswrapper[4778]: I0312 13:36:30.569793 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1938dab200355bd40396968513a110679aaccecc8babb8b2a8c4c460989f58ed"} err="failed to get container status \"1938dab200355bd40396968513a110679aaccecc8babb8b2a8c4c460989f58ed\": rpc error: code = NotFound desc = could not find container \"1938dab200355bd40396968513a110679aaccecc8babb8b2a8c4c460989f58ed\": container with ID starting with 1938dab200355bd40396968513a110679aaccecc8babb8b2a8c4c460989f58ed not found: ID does not exist" Mar 12 13:36:30 crc kubenswrapper[4778]: I0312 13:36:30.569824 4778 scope.go:117] "RemoveContainer" containerID="3f33e9e86bb02fc44b869bbdad27f6457624a82e4496ad8d8db76de0c3d1fb4c" Mar 12 13:36:30 crc kubenswrapper[4778]: E0312 13:36:30.570643 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f33e9e86bb02fc44b869bbdad27f6457624a82e4496ad8d8db76de0c3d1fb4c\": container with ID starting with 3f33e9e86bb02fc44b869bbdad27f6457624a82e4496ad8d8db76de0c3d1fb4c not found: ID does not exist" containerID="3f33e9e86bb02fc44b869bbdad27f6457624a82e4496ad8d8db76de0c3d1fb4c" Mar 12 13:36:30 crc kubenswrapper[4778]: I0312 13:36:30.570682 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f33e9e86bb02fc44b869bbdad27f6457624a82e4496ad8d8db76de0c3d1fb4c"} err="failed 
to get container status \"3f33e9e86bb02fc44b869bbdad27f6457624a82e4496ad8d8db76de0c3d1fb4c\": rpc error: code = NotFound desc = could not find container \"3f33e9e86bb02fc44b869bbdad27f6457624a82e4496ad8d8db76de0c3d1fb4c\": container with ID starting with 3f33e9e86bb02fc44b869bbdad27f6457624a82e4496ad8d8db76de0c3d1fb4c not found: ID does not exist" Mar 12 13:36:30 crc kubenswrapper[4778]: I0312 13:36:30.620135 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 12 13:36:30 crc kubenswrapper[4778]: I0312 13:36:30.657604 4778 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5f2bac2-0571-44d8-ba4e-c006600506a5-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 13:36:31 crc kubenswrapper[4778]: E0312 13:36:31.114790 4778 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="17f6ecc58bfeead13bd408fa3389fcd5b9ea0127020d364f507d2277de0d4c6f" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 12 13:36:31 crc kubenswrapper[4778]: E0312 13:36:31.128323 4778 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="17f6ecc58bfeead13bd408fa3389fcd5b9ea0127020d364f507d2277de0d4c6f" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 12 13:36:31 crc kubenswrapper[4778]: E0312 13:36:31.132343 4778 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="17f6ecc58bfeead13bd408fa3389fcd5b9ea0127020d364f507d2277de0d4c6f" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 12 13:36:31 crc 
kubenswrapper[4778]: E0312 13:36:31.132419 4778 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="e28e8bc2-4b60-447e-b78e-99f53f0559e9" containerName="nova-cell1-conductor-conductor" Mar 12 13:36:31 crc kubenswrapper[4778]: I0312 13:36:31.184114 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 12 13:36:31 crc kubenswrapper[4778]: W0312 13:36:31.186321 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13b8e1df_5a8c_44de_b8e8_6c7efdb8bad4.slice/crio-be7e1bcb037e22cbdbae44d248064a96e8822fc40563d6af814e4980aca9133c WatchSource:0}: Error finding container be7e1bcb037e22cbdbae44d248064a96e8822fc40563d6af814e4980aca9133c: Status 404 returned error can't find the container with id be7e1bcb037e22cbdbae44d248064a96e8822fc40563d6af814e4980aca9133c Mar 12 13:36:31 crc kubenswrapper[4778]: I0312 13:36:31.364155 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 13:36:31 crc kubenswrapper[4778]: I0312 13:36:31.374668 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 12 13:36:31 crc kubenswrapper[4778]: I0312 13:36:31.442499 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f613745b-fe33-4918-9e0a-da2a59c55e33","Type":"ContainerStarted","Data":"8a09c7c89f2aff34ffefc779797752345605e724b52f5e03391e22b5822ed81a"} Mar 12 13:36:31 crc kubenswrapper[4778]: I0312 13:36:31.443115 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f613745b-fe33-4918-9e0a-da2a59c55e33","Type":"ContainerStarted","Data":"f9fc1f2571ef8dc3431c1ecf1910fa61e4e09394d37fc13345a23eda1f9922e1"} Mar 12 13:36:31 crc 
kubenswrapper[4778]: I0312 13:36:31.456645 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"13b8e1df-5a8c-44de-b8e8-6c7efdb8bad4","Type":"ContainerStarted","Data":"aa09b24193825e4751e820dec9702bffb5238df9786e2014a8c5502a2a867ae7"} Mar 12 13:36:31 crc kubenswrapper[4778]: I0312 13:36:31.456719 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"13b8e1df-5a8c-44de-b8e8-6c7efdb8bad4","Type":"ContainerStarted","Data":"be7e1bcb037e22cbdbae44d248064a96e8822fc40563d6af814e4980aca9133c"} Mar 12 13:36:31 crc kubenswrapper[4778]: I0312 13:36:31.465137 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-metadata-0" podUID="5a29f7b0-d851-4967-802b-91e301ce82f2" containerName="nova-cell1-metadata-log" containerID="cri-o://417f17c877a12302e92668f628e55146a1d2400c3af2e99761c6921a618abcb0" gracePeriod=30 Mar 12 13:36:31 crc kubenswrapper[4778]: I0312 13:36:31.467644 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-metadata-0" podUID="5a29f7b0-d851-4967-802b-91e301ce82f2" containerName="nova-cell1-metadata-metadata" containerID="cri-o://dd6bb746e6fc0c601d6ae2bcf58c264b68fb62b2faf56722bc914a19843b5961" gracePeriod=30 Mar 12 13:36:31 crc kubenswrapper[4778]: I0312 13:36:31.551904 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.551871677 podStartE2EDuration="2.551871677s" podCreationTimestamp="2026-03-12 13:36:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:36:31.461341479 +0000 UTC m=+1609.910036885" watchObservedRunningTime="2026-03-12 13:36:31.551871677 +0000 UTC m=+1610.000567073" Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.042919 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-metadata-0" Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.149016 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4cqk\" (UniqueName: \"kubernetes.io/projected/5a29f7b0-d851-4967-802b-91e301ce82f2-kube-api-access-m4cqk\") pod \"5a29f7b0-d851-4967-802b-91e301ce82f2\" (UID: \"5a29f7b0-d851-4967-802b-91e301ce82f2\") " Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.149356 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a29f7b0-d851-4967-802b-91e301ce82f2-logs\") pod \"5a29f7b0-d851-4967-802b-91e301ce82f2\" (UID: \"5a29f7b0-d851-4967-802b-91e301ce82f2\") " Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.149504 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a29f7b0-d851-4967-802b-91e301ce82f2-combined-ca-bundle\") pod \"5a29f7b0-d851-4967-802b-91e301ce82f2\" (UID: \"5a29f7b0-d851-4967-802b-91e301ce82f2\") " Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.149616 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a29f7b0-d851-4967-802b-91e301ce82f2-config-data\") pod \"5a29f7b0-d851-4967-802b-91e301ce82f2\" (UID: \"5a29f7b0-d851-4967-802b-91e301ce82f2\") " Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.149983 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a29f7b0-d851-4967-802b-91e301ce82f2-logs" (OuterVolumeSpecName: "logs") pod "5a29f7b0-d851-4967-802b-91e301ce82f2" (UID: "5a29f7b0-d851-4967-802b-91e301ce82f2"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.150380 4778 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a29f7b0-d851-4967-802b-91e301ce82f2-logs\") on node \"crc\" DevicePath \"\"" Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.157419 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a29f7b0-d851-4967-802b-91e301ce82f2-kube-api-access-m4cqk" (OuterVolumeSpecName: "kube-api-access-m4cqk") pod "5a29f7b0-d851-4967-802b-91e301ce82f2" (UID: "5a29f7b0-d851-4967-802b-91e301ce82f2"). InnerVolumeSpecName "kube-api-access-m4cqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.183385 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a29f7b0-d851-4967-802b-91e301ce82f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5a29f7b0-d851-4967-802b-91e301ce82f2" (UID: "5a29f7b0-d851-4967-802b-91e301ce82f2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.187400 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a29f7b0-d851-4967-802b-91e301ce82f2-config-data" (OuterVolumeSpecName: "config-data") pod "5a29f7b0-d851-4967-802b-91e301ce82f2" (UID: "5a29f7b0-d851-4967-802b-91e301ce82f2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.253455 4778 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a29f7b0-d851-4967-802b-91e301ce82f2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.253881 4778 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a29f7b0-d851-4967-802b-91e301ce82f2-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.253898 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4cqk\" (UniqueName: \"kubernetes.io/projected/5a29f7b0-d851-4967-802b-91e301ce82f2-kube-api-access-m4cqk\") on node \"crc\" DevicePath \"\"" Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.277175 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7be5cd74-51aa-4be4-bee0-bcd4414e988c" path="/var/lib/kubelet/pods/7be5cd74-51aa-4be4-bee0-bcd4414e988c/volumes" Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.278277 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5f2bac2-0571-44d8-ba4e-c006600506a5" path="/var/lib/kubelet/pods/e5f2bac2-0571-44d8-ba4e-c006600506a5/volumes" Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.480902 4778 generic.go:334] "Generic (PLEG): container finished" podID="5a29f7b0-d851-4967-802b-91e301ce82f2" containerID="dd6bb746e6fc0c601d6ae2bcf58c264b68fb62b2faf56722bc914a19843b5961" exitCode=0 Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.481266 4778 generic.go:334] "Generic (PLEG): container finished" podID="5a29f7b0-d851-4967-802b-91e301ce82f2" containerID="417f17c877a12302e92668f628e55146a1d2400c3af2e99761c6921a618abcb0" exitCode=143 Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.481128 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-metadata-0" event={"ID":"5a29f7b0-d851-4967-802b-91e301ce82f2","Type":"ContainerDied","Data":"dd6bb746e6fc0c601d6ae2bcf58c264b68fb62b2faf56722bc914a19843b5961"} Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.481234 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-metadata-0" Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.481376 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-metadata-0" event={"ID":"5a29f7b0-d851-4967-802b-91e301ce82f2","Type":"ContainerDied","Data":"417f17c877a12302e92668f628e55146a1d2400c3af2e99761c6921a618abcb0"} Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.481405 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-metadata-0" event={"ID":"5a29f7b0-d851-4967-802b-91e301ce82f2","Type":"ContainerDied","Data":"7686b13a63fb7303c82944b69b0f74a27bb498a86b8a8900db3aa379dab6a697"} Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.481426 4778 scope.go:117] "RemoveContainer" containerID="dd6bb746e6fc0c601d6ae2bcf58c264b68fb62b2faf56722bc914a19843b5961" Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.484584 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"13b8e1df-5a8c-44de-b8e8-6c7efdb8bad4","Type":"ContainerStarted","Data":"a29462829c4d1d056fbfecca3676d344fa30407641f877d1b84a8a5c398c1120"} Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.509722 4778 scope.go:117] "RemoveContainer" containerID="417f17c877a12302e92668f628e55146a1d2400c3af2e99761c6921a618abcb0" Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.524385 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.524358582 podStartE2EDuration="3.524358582s" podCreationTimestamp="2026-03-12 13:36:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:36:32.514655236 +0000 UTC m=+1610.963350632" watchObservedRunningTime="2026-03-12 13:36:32.524358582 +0000 UTC m=+1610.973053988" Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.544987 4778 scope.go:117] "RemoveContainer" containerID="dd6bb746e6fc0c601d6ae2bcf58c264b68fb62b2faf56722bc914a19843b5961" Mar 12 13:36:32 crc kubenswrapper[4778]: E0312 13:36:32.545545 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd6bb746e6fc0c601d6ae2bcf58c264b68fb62b2faf56722bc914a19843b5961\": container with ID starting with dd6bb746e6fc0c601d6ae2bcf58c264b68fb62b2faf56722bc914a19843b5961 not found: ID does not exist" containerID="dd6bb746e6fc0c601d6ae2bcf58c264b68fb62b2faf56722bc914a19843b5961" Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.545589 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd6bb746e6fc0c601d6ae2bcf58c264b68fb62b2faf56722bc914a19843b5961"} err="failed to get container status \"dd6bb746e6fc0c601d6ae2bcf58c264b68fb62b2faf56722bc914a19843b5961\": rpc error: code = NotFound desc = could not find container \"dd6bb746e6fc0c601d6ae2bcf58c264b68fb62b2faf56722bc914a19843b5961\": container with ID starting with dd6bb746e6fc0c601d6ae2bcf58c264b68fb62b2faf56722bc914a19843b5961 not found: ID does not exist" Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.545621 4778 scope.go:117] "RemoveContainer" containerID="417f17c877a12302e92668f628e55146a1d2400c3af2e99761c6921a618abcb0" Mar 12 13:36:32 crc kubenswrapper[4778]: E0312 13:36:32.545864 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"417f17c877a12302e92668f628e55146a1d2400c3af2e99761c6921a618abcb0\": container with ID starting with 417f17c877a12302e92668f628e55146a1d2400c3af2e99761c6921a618abcb0 not found: 
ID does not exist" containerID="417f17c877a12302e92668f628e55146a1d2400c3af2e99761c6921a618abcb0" Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.545894 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"417f17c877a12302e92668f628e55146a1d2400c3af2e99761c6921a618abcb0"} err="failed to get container status \"417f17c877a12302e92668f628e55146a1d2400c3af2e99761c6921a618abcb0\": rpc error: code = NotFound desc = could not find container \"417f17c877a12302e92668f628e55146a1d2400c3af2e99761c6921a618abcb0\": container with ID starting with 417f17c877a12302e92668f628e55146a1d2400c3af2e99761c6921a618abcb0 not found: ID does not exist" Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.545918 4778 scope.go:117] "RemoveContainer" containerID="dd6bb746e6fc0c601d6ae2bcf58c264b68fb62b2faf56722bc914a19843b5961" Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.546151 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd6bb746e6fc0c601d6ae2bcf58c264b68fb62b2faf56722bc914a19843b5961"} err="failed to get container status \"dd6bb746e6fc0c601d6ae2bcf58c264b68fb62b2faf56722bc914a19843b5961\": rpc error: code = NotFound desc = could not find container \"dd6bb746e6fc0c601d6ae2bcf58c264b68fb62b2faf56722bc914a19843b5961\": container with ID starting with dd6bb746e6fc0c601d6ae2bcf58c264b68fb62b2faf56722bc914a19843b5961 not found: ID does not exist" Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.546191 4778 scope.go:117] "RemoveContainer" containerID="417f17c877a12302e92668f628e55146a1d2400c3af2e99761c6921a618abcb0" Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.546443 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"417f17c877a12302e92668f628e55146a1d2400c3af2e99761c6921a618abcb0"} err="failed to get container status \"417f17c877a12302e92668f628e55146a1d2400c3af2e99761c6921a618abcb0\": rpc error: code = 
NotFound desc = could not find container \"417f17c877a12302e92668f628e55146a1d2400c3af2e99761c6921a618abcb0\": container with ID starting with 417f17c877a12302e92668f628e55146a1d2400c3af2e99761c6921a618abcb0 not found: ID does not exist" Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.555259 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-metadata-0"] Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.576442 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-metadata-0"] Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.594700 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-metadata-0"] Mar 12 13:36:32 crc kubenswrapper[4778]: E0312 13:36:32.595388 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a29f7b0-d851-4967-802b-91e301ce82f2" containerName="nova-cell1-metadata-log" Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.595410 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a29f7b0-d851-4967-802b-91e301ce82f2" containerName="nova-cell1-metadata-log" Mar 12 13:36:32 crc kubenswrapper[4778]: E0312 13:36:32.595435 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5f2bac2-0571-44d8-ba4e-c006600506a5" containerName="nova-metadata-metadata" Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.595441 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5f2bac2-0571-44d8-ba4e-c006600506a5" containerName="nova-metadata-metadata" Mar 12 13:36:32 crc kubenswrapper[4778]: E0312 13:36:32.595466 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a29f7b0-d851-4967-802b-91e301ce82f2" containerName="nova-cell1-metadata-metadata" Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.595476 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a29f7b0-d851-4967-802b-91e301ce82f2" containerName="nova-cell1-metadata-metadata" Mar 12 13:36:32 crc kubenswrapper[4778]: E0312 
13:36:32.595489 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7be5cd74-51aa-4be4-bee0-bcd4414e988c" containerName="init" Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.595495 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="7be5cd74-51aa-4be4-bee0-bcd4414e988c" containerName="init" Mar 12 13:36:32 crc kubenswrapper[4778]: E0312 13:36:32.595515 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5f2bac2-0571-44d8-ba4e-c006600506a5" containerName="nova-metadata-log" Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.595521 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5f2bac2-0571-44d8-ba4e-c006600506a5" containerName="nova-metadata-log" Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.595739 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a29f7b0-d851-4967-802b-91e301ce82f2" containerName="nova-cell1-metadata-metadata" Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.595758 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5f2bac2-0571-44d8-ba4e-c006600506a5" containerName="nova-metadata-metadata" Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.595765 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5f2bac2-0571-44d8-ba4e-c006600506a5" containerName="nova-metadata-log" Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.595775 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="7be5cd74-51aa-4be4-bee0-bcd4414e988c" containerName="init" Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.595789 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a29f7b0-d851-4967-802b-91e301ce82f2" containerName="nova-cell1-metadata-log" Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.597243 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-metadata-0" Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.601885 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-cell1-internal-svc" Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.602176 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-metadata-config-data" Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.606477 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-metadata-0"] Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.668834 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c289a520-78eb-433f-b7a4-0c03be917c18-config-data\") pod \"nova-cell1-metadata-0\" (UID: \"c289a520-78eb-433f-b7a4-0c03be917c18\") " pod="openstack/nova-cell1-metadata-0" Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.668893 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z95k8\" (UniqueName: \"kubernetes.io/projected/c289a520-78eb-433f-b7a4-0c03be917c18-kube-api-access-z95k8\") pod \"nova-cell1-metadata-0\" (UID: \"c289a520-78eb-433f-b7a4-0c03be917c18\") " pod="openstack/nova-cell1-metadata-0" Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.668963 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c289a520-78eb-433f-b7a4-0c03be917c18-nova-metadata-tls-certs\") pod \"nova-cell1-metadata-0\" (UID: \"c289a520-78eb-433f-b7a4-0c03be917c18\") " pod="openstack/nova-cell1-metadata-0" Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.669014 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/c289a520-78eb-433f-b7a4-0c03be917c18-logs\") pod \"nova-cell1-metadata-0\" (UID: \"c289a520-78eb-433f-b7a4-0c03be917c18\") " pod="openstack/nova-cell1-metadata-0" Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.669073 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c289a520-78eb-433f-b7a4-0c03be917c18-combined-ca-bundle\") pod \"nova-cell1-metadata-0\" (UID: \"c289a520-78eb-433f-b7a4-0c03be917c18\") " pod="openstack/nova-cell1-metadata-0" Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.772258 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c289a520-78eb-433f-b7a4-0c03be917c18-config-data\") pod \"nova-cell1-metadata-0\" (UID: \"c289a520-78eb-433f-b7a4-0c03be917c18\") " pod="openstack/nova-cell1-metadata-0" Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.772325 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z95k8\" (UniqueName: \"kubernetes.io/projected/c289a520-78eb-433f-b7a4-0c03be917c18-kube-api-access-z95k8\") pod \"nova-cell1-metadata-0\" (UID: \"c289a520-78eb-433f-b7a4-0c03be917c18\") " pod="openstack/nova-cell1-metadata-0" Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.772375 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c289a520-78eb-433f-b7a4-0c03be917c18-nova-metadata-tls-certs\") pod \"nova-cell1-metadata-0\" (UID: \"c289a520-78eb-433f-b7a4-0c03be917c18\") " pod="openstack/nova-cell1-metadata-0" Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.772430 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c289a520-78eb-433f-b7a4-0c03be917c18-logs\") pod \"nova-cell1-metadata-0\" 
(UID: \"c289a520-78eb-433f-b7a4-0c03be917c18\") " pod="openstack/nova-cell1-metadata-0" Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.772469 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c289a520-78eb-433f-b7a4-0c03be917c18-combined-ca-bundle\") pod \"nova-cell1-metadata-0\" (UID: \"c289a520-78eb-433f-b7a4-0c03be917c18\") " pod="openstack/nova-cell1-metadata-0" Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.773605 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c289a520-78eb-433f-b7a4-0c03be917c18-logs\") pod \"nova-cell1-metadata-0\" (UID: \"c289a520-78eb-433f-b7a4-0c03be917c18\") " pod="openstack/nova-cell1-metadata-0" Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.780825 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c289a520-78eb-433f-b7a4-0c03be917c18-nova-metadata-tls-certs\") pod \"nova-cell1-metadata-0\" (UID: \"c289a520-78eb-433f-b7a4-0c03be917c18\") " pod="openstack/nova-cell1-metadata-0" Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.780857 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c289a520-78eb-433f-b7a4-0c03be917c18-combined-ca-bundle\") pod \"nova-cell1-metadata-0\" (UID: \"c289a520-78eb-433f-b7a4-0c03be917c18\") " pod="openstack/nova-cell1-metadata-0" Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.781149 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c289a520-78eb-433f-b7a4-0c03be917c18-config-data\") pod \"nova-cell1-metadata-0\" (UID: \"c289a520-78eb-433f-b7a4-0c03be917c18\") " pod="openstack/nova-cell1-metadata-0" Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.796715 4778 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z95k8\" (UniqueName: \"kubernetes.io/projected/c289a520-78eb-433f-b7a4-0c03be917c18-kube-api-access-z95k8\") pod \"nova-cell1-metadata-0\" (UID: \"c289a520-78eb-433f-b7a4-0c03be917c18\") " pod="openstack/nova-cell1-metadata-0" Mar 12 13:36:32 crc kubenswrapper[4778]: I0312 13:36:32.916295 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-metadata-0" Mar 12 13:36:33 crc kubenswrapper[4778]: I0312 13:36:33.496560 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-metadata-0"] Mar 12 13:36:33 crc kubenswrapper[4778]: I0312 13:36:33.504021 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-metadata-0" event={"ID":"c289a520-78eb-433f-b7a4-0c03be917c18","Type":"ContainerStarted","Data":"f2aa1bb4f48d2cacad481df82ac182e97844b78757aa2641bc74a3d67fa66465"} Mar 12 13:36:34 crc kubenswrapper[4778]: I0312 13:36:34.296354 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a29f7b0-d851-4967-802b-91e301ce82f2" path="/var/lib/kubelet/pods/5a29f7b0-d851-4967-802b-91e301ce82f2/volumes" Mar 12 13:36:34 crc kubenswrapper[4778]: I0312 13:36:34.523564 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-metadata-0" event={"ID":"c289a520-78eb-433f-b7a4-0c03be917c18","Type":"ContainerStarted","Data":"2cf98c7e55bb9691ad7fd575db56e4f9545613bf3af1b6fb6fc96b703ff2b197"} Mar 12 13:36:34 crc kubenswrapper[4778]: I0312 13:36:34.523618 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-metadata-0" event={"ID":"c289a520-78eb-433f-b7a4-0c03be917c18","Type":"ContainerStarted","Data":"9b59dda90b6bbb6f76cb19e3620376e472a849789bd2baf7523bd43526b2a527"} Mar 12 13:36:34 crc kubenswrapper[4778]: I0312 13:36:34.971260 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 12 
13:36:35 crc kubenswrapper[4778]: I0312 13:36:35.541888 4778 generic.go:334] "Generic (PLEG): container finished" podID="e28e8bc2-4b60-447e-b78e-99f53f0559e9" containerID="17f6ecc58bfeead13bd408fa3389fcd5b9ea0127020d364f507d2277de0d4c6f" exitCode=0 Mar 12 13:36:35 crc kubenswrapper[4778]: I0312 13:36:35.543086 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e28e8bc2-4b60-447e-b78e-99f53f0559e9","Type":"ContainerDied","Data":"17f6ecc58bfeead13bd408fa3389fcd5b9ea0127020d364f507d2277de0d4c6f"} Mar 12 13:36:35 crc kubenswrapper[4778]: I0312 13:36:35.999398 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 12 13:36:36 crc kubenswrapper[4778]: I0312 13:36:36.021618 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-metadata-0" podStartSLOduration=4.021595754 podStartE2EDuration="4.021595754s" podCreationTimestamp="2026-03-12 13:36:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:36:34.551683009 +0000 UTC m=+1613.000378405" watchObservedRunningTime="2026-03-12 13:36:36.021595754 +0000 UTC m=+1614.470291140" Mar 12 13:36:36 crc kubenswrapper[4778]: I0312 13:36:36.052062 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srnlz\" (UniqueName: \"kubernetes.io/projected/e28e8bc2-4b60-447e-b78e-99f53f0559e9-kube-api-access-srnlz\") pod \"e28e8bc2-4b60-447e-b78e-99f53f0559e9\" (UID: \"e28e8bc2-4b60-447e-b78e-99f53f0559e9\") " Mar 12 13:36:36 crc kubenswrapper[4778]: I0312 13:36:36.052139 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e28e8bc2-4b60-447e-b78e-99f53f0559e9-combined-ca-bundle\") pod \"e28e8bc2-4b60-447e-b78e-99f53f0559e9\" (UID: 
\"e28e8bc2-4b60-447e-b78e-99f53f0559e9\") " Mar 12 13:36:36 crc kubenswrapper[4778]: I0312 13:36:36.053239 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e28e8bc2-4b60-447e-b78e-99f53f0559e9-config-data\") pod \"e28e8bc2-4b60-447e-b78e-99f53f0559e9\" (UID: \"e28e8bc2-4b60-447e-b78e-99f53f0559e9\") " Mar 12 13:36:36 crc kubenswrapper[4778]: I0312 13:36:36.078487 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e28e8bc2-4b60-447e-b78e-99f53f0559e9-kube-api-access-srnlz" (OuterVolumeSpecName: "kube-api-access-srnlz") pod "e28e8bc2-4b60-447e-b78e-99f53f0559e9" (UID: "e28e8bc2-4b60-447e-b78e-99f53f0559e9"). InnerVolumeSpecName "kube-api-access-srnlz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:36:36 crc kubenswrapper[4778]: I0312 13:36:36.097526 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e28e8bc2-4b60-447e-b78e-99f53f0559e9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e28e8bc2-4b60-447e-b78e-99f53f0559e9" (UID: "e28e8bc2-4b60-447e-b78e-99f53f0559e9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:36:36 crc kubenswrapper[4778]: I0312 13:36:36.104565 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e28e8bc2-4b60-447e-b78e-99f53f0559e9-config-data" (OuterVolumeSpecName: "config-data") pod "e28e8bc2-4b60-447e-b78e-99f53f0559e9" (UID: "e28e8bc2-4b60-447e-b78e-99f53f0559e9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:36:36 crc kubenswrapper[4778]: I0312 13:36:36.156312 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srnlz\" (UniqueName: \"kubernetes.io/projected/e28e8bc2-4b60-447e-b78e-99f53f0559e9-kube-api-access-srnlz\") on node \"crc\" DevicePath \"\"" Mar 12 13:36:36 crc kubenswrapper[4778]: I0312 13:36:36.156354 4778 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e28e8bc2-4b60-447e-b78e-99f53f0559e9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:36:36 crc kubenswrapper[4778]: I0312 13:36:36.156365 4778 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e28e8bc2-4b60-447e-b78e-99f53f0559e9-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 13:36:36 crc kubenswrapper[4778]: I0312 13:36:36.248500 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-1" Mar 12 13:36:36 crc kubenswrapper[4778]: I0312 13:36:36.248558 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-1" Mar 12 13:36:36 crc kubenswrapper[4778]: I0312 13:36:36.556454 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e28e8bc2-4b60-447e-b78e-99f53f0559e9","Type":"ContainerDied","Data":"0b4aff5eb3ee6cc75fbeaaa57c05dff4153b4e03f714593a99c2f4d9aa7da572"} Mar 12 13:36:36 crc kubenswrapper[4778]: I0312 13:36:36.556963 4778 scope.go:117] "RemoveContainer" containerID="17f6ecc58bfeead13bd408fa3389fcd5b9ea0127020d364f507d2277de0d4c6f" Mar 12 13:36:36 crc kubenswrapper[4778]: I0312 13:36:36.556783 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 12 13:36:36 crc kubenswrapper[4778]: I0312 13:36:36.592301 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 12 13:36:36 crc kubenswrapper[4778]: I0312 13:36:36.608150 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 12 13:36:36 crc kubenswrapper[4778]: I0312 13:36:36.632456 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 12 13:36:36 crc kubenswrapper[4778]: E0312 13:36:36.633250 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e28e8bc2-4b60-447e-b78e-99f53f0559e9" containerName="nova-cell1-conductor-conductor" Mar 12 13:36:36 crc kubenswrapper[4778]: I0312 13:36:36.633271 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="e28e8bc2-4b60-447e-b78e-99f53f0559e9" containerName="nova-cell1-conductor-conductor" Mar 12 13:36:36 crc kubenswrapper[4778]: I0312 13:36:36.633585 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="e28e8bc2-4b60-447e-b78e-99f53f0559e9" containerName="nova-cell1-conductor-conductor" Mar 12 13:36:36 crc kubenswrapper[4778]: I0312 13:36:36.634724 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 12 13:36:36 crc kubenswrapper[4778]: I0312 13:36:36.637864 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Mar 12 13:36:36 crc kubenswrapper[4778]: I0312 13:36:36.651600 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 12 13:36:36 crc kubenswrapper[4778]: I0312 13:36:36.666379 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1466aea3-fa10-49a6-a254-a96a52091aca-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1466aea3-fa10-49a6-a254-a96a52091aca\") " pod="openstack/nova-cell1-conductor-0" Mar 12 13:36:36 crc kubenswrapper[4778]: I0312 13:36:36.667221 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1466aea3-fa10-49a6-a254-a96a52091aca-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1466aea3-fa10-49a6-a254-a96a52091aca\") " pod="openstack/nova-cell1-conductor-0" Mar 12 13:36:36 crc kubenswrapper[4778]: I0312 13:36:36.667426 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vpwx\" (UniqueName: \"kubernetes.io/projected/1466aea3-fa10-49a6-a254-a96a52091aca-kube-api-access-9vpwx\") pod \"nova-cell1-conductor-0\" (UID: \"1466aea3-fa10-49a6-a254-a96a52091aca\") " pod="openstack/nova-cell1-conductor-0" Mar 12 13:36:36 crc kubenswrapper[4778]: I0312 13:36:36.767583 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f59c7d6f9-7f6bj" Mar 12 13:36:36 crc kubenswrapper[4778]: I0312 13:36:36.769640 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1466aea3-fa10-49a6-a254-a96a52091aca-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1466aea3-fa10-49a6-a254-a96a52091aca\") " pod="openstack/nova-cell1-conductor-0" Mar 12 13:36:36 crc kubenswrapper[4778]: I0312 13:36:36.769733 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vpwx\" (UniqueName: \"kubernetes.io/projected/1466aea3-fa10-49a6-a254-a96a52091aca-kube-api-access-9vpwx\") pod \"nova-cell1-conductor-0\" (UID: \"1466aea3-fa10-49a6-a254-a96a52091aca\") " pod="openstack/nova-cell1-conductor-0" Mar 12 13:36:36 crc kubenswrapper[4778]: I0312 13:36:36.769814 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1466aea3-fa10-49a6-a254-a96a52091aca-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1466aea3-fa10-49a6-a254-a96a52091aca\") " pod="openstack/nova-cell1-conductor-0" Mar 12 13:36:36 crc kubenswrapper[4778]: I0312 13:36:36.775689 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1466aea3-fa10-49a6-a254-a96a52091aca-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1466aea3-fa10-49a6-a254-a96a52091aca\") " pod="openstack/nova-cell1-conductor-0" Mar 12 13:36:36 crc kubenswrapper[4778]: I0312 13:36:36.783126 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1466aea3-fa10-49a6-a254-a96a52091aca-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1466aea3-fa10-49a6-a254-a96a52091aca\") " pod="openstack/nova-cell1-conductor-0" Mar 12 13:36:36 crc kubenswrapper[4778]: I0312 13:36:36.800660 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vpwx\" (UniqueName: \"kubernetes.io/projected/1466aea3-fa10-49a6-a254-a96a52091aca-kube-api-access-9vpwx\") pod \"nova-cell1-conductor-0\" (UID: 
\"1466aea3-fa10-49a6-a254-a96a52091aca\") " pod="openstack/nova-cell1-conductor-0" Mar 12 13:36:36 crc kubenswrapper[4778]: I0312 13:36:36.869668 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-vbzn5"] Mar 12 13:36:36 crc kubenswrapper[4778]: I0312 13:36:36.869913 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-89c5cd4d5-vbzn5" podUID="d621990b-b3fb-457c-a7b8-0726fa89a5e6" containerName="dnsmasq-dns" containerID="cri-o://9226d052c31f98b5c3da17ce19bbc81e718b949c212eab5fa79f7c540fdf830a" gracePeriod=10 Mar 12 13:36:36 crc kubenswrapper[4778]: I0312 13:36:36.960479 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 12 13:36:37 crc kubenswrapper[4778]: I0312 13:36:37.261357 4778 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-1" podUID="f0341d80-4327-4c9e-bc11-0cddbc6eab66" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.216:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 13:36:37 crc kubenswrapper[4778]: I0312 13:36:37.262311 4778 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-1" podUID="f0341d80-4327-4c9e-bc11-0cddbc6eab66" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.216:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 13:36:37 crc kubenswrapper[4778]: I0312 13:36:37.639028 4778 generic.go:334] "Generic (PLEG): container finished" podID="d621990b-b3fb-457c-a7b8-0726fa89a5e6" containerID="9226d052c31f98b5c3da17ce19bbc81e718b949c212eab5fa79f7c540fdf830a" exitCode=0 Mar 12 13:36:37 crc kubenswrapper[4778]: I0312 13:36:37.639203 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-vbzn5" 
event={"ID":"d621990b-b3fb-457c-a7b8-0726fa89a5e6","Type":"ContainerDied","Data":"9226d052c31f98b5c3da17ce19bbc81e718b949c212eab5fa79f7c540fdf830a"} Mar 12 13:36:37 crc kubenswrapper[4778]: I0312 13:36:37.996619 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-vbzn5" Mar 12 13:36:38 crc kubenswrapper[4778]: I0312 13:36:38.108899 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 12 13:36:38 crc kubenswrapper[4778]: I0312 13:36:38.133577 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d621990b-b3fb-457c-a7b8-0726fa89a5e6-dns-svc\") pod \"d621990b-b3fb-457c-a7b8-0726fa89a5e6\" (UID: \"d621990b-b3fb-457c-a7b8-0726fa89a5e6\") " Mar 12 13:36:38 crc kubenswrapper[4778]: I0312 13:36:38.133744 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d621990b-b3fb-457c-a7b8-0726fa89a5e6-config\") pod \"d621990b-b3fb-457c-a7b8-0726fa89a5e6\" (UID: \"d621990b-b3fb-457c-a7b8-0726fa89a5e6\") " Mar 12 13:36:38 crc kubenswrapper[4778]: I0312 13:36:38.133867 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d621990b-b3fb-457c-a7b8-0726fa89a5e6-dns-swift-storage-0\") pod \"d621990b-b3fb-457c-a7b8-0726fa89a5e6\" (UID: \"d621990b-b3fb-457c-a7b8-0726fa89a5e6\") " Mar 12 13:36:38 crc kubenswrapper[4778]: I0312 13:36:38.133898 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5ccj\" (UniqueName: \"kubernetes.io/projected/d621990b-b3fb-457c-a7b8-0726fa89a5e6-kube-api-access-b5ccj\") pod \"d621990b-b3fb-457c-a7b8-0726fa89a5e6\" (UID: \"d621990b-b3fb-457c-a7b8-0726fa89a5e6\") " Mar 12 13:36:38 crc kubenswrapper[4778]: I0312 13:36:38.134179 4778 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d621990b-b3fb-457c-a7b8-0726fa89a5e6-ovsdbserver-nb\") pod \"d621990b-b3fb-457c-a7b8-0726fa89a5e6\" (UID: \"d621990b-b3fb-457c-a7b8-0726fa89a5e6\") " Mar 12 13:36:38 crc kubenswrapper[4778]: I0312 13:36:38.134241 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d621990b-b3fb-457c-a7b8-0726fa89a5e6-ovsdbserver-sb\") pod \"d621990b-b3fb-457c-a7b8-0726fa89a5e6\" (UID: \"d621990b-b3fb-457c-a7b8-0726fa89a5e6\") " Mar 12 13:36:38 crc kubenswrapper[4778]: I0312 13:36:38.176323 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d621990b-b3fb-457c-a7b8-0726fa89a5e6-kube-api-access-b5ccj" (OuterVolumeSpecName: "kube-api-access-b5ccj") pod "d621990b-b3fb-457c-a7b8-0726fa89a5e6" (UID: "d621990b-b3fb-457c-a7b8-0726fa89a5e6"). InnerVolumeSpecName "kube-api-access-b5ccj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:36:38 crc kubenswrapper[4778]: I0312 13:36:38.223148 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d621990b-b3fb-457c-a7b8-0726fa89a5e6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d621990b-b3fb-457c-a7b8-0726fa89a5e6" (UID: "d621990b-b3fb-457c-a7b8-0726fa89a5e6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:36:38 crc kubenswrapper[4778]: I0312 13:36:38.238057 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d621990b-b3fb-457c-a7b8-0726fa89a5e6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d621990b-b3fb-457c-a7b8-0726fa89a5e6" (UID: "d621990b-b3fb-457c-a7b8-0726fa89a5e6"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:36:38 crc kubenswrapper[4778]: I0312 13:36:38.238650 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5ccj\" (UniqueName: \"kubernetes.io/projected/d621990b-b3fb-457c-a7b8-0726fa89a5e6-kube-api-access-b5ccj\") on node \"crc\" DevicePath \"\"" Mar 12 13:36:38 crc kubenswrapper[4778]: I0312 13:36:38.238693 4778 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d621990b-b3fb-457c-a7b8-0726fa89a5e6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 12 13:36:38 crc kubenswrapper[4778]: I0312 13:36:38.238707 4778 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d621990b-b3fb-457c-a7b8-0726fa89a5e6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 12 13:36:38 crc kubenswrapper[4778]: I0312 13:36:38.245174 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d621990b-b3fb-457c-a7b8-0726fa89a5e6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d621990b-b3fb-457c-a7b8-0726fa89a5e6" (UID: "d621990b-b3fb-457c-a7b8-0726fa89a5e6"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:36:38 crc kubenswrapper[4778]: I0312 13:36:38.267626 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d621990b-b3fb-457c-a7b8-0726fa89a5e6-config" (OuterVolumeSpecName: "config") pod "d621990b-b3fb-457c-a7b8-0726fa89a5e6" (UID: "d621990b-b3fb-457c-a7b8-0726fa89a5e6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:36:38 crc kubenswrapper[4778]: I0312 13:36:38.279467 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e28e8bc2-4b60-447e-b78e-99f53f0559e9" path="/var/lib/kubelet/pods/e28e8bc2-4b60-447e-b78e-99f53f0559e9/volumes" Mar 12 13:36:38 crc kubenswrapper[4778]: I0312 13:36:38.280059 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d621990b-b3fb-457c-a7b8-0726fa89a5e6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d621990b-b3fb-457c-a7b8-0726fa89a5e6" (UID: "d621990b-b3fb-457c-a7b8-0726fa89a5e6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:36:38 crc kubenswrapper[4778]: I0312 13:36:38.346480 4778 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d621990b-b3fb-457c-a7b8-0726fa89a5e6-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 12 13:36:38 crc kubenswrapper[4778]: I0312 13:36:38.346523 4778 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d621990b-b3fb-457c-a7b8-0726fa89a5e6-config\") on node \"crc\" DevicePath \"\"" Mar 12 13:36:38 crc kubenswrapper[4778]: I0312 13:36:38.346666 4778 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d621990b-b3fb-457c-a7b8-0726fa89a5e6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 12 13:36:38 crc kubenswrapper[4778]: I0312 13:36:38.655009 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-vbzn5" event={"ID":"d621990b-b3fb-457c-a7b8-0726fa89a5e6","Type":"ContainerDied","Data":"7c046518ad4ee249311d20eb84f556ea55869944e1e9d121bc2b448648522cec"} Mar 12 13:36:38 crc kubenswrapper[4778]: I0312 13:36:38.655070 4778 scope.go:117] "RemoveContainer" 
containerID="9226d052c31f98b5c3da17ce19bbc81e718b949c212eab5fa79f7c540fdf830a" Mar 12 13:36:38 crc kubenswrapper[4778]: I0312 13:36:38.655275 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-vbzn5" Mar 12 13:36:38 crc kubenswrapper[4778]: I0312 13:36:38.658397 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"1466aea3-fa10-49a6-a254-a96a52091aca","Type":"ContainerStarted","Data":"70d875f5a4aa3f21adec3e4eb646a10acf69a7927b64f60ffab4c71759da535a"} Mar 12 13:36:38 crc kubenswrapper[4778]: I0312 13:36:38.658473 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"1466aea3-fa10-49a6-a254-a96a52091aca","Type":"ContainerStarted","Data":"1f27b5dc0babf6d3d07bf35ec2c31830dbbb5b81af3caf00127f2f0c76025e27"} Mar 12 13:36:38 crc kubenswrapper[4778]: I0312 13:36:38.660002 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Mar 12 13:36:38 crc kubenswrapper[4778]: I0312 13:36:38.693756 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-vbzn5"] Mar 12 13:36:38 crc kubenswrapper[4778]: I0312 13:36:38.694922 4778 scope.go:117] "RemoveContainer" containerID="f768634e6581a58404932d5b274b7e499ff8a446926b77d44c652d5c4c0bad66" Mar 12 13:36:38 crc kubenswrapper[4778]: I0312 13:36:38.715767 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-vbzn5"] Mar 12 13:36:38 crc kubenswrapper[4778]: I0312 13:36:38.720086 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.720060059 podStartE2EDuration="2.720060059s" podCreationTimestamp="2026-03-12 13:36:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 
13:36:38.703323844 +0000 UTC m=+1617.152019230" watchObservedRunningTime="2026-03-12 13:36:38.720060059 +0000 UTC m=+1617.168755455" Mar 12 13:36:39 crc kubenswrapper[4778]: I0312 13:36:39.213884 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Mar 12 13:36:39 crc kubenswrapper[4778]: I0312 13:36:39.888160 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 12 13:36:39 crc kubenswrapper[4778]: I0312 13:36:39.889568 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 12 13:36:40 crc kubenswrapper[4778]: I0312 13:36:40.273256 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 12 13:36:40 crc kubenswrapper[4778]: I0312 13:36:40.351970 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d621990b-b3fb-457c-a7b8-0726fa89a5e6" path="/var/lib/kubelet/pods/d621990b-b3fb-457c-a7b8-0726fa89a5e6/volumes" Mar 12 13:36:40 crc kubenswrapper[4778]: I0312 13:36:40.367749 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 12 13:36:41 crc kubenswrapper[4778]: I0312 13:36:41.041525 4778 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="13b8e1df-5a8c-44de-b8e8-6c7efdb8bad4" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.221:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 13:36:41 crc kubenswrapper[4778]: I0312 13:36:41.041819 4778 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="13b8e1df-5a8c-44de-b8e8-6c7efdb8bad4" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.221:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 13:36:41 crc kubenswrapper[4778]: 
I0312 13:36:41.090002 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 12 13:36:42 crc kubenswrapper[4778]: I0312 13:36:42.263888 4778 scope.go:117] "RemoveContainer" containerID="fbdf0765f9c2ff5952a8a2a2b43d61ef771ac404cabeb86051f9ffe5a9fd882e" Mar 12 13:36:42 crc kubenswrapper[4778]: E0312 13:36:42.264860 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 13:36:42 crc kubenswrapper[4778]: I0312 13:36:42.694135 4778 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-89c5cd4d5-vbzn5" podUID="d621990b-b3fb-457c-a7b8-0726fa89a5e6" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.204:5353: i/o timeout" Mar 12 13:36:42 crc kubenswrapper[4778]: I0312 13:36:42.917780 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-metadata-0" Mar 12 13:36:42 crc kubenswrapper[4778]: I0312 13:36:42.917831 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-metadata-0" Mar 12 13:36:43 crc kubenswrapper[4778]: I0312 13:36:43.936422 4778 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-cell1-metadata-0" podUID="c289a520-78eb-433f-b7a4-0c03be917c18" containerName="nova-cell1-metadata-log" probeResult="failure" output="Get \"https://10.217.0.223:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 13:36:43 crc kubenswrapper[4778]: I0312 13:36:43.936484 4778 prober.go:107] "Probe failed" probeType="Startup" 
pod="openstack/nova-cell1-metadata-0" podUID="c289a520-78eb-433f-b7a4-0c03be917c18" containerName="nova-cell1-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.223:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 12 13:36:44 crc kubenswrapper[4778]: I0312 13:36:44.248309 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-1" Mar 12 13:36:44 crc kubenswrapper[4778]: I0312 13:36:44.248607 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-1" Mar 12 13:36:46 crc kubenswrapper[4778]: I0312 13:36:46.267366 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-1" Mar 12 13:36:46 crc kubenswrapper[4778]: I0312 13:36:46.267481 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-1" Mar 12 13:36:46 crc kubenswrapper[4778]: I0312 13:36:46.275831 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-1" Mar 12 13:36:46 crc kubenswrapper[4778]: I0312 13:36:46.277234 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-1" Mar 12 13:36:47 crc kubenswrapper[4778]: I0312 13:36:47.010016 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Mar 12 13:36:47 crc kubenswrapper[4778]: I0312 13:36:47.888607 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 12 13:36:47 crc kubenswrapper[4778]: I0312 13:36:47.888903 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 12 13:36:49 crc kubenswrapper[4778]: I0312 13:36:49.895691 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 12 13:36:49 crc kubenswrapper[4778]: I0312 13:36:49.898408 4778 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 12 13:36:49 crc kubenswrapper[4778]: I0312 13:36:49.913775 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 12 13:36:50 crc kubenswrapper[4778]: I0312 13:36:50.479647 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 12 13:36:50 crc kubenswrapper[4778]: I0312 13:36:50.917361 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-metadata-0" Mar 12 13:36:50 crc kubenswrapper[4778]: I0312 13:36:50.917676 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-metadata-0" Mar 12 13:36:52 crc kubenswrapper[4778]: I0312 13:36:52.923494 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-metadata-0" Mar 12 13:36:52 crc kubenswrapper[4778]: I0312 13:36:52.924209 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-metadata-0" Mar 12 13:36:52 crc kubenswrapper[4778]: I0312 13:36:52.928206 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-metadata-0" Mar 12 13:36:53 crc kubenswrapper[4778]: I0312 13:36:53.254338 4778 scope.go:117] "RemoveContainer" containerID="fbdf0765f9c2ff5952a8a2a2b43d61ef771ac404cabeb86051f9ffe5a9fd882e" Mar 12 13:36:53 crc kubenswrapper[4778]: E0312 13:36:53.254547 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 13:36:53 crc kubenswrapper[4778]: I0312 13:36:53.497409 
4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-metadata-0" Mar 12 13:36:54 crc kubenswrapper[4778]: I0312 13:36:54.612551 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-769c65dfd5-frvxx" Mar 12 13:36:54 crc kubenswrapper[4778]: I0312 13:36:54.616501 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-566c4d5fc-zx97x" Mar 12 13:36:54 crc kubenswrapper[4778]: I0312 13:36:54.779935 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-769c65dfd5-frvxx"] Mar 12 13:36:54 crc kubenswrapper[4778]: I0312 13:36:54.863233 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-566c4d5fc-dggmh"] Mar 12 13:36:54 crc kubenswrapper[4778]: E0312 13:36:54.863646 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d621990b-b3fb-457c-a7b8-0726fa89a5e6" containerName="init" Mar 12 13:36:54 crc kubenswrapper[4778]: I0312 13:36:54.863662 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="d621990b-b3fb-457c-a7b8-0726fa89a5e6" containerName="init" Mar 12 13:36:54 crc kubenswrapper[4778]: E0312 13:36:54.863679 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d621990b-b3fb-457c-a7b8-0726fa89a5e6" containerName="dnsmasq-dns" Mar 12 13:36:54 crc kubenswrapper[4778]: I0312 13:36:54.863686 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="d621990b-b3fb-457c-a7b8-0726fa89a5e6" containerName="dnsmasq-dns" Mar 12 13:36:54 crc kubenswrapper[4778]: I0312 13:36:54.863880 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="d621990b-b3fb-457c-a7b8-0726fa89a5e6" containerName="dnsmasq-dns" Mar 12 13:36:54 crc kubenswrapper[4778]: I0312 13:36:54.864843 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-566c4d5fc-dggmh" Mar 12 13:36:54 crc kubenswrapper[4778]: I0312 13:36:54.874208 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-566c4d5fc-dggmh"] Mar 12 13:36:54 crc kubenswrapper[4778]: I0312 13:36:54.937737 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7596a69e-33c9-4a2b-89fc-e4c41252b3fd-public-tls-certs\") pod \"neutron-566c4d5fc-dggmh\" (UID: \"7596a69e-33c9-4a2b-89fc-e4c41252b3fd\") " pod="openstack/neutron-566c4d5fc-dggmh" Mar 12 13:36:54 crc kubenswrapper[4778]: I0312 13:36:54.938229 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kpcb\" (UniqueName: \"kubernetes.io/projected/7596a69e-33c9-4a2b-89fc-e4c41252b3fd-kube-api-access-8kpcb\") pod \"neutron-566c4d5fc-dggmh\" (UID: \"7596a69e-33c9-4a2b-89fc-e4c41252b3fd\") " pod="openstack/neutron-566c4d5fc-dggmh" Mar 12 13:36:54 crc kubenswrapper[4778]: I0312 13:36:54.938271 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7596a69e-33c9-4a2b-89fc-e4c41252b3fd-ovndb-tls-certs\") pod \"neutron-566c4d5fc-dggmh\" (UID: \"7596a69e-33c9-4a2b-89fc-e4c41252b3fd\") " pod="openstack/neutron-566c4d5fc-dggmh" Mar 12 13:36:54 crc kubenswrapper[4778]: I0312 13:36:54.938402 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7596a69e-33c9-4a2b-89fc-e4c41252b3fd-config\") pod \"neutron-566c4d5fc-dggmh\" (UID: \"7596a69e-33c9-4a2b-89fc-e4c41252b3fd\") " pod="openstack/neutron-566c4d5fc-dggmh" Mar 12 13:36:54 crc kubenswrapper[4778]: I0312 13:36:54.938518 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" 
(UniqueName: \"kubernetes.io/secret/7596a69e-33c9-4a2b-89fc-e4c41252b3fd-httpd-config\") pod \"neutron-566c4d5fc-dggmh\" (UID: \"7596a69e-33c9-4a2b-89fc-e4c41252b3fd\") " pod="openstack/neutron-566c4d5fc-dggmh" Mar 12 13:36:54 crc kubenswrapper[4778]: I0312 13:36:54.938607 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7596a69e-33c9-4a2b-89fc-e4c41252b3fd-combined-ca-bundle\") pod \"neutron-566c4d5fc-dggmh\" (UID: \"7596a69e-33c9-4a2b-89fc-e4c41252b3fd\") " pod="openstack/neutron-566c4d5fc-dggmh" Mar 12 13:36:54 crc kubenswrapper[4778]: I0312 13:36:54.938826 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7596a69e-33c9-4a2b-89fc-e4c41252b3fd-internal-tls-certs\") pod \"neutron-566c4d5fc-dggmh\" (UID: \"7596a69e-33c9-4a2b-89fc-e4c41252b3fd\") " pod="openstack/neutron-566c4d5fc-dggmh" Mar 12 13:36:55 crc kubenswrapper[4778]: I0312 13:36:55.040630 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7596a69e-33c9-4a2b-89fc-e4c41252b3fd-public-tls-certs\") pod \"neutron-566c4d5fc-dggmh\" (UID: \"7596a69e-33c9-4a2b-89fc-e4c41252b3fd\") " pod="openstack/neutron-566c4d5fc-dggmh" Mar 12 13:36:55 crc kubenswrapper[4778]: I0312 13:36:55.040694 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kpcb\" (UniqueName: \"kubernetes.io/projected/7596a69e-33c9-4a2b-89fc-e4c41252b3fd-kube-api-access-8kpcb\") pod \"neutron-566c4d5fc-dggmh\" (UID: \"7596a69e-33c9-4a2b-89fc-e4c41252b3fd\") " pod="openstack/neutron-566c4d5fc-dggmh" Mar 12 13:36:55 crc kubenswrapper[4778]: I0312 13:36:55.040725 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/7596a69e-33c9-4a2b-89fc-e4c41252b3fd-ovndb-tls-certs\") pod \"neutron-566c4d5fc-dggmh\" (UID: \"7596a69e-33c9-4a2b-89fc-e4c41252b3fd\") " pod="openstack/neutron-566c4d5fc-dggmh" Mar 12 13:36:55 crc kubenswrapper[4778]: I0312 13:36:55.040786 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7596a69e-33c9-4a2b-89fc-e4c41252b3fd-config\") pod \"neutron-566c4d5fc-dggmh\" (UID: \"7596a69e-33c9-4a2b-89fc-e4c41252b3fd\") " pod="openstack/neutron-566c4d5fc-dggmh" Mar 12 13:36:55 crc kubenswrapper[4778]: I0312 13:36:55.040856 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7596a69e-33c9-4a2b-89fc-e4c41252b3fd-httpd-config\") pod \"neutron-566c4d5fc-dggmh\" (UID: \"7596a69e-33c9-4a2b-89fc-e4c41252b3fd\") " pod="openstack/neutron-566c4d5fc-dggmh" Mar 12 13:36:55 crc kubenswrapper[4778]: I0312 13:36:55.040906 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7596a69e-33c9-4a2b-89fc-e4c41252b3fd-combined-ca-bundle\") pod \"neutron-566c4d5fc-dggmh\" (UID: \"7596a69e-33c9-4a2b-89fc-e4c41252b3fd\") " pod="openstack/neutron-566c4d5fc-dggmh" Mar 12 13:36:55 crc kubenswrapper[4778]: I0312 13:36:55.040936 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7596a69e-33c9-4a2b-89fc-e4c41252b3fd-internal-tls-certs\") pod \"neutron-566c4d5fc-dggmh\" (UID: \"7596a69e-33c9-4a2b-89fc-e4c41252b3fd\") " pod="openstack/neutron-566c4d5fc-dggmh" Mar 12 13:36:55 crc kubenswrapper[4778]: I0312 13:36:55.056473 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7596a69e-33c9-4a2b-89fc-e4c41252b3fd-combined-ca-bundle\") pod \"neutron-566c4d5fc-dggmh\" (UID: 
\"7596a69e-33c9-4a2b-89fc-e4c41252b3fd\") " pod="openstack/neutron-566c4d5fc-dggmh" Mar 12 13:36:55 crc kubenswrapper[4778]: I0312 13:36:55.057021 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7596a69e-33c9-4a2b-89fc-e4c41252b3fd-config\") pod \"neutron-566c4d5fc-dggmh\" (UID: \"7596a69e-33c9-4a2b-89fc-e4c41252b3fd\") " pod="openstack/neutron-566c4d5fc-dggmh" Mar 12 13:36:55 crc kubenswrapper[4778]: I0312 13:36:55.060462 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7596a69e-33c9-4a2b-89fc-e4c41252b3fd-httpd-config\") pod \"neutron-566c4d5fc-dggmh\" (UID: \"7596a69e-33c9-4a2b-89fc-e4c41252b3fd\") " pod="openstack/neutron-566c4d5fc-dggmh" Mar 12 13:36:55 crc kubenswrapper[4778]: I0312 13:36:55.060745 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7596a69e-33c9-4a2b-89fc-e4c41252b3fd-ovndb-tls-certs\") pod \"neutron-566c4d5fc-dggmh\" (UID: \"7596a69e-33c9-4a2b-89fc-e4c41252b3fd\") " pod="openstack/neutron-566c4d5fc-dggmh" Mar 12 13:36:55 crc kubenswrapper[4778]: I0312 13:36:55.061537 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7596a69e-33c9-4a2b-89fc-e4c41252b3fd-internal-tls-certs\") pod \"neutron-566c4d5fc-dggmh\" (UID: \"7596a69e-33c9-4a2b-89fc-e4c41252b3fd\") " pod="openstack/neutron-566c4d5fc-dggmh" Mar 12 13:36:55 crc kubenswrapper[4778]: I0312 13:36:55.064025 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kpcb\" (UniqueName: \"kubernetes.io/projected/7596a69e-33c9-4a2b-89fc-e4c41252b3fd-kube-api-access-8kpcb\") pod \"neutron-566c4d5fc-dggmh\" (UID: \"7596a69e-33c9-4a2b-89fc-e4c41252b3fd\") " pod="openstack/neutron-566c4d5fc-dggmh" Mar 12 13:36:55 crc kubenswrapper[4778]: I0312 13:36:55.064743 4778 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7596a69e-33c9-4a2b-89fc-e4c41252b3fd-public-tls-certs\") pod \"neutron-566c4d5fc-dggmh\" (UID: \"7596a69e-33c9-4a2b-89fc-e4c41252b3fd\") " pod="openstack/neutron-566c4d5fc-dggmh" Mar 12 13:36:55 crc kubenswrapper[4778]: I0312 13:36:55.382865 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-566c4d5fc-dggmh" Mar 12 13:36:55 crc kubenswrapper[4778]: I0312 13:36:55.543478 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-769c65dfd5-frvxx" podUID="2e1d8894-7234-40d0-b42a-9d7ab1ce638a" containerName="neutron-api" containerID="cri-o://eb476b810cfe28d9d73622ddd41bf8c8fc415530e6ad67a1faffa32c9bd043ba" gracePeriod=30 Mar 12 13:36:55 crc kubenswrapper[4778]: I0312 13:36:55.544872 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-769c65dfd5-frvxx" podUID="2e1d8894-7234-40d0-b42a-9d7ab1ce638a" containerName="neutron-httpd" containerID="cri-o://1e1f7bdc1b1f277f7f73a02d00233e5689179238b97d5a12c4ec486b0a81ef94" gracePeriod=30 Mar 12 13:36:56 crc kubenswrapper[4778]: I0312 13:36:56.093484 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-566c4d5fc-dggmh"] Mar 12 13:36:56 crc kubenswrapper[4778]: W0312 13:36:56.096057 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7596a69e_33c9_4a2b_89fc_e4c41252b3fd.slice/crio-2eb6ace63a707a07b12839eb74da3ba12196b862b500909424af3b42430d3b71 WatchSource:0}: Error finding container 2eb6ace63a707a07b12839eb74da3ba12196b862b500909424af3b42430d3b71: Status 404 returned error can't find the container with id 2eb6ace63a707a07b12839eb74da3ba12196b862b500909424af3b42430d3b71 Mar 12 13:36:56 crc kubenswrapper[4778]: I0312 13:36:56.698521 4778 generic.go:334] "Generic (PLEG): 
container finished" podID="2e1d8894-7234-40d0-b42a-9d7ab1ce638a" containerID="1e1f7bdc1b1f277f7f73a02d00233e5689179238b97d5a12c4ec486b0a81ef94" exitCode=0 Mar 12 13:36:56 crc kubenswrapper[4778]: I0312 13:36:56.698605 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-769c65dfd5-frvxx" event={"ID":"2e1d8894-7234-40d0-b42a-9d7ab1ce638a","Type":"ContainerDied","Data":"1e1f7bdc1b1f277f7f73a02d00233e5689179238b97d5a12c4ec486b0a81ef94"} Mar 12 13:36:56 crc kubenswrapper[4778]: I0312 13:36:56.701487 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-566c4d5fc-dggmh" event={"ID":"7596a69e-33c9-4a2b-89fc-e4c41252b3fd","Type":"ContainerStarted","Data":"95d8fe1929b2a2cfa341a1e79e3ce52d2a315d8b00026b9096e54bcf440b46e1"} Mar 12 13:36:56 crc kubenswrapper[4778]: I0312 13:36:56.701526 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-566c4d5fc-dggmh" event={"ID":"7596a69e-33c9-4a2b-89fc-e4c41252b3fd","Type":"ContainerStarted","Data":"2eb6ace63a707a07b12839eb74da3ba12196b862b500909424af3b42430d3b71"} Mar 12 13:36:57 crc kubenswrapper[4778]: I0312 13:36:57.097260 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-69b6dc4885-z4h9m" Mar 12 13:36:57 crc kubenswrapper[4778]: I0312 13:36:57.711443 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-566c4d5fc-dggmh" event={"ID":"7596a69e-33c9-4a2b-89fc-e4c41252b3fd","Type":"ContainerStarted","Data":"1517d69a357a59e29c554dbc1da03937cc2b1241837601f9ab3357c44fb10e2f"} Mar 12 13:36:57 crc kubenswrapper[4778]: I0312 13:36:57.712147 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-566c4d5fc-dggmh" Mar 12 13:36:57 crc kubenswrapper[4778]: I0312 13:36:57.730201 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-566c4d5fc-dggmh" podStartSLOduration=3.73016016 podStartE2EDuration="3.73016016s" 
podCreationTimestamp="2026-03-12 13:36:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:36:57.725858837 +0000 UTC m=+1636.174554253" watchObservedRunningTime="2026-03-12 13:36:57.73016016 +0000 UTC m=+1636.178855576" Mar 12 13:37:01 crc kubenswrapper[4778]: I0312 13:37:01.687622 4778 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-pn8tk" podUID="5e38a4fd-95f8-437b-923b-eca33b1387e6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.70:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 12 13:37:04 crc kubenswrapper[4778]: I0312 13:37:04.915612 4778 generic.go:334] "Generic (PLEG): container finished" podID="2e1d8894-7234-40d0-b42a-9d7ab1ce638a" containerID="eb476b810cfe28d9d73622ddd41bf8c8fc415530e6ad67a1faffa32c9bd043ba" exitCode=0 Mar 12 13:37:04 crc kubenswrapper[4778]: I0312 13:37:04.915658 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-769c65dfd5-frvxx" event={"ID":"2e1d8894-7234-40d0-b42a-9d7ab1ce638a","Type":"ContainerDied","Data":"eb476b810cfe28d9d73622ddd41bf8c8fc415530e6ad67a1faffa32c9bd043ba"} Mar 12 13:37:05 crc kubenswrapper[4778]: I0312 13:37:05.320845 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-769c65dfd5-frvxx" Mar 12 13:37:05 crc kubenswrapper[4778]: I0312 13:37:05.385575 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2e1d8894-7234-40d0-b42a-9d7ab1ce638a-config\") pod \"2e1d8894-7234-40d0-b42a-9d7ab1ce638a\" (UID: \"2e1d8894-7234-40d0-b42a-9d7ab1ce638a\") " Mar 12 13:37:05 crc kubenswrapper[4778]: I0312 13:37:05.385697 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2e1d8894-7234-40d0-b42a-9d7ab1ce638a-httpd-config\") pod \"2e1d8894-7234-40d0-b42a-9d7ab1ce638a\" (UID: \"2e1d8894-7234-40d0-b42a-9d7ab1ce638a\") " Mar 12 13:37:05 crc kubenswrapper[4778]: I0312 13:37:05.385757 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e1d8894-7234-40d0-b42a-9d7ab1ce638a-internal-tls-certs\") pod \"2e1d8894-7234-40d0-b42a-9d7ab1ce638a\" (UID: \"2e1d8894-7234-40d0-b42a-9d7ab1ce638a\") " Mar 12 13:37:05 crc kubenswrapper[4778]: I0312 13:37:05.385811 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e1d8894-7234-40d0-b42a-9d7ab1ce638a-public-tls-certs\") pod \"2e1d8894-7234-40d0-b42a-9d7ab1ce638a\" (UID: \"2e1d8894-7234-40d0-b42a-9d7ab1ce638a\") " Mar 12 13:37:05 crc kubenswrapper[4778]: I0312 13:37:05.385927 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqjss\" (UniqueName: \"kubernetes.io/projected/2e1d8894-7234-40d0-b42a-9d7ab1ce638a-kube-api-access-nqjss\") pod \"2e1d8894-7234-40d0-b42a-9d7ab1ce638a\" (UID: \"2e1d8894-7234-40d0-b42a-9d7ab1ce638a\") " Mar 12 13:37:05 crc kubenswrapper[4778]: I0312 13:37:05.385973 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/2e1d8894-7234-40d0-b42a-9d7ab1ce638a-combined-ca-bundle\") pod \"2e1d8894-7234-40d0-b42a-9d7ab1ce638a\" (UID: \"2e1d8894-7234-40d0-b42a-9d7ab1ce638a\") " Mar 12 13:37:05 crc kubenswrapper[4778]: I0312 13:37:05.386092 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e1d8894-7234-40d0-b42a-9d7ab1ce638a-ovndb-tls-certs\") pod \"2e1d8894-7234-40d0-b42a-9d7ab1ce638a\" (UID: \"2e1d8894-7234-40d0-b42a-9d7ab1ce638a\") " Mar 12 13:37:05 crc kubenswrapper[4778]: I0312 13:37:05.392436 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e1d8894-7234-40d0-b42a-9d7ab1ce638a-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "2e1d8894-7234-40d0-b42a-9d7ab1ce638a" (UID: "2e1d8894-7234-40d0-b42a-9d7ab1ce638a"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:37:05 crc kubenswrapper[4778]: I0312 13:37:05.392510 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e1d8894-7234-40d0-b42a-9d7ab1ce638a-kube-api-access-nqjss" (OuterVolumeSpecName: "kube-api-access-nqjss") pod "2e1d8894-7234-40d0-b42a-9d7ab1ce638a" (UID: "2e1d8894-7234-40d0-b42a-9d7ab1ce638a"). InnerVolumeSpecName "kube-api-access-nqjss". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:37:05 crc kubenswrapper[4778]: I0312 13:37:05.443295 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e1d8894-7234-40d0-b42a-9d7ab1ce638a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e1d8894-7234-40d0-b42a-9d7ab1ce638a" (UID: "2e1d8894-7234-40d0-b42a-9d7ab1ce638a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:37:05 crc kubenswrapper[4778]: I0312 13:37:05.445082 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e1d8894-7234-40d0-b42a-9d7ab1ce638a-config" (OuterVolumeSpecName: "config") pod "2e1d8894-7234-40d0-b42a-9d7ab1ce638a" (UID: "2e1d8894-7234-40d0-b42a-9d7ab1ce638a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:37:05 crc kubenswrapper[4778]: I0312 13:37:05.447615 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e1d8894-7234-40d0-b42a-9d7ab1ce638a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2e1d8894-7234-40d0-b42a-9d7ab1ce638a" (UID: "2e1d8894-7234-40d0-b42a-9d7ab1ce638a"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:37:05 crc kubenswrapper[4778]: I0312 13:37:05.454524 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e1d8894-7234-40d0-b42a-9d7ab1ce638a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2e1d8894-7234-40d0-b42a-9d7ab1ce638a" (UID: "2e1d8894-7234-40d0-b42a-9d7ab1ce638a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:37:05 crc kubenswrapper[4778]: I0312 13:37:05.468336 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e1d8894-7234-40d0-b42a-9d7ab1ce638a-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "2e1d8894-7234-40d0-b42a-9d7ab1ce638a" (UID: "2e1d8894-7234-40d0-b42a-9d7ab1ce638a"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:37:05 crc kubenswrapper[4778]: I0312 13:37:05.488743 4778 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e1d8894-7234-40d0-b42a-9d7ab1ce638a-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 13:37:05 crc kubenswrapper[4778]: I0312 13:37:05.488940 4778 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/2e1d8894-7234-40d0-b42a-9d7ab1ce638a-config\") on node \"crc\" DevicePath \"\"" Mar 12 13:37:05 crc kubenswrapper[4778]: I0312 13:37:05.489029 4778 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2e1d8894-7234-40d0-b42a-9d7ab1ce638a-httpd-config\") on node \"crc\" DevicePath \"\"" Mar 12 13:37:05 crc kubenswrapper[4778]: I0312 13:37:05.489108 4778 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e1d8894-7234-40d0-b42a-9d7ab1ce638a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 13:37:05 crc kubenswrapper[4778]: I0312 13:37:05.489217 4778 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e1d8894-7234-40d0-b42a-9d7ab1ce638a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 13:37:05 crc kubenswrapper[4778]: I0312 13:37:05.489307 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqjss\" (UniqueName: \"kubernetes.io/projected/2e1d8894-7234-40d0-b42a-9d7ab1ce638a-kube-api-access-nqjss\") on node \"crc\" DevicePath \"\"" Mar 12 13:37:05 crc kubenswrapper[4778]: I0312 13:37:05.489372 4778 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e1d8894-7234-40d0-b42a-9d7ab1ce638a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:37:05 crc kubenswrapper[4778]: I0312 13:37:05.929409 4778 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-769c65dfd5-frvxx" event={"ID":"2e1d8894-7234-40d0-b42a-9d7ab1ce638a","Type":"ContainerDied","Data":"e2d775202948449d32b3e6f8c66299f17943aeca0f3f57c7b82f6f8283ff7095"} Mar 12 13:37:05 crc kubenswrapper[4778]: I0312 13:37:05.929459 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-769c65dfd5-frvxx" Mar 12 13:37:05 crc kubenswrapper[4778]: I0312 13:37:05.929745 4778 scope.go:117] "RemoveContainer" containerID="1e1f7bdc1b1f277f7f73a02d00233e5689179238b97d5a12c4ec486b0a81ef94" Mar 12 13:37:05 crc kubenswrapper[4778]: I0312 13:37:05.975133 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-769c65dfd5-frvxx"] Mar 12 13:37:06 crc kubenswrapper[4778]: I0312 13:37:06.361129 4778 scope.go:117] "RemoveContainer" containerID="eb476b810cfe28d9d73622ddd41bf8c8fc415530e6ad67a1faffa32c9bd043ba" Mar 12 13:37:06 crc kubenswrapper[4778]: I0312 13:37:06.368714 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-769c65dfd5-frvxx"] Mar 12 13:37:07 crc kubenswrapper[4778]: I0312 13:37:07.254702 4778 scope.go:117] "RemoveContainer" containerID="fbdf0765f9c2ff5952a8a2a2b43d61ef771ac404cabeb86051f9ffe5a9fd882e" Mar 12 13:37:07 crc kubenswrapper[4778]: E0312 13:37:07.255755 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 13:37:08 crc kubenswrapper[4778]: I0312 13:37:08.267922 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e1d8894-7234-40d0-b42a-9d7ab1ce638a" 
path="/var/lib/kubelet/pods/2e1d8894-7234-40d0-b42a-9d7ab1ce638a/volumes" Mar 12 13:37:15 crc kubenswrapper[4778]: I0312 13:37:15.909044 4778 scope.go:117] "RemoveContainer" containerID="6addcbc9f6e1bd0c36c2127749a9343943bce9503688868083bfb8596a8eda94" Mar 12 13:37:18 crc kubenswrapper[4778]: I0312 13:37:18.254656 4778 scope.go:117] "RemoveContainer" containerID="fbdf0765f9c2ff5952a8a2a2b43d61ef771ac404cabeb86051f9ffe5a9fd882e" Mar 12 13:37:18 crc kubenswrapper[4778]: E0312 13:37:18.255457 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 13:37:25 crc kubenswrapper[4778]: I0312 13:37:25.395514 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-566c4d5fc-dggmh" Mar 12 13:37:25 crc kubenswrapper[4778]: I0312 13:37:25.465232 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-769c65dfd5-t7d9g"] Mar 12 13:37:25 crc kubenswrapper[4778]: I0312 13:37:25.465522 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-769c65dfd5-t7d9g" podUID="e3118f8b-6bd2-4fba-8300-114513770916" containerName="neutron-api" containerID="cri-o://7559ac32cffd7eca339ac8d8d2f5491100a0167d9ce788c2eb95e805cc071cda" gracePeriod=30 Mar 12 13:37:25 crc kubenswrapper[4778]: I0312 13:37:25.465649 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-769c65dfd5-t7d9g" podUID="e3118f8b-6bd2-4fba-8300-114513770916" containerName="neutron-httpd" containerID="cri-o://6738f9dd946d748869f4b26f4030a90ea55b7a4599f29ac178ad859657a706f7" gracePeriod=30 Mar 12 13:37:26 crc 
kubenswrapper[4778]: I0312 13:37:26.134630 4778 generic.go:334] "Generic (PLEG): container finished" podID="e3118f8b-6bd2-4fba-8300-114513770916" containerID="6738f9dd946d748869f4b26f4030a90ea55b7a4599f29ac178ad859657a706f7" exitCode=0 Mar 12 13:37:26 crc kubenswrapper[4778]: I0312 13:37:26.134695 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-769c65dfd5-t7d9g" event={"ID":"e3118f8b-6bd2-4fba-8300-114513770916","Type":"ContainerDied","Data":"6738f9dd946d748869f4b26f4030a90ea55b7a4599f29ac178ad859657a706f7"} Mar 12 13:37:32 crc kubenswrapper[4778]: I0312 13:37:32.273668 4778 scope.go:117] "RemoveContainer" containerID="fbdf0765f9c2ff5952a8a2a2b43d61ef771ac404cabeb86051f9ffe5a9fd882e" Mar 12 13:37:32 crc kubenswrapper[4778]: E0312 13:37:32.274437 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 13:37:43 crc kubenswrapper[4778]: I0312 13:37:43.391541 4778 generic.go:334] "Generic (PLEG): container finished" podID="e3118f8b-6bd2-4fba-8300-114513770916" containerID="7559ac32cffd7eca339ac8d8d2f5491100a0167d9ce788c2eb95e805cc071cda" exitCode=0 Mar 12 13:37:43 crc kubenswrapper[4778]: I0312 13:37:43.391616 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-769c65dfd5-t7d9g" event={"ID":"e3118f8b-6bd2-4fba-8300-114513770916","Type":"ContainerDied","Data":"7559ac32cffd7eca339ac8d8d2f5491100a0167d9ce788c2eb95e805cc071cda"} Mar 12 13:37:43 crc kubenswrapper[4778]: I0312 13:37:43.808566 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-769c65dfd5-t7d9g" Mar 12 13:37:43 crc kubenswrapper[4778]: I0312 13:37:43.896706 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3118f8b-6bd2-4fba-8300-114513770916-combined-ca-bundle\") pod \"e3118f8b-6bd2-4fba-8300-114513770916\" (UID: \"e3118f8b-6bd2-4fba-8300-114513770916\") " Mar 12 13:37:43 crc kubenswrapper[4778]: I0312 13:37:43.896757 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4f76t\" (UniqueName: \"kubernetes.io/projected/e3118f8b-6bd2-4fba-8300-114513770916-kube-api-access-4f76t\") pod \"e3118f8b-6bd2-4fba-8300-114513770916\" (UID: \"e3118f8b-6bd2-4fba-8300-114513770916\") " Mar 12 13:37:43 crc kubenswrapper[4778]: I0312 13:37:43.896817 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3118f8b-6bd2-4fba-8300-114513770916-ovndb-tls-certs\") pod \"e3118f8b-6bd2-4fba-8300-114513770916\" (UID: \"e3118f8b-6bd2-4fba-8300-114513770916\") " Mar 12 13:37:43 crc kubenswrapper[4778]: I0312 13:37:43.896910 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e3118f8b-6bd2-4fba-8300-114513770916-httpd-config\") pod \"e3118f8b-6bd2-4fba-8300-114513770916\" (UID: \"e3118f8b-6bd2-4fba-8300-114513770916\") " Mar 12 13:37:43 crc kubenswrapper[4778]: I0312 13:37:43.896979 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3118f8b-6bd2-4fba-8300-114513770916-public-tls-certs\") pod \"e3118f8b-6bd2-4fba-8300-114513770916\" (UID: \"e3118f8b-6bd2-4fba-8300-114513770916\") " Mar 12 13:37:43 crc kubenswrapper[4778]: I0312 13:37:43.897067 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3118f8b-6bd2-4fba-8300-114513770916-internal-tls-certs\") pod \"e3118f8b-6bd2-4fba-8300-114513770916\" (UID: \"e3118f8b-6bd2-4fba-8300-114513770916\") " Mar 12 13:37:43 crc kubenswrapper[4778]: I0312 13:37:43.897206 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e3118f8b-6bd2-4fba-8300-114513770916-config\") pod \"e3118f8b-6bd2-4fba-8300-114513770916\" (UID: \"e3118f8b-6bd2-4fba-8300-114513770916\") " Mar 12 13:37:43 crc kubenswrapper[4778]: I0312 13:37:43.904246 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3118f8b-6bd2-4fba-8300-114513770916-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "e3118f8b-6bd2-4fba-8300-114513770916" (UID: "e3118f8b-6bd2-4fba-8300-114513770916"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:37:43 crc kubenswrapper[4778]: I0312 13:37:43.905305 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3118f8b-6bd2-4fba-8300-114513770916-kube-api-access-4f76t" (OuterVolumeSpecName: "kube-api-access-4f76t") pod "e3118f8b-6bd2-4fba-8300-114513770916" (UID: "e3118f8b-6bd2-4fba-8300-114513770916"). InnerVolumeSpecName "kube-api-access-4f76t". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:37:43 crc kubenswrapper[4778]: I0312 13:37:43.948230 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3118f8b-6bd2-4fba-8300-114513770916-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "e3118f8b-6bd2-4fba-8300-114513770916" (UID: "e3118f8b-6bd2-4fba-8300-114513770916"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:37:43 crc kubenswrapper[4778]: I0312 13:37:43.958733 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3118f8b-6bd2-4fba-8300-114513770916-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "e3118f8b-6bd2-4fba-8300-114513770916" (UID: "e3118f8b-6bd2-4fba-8300-114513770916"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:37:43 crc kubenswrapper[4778]: I0312 13:37:43.962354 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3118f8b-6bd2-4fba-8300-114513770916-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e3118f8b-6bd2-4fba-8300-114513770916" (UID: "e3118f8b-6bd2-4fba-8300-114513770916"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:37:43 crc kubenswrapper[4778]: I0312 13:37:43.969983 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3118f8b-6bd2-4fba-8300-114513770916-config" (OuterVolumeSpecName: "config") pod "e3118f8b-6bd2-4fba-8300-114513770916" (UID: "e3118f8b-6bd2-4fba-8300-114513770916"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:37:43 crc kubenswrapper[4778]: I0312 13:37:43.999302 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3118f8b-6bd2-4fba-8300-114513770916-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "e3118f8b-6bd2-4fba-8300-114513770916" (UID: "e3118f8b-6bd2-4fba-8300-114513770916"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:37:43 crc kubenswrapper[4778]: I0312 13:37:43.999529 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3118f8b-6bd2-4fba-8300-114513770916-ovndb-tls-certs\") pod \"e3118f8b-6bd2-4fba-8300-114513770916\" (UID: \"e3118f8b-6bd2-4fba-8300-114513770916\") " Mar 12 13:37:43 crc kubenswrapper[4778]: I0312 13:37:43.999957 4778 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e3118f8b-6bd2-4fba-8300-114513770916-httpd-config\") on node \"crc\" DevicePath \"\"" Mar 12 13:37:43 crc kubenswrapper[4778]: I0312 13:37:43.999977 4778 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3118f8b-6bd2-4fba-8300-114513770916-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 13:37:44 crc kubenswrapper[4778]: I0312 13:37:43.999989 4778 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3118f8b-6bd2-4fba-8300-114513770916-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 13:37:44 crc kubenswrapper[4778]: I0312 13:37:44.000001 4778 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/e3118f8b-6bd2-4fba-8300-114513770916-config\") on node \"crc\" DevicePath \"\"" Mar 12 13:37:44 crc kubenswrapper[4778]: I0312 13:37:44.000010 4778 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3118f8b-6bd2-4fba-8300-114513770916-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:37:44 crc kubenswrapper[4778]: I0312 13:37:44.000020 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4f76t\" (UniqueName: \"kubernetes.io/projected/e3118f8b-6bd2-4fba-8300-114513770916-kube-api-access-4f76t\") on node \"crc\" 
DevicePath \"\"" Mar 12 13:37:44 crc kubenswrapper[4778]: W0312 13:37:44.000093 4778 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/e3118f8b-6bd2-4fba-8300-114513770916/volumes/kubernetes.io~secret/ovndb-tls-certs Mar 12 13:37:44 crc kubenswrapper[4778]: I0312 13:37:44.000105 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3118f8b-6bd2-4fba-8300-114513770916-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "e3118f8b-6bd2-4fba-8300-114513770916" (UID: "e3118f8b-6bd2-4fba-8300-114513770916"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:37:44 crc kubenswrapper[4778]: I0312 13:37:44.101351 4778 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3118f8b-6bd2-4fba-8300-114513770916-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 12 13:37:44 crc kubenswrapper[4778]: I0312 13:37:44.401857 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-769c65dfd5-t7d9g" event={"ID":"e3118f8b-6bd2-4fba-8300-114513770916","Type":"ContainerDied","Data":"8300c5c0870d3a0dc15fa6bca84b387efeba0222c0e9b918971777a65c2fcb29"} Mar 12 13:37:44 crc kubenswrapper[4778]: I0312 13:37:44.401911 4778 scope.go:117] "RemoveContainer" containerID="6738f9dd946d748869f4b26f4030a90ea55b7a4599f29ac178ad859657a706f7" Mar 12 13:37:44 crc kubenswrapper[4778]: I0312 13:37:44.401920 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-769c65dfd5-t7d9g" Mar 12 13:37:44 crc kubenswrapper[4778]: I0312 13:37:44.439684 4778 scope.go:117] "RemoveContainer" containerID="7559ac32cffd7eca339ac8d8d2f5491100a0167d9ce788c2eb95e805cc071cda" Mar 12 13:37:44 crc kubenswrapper[4778]: I0312 13:37:44.444158 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-769c65dfd5-t7d9g"] Mar 12 13:37:44 crc kubenswrapper[4778]: I0312 13:37:44.453647 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-769c65dfd5-t7d9g"] Mar 12 13:37:46 crc kubenswrapper[4778]: I0312 13:37:46.254668 4778 scope.go:117] "RemoveContainer" containerID="fbdf0765f9c2ff5952a8a2a2b43d61ef771ac404cabeb86051f9ffe5a9fd882e" Mar 12 13:37:46 crc kubenswrapper[4778]: E0312 13:37:46.255332 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 13:37:46 crc kubenswrapper[4778]: I0312 13:37:46.265873 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3118f8b-6bd2-4fba-8300-114513770916" path="/var/lib/kubelet/pods/e3118f8b-6bd2-4fba-8300-114513770916/volumes" Mar 12 13:37:47 crc kubenswrapper[4778]: I0312 13:37:47.563697 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c5988475-bw257"] Mar 12 13:37:47 crc kubenswrapper[4778]: E0312 13:37:47.564491 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3118f8b-6bd2-4fba-8300-114513770916" containerName="neutron-httpd" Mar 12 13:37:47 crc kubenswrapper[4778]: I0312 13:37:47.564509 4778 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e3118f8b-6bd2-4fba-8300-114513770916" containerName="neutron-httpd" Mar 12 13:37:47 crc kubenswrapper[4778]: E0312 13:37:47.564526 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3118f8b-6bd2-4fba-8300-114513770916" containerName="neutron-api" Mar 12 13:37:47 crc kubenswrapper[4778]: I0312 13:37:47.564535 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3118f8b-6bd2-4fba-8300-114513770916" containerName="neutron-api" Mar 12 13:37:47 crc kubenswrapper[4778]: E0312 13:37:47.564562 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e1d8894-7234-40d0-b42a-9d7ab1ce638a" containerName="neutron-api" Mar 12 13:37:47 crc kubenswrapper[4778]: I0312 13:37:47.564570 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e1d8894-7234-40d0-b42a-9d7ab1ce638a" containerName="neutron-api" Mar 12 13:37:47 crc kubenswrapper[4778]: E0312 13:37:47.564581 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e1d8894-7234-40d0-b42a-9d7ab1ce638a" containerName="neutron-httpd" Mar 12 13:37:47 crc kubenswrapper[4778]: I0312 13:37:47.564588 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e1d8894-7234-40d0-b42a-9d7ab1ce638a" containerName="neutron-httpd" Mar 12 13:37:47 crc kubenswrapper[4778]: I0312 13:37:47.564797 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3118f8b-6bd2-4fba-8300-114513770916" containerName="neutron-api" Mar 12 13:37:47 crc kubenswrapper[4778]: I0312 13:37:47.564827 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e1d8894-7234-40d0-b42a-9d7ab1ce638a" containerName="neutron-httpd" Mar 12 13:37:47 crc kubenswrapper[4778]: I0312 13:37:47.564839 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e1d8894-7234-40d0-b42a-9d7ab1ce638a" containerName="neutron-api" Mar 12 13:37:47 crc kubenswrapper[4778]: I0312 13:37:47.564855 4778 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e3118f8b-6bd2-4fba-8300-114513770916" containerName="neutron-httpd" Mar 12 13:37:47 crc kubenswrapper[4778]: I0312 13:37:47.565973 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c5988475-bw257" Mar 12 13:37:47 crc kubenswrapper[4778]: I0312 13:37:47.568058 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Mar 12 13:37:47 crc kubenswrapper[4778]: I0312 13:37:47.583273 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c5988475-bw257"] Mar 12 13:37:47 crc kubenswrapper[4778]: I0312 13:37:47.672036 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/8ebe6a74-d22a-427c-b7a5-f4212457f7d3-openstack-edpm-ipam\") pod \"dnsmasq-dns-7c5988475-bw257\" (UID: \"8ebe6a74-d22a-427c-b7a5-f4212457f7d3\") " pod="openstack/dnsmasq-dns-7c5988475-bw257" Mar 12 13:37:47 crc kubenswrapper[4778]: I0312 13:37:47.672092 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8ebe6a74-d22a-427c-b7a5-f4212457f7d3-dns-swift-storage-0\") pod \"dnsmasq-dns-7c5988475-bw257\" (UID: \"8ebe6a74-d22a-427c-b7a5-f4212457f7d3\") " pod="openstack/dnsmasq-dns-7c5988475-bw257" Mar 12 13:37:47 crc kubenswrapper[4778]: I0312 13:37:47.672136 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ebe6a74-d22a-427c-b7a5-f4212457f7d3-ovsdbserver-nb\") pod \"dnsmasq-dns-7c5988475-bw257\" (UID: \"8ebe6a74-d22a-427c-b7a5-f4212457f7d3\") " pod="openstack/dnsmasq-dns-7c5988475-bw257" Mar 12 13:37:47 crc kubenswrapper[4778]: I0312 13:37:47.672312 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-cbdfk\" (UniqueName: \"kubernetes.io/projected/8ebe6a74-d22a-427c-b7a5-f4212457f7d3-kube-api-access-cbdfk\") pod \"dnsmasq-dns-7c5988475-bw257\" (UID: \"8ebe6a74-d22a-427c-b7a5-f4212457f7d3\") " pod="openstack/dnsmasq-dns-7c5988475-bw257" Mar 12 13:37:47 crc kubenswrapper[4778]: I0312 13:37:47.672448 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ebe6a74-d22a-427c-b7a5-f4212457f7d3-dns-svc\") pod \"dnsmasq-dns-7c5988475-bw257\" (UID: \"8ebe6a74-d22a-427c-b7a5-f4212457f7d3\") " pod="openstack/dnsmasq-dns-7c5988475-bw257" Mar 12 13:37:47 crc kubenswrapper[4778]: I0312 13:37:47.672508 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ebe6a74-d22a-427c-b7a5-f4212457f7d3-config\") pod \"dnsmasq-dns-7c5988475-bw257\" (UID: \"8ebe6a74-d22a-427c-b7a5-f4212457f7d3\") " pod="openstack/dnsmasq-dns-7c5988475-bw257" Mar 12 13:37:47 crc kubenswrapper[4778]: I0312 13:37:47.672617 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ebe6a74-d22a-427c-b7a5-f4212457f7d3-ovsdbserver-sb\") pod \"dnsmasq-dns-7c5988475-bw257\" (UID: \"8ebe6a74-d22a-427c-b7a5-f4212457f7d3\") " pod="openstack/dnsmasq-dns-7c5988475-bw257" Mar 12 13:37:47 crc kubenswrapper[4778]: I0312 13:37:47.774741 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbdfk\" (UniqueName: \"kubernetes.io/projected/8ebe6a74-d22a-427c-b7a5-f4212457f7d3-kube-api-access-cbdfk\") pod \"dnsmasq-dns-7c5988475-bw257\" (UID: \"8ebe6a74-d22a-427c-b7a5-f4212457f7d3\") " pod="openstack/dnsmasq-dns-7c5988475-bw257" Mar 12 13:37:47 crc kubenswrapper[4778]: I0312 13:37:47.774822 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ebe6a74-d22a-427c-b7a5-f4212457f7d3-dns-svc\") pod \"dnsmasq-dns-7c5988475-bw257\" (UID: \"8ebe6a74-d22a-427c-b7a5-f4212457f7d3\") " pod="openstack/dnsmasq-dns-7c5988475-bw257" Mar 12 13:37:47 crc kubenswrapper[4778]: I0312 13:37:47.774862 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ebe6a74-d22a-427c-b7a5-f4212457f7d3-config\") pod \"dnsmasq-dns-7c5988475-bw257\" (UID: \"8ebe6a74-d22a-427c-b7a5-f4212457f7d3\") " pod="openstack/dnsmasq-dns-7c5988475-bw257" Mar 12 13:37:47 crc kubenswrapper[4778]: I0312 13:37:47.776027 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ebe6a74-d22a-427c-b7a5-f4212457f7d3-dns-svc\") pod \"dnsmasq-dns-7c5988475-bw257\" (UID: \"8ebe6a74-d22a-427c-b7a5-f4212457f7d3\") " pod="openstack/dnsmasq-dns-7c5988475-bw257" Mar 12 13:37:47 crc kubenswrapper[4778]: I0312 13:37:47.776060 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ebe6a74-d22a-427c-b7a5-f4212457f7d3-ovsdbserver-sb\") pod \"dnsmasq-dns-7c5988475-bw257\" (UID: \"8ebe6a74-d22a-427c-b7a5-f4212457f7d3\") " pod="openstack/dnsmasq-dns-7c5988475-bw257" Mar 12 13:37:47 crc kubenswrapper[4778]: I0312 13:37:47.776085 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ebe6a74-d22a-427c-b7a5-f4212457f7d3-config\") pod \"dnsmasq-dns-7c5988475-bw257\" (UID: \"8ebe6a74-d22a-427c-b7a5-f4212457f7d3\") " pod="openstack/dnsmasq-dns-7c5988475-bw257" Mar 12 13:37:47 crc kubenswrapper[4778]: I0312 13:37:47.776135 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ebe6a74-d22a-427c-b7a5-f4212457f7d3-ovsdbserver-sb\") pod \"dnsmasq-dns-7c5988475-bw257\" 
(UID: \"8ebe6a74-d22a-427c-b7a5-f4212457f7d3\") " pod="openstack/dnsmasq-dns-7c5988475-bw257" Mar 12 13:37:47 crc kubenswrapper[4778]: I0312 13:37:47.776261 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/8ebe6a74-d22a-427c-b7a5-f4212457f7d3-openstack-edpm-ipam\") pod \"dnsmasq-dns-7c5988475-bw257\" (UID: \"8ebe6a74-d22a-427c-b7a5-f4212457f7d3\") " pod="openstack/dnsmasq-dns-7c5988475-bw257" Mar 12 13:37:47 crc kubenswrapper[4778]: I0312 13:37:47.776917 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/8ebe6a74-d22a-427c-b7a5-f4212457f7d3-openstack-edpm-ipam\") pod \"dnsmasq-dns-7c5988475-bw257\" (UID: \"8ebe6a74-d22a-427c-b7a5-f4212457f7d3\") " pod="openstack/dnsmasq-dns-7c5988475-bw257" Mar 12 13:37:47 crc kubenswrapper[4778]: I0312 13:37:47.776986 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8ebe6a74-d22a-427c-b7a5-f4212457f7d3-dns-swift-storage-0\") pod \"dnsmasq-dns-7c5988475-bw257\" (UID: \"8ebe6a74-d22a-427c-b7a5-f4212457f7d3\") " pod="openstack/dnsmasq-dns-7c5988475-bw257" Mar 12 13:37:47 crc kubenswrapper[4778]: I0312 13:37:47.777708 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8ebe6a74-d22a-427c-b7a5-f4212457f7d3-dns-swift-storage-0\") pod \"dnsmasq-dns-7c5988475-bw257\" (UID: \"8ebe6a74-d22a-427c-b7a5-f4212457f7d3\") " pod="openstack/dnsmasq-dns-7c5988475-bw257" Mar 12 13:37:47 crc kubenswrapper[4778]: I0312 13:37:47.778482 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ebe6a74-d22a-427c-b7a5-f4212457f7d3-ovsdbserver-nb\") pod \"dnsmasq-dns-7c5988475-bw257\" (UID: \"8ebe6a74-d22a-427c-b7a5-f4212457f7d3\") 
" pod="openstack/dnsmasq-dns-7c5988475-bw257" Mar 12 13:37:47 crc kubenswrapper[4778]: I0312 13:37:47.777797 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ebe6a74-d22a-427c-b7a5-f4212457f7d3-ovsdbserver-nb\") pod \"dnsmasq-dns-7c5988475-bw257\" (UID: \"8ebe6a74-d22a-427c-b7a5-f4212457f7d3\") " pod="openstack/dnsmasq-dns-7c5988475-bw257" Mar 12 13:37:47 crc kubenswrapper[4778]: I0312 13:37:47.795712 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbdfk\" (UniqueName: \"kubernetes.io/projected/8ebe6a74-d22a-427c-b7a5-f4212457f7d3-kube-api-access-cbdfk\") pod \"dnsmasq-dns-7c5988475-bw257\" (UID: \"8ebe6a74-d22a-427c-b7a5-f4212457f7d3\") " pod="openstack/dnsmasq-dns-7c5988475-bw257" Mar 12 13:37:47 crc kubenswrapper[4778]: I0312 13:37:47.886703 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c5988475-bw257" Mar 12 13:37:48 crc kubenswrapper[4778]: I0312 13:37:48.350342 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c5988475-bw257"] Mar 12 13:37:48 crc kubenswrapper[4778]: I0312 13:37:48.466068 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c5988475-bw257" event={"ID":"8ebe6a74-d22a-427c-b7a5-f4212457f7d3","Type":"ContainerStarted","Data":"c86a22334e8cef280e44974db1684c5e3e847838867268917e47cc90abe99154"} Mar 12 13:37:49 crc kubenswrapper[4778]: I0312 13:37:49.476066 4778 generic.go:334] "Generic (PLEG): container finished" podID="8ebe6a74-d22a-427c-b7a5-f4212457f7d3" containerID="8675bbbc50526e6bd6a4f0eaa32cd9b3df1cfc38839bff57f7cd99b7bd0da73b" exitCode=0 Mar 12 13:37:49 crc kubenswrapper[4778]: I0312 13:37:49.476146 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c5988475-bw257" 
event={"ID":"8ebe6a74-d22a-427c-b7a5-f4212457f7d3","Type":"ContainerDied","Data":"8675bbbc50526e6bd6a4f0eaa32cd9b3df1cfc38839bff57f7cd99b7bd0da73b"} Mar 12 13:37:50 crc kubenswrapper[4778]: I0312 13:37:50.486872 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c5988475-bw257" event={"ID":"8ebe6a74-d22a-427c-b7a5-f4212457f7d3","Type":"ContainerStarted","Data":"cb0a2bf0ec904178e0889259a4ca3a676df6891d1b234fde05f9bd7a8c828b69"} Mar 12 13:37:50 crc kubenswrapper[4778]: I0312 13:37:50.487055 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7c5988475-bw257" Mar 12 13:37:57 crc kubenswrapper[4778]: I0312 13:37:57.888448 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7c5988475-bw257" Mar 12 13:37:57 crc kubenswrapper[4778]: I0312 13:37:57.919501 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7c5988475-bw257" podStartSLOduration=10.919477663 podStartE2EDuration="10.919477663s" podCreationTimestamp="2026-03-12 13:37:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:37:50.505256141 +0000 UTC m=+1688.953951547" watchObservedRunningTime="2026-03-12 13:37:57.919477663 +0000 UTC m=+1696.368173059" Mar 12 13:37:57 crc kubenswrapper[4778]: I0312 13:37:57.967932 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f59c7d6f9-7f6bj"] Mar 12 13:37:57 crc kubenswrapper[4778]: I0312 13:37:57.968286 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6f59c7d6f9-7f6bj" podUID="a677b5ba-f5d3-4310-ab6d-af0505e82a00" containerName="dnsmasq-dns" containerID="cri-o://80b9a94e51ace133a39bb4f360454c37e2be50602309d428d0792de3b24d2efc" gracePeriod=10 Mar 12 13:37:58 crc kubenswrapper[4778]: I0312 13:37:58.114040 4778 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f89cfcd7f-vk6h4"] Mar 12 13:37:58 crc kubenswrapper[4778]: I0312 13:37:58.115802 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f89cfcd7f-vk6h4" Mar 12 13:37:58 crc kubenswrapper[4778]: I0312 13:37:58.194486 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f89cfcd7f-vk6h4"] Mar 12 13:37:58 crc kubenswrapper[4778]: I0312 13:37:58.210399 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46f34397-57fe-425d-b69d-040f4384ac69-dns-svc\") pod \"dnsmasq-dns-6f89cfcd7f-vk6h4\" (UID: \"46f34397-57fe-425d-b69d-040f4384ac69\") " pod="openstack/dnsmasq-dns-6f89cfcd7f-vk6h4" Mar 12 13:37:58 crc kubenswrapper[4778]: I0312 13:37:58.210719 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2xdg\" (UniqueName: \"kubernetes.io/projected/46f34397-57fe-425d-b69d-040f4384ac69-kube-api-access-d2xdg\") pod \"dnsmasq-dns-6f89cfcd7f-vk6h4\" (UID: \"46f34397-57fe-425d-b69d-040f4384ac69\") " pod="openstack/dnsmasq-dns-6f89cfcd7f-vk6h4" Mar 12 13:37:58 crc kubenswrapper[4778]: I0312 13:37:58.210788 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/46f34397-57fe-425d-b69d-040f4384ac69-ovsdbserver-sb\") pod \"dnsmasq-dns-6f89cfcd7f-vk6h4\" (UID: \"46f34397-57fe-425d-b69d-040f4384ac69\") " pod="openstack/dnsmasq-dns-6f89cfcd7f-vk6h4" Mar 12 13:37:58 crc kubenswrapper[4778]: I0312 13:37:58.210832 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/46f34397-57fe-425d-b69d-040f4384ac69-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f89cfcd7f-vk6h4\" (UID: 
\"46f34397-57fe-425d-b69d-040f4384ac69\") " pod="openstack/dnsmasq-dns-6f89cfcd7f-vk6h4" Mar 12 13:37:58 crc kubenswrapper[4778]: I0312 13:37:58.210886 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/46f34397-57fe-425d-b69d-040f4384ac69-dns-swift-storage-0\") pod \"dnsmasq-dns-6f89cfcd7f-vk6h4\" (UID: \"46f34397-57fe-425d-b69d-040f4384ac69\") " pod="openstack/dnsmasq-dns-6f89cfcd7f-vk6h4" Mar 12 13:37:58 crc kubenswrapper[4778]: I0312 13:37:58.210922 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46f34397-57fe-425d-b69d-040f4384ac69-config\") pod \"dnsmasq-dns-6f89cfcd7f-vk6h4\" (UID: \"46f34397-57fe-425d-b69d-040f4384ac69\") " pod="openstack/dnsmasq-dns-6f89cfcd7f-vk6h4" Mar 12 13:37:58 crc kubenswrapper[4778]: I0312 13:37:58.210966 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46f34397-57fe-425d-b69d-040f4384ac69-ovsdbserver-nb\") pod \"dnsmasq-dns-6f89cfcd7f-vk6h4\" (UID: \"46f34397-57fe-425d-b69d-040f4384ac69\") " pod="openstack/dnsmasq-dns-6f89cfcd7f-vk6h4" Mar 12 13:37:58 crc kubenswrapper[4778]: I0312 13:37:58.313075 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/46f34397-57fe-425d-b69d-040f4384ac69-ovsdbserver-sb\") pod \"dnsmasq-dns-6f89cfcd7f-vk6h4\" (UID: \"46f34397-57fe-425d-b69d-040f4384ac69\") " pod="openstack/dnsmasq-dns-6f89cfcd7f-vk6h4" Mar 12 13:37:58 crc kubenswrapper[4778]: I0312 13:37:58.313173 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/46f34397-57fe-425d-b69d-040f4384ac69-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f89cfcd7f-vk6h4\" 
(UID: \"46f34397-57fe-425d-b69d-040f4384ac69\") " pod="openstack/dnsmasq-dns-6f89cfcd7f-vk6h4" Mar 12 13:37:58 crc kubenswrapper[4778]: I0312 13:37:58.313261 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/46f34397-57fe-425d-b69d-040f4384ac69-dns-swift-storage-0\") pod \"dnsmasq-dns-6f89cfcd7f-vk6h4\" (UID: \"46f34397-57fe-425d-b69d-040f4384ac69\") " pod="openstack/dnsmasq-dns-6f89cfcd7f-vk6h4" Mar 12 13:37:58 crc kubenswrapper[4778]: I0312 13:37:58.313297 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46f34397-57fe-425d-b69d-040f4384ac69-config\") pod \"dnsmasq-dns-6f89cfcd7f-vk6h4\" (UID: \"46f34397-57fe-425d-b69d-040f4384ac69\") " pod="openstack/dnsmasq-dns-6f89cfcd7f-vk6h4" Mar 12 13:37:58 crc kubenswrapper[4778]: I0312 13:37:58.313371 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46f34397-57fe-425d-b69d-040f4384ac69-ovsdbserver-nb\") pod \"dnsmasq-dns-6f89cfcd7f-vk6h4\" (UID: \"46f34397-57fe-425d-b69d-040f4384ac69\") " pod="openstack/dnsmasq-dns-6f89cfcd7f-vk6h4" Mar 12 13:37:58 crc kubenswrapper[4778]: I0312 13:37:58.313470 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46f34397-57fe-425d-b69d-040f4384ac69-dns-svc\") pod \"dnsmasq-dns-6f89cfcd7f-vk6h4\" (UID: \"46f34397-57fe-425d-b69d-040f4384ac69\") " pod="openstack/dnsmasq-dns-6f89cfcd7f-vk6h4" Mar 12 13:37:58 crc kubenswrapper[4778]: I0312 13:37:58.313512 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2xdg\" (UniqueName: \"kubernetes.io/projected/46f34397-57fe-425d-b69d-040f4384ac69-kube-api-access-d2xdg\") pod \"dnsmasq-dns-6f89cfcd7f-vk6h4\" (UID: \"46f34397-57fe-425d-b69d-040f4384ac69\") " 
pod="openstack/dnsmasq-dns-6f89cfcd7f-vk6h4" Mar 12 13:37:58 crc kubenswrapper[4778]: I0312 13:37:58.317859 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/46f34397-57fe-425d-b69d-040f4384ac69-ovsdbserver-sb\") pod \"dnsmasq-dns-6f89cfcd7f-vk6h4\" (UID: \"46f34397-57fe-425d-b69d-040f4384ac69\") " pod="openstack/dnsmasq-dns-6f89cfcd7f-vk6h4" Mar 12 13:37:58 crc kubenswrapper[4778]: I0312 13:37:58.318846 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/46f34397-57fe-425d-b69d-040f4384ac69-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f89cfcd7f-vk6h4\" (UID: \"46f34397-57fe-425d-b69d-040f4384ac69\") " pod="openstack/dnsmasq-dns-6f89cfcd7f-vk6h4" Mar 12 13:37:58 crc kubenswrapper[4778]: I0312 13:37:58.319461 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/46f34397-57fe-425d-b69d-040f4384ac69-dns-swift-storage-0\") pod \"dnsmasq-dns-6f89cfcd7f-vk6h4\" (UID: \"46f34397-57fe-425d-b69d-040f4384ac69\") " pod="openstack/dnsmasq-dns-6f89cfcd7f-vk6h4" Mar 12 13:37:58 crc kubenswrapper[4778]: I0312 13:37:58.319880 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46f34397-57fe-425d-b69d-040f4384ac69-ovsdbserver-nb\") pod \"dnsmasq-dns-6f89cfcd7f-vk6h4\" (UID: \"46f34397-57fe-425d-b69d-040f4384ac69\") " pod="openstack/dnsmasq-dns-6f89cfcd7f-vk6h4" Mar 12 13:37:58 crc kubenswrapper[4778]: I0312 13:37:58.322717 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46f34397-57fe-425d-b69d-040f4384ac69-config\") pod \"dnsmasq-dns-6f89cfcd7f-vk6h4\" (UID: \"46f34397-57fe-425d-b69d-040f4384ac69\") " pod="openstack/dnsmasq-dns-6f89cfcd7f-vk6h4" Mar 12 13:37:58 crc kubenswrapper[4778]: 
I0312 13:37:58.322886 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46f34397-57fe-425d-b69d-040f4384ac69-dns-svc\") pod \"dnsmasq-dns-6f89cfcd7f-vk6h4\" (UID: \"46f34397-57fe-425d-b69d-040f4384ac69\") " pod="openstack/dnsmasq-dns-6f89cfcd7f-vk6h4" Mar 12 13:37:58 crc kubenswrapper[4778]: I0312 13:37:58.338268 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2xdg\" (UniqueName: \"kubernetes.io/projected/46f34397-57fe-425d-b69d-040f4384ac69-kube-api-access-d2xdg\") pod \"dnsmasq-dns-6f89cfcd7f-vk6h4\" (UID: \"46f34397-57fe-425d-b69d-040f4384ac69\") " pod="openstack/dnsmasq-dns-6f89cfcd7f-vk6h4" Mar 12 13:37:58 crc kubenswrapper[4778]: I0312 13:37:58.485877 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f89cfcd7f-vk6h4" Mar 12 13:37:58 crc kubenswrapper[4778]: I0312 13:37:58.569826 4778 generic.go:334] "Generic (PLEG): container finished" podID="a677b5ba-f5d3-4310-ab6d-af0505e82a00" containerID="80b9a94e51ace133a39bb4f360454c37e2be50602309d428d0792de3b24d2efc" exitCode=0 Mar 12 13:37:58 crc kubenswrapper[4778]: I0312 13:37:58.569872 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f59c7d6f9-7f6bj" event={"ID":"a677b5ba-f5d3-4310-ab6d-af0505e82a00","Type":"ContainerDied","Data":"80b9a94e51ace133a39bb4f360454c37e2be50602309d428d0792de3b24d2efc"} Mar 12 13:37:58 crc kubenswrapper[4778]: I0312 13:37:58.569902 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f59c7d6f9-7f6bj" event={"ID":"a677b5ba-f5d3-4310-ab6d-af0505e82a00","Type":"ContainerDied","Data":"048876f254d8481a39bc4ba587f25ae5e4007ace7976831d743f8095461c0872"} Mar 12 13:37:58 crc kubenswrapper[4778]: I0312 13:37:58.569916 4778 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="048876f254d8481a39bc4ba587f25ae5e4007ace7976831d743f8095461c0872" Mar 12 13:37:58 crc kubenswrapper[4778]: I0312 13:37:58.603635 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f59c7d6f9-7f6bj" Mar 12 13:37:58 crc kubenswrapper[4778]: I0312 13:37:58.630911 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdn6k\" (UniqueName: \"kubernetes.io/projected/a677b5ba-f5d3-4310-ab6d-af0505e82a00-kube-api-access-tdn6k\") pod \"a677b5ba-f5d3-4310-ab6d-af0505e82a00\" (UID: \"a677b5ba-f5d3-4310-ab6d-af0505e82a00\") " Mar 12 13:37:58 crc kubenswrapper[4778]: I0312 13:37:58.630984 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a677b5ba-f5d3-4310-ab6d-af0505e82a00-ovsdbserver-nb\") pod \"a677b5ba-f5d3-4310-ab6d-af0505e82a00\" (UID: \"a677b5ba-f5d3-4310-ab6d-af0505e82a00\") " Mar 12 13:37:58 crc kubenswrapper[4778]: I0312 13:37:58.631026 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a677b5ba-f5d3-4310-ab6d-af0505e82a00-dns-swift-storage-0\") pod \"a677b5ba-f5d3-4310-ab6d-af0505e82a00\" (UID: \"a677b5ba-f5d3-4310-ab6d-af0505e82a00\") " Mar 12 13:37:58 crc kubenswrapper[4778]: I0312 13:37:58.631295 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a677b5ba-f5d3-4310-ab6d-af0505e82a00-config\") pod \"a677b5ba-f5d3-4310-ab6d-af0505e82a00\" (UID: \"a677b5ba-f5d3-4310-ab6d-af0505e82a00\") " Mar 12 13:37:58 crc kubenswrapper[4778]: I0312 13:37:58.631392 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a677b5ba-f5d3-4310-ab6d-af0505e82a00-ovsdbserver-sb\") pod \"a677b5ba-f5d3-4310-ab6d-af0505e82a00\" 
(UID: \"a677b5ba-f5d3-4310-ab6d-af0505e82a00\") " Mar 12 13:37:58 crc kubenswrapper[4778]: I0312 13:37:58.631436 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a677b5ba-f5d3-4310-ab6d-af0505e82a00-dns-svc\") pod \"a677b5ba-f5d3-4310-ab6d-af0505e82a00\" (UID: \"a677b5ba-f5d3-4310-ab6d-af0505e82a00\") " Mar 12 13:37:58 crc kubenswrapper[4778]: I0312 13:37:58.639340 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a677b5ba-f5d3-4310-ab6d-af0505e82a00-kube-api-access-tdn6k" (OuterVolumeSpecName: "kube-api-access-tdn6k") pod "a677b5ba-f5d3-4310-ab6d-af0505e82a00" (UID: "a677b5ba-f5d3-4310-ab6d-af0505e82a00"). InnerVolumeSpecName "kube-api-access-tdn6k". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:37:58 crc kubenswrapper[4778]: I0312 13:37:58.723961 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a677b5ba-f5d3-4310-ab6d-af0505e82a00-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a677b5ba-f5d3-4310-ab6d-af0505e82a00" (UID: "a677b5ba-f5d3-4310-ab6d-af0505e82a00"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:37:58 crc kubenswrapper[4778]: I0312 13:37:58.724432 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a677b5ba-f5d3-4310-ab6d-af0505e82a00-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a677b5ba-f5d3-4310-ab6d-af0505e82a00" (UID: "a677b5ba-f5d3-4310-ab6d-af0505e82a00"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:37:58 crc kubenswrapper[4778]: I0312 13:37:58.724513 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a677b5ba-f5d3-4310-ab6d-af0505e82a00-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a677b5ba-f5d3-4310-ab6d-af0505e82a00" (UID: "a677b5ba-f5d3-4310-ab6d-af0505e82a00"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:37:58 crc kubenswrapper[4778]: I0312 13:37:58.734536 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a677b5ba-f5d3-4310-ab6d-af0505e82a00-config" (OuterVolumeSpecName: "config") pod "a677b5ba-f5d3-4310-ab6d-af0505e82a00" (UID: "a677b5ba-f5d3-4310-ab6d-af0505e82a00"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:37:58 crc kubenswrapper[4778]: I0312 13:37:58.734609 4778 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a677b5ba-f5d3-4310-ab6d-af0505e82a00-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 12 13:37:58 crc kubenswrapper[4778]: I0312 13:37:58.734635 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdn6k\" (UniqueName: \"kubernetes.io/projected/a677b5ba-f5d3-4310-ab6d-af0505e82a00-kube-api-access-tdn6k\") on node \"crc\" DevicePath \"\"" Mar 12 13:37:58 crc kubenswrapper[4778]: I0312 13:37:58.734646 4778 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a677b5ba-f5d3-4310-ab6d-af0505e82a00-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 12 13:37:58 crc kubenswrapper[4778]: I0312 13:37:58.734658 4778 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a677b5ba-f5d3-4310-ab6d-af0505e82a00-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 12 13:37:58 crc 
kubenswrapper[4778]: I0312 13:37:58.754084 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a677b5ba-f5d3-4310-ab6d-af0505e82a00-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a677b5ba-f5d3-4310-ab6d-af0505e82a00" (UID: "a677b5ba-f5d3-4310-ab6d-af0505e82a00"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:37:58 crc kubenswrapper[4778]: I0312 13:37:58.826597 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f89cfcd7f-vk6h4"] Mar 12 13:37:58 crc kubenswrapper[4778]: I0312 13:37:58.842391 4778 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a677b5ba-f5d3-4310-ab6d-af0505e82a00-config\") on node \"crc\" DevicePath \"\"" Mar 12 13:37:58 crc kubenswrapper[4778]: I0312 13:37:58.842441 4778 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a677b5ba-f5d3-4310-ab6d-af0505e82a00-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 12 13:37:59 crc kubenswrapper[4778]: I0312 13:37:59.581056 4778 generic.go:334] "Generic (PLEG): container finished" podID="46f34397-57fe-425d-b69d-040f4384ac69" containerID="6513764db666b5964f4fbddc07eb3ed2f92e5d293c8dd60deaf30f6de9a5e9bc" exitCode=0 Mar 12 13:37:59 crc kubenswrapper[4778]: I0312 13:37:59.581133 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f89cfcd7f-vk6h4" event={"ID":"46f34397-57fe-425d-b69d-040f4384ac69","Type":"ContainerDied","Data":"6513764db666b5964f4fbddc07eb3ed2f92e5d293c8dd60deaf30f6de9a5e9bc"} Mar 12 13:37:59 crc kubenswrapper[4778]: I0312 13:37:59.581519 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f89cfcd7f-vk6h4" event={"ID":"46f34397-57fe-425d-b69d-040f4384ac69","Type":"ContainerStarted","Data":"e21eb540c79359fc01e4bf7155c7750ecf08139d2b4e8d9fd78a7e1f58ecfaf7"} Mar 12 
13:37:59 crc kubenswrapper[4778]: I0312 13:37:59.581559 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f59c7d6f9-7f6bj" Mar 12 13:37:59 crc kubenswrapper[4778]: I0312 13:37:59.824819 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f59c7d6f9-7f6bj"] Mar 12 13:37:59 crc kubenswrapper[4778]: I0312 13:37:59.837395 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6f59c7d6f9-7f6bj"] Mar 12 13:38:00 crc kubenswrapper[4778]: I0312 13:38:00.138647 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555378-7skl9"] Mar 12 13:38:00 crc kubenswrapper[4778]: E0312 13:38:00.139652 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a677b5ba-f5d3-4310-ab6d-af0505e82a00" containerName="dnsmasq-dns" Mar 12 13:38:00 crc kubenswrapper[4778]: I0312 13:38:00.139785 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="a677b5ba-f5d3-4310-ab6d-af0505e82a00" containerName="dnsmasq-dns" Mar 12 13:38:00 crc kubenswrapper[4778]: E0312 13:38:00.139867 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a677b5ba-f5d3-4310-ab6d-af0505e82a00" containerName="init" Mar 12 13:38:00 crc kubenswrapper[4778]: I0312 13:38:00.139939 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="a677b5ba-f5d3-4310-ab6d-af0505e82a00" containerName="init" Mar 12 13:38:00 crc kubenswrapper[4778]: I0312 13:38:00.140318 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="a677b5ba-f5d3-4310-ab6d-af0505e82a00" containerName="dnsmasq-dns" Mar 12 13:38:00 crc kubenswrapper[4778]: I0312 13:38:00.141240 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555378-7skl9" Mar 12 13:38:00 crc kubenswrapper[4778]: I0312 13:38:00.143943 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 13:38:00 crc kubenswrapper[4778]: I0312 13:38:00.144819 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 13:38:00 crc kubenswrapper[4778]: I0312 13:38:00.150450 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555378-7skl9"] Mar 12 13:38:00 crc kubenswrapper[4778]: I0312 13:38:00.153175 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 13:38:00 crc kubenswrapper[4778]: I0312 13:38:00.265072 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a677b5ba-f5d3-4310-ab6d-af0505e82a00" path="/var/lib/kubelet/pods/a677b5ba-f5d3-4310-ab6d-af0505e82a00/volumes" Mar 12 13:38:00 crc kubenswrapper[4778]: I0312 13:38:00.274360 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzvt7\" (UniqueName: \"kubernetes.io/projected/446002fc-0307-4c07-8744-630e76bee9aa-kube-api-access-tzvt7\") pod \"auto-csr-approver-29555378-7skl9\" (UID: \"446002fc-0307-4c07-8744-630e76bee9aa\") " pod="openshift-infra/auto-csr-approver-29555378-7skl9" Mar 12 13:38:00 crc kubenswrapper[4778]: I0312 13:38:00.376794 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzvt7\" (UniqueName: \"kubernetes.io/projected/446002fc-0307-4c07-8744-630e76bee9aa-kube-api-access-tzvt7\") pod \"auto-csr-approver-29555378-7skl9\" (UID: \"446002fc-0307-4c07-8744-630e76bee9aa\") " pod="openshift-infra/auto-csr-approver-29555378-7skl9" Mar 12 13:38:00 crc kubenswrapper[4778]: I0312 13:38:00.397040 4778 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-tzvt7\" (UniqueName: \"kubernetes.io/projected/446002fc-0307-4c07-8744-630e76bee9aa-kube-api-access-tzvt7\") pod \"auto-csr-approver-29555378-7skl9\" (UID: \"446002fc-0307-4c07-8744-630e76bee9aa\") " pod="openshift-infra/auto-csr-approver-29555378-7skl9" Mar 12 13:38:00 crc kubenswrapper[4778]: I0312 13:38:00.464635 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555378-7skl9" Mar 12 13:38:00 crc kubenswrapper[4778]: I0312 13:38:00.592724 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f89cfcd7f-vk6h4" event={"ID":"46f34397-57fe-425d-b69d-040f4384ac69","Type":"ContainerStarted","Data":"07b29f3ba239b2b280d8de52637eaa5b72eefa66ae849802d64fff11f77e90e9"} Mar 12 13:38:00 crc kubenswrapper[4778]: I0312 13:38:00.592954 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f89cfcd7f-vk6h4" Mar 12 13:38:00 crc kubenswrapper[4778]: I0312 13:38:00.615243 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f89cfcd7f-vk6h4" podStartSLOduration=2.61522824 podStartE2EDuration="2.61522824s" podCreationTimestamp="2026-03-12 13:37:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 13:38:00.60817855 +0000 UTC m=+1699.056873956" watchObservedRunningTime="2026-03-12 13:38:00.61522824 +0000 UTC m=+1699.063923636" Mar 12 13:38:00 crc kubenswrapper[4778]: I0312 13:38:00.919355 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555378-7skl9"] Mar 12 13:38:00 crc kubenswrapper[4778]: W0312 13:38:00.921916 4778 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod446002fc_0307_4c07_8744_630e76bee9aa.slice/crio-73907911d4d2c85fde8f408e872b6090f45f3872f54b62d1d2ba385d502b113d WatchSource:0}: Error finding container 73907911d4d2c85fde8f408e872b6090f45f3872f54b62d1d2ba385d502b113d: Status 404 returned error can't find the container with id 73907911d4d2c85fde8f408e872b6090f45f3872f54b62d1d2ba385d502b113d Mar 12 13:38:01 crc kubenswrapper[4778]: I0312 13:38:01.253375 4778 scope.go:117] "RemoveContainer" containerID="fbdf0765f9c2ff5952a8a2a2b43d61ef771ac404cabeb86051f9ffe5a9fd882e" Mar 12 13:38:01 crc kubenswrapper[4778]: E0312 13:38:01.253955 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 13:38:01 crc kubenswrapper[4778]: I0312 13:38:01.604553 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555378-7skl9" event={"ID":"446002fc-0307-4c07-8744-630e76bee9aa","Type":"ContainerStarted","Data":"73907911d4d2c85fde8f408e872b6090f45f3872f54b62d1d2ba385d502b113d"} Mar 12 13:38:02 crc kubenswrapper[4778]: I0312 13:38:02.615174 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555378-7skl9" event={"ID":"446002fc-0307-4c07-8744-630e76bee9aa","Type":"ContainerStarted","Data":"bba17f86be2a56502271ccc560c6167ec323fcd74423bccb2a6479d1508bc7e8"} Mar 12 13:38:02 crc kubenswrapper[4778]: I0312 13:38:02.639559 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555378-7skl9" podStartSLOduration=1.520025169 podStartE2EDuration="2.639539472s" 
podCreationTimestamp="2026-03-12 13:38:00 +0000 UTC" firstStartedPulling="2026-03-12 13:38:00.924328236 +0000 UTC m=+1699.373023632" lastFinishedPulling="2026-03-12 13:38:02.043842499 +0000 UTC m=+1700.492537935" observedRunningTime="2026-03-12 13:38:02.629706172 +0000 UTC m=+1701.078401568" watchObservedRunningTime="2026-03-12 13:38:02.639539472 +0000 UTC m=+1701.088234868" Mar 12 13:38:03 crc kubenswrapper[4778]: I0312 13:38:03.633685 4778 generic.go:334] "Generic (PLEG): container finished" podID="446002fc-0307-4c07-8744-630e76bee9aa" containerID="bba17f86be2a56502271ccc560c6167ec323fcd74423bccb2a6479d1508bc7e8" exitCode=0 Mar 12 13:38:03 crc kubenswrapper[4778]: I0312 13:38:03.633941 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555378-7skl9" event={"ID":"446002fc-0307-4c07-8744-630e76bee9aa","Type":"ContainerDied","Data":"bba17f86be2a56502271ccc560c6167ec323fcd74423bccb2a6479d1508bc7e8"} Mar 12 13:38:05 crc kubenswrapper[4778]: I0312 13:38:05.020908 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555378-7skl9" Mar 12 13:38:05 crc kubenswrapper[4778]: I0312 13:38:05.085368 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzvt7\" (UniqueName: \"kubernetes.io/projected/446002fc-0307-4c07-8744-630e76bee9aa-kube-api-access-tzvt7\") pod \"446002fc-0307-4c07-8744-630e76bee9aa\" (UID: \"446002fc-0307-4c07-8744-630e76bee9aa\") " Mar 12 13:38:05 crc kubenswrapper[4778]: I0312 13:38:05.102448 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/446002fc-0307-4c07-8744-630e76bee9aa-kube-api-access-tzvt7" (OuterVolumeSpecName: "kube-api-access-tzvt7") pod "446002fc-0307-4c07-8744-630e76bee9aa" (UID: "446002fc-0307-4c07-8744-630e76bee9aa"). InnerVolumeSpecName "kube-api-access-tzvt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:38:05 crc kubenswrapper[4778]: I0312 13:38:05.187823 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzvt7\" (UniqueName: \"kubernetes.io/projected/446002fc-0307-4c07-8744-630e76bee9aa-kube-api-access-tzvt7\") on node \"crc\" DevicePath \"\"" Mar 12 13:38:05 crc kubenswrapper[4778]: I0312 13:38:05.357299 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555372-rddbg"] Mar 12 13:38:05 crc kubenswrapper[4778]: I0312 13:38:05.369904 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555372-rddbg"] Mar 12 13:38:05 crc kubenswrapper[4778]: I0312 13:38:05.654076 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555378-7skl9" event={"ID":"446002fc-0307-4c07-8744-630e76bee9aa","Type":"ContainerDied","Data":"73907911d4d2c85fde8f408e872b6090f45f3872f54b62d1d2ba385d502b113d"} Mar 12 13:38:05 crc kubenswrapper[4778]: I0312 13:38:05.654139 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73907911d4d2c85fde8f408e872b6090f45f3872f54b62d1d2ba385d502b113d" Mar 12 13:38:05 crc kubenswrapper[4778]: I0312 13:38:05.654236 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555378-7skl9" Mar 12 13:38:06 crc kubenswrapper[4778]: I0312 13:38:06.266528 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0b7e295-a151-42b0-a8d6-d062d9a42e88" path="/var/lib/kubelet/pods/c0b7e295-a151-42b0-a8d6-d062d9a42e88/volumes" Mar 12 13:38:08 crc kubenswrapper[4778]: I0312 13:38:08.487354 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f89cfcd7f-vk6h4" Mar 12 13:38:08 crc kubenswrapper[4778]: I0312 13:38:08.543031 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c5988475-bw257"] Mar 12 13:38:08 crc kubenswrapper[4778]: I0312 13:38:08.557100 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7c5988475-bw257" podUID="8ebe6a74-d22a-427c-b7a5-f4212457f7d3" containerName="dnsmasq-dns" containerID="cri-o://cb0a2bf0ec904178e0889259a4ca3a676df6891d1b234fde05f9bd7a8c828b69" gracePeriod=10 Mar 12 13:38:08 crc kubenswrapper[4778]: I0312 13:38:08.692112 4778 generic.go:334] "Generic (PLEG): container finished" podID="8ebe6a74-d22a-427c-b7a5-f4212457f7d3" containerID="cb0a2bf0ec904178e0889259a4ca3a676df6891d1b234fde05f9bd7a8c828b69" exitCode=0 Mar 12 13:38:08 crc kubenswrapper[4778]: I0312 13:38:08.692175 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c5988475-bw257" event={"ID":"8ebe6a74-d22a-427c-b7a5-f4212457f7d3","Type":"ContainerDied","Data":"cb0a2bf0ec904178e0889259a4ca3a676df6891d1b234fde05f9bd7a8c828b69"} Mar 12 13:38:09 crc kubenswrapper[4778]: I0312 13:38:09.059251 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c5988475-bw257" Mar 12 13:38:09 crc kubenswrapper[4778]: I0312 13:38:09.161073 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/8ebe6a74-d22a-427c-b7a5-f4212457f7d3-openstack-edpm-ipam\") pod \"8ebe6a74-d22a-427c-b7a5-f4212457f7d3\" (UID: \"8ebe6a74-d22a-427c-b7a5-f4212457f7d3\") " Mar 12 13:38:09 crc kubenswrapper[4778]: I0312 13:38:09.161120 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbdfk\" (UniqueName: \"kubernetes.io/projected/8ebe6a74-d22a-427c-b7a5-f4212457f7d3-kube-api-access-cbdfk\") pod \"8ebe6a74-d22a-427c-b7a5-f4212457f7d3\" (UID: \"8ebe6a74-d22a-427c-b7a5-f4212457f7d3\") " Mar 12 13:38:09 crc kubenswrapper[4778]: I0312 13:38:09.161244 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8ebe6a74-d22a-427c-b7a5-f4212457f7d3-dns-swift-storage-0\") pod \"8ebe6a74-d22a-427c-b7a5-f4212457f7d3\" (UID: \"8ebe6a74-d22a-427c-b7a5-f4212457f7d3\") " Mar 12 13:38:09 crc kubenswrapper[4778]: I0312 13:38:09.161336 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ebe6a74-d22a-427c-b7a5-f4212457f7d3-ovsdbserver-nb\") pod \"8ebe6a74-d22a-427c-b7a5-f4212457f7d3\" (UID: \"8ebe6a74-d22a-427c-b7a5-f4212457f7d3\") " Mar 12 13:38:09 crc kubenswrapper[4778]: I0312 13:38:09.161408 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ebe6a74-d22a-427c-b7a5-f4212457f7d3-dns-svc\") pod \"8ebe6a74-d22a-427c-b7a5-f4212457f7d3\" (UID: \"8ebe6a74-d22a-427c-b7a5-f4212457f7d3\") " Mar 12 13:38:09 crc kubenswrapper[4778]: I0312 13:38:09.161491 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ebe6a74-d22a-427c-b7a5-f4212457f7d3-ovsdbserver-sb\") pod \"8ebe6a74-d22a-427c-b7a5-f4212457f7d3\" (UID: \"8ebe6a74-d22a-427c-b7a5-f4212457f7d3\") " Mar 12 13:38:09 crc kubenswrapper[4778]: I0312 13:38:09.162017 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ebe6a74-d22a-427c-b7a5-f4212457f7d3-config\") pod \"8ebe6a74-d22a-427c-b7a5-f4212457f7d3\" (UID: \"8ebe6a74-d22a-427c-b7a5-f4212457f7d3\") " Mar 12 13:38:09 crc kubenswrapper[4778]: I0312 13:38:09.173648 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ebe6a74-d22a-427c-b7a5-f4212457f7d3-kube-api-access-cbdfk" (OuterVolumeSpecName: "kube-api-access-cbdfk") pod "8ebe6a74-d22a-427c-b7a5-f4212457f7d3" (UID: "8ebe6a74-d22a-427c-b7a5-f4212457f7d3"). InnerVolumeSpecName "kube-api-access-cbdfk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:38:09 crc kubenswrapper[4778]: I0312 13:38:09.225905 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ebe6a74-d22a-427c-b7a5-f4212457f7d3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8ebe6a74-d22a-427c-b7a5-f4212457f7d3" (UID: "8ebe6a74-d22a-427c-b7a5-f4212457f7d3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:38:09 crc kubenswrapper[4778]: I0312 13:38:09.226266 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ebe6a74-d22a-427c-b7a5-f4212457f7d3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8ebe6a74-d22a-427c-b7a5-f4212457f7d3" (UID: "8ebe6a74-d22a-427c-b7a5-f4212457f7d3"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:38:09 crc kubenswrapper[4778]: I0312 13:38:09.228393 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ebe6a74-d22a-427c-b7a5-f4212457f7d3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8ebe6a74-d22a-427c-b7a5-f4212457f7d3" (UID: "8ebe6a74-d22a-427c-b7a5-f4212457f7d3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:38:09 crc kubenswrapper[4778]: I0312 13:38:09.243645 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ebe6a74-d22a-427c-b7a5-f4212457f7d3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8ebe6a74-d22a-427c-b7a5-f4212457f7d3" (UID: "8ebe6a74-d22a-427c-b7a5-f4212457f7d3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:38:09 crc kubenswrapper[4778]: I0312 13:38:09.244724 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ebe6a74-d22a-427c-b7a5-f4212457f7d3-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "8ebe6a74-d22a-427c-b7a5-f4212457f7d3" (UID: "8ebe6a74-d22a-427c-b7a5-f4212457f7d3"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:38:09 crc kubenswrapper[4778]: I0312 13:38:09.254718 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ebe6a74-d22a-427c-b7a5-f4212457f7d3-config" (OuterVolumeSpecName: "config") pod "8ebe6a74-d22a-427c-b7a5-f4212457f7d3" (UID: "8ebe6a74-d22a-427c-b7a5-f4212457f7d3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:38:09 crc kubenswrapper[4778]: I0312 13:38:09.264663 4778 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8ebe6a74-d22a-427c-b7a5-f4212457f7d3-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 12 13:38:09 crc kubenswrapper[4778]: I0312 13:38:09.264711 4778 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ebe6a74-d22a-427c-b7a5-f4212457f7d3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 12 13:38:09 crc kubenswrapper[4778]: I0312 13:38:09.264726 4778 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ebe6a74-d22a-427c-b7a5-f4212457f7d3-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 12 13:38:09 crc kubenswrapper[4778]: I0312 13:38:09.264737 4778 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ebe6a74-d22a-427c-b7a5-f4212457f7d3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 12 13:38:09 crc kubenswrapper[4778]: I0312 13:38:09.264750 4778 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ebe6a74-d22a-427c-b7a5-f4212457f7d3-config\") on node \"crc\" DevicePath \"\"" Mar 12 13:38:09 crc kubenswrapper[4778]: I0312 13:38:09.264761 4778 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/8ebe6a74-d22a-427c-b7a5-f4212457f7d3-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 13:38:09 crc kubenswrapper[4778]: I0312 13:38:09.264773 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbdfk\" (UniqueName: \"kubernetes.io/projected/8ebe6a74-d22a-427c-b7a5-f4212457f7d3-kube-api-access-cbdfk\") on node \"crc\" DevicePath \"\"" Mar 12 13:38:09 crc kubenswrapper[4778]: I0312 13:38:09.701934 
4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c5988475-bw257" event={"ID":"8ebe6a74-d22a-427c-b7a5-f4212457f7d3","Type":"ContainerDied","Data":"c86a22334e8cef280e44974db1684c5e3e847838867268917e47cc90abe99154"} Mar 12 13:38:09 crc kubenswrapper[4778]: I0312 13:38:09.701968 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c5988475-bw257" Mar 12 13:38:09 crc kubenswrapper[4778]: I0312 13:38:09.701994 4778 scope.go:117] "RemoveContainer" containerID="cb0a2bf0ec904178e0889259a4ca3a676df6891d1b234fde05f9bd7a8c828b69" Mar 12 13:38:09 crc kubenswrapper[4778]: I0312 13:38:09.744321 4778 scope.go:117] "RemoveContainer" containerID="8675bbbc50526e6bd6a4f0eaa32cd9b3df1cfc38839bff57f7cd99b7bd0da73b" Mar 12 13:38:09 crc kubenswrapper[4778]: I0312 13:38:09.744836 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c5988475-bw257"] Mar 12 13:38:09 crc kubenswrapper[4778]: I0312 13:38:09.753919 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c5988475-bw257"] Mar 12 13:38:10 crc kubenswrapper[4778]: I0312 13:38:10.265038 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ebe6a74-d22a-427c-b7a5-f4212457f7d3" path="/var/lib/kubelet/pods/8ebe6a74-d22a-427c-b7a5-f4212457f7d3/volumes" Mar 12 13:38:15 crc kubenswrapper[4778]: I0312 13:38:15.254413 4778 scope.go:117] "RemoveContainer" containerID="fbdf0765f9c2ff5952a8a2a2b43d61ef771ac404cabeb86051f9ffe5a9fd882e" Mar 12 13:38:15 crc kubenswrapper[4778]: E0312 13:38:15.255215 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" 
podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 13:38:16 crc kubenswrapper[4778]: I0312 13:38:16.143167 4778 scope.go:117] "RemoveContainer" containerID="83e30e12aea92ff26adeced3b96dea20e98c42e4bd6fda29118e167bf1eeb711" Mar 12 13:38:16 crc kubenswrapper[4778]: I0312 13:38:16.184218 4778 scope.go:117] "RemoveContainer" containerID="f3f7a33c33e8b6e5c107976dcfe1137727c3f5d14f498dcea6e9df482aee564a" Mar 12 13:38:27 crc kubenswrapper[4778]: I0312 13:38:27.255039 4778 scope.go:117] "RemoveContainer" containerID="fbdf0765f9c2ff5952a8a2a2b43d61ef771ac404cabeb86051f9ffe5a9fd882e" Mar 12 13:38:27 crc kubenswrapper[4778]: E0312 13:38:27.256575 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 13:38:32 crc kubenswrapper[4778]: I0312 13:38:32.770702 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6nfzc"] Mar 12 13:38:32 crc kubenswrapper[4778]: E0312 13:38:32.772583 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="446002fc-0307-4c07-8744-630e76bee9aa" containerName="oc" Mar 12 13:38:32 crc kubenswrapper[4778]: I0312 13:38:32.772682 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="446002fc-0307-4c07-8744-630e76bee9aa" containerName="oc" Mar 12 13:38:32 crc kubenswrapper[4778]: E0312 13:38:32.772751 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ebe6a74-d22a-427c-b7a5-f4212457f7d3" containerName="dnsmasq-dns" Mar 12 13:38:32 crc kubenswrapper[4778]: I0312 13:38:32.772813 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ebe6a74-d22a-427c-b7a5-f4212457f7d3" 
containerName="dnsmasq-dns" Mar 12 13:38:32 crc kubenswrapper[4778]: E0312 13:38:32.772889 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ebe6a74-d22a-427c-b7a5-f4212457f7d3" containerName="init" Mar 12 13:38:32 crc kubenswrapper[4778]: I0312 13:38:32.772949 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ebe6a74-d22a-427c-b7a5-f4212457f7d3" containerName="init" Mar 12 13:38:32 crc kubenswrapper[4778]: I0312 13:38:32.773218 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="446002fc-0307-4c07-8744-630e76bee9aa" containerName="oc" Mar 12 13:38:32 crc kubenswrapper[4778]: I0312 13:38:32.773312 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ebe6a74-d22a-427c-b7a5-f4212457f7d3" containerName="dnsmasq-dns" Mar 12 13:38:32 crc kubenswrapper[4778]: I0312 13:38:32.774941 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6nfzc" Mar 12 13:38:32 crc kubenswrapper[4778]: I0312 13:38:32.786234 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 13:38:32 crc kubenswrapper[4778]: I0312 13:38:32.786527 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qn2vx" Mar 12 13:38:32 crc kubenswrapper[4778]: I0312 13:38:32.786608 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 13:38:32 crc kubenswrapper[4778]: I0312 13:38:32.786787 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 13:38:32 crc kubenswrapper[4778]: I0312 13:38:32.820356 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6nfzc"] Mar 12 13:38:32 crc kubenswrapper[4778]: I0312 13:38:32.843554 4778 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qkjn\" (UniqueName: \"kubernetes.io/projected/bd7ac6b4-5600-45ce-b0ea-199dd4baefcb-kube-api-access-6qkjn\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6nfzc\" (UID: \"bd7ac6b4-5600-45ce-b0ea-199dd4baefcb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6nfzc" Mar 12 13:38:32 crc kubenswrapper[4778]: I0312 13:38:32.843640 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bd7ac6b4-5600-45ce-b0ea-199dd4baefcb-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6nfzc\" (UID: \"bd7ac6b4-5600-45ce-b0ea-199dd4baefcb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6nfzc" Mar 12 13:38:32 crc kubenswrapper[4778]: I0312 13:38:32.843673 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd7ac6b4-5600-45ce-b0ea-199dd4baefcb-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6nfzc\" (UID: \"bd7ac6b4-5600-45ce-b0ea-199dd4baefcb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6nfzc" Mar 12 13:38:32 crc kubenswrapper[4778]: I0312 13:38:32.843723 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bd7ac6b4-5600-45ce-b0ea-199dd4baefcb-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6nfzc\" (UID: \"bd7ac6b4-5600-45ce-b0ea-199dd4baefcb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6nfzc" Mar 12 13:38:32 crc kubenswrapper[4778]: I0312 13:38:32.945498 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qkjn\" (UniqueName: 
\"kubernetes.io/projected/bd7ac6b4-5600-45ce-b0ea-199dd4baefcb-kube-api-access-6qkjn\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6nfzc\" (UID: \"bd7ac6b4-5600-45ce-b0ea-199dd4baefcb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6nfzc" Mar 12 13:38:32 crc kubenswrapper[4778]: I0312 13:38:32.945575 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bd7ac6b4-5600-45ce-b0ea-199dd4baefcb-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6nfzc\" (UID: \"bd7ac6b4-5600-45ce-b0ea-199dd4baefcb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6nfzc" Mar 12 13:38:32 crc kubenswrapper[4778]: I0312 13:38:32.945617 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd7ac6b4-5600-45ce-b0ea-199dd4baefcb-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6nfzc\" (UID: \"bd7ac6b4-5600-45ce-b0ea-199dd4baefcb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6nfzc" Mar 12 13:38:32 crc kubenswrapper[4778]: I0312 13:38:32.945670 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bd7ac6b4-5600-45ce-b0ea-199dd4baefcb-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6nfzc\" (UID: \"bd7ac6b4-5600-45ce-b0ea-199dd4baefcb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6nfzc" Mar 12 13:38:32 crc kubenswrapper[4778]: I0312 13:38:32.951855 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bd7ac6b4-5600-45ce-b0ea-199dd4baefcb-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6nfzc\" (UID: \"bd7ac6b4-5600-45ce-b0ea-199dd4baefcb\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6nfzc" Mar 12 13:38:32 crc kubenswrapper[4778]: I0312 13:38:32.952096 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd7ac6b4-5600-45ce-b0ea-199dd4baefcb-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6nfzc\" (UID: \"bd7ac6b4-5600-45ce-b0ea-199dd4baefcb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6nfzc" Mar 12 13:38:32 crc kubenswrapper[4778]: I0312 13:38:32.959843 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bd7ac6b4-5600-45ce-b0ea-199dd4baefcb-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6nfzc\" (UID: \"bd7ac6b4-5600-45ce-b0ea-199dd4baefcb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6nfzc" Mar 12 13:38:32 crc kubenswrapper[4778]: I0312 13:38:32.963653 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qkjn\" (UniqueName: \"kubernetes.io/projected/bd7ac6b4-5600-45ce-b0ea-199dd4baefcb-kube-api-access-6qkjn\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6nfzc\" (UID: \"bd7ac6b4-5600-45ce-b0ea-199dd4baefcb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6nfzc" Mar 12 13:38:33 crc kubenswrapper[4778]: I0312 13:38:33.112722 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6nfzc" Mar 12 13:38:33 crc kubenswrapper[4778]: I0312 13:38:33.643373 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6nfzc"] Mar 12 13:38:34 crc kubenswrapper[4778]: I0312 13:38:34.414838 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6nfzc" event={"ID":"bd7ac6b4-5600-45ce-b0ea-199dd4baefcb","Type":"ContainerStarted","Data":"033c4c36ae25cc7a6f8501b7708a8e4bd2044e9e6abb0ff12418fb82f4d87df3"} Mar 12 13:38:40 crc kubenswrapper[4778]: I0312 13:38:40.255818 4778 scope.go:117] "RemoveContainer" containerID="fbdf0765f9c2ff5952a8a2a2b43d61ef771ac404cabeb86051f9ffe5a9fd882e" Mar 12 13:38:40 crc kubenswrapper[4778]: E0312 13:38:40.256928 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 13:38:43 crc kubenswrapper[4778]: I0312 13:38:43.319952 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 13:38:44 crc kubenswrapper[4778]: I0312 13:38:44.560447 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6nfzc" event={"ID":"bd7ac6b4-5600-45ce-b0ea-199dd4baefcb","Type":"ContainerStarted","Data":"b669070b5f63e02f0b40f059f58f4f609a161a96501a6bb7535e2eedd63acfe3"} Mar 12 13:38:44 crc kubenswrapper[4778]: I0312 13:38:44.587782 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6nfzc" podStartSLOduration=2.920033067 podStartE2EDuration="12.587760545s" podCreationTimestamp="2026-03-12 13:38:32 +0000 UTC" firstStartedPulling="2026-03-12 13:38:33.649759869 +0000 UTC m=+1732.098455265" lastFinishedPulling="2026-03-12 13:38:43.317487347 +0000 UTC m=+1741.766182743" observedRunningTime="2026-03-12 13:38:44.578981556 +0000 UTC m=+1743.027676962" watchObservedRunningTime="2026-03-12 13:38:44.587760545 +0000 UTC m=+1743.036455941" Mar 12 13:38:51 crc kubenswrapper[4778]: I0312 13:38:51.254456 4778 scope.go:117] "RemoveContainer" containerID="fbdf0765f9c2ff5952a8a2a2b43d61ef771ac404cabeb86051f9ffe5a9fd882e" Mar 12 13:38:51 crc kubenswrapper[4778]: E0312 13:38:51.255146 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 13:38:56 crc kubenswrapper[4778]: I0312 13:38:56.680660 4778 generic.go:334] "Generic (PLEG): container finished" podID="bd7ac6b4-5600-45ce-b0ea-199dd4baefcb" containerID="b669070b5f63e02f0b40f059f58f4f609a161a96501a6bb7535e2eedd63acfe3" exitCode=0 Mar 12 13:38:56 crc kubenswrapper[4778]: I0312 13:38:56.681283 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6nfzc" event={"ID":"bd7ac6b4-5600-45ce-b0ea-199dd4baefcb","Type":"ContainerDied","Data":"b669070b5f63e02f0b40f059f58f4f609a161a96501a6bb7535e2eedd63acfe3"} Mar 12 13:38:58 crc kubenswrapper[4778]: I0312 13:38:58.127979 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6nfzc" Mar 12 13:38:58 crc kubenswrapper[4778]: I0312 13:38:58.246700 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qkjn\" (UniqueName: \"kubernetes.io/projected/bd7ac6b4-5600-45ce-b0ea-199dd4baefcb-kube-api-access-6qkjn\") pod \"bd7ac6b4-5600-45ce-b0ea-199dd4baefcb\" (UID: \"bd7ac6b4-5600-45ce-b0ea-199dd4baefcb\") " Mar 12 13:38:58 crc kubenswrapper[4778]: I0312 13:38:58.247065 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bd7ac6b4-5600-45ce-b0ea-199dd4baefcb-inventory\") pod \"bd7ac6b4-5600-45ce-b0ea-199dd4baefcb\" (UID: \"bd7ac6b4-5600-45ce-b0ea-199dd4baefcb\") " Mar 12 13:38:58 crc kubenswrapper[4778]: I0312 13:38:58.247092 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bd7ac6b4-5600-45ce-b0ea-199dd4baefcb-ssh-key-openstack-edpm-ipam\") pod \"bd7ac6b4-5600-45ce-b0ea-199dd4baefcb\" (UID: \"bd7ac6b4-5600-45ce-b0ea-199dd4baefcb\") " Mar 12 13:38:58 crc kubenswrapper[4778]: I0312 13:38:58.247161 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd7ac6b4-5600-45ce-b0ea-199dd4baefcb-repo-setup-combined-ca-bundle\") pod \"bd7ac6b4-5600-45ce-b0ea-199dd4baefcb\" (UID: \"bd7ac6b4-5600-45ce-b0ea-199dd4baefcb\") " Mar 12 13:38:58 crc kubenswrapper[4778]: I0312 13:38:58.252216 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd7ac6b4-5600-45ce-b0ea-199dd4baefcb-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "bd7ac6b4-5600-45ce-b0ea-199dd4baefcb" (UID: "bd7ac6b4-5600-45ce-b0ea-199dd4baefcb"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:38:58 crc kubenswrapper[4778]: I0312 13:38:58.253068 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd7ac6b4-5600-45ce-b0ea-199dd4baefcb-kube-api-access-6qkjn" (OuterVolumeSpecName: "kube-api-access-6qkjn") pod "bd7ac6b4-5600-45ce-b0ea-199dd4baefcb" (UID: "bd7ac6b4-5600-45ce-b0ea-199dd4baefcb"). InnerVolumeSpecName "kube-api-access-6qkjn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:38:58 crc kubenswrapper[4778]: I0312 13:38:58.277762 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd7ac6b4-5600-45ce-b0ea-199dd4baefcb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "bd7ac6b4-5600-45ce-b0ea-199dd4baefcb" (UID: "bd7ac6b4-5600-45ce-b0ea-199dd4baefcb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:38:58 crc kubenswrapper[4778]: I0312 13:38:58.279734 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd7ac6b4-5600-45ce-b0ea-199dd4baefcb-inventory" (OuterVolumeSpecName: "inventory") pod "bd7ac6b4-5600-45ce-b0ea-199dd4baefcb" (UID: "bd7ac6b4-5600-45ce-b0ea-199dd4baefcb"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:38:58 crc kubenswrapper[4778]: I0312 13:38:58.349343 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qkjn\" (UniqueName: \"kubernetes.io/projected/bd7ac6b4-5600-45ce-b0ea-199dd4baefcb-kube-api-access-6qkjn\") on node \"crc\" DevicePath \"\"" Mar 12 13:38:58 crc kubenswrapper[4778]: I0312 13:38:58.349375 4778 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bd7ac6b4-5600-45ce-b0ea-199dd4baefcb-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 13:38:58 crc kubenswrapper[4778]: I0312 13:38:58.349386 4778 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bd7ac6b4-5600-45ce-b0ea-199dd4baefcb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 13:38:58 crc kubenswrapper[4778]: I0312 13:38:58.349398 4778 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd7ac6b4-5600-45ce-b0ea-199dd4baefcb-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:38:58 crc kubenswrapper[4778]: I0312 13:38:58.702318 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6nfzc" event={"ID":"bd7ac6b4-5600-45ce-b0ea-199dd4baefcb","Type":"ContainerDied","Data":"033c4c36ae25cc7a6f8501b7708a8e4bd2044e9e6abb0ff12418fb82f4d87df3"} Mar 12 13:38:58 crc kubenswrapper[4778]: I0312 13:38:58.702377 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="033c4c36ae25cc7a6f8501b7708a8e4bd2044e9e6abb0ff12418fb82f4d87df3" Mar 12 13:38:58 crc kubenswrapper[4778]: I0312 13:38:58.702738 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6nfzc" Mar 12 13:38:58 crc kubenswrapper[4778]: I0312 13:38:58.873532 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ntpnx"] Mar 12 13:38:58 crc kubenswrapper[4778]: E0312 13:38:58.873945 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd7ac6b4-5600-45ce-b0ea-199dd4baefcb" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Mar 12 13:38:58 crc kubenswrapper[4778]: I0312 13:38:58.873965 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd7ac6b4-5600-45ce-b0ea-199dd4baefcb" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Mar 12 13:38:58 crc kubenswrapper[4778]: I0312 13:38:58.874195 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd7ac6b4-5600-45ce-b0ea-199dd4baefcb" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Mar 12 13:38:58 crc kubenswrapper[4778]: I0312 13:38:58.874847 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ntpnx" Mar 12 13:38:58 crc kubenswrapper[4778]: I0312 13:38:58.877332 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qn2vx" Mar 12 13:38:58 crc kubenswrapper[4778]: I0312 13:38:58.878263 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 13:38:58 crc kubenswrapper[4778]: I0312 13:38:58.879107 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 13:38:58 crc kubenswrapper[4778]: I0312 13:38:58.879254 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 13:38:58 crc kubenswrapper[4778]: I0312 13:38:58.888050 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ntpnx"] Mar 12 13:38:59 crc kubenswrapper[4778]: I0312 13:38:59.064109 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b99627a8-43d8-4f7d-90f7-530eda3c2213-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ntpnx\" (UID: \"b99627a8-43d8-4f7d-90f7-530eda3c2213\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ntpnx" Mar 12 13:38:59 crc kubenswrapper[4778]: I0312 13:38:59.064222 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b99627a8-43d8-4f7d-90f7-530eda3c2213-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ntpnx\" (UID: \"b99627a8-43d8-4f7d-90f7-530eda3c2213\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ntpnx" Mar 12 13:38:59 crc kubenswrapper[4778]: I0312 13:38:59.064297 4778 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whqnl\" (UniqueName: \"kubernetes.io/projected/b99627a8-43d8-4f7d-90f7-530eda3c2213-kube-api-access-whqnl\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ntpnx\" (UID: \"b99627a8-43d8-4f7d-90f7-530eda3c2213\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ntpnx" Mar 12 13:38:59 crc kubenswrapper[4778]: I0312 13:38:59.064393 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b99627a8-43d8-4f7d-90f7-530eda3c2213-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ntpnx\" (UID: \"b99627a8-43d8-4f7d-90f7-530eda3c2213\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ntpnx" Mar 12 13:38:59 crc kubenswrapper[4778]: I0312 13:38:59.165563 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b99627a8-43d8-4f7d-90f7-530eda3c2213-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ntpnx\" (UID: \"b99627a8-43d8-4f7d-90f7-530eda3c2213\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ntpnx" Mar 12 13:38:59 crc kubenswrapper[4778]: I0312 13:38:59.165637 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b99627a8-43d8-4f7d-90f7-530eda3c2213-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ntpnx\" (UID: \"b99627a8-43d8-4f7d-90f7-530eda3c2213\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ntpnx" Mar 12 13:38:59 crc kubenswrapper[4778]: I0312 13:38:59.165696 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whqnl\" (UniqueName: 
\"kubernetes.io/projected/b99627a8-43d8-4f7d-90f7-530eda3c2213-kube-api-access-whqnl\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ntpnx\" (UID: \"b99627a8-43d8-4f7d-90f7-530eda3c2213\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ntpnx" Mar 12 13:38:59 crc kubenswrapper[4778]: I0312 13:38:59.165795 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b99627a8-43d8-4f7d-90f7-530eda3c2213-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ntpnx\" (UID: \"b99627a8-43d8-4f7d-90f7-530eda3c2213\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ntpnx" Mar 12 13:38:59 crc kubenswrapper[4778]: I0312 13:38:59.169930 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b99627a8-43d8-4f7d-90f7-530eda3c2213-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ntpnx\" (UID: \"b99627a8-43d8-4f7d-90f7-530eda3c2213\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ntpnx" Mar 12 13:38:59 crc kubenswrapper[4778]: I0312 13:38:59.170927 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b99627a8-43d8-4f7d-90f7-530eda3c2213-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ntpnx\" (UID: \"b99627a8-43d8-4f7d-90f7-530eda3c2213\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ntpnx" Mar 12 13:38:59 crc kubenswrapper[4778]: I0312 13:38:59.171009 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b99627a8-43d8-4f7d-90f7-530eda3c2213-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ntpnx\" (UID: \"b99627a8-43d8-4f7d-90f7-530eda3c2213\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ntpnx" Mar 12 13:38:59 crc kubenswrapper[4778]: I0312 13:38:59.190931 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whqnl\" (UniqueName: \"kubernetes.io/projected/b99627a8-43d8-4f7d-90f7-530eda3c2213-kube-api-access-whqnl\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-ntpnx\" (UID: \"b99627a8-43d8-4f7d-90f7-530eda3c2213\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ntpnx" Mar 12 13:38:59 crc kubenswrapper[4778]: I0312 13:38:59.225483 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ntpnx" Mar 12 13:38:59 crc kubenswrapper[4778]: W0312 13:38:59.727565 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb99627a8_43d8_4f7d_90f7_530eda3c2213.slice/crio-dc6fa4d7e880c9ed1330cf9f9750b1850f8f0933c2e607fb21a3cab73809d93c WatchSource:0}: Error finding container dc6fa4d7e880c9ed1330cf9f9750b1850f8f0933c2e607fb21a3cab73809d93c: Status 404 returned error can't find the container with id dc6fa4d7e880c9ed1330cf9f9750b1850f8f0933c2e607fb21a3cab73809d93c Mar 12 13:38:59 crc kubenswrapper[4778]: I0312 13:38:59.727839 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ntpnx"] Mar 12 13:38:59 crc kubenswrapper[4778]: I0312 13:38:59.731332 4778 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 13:39:00 crc kubenswrapper[4778]: I0312 13:39:00.722780 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ntpnx" event={"ID":"b99627a8-43d8-4f7d-90f7-530eda3c2213","Type":"ContainerStarted","Data":"2be88402a7dbb5865b055bb3ee4db9ccaf014ad6b4e21a2044aee944e26732ea"} Mar 12 13:39:00 crc kubenswrapper[4778]: I0312 
13:39:00.723335 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ntpnx" event={"ID":"b99627a8-43d8-4f7d-90f7-530eda3c2213","Type":"ContainerStarted","Data":"dc6fa4d7e880c9ed1330cf9f9750b1850f8f0933c2e607fb21a3cab73809d93c"} Mar 12 13:39:00 crc kubenswrapper[4778]: I0312 13:39:00.743832 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ntpnx" podStartSLOduration=2.274380613 podStartE2EDuration="2.743811357s" podCreationTimestamp="2026-03-12 13:38:58 +0000 UTC" firstStartedPulling="2026-03-12 13:38:59.73109828 +0000 UTC m=+1758.179793666" lastFinishedPulling="2026-03-12 13:39:00.200529024 +0000 UTC m=+1758.649224410" observedRunningTime="2026-03-12 13:39:00.739661779 +0000 UTC m=+1759.188357175" watchObservedRunningTime="2026-03-12 13:39:00.743811357 +0000 UTC m=+1759.192506753" Mar 12 13:39:02 crc kubenswrapper[4778]: I0312 13:39:02.254213 4778 scope.go:117] "RemoveContainer" containerID="fbdf0765f9c2ff5952a8a2a2b43d61ef771ac404cabeb86051f9ffe5a9fd882e" Mar 12 13:39:02 crc kubenswrapper[4778]: E0312 13:39:02.254746 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 13:39:13 crc kubenswrapper[4778]: I0312 13:39:13.253349 4778 scope.go:117] "RemoveContainer" containerID="fbdf0765f9c2ff5952a8a2a2b43d61ef771ac404cabeb86051f9ffe5a9fd882e" Mar 12 13:39:13 crc kubenswrapper[4778]: E0312 13:39:13.254087 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 13:39:16 crc kubenswrapper[4778]: I0312 13:39:16.320672 4778 scope.go:117] "RemoveContainer" containerID="7c88372c4eebf35fa3a0e19eba355c02e9d34ad468328fc457e997e453d917f3" Mar 12 13:39:16 crc kubenswrapper[4778]: I0312 13:39:16.361508 4778 scope.go:117] "RemoveContainer" containerID="59b401343563918013d35a2531aae9f420a7a4077e0d31999372fd3e7e21e169" Mar 12 13:39:16 crc kubenswrapper[4778]: I0312 13:39:16.432406 4778 scope.go:117] "RemoveContainer" containerID="cc6fc61a82e88c3140b3629f45196f98ee08d5f2fdb0df9b40fe66806a0ccbfd" Mar 12 13:39:28 crc kubenswrapper[4778]: I0312 13:39:28.254720 4778 scope.go:117] "RemoveContainer" containerID="fbdf0765f9c2ff5952a8a2a2b43d61ef771ac404cabeb86051f9ffe5a9fd882e" Mar 12 13:39:28 crc kubenswrapper[4778]: E0312 13:39:28.255519 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 13:39:42 crc kubenswrapper[4778]: I0312 13:39:42.263822 4778 scope.go:117] "RemoveContainer" containerID="fbdf0765f9c2ff5952a8a2a2b43d61ef771ac404cabeb86051f9ffe5a9fd882e" Mar 12 13:39:42 crc kubenswrapper[4778]: E0312 13:39:42.265702 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 13:39:53 crc kubenswrapper[4778]: I0312 13:39:53.254204 4778 scope.go:117] "RemoveContainer" containerID="fbdf0765f9c2ff5952a8a2a2b43d61ef771ac404cabeb86051f9ffe5a9fd882e" Mar 12 13:39:53 crc kubenswrapper[4778]: E0312 13:39:53.255018 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 13:40:00 crc kubenswrapper[4778]: I0312 13:40:00.152897 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555380-n8mtp"] Mar 12 13:40:00 crc kubenswrapper[4778]: I0312 13:40:00.154617 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555380-n8mtp" Mar 12 13:40:00 crc kubenswrapper[4778]: I0312 13:40:00.157011 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 13:40:00 crc kubenswrapper[4778]: I0312 13:40:00.157203 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 13:40:00 crc kubenswrapper[4778]: I0312 13:40:00.157667 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 13:40:00 crc kubenswrapper[4778]: I0312 13:40:00.164770 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555380-n8mtp"] Mar 12 13:40:00 crc kubenswrapper[4778]: I0312 13:40:00.196453 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh2jk\" (UniqueName: \"kubernetes.io/projected/69f54cc7-08e2-42c1-883d-316f1dac7621-kube-api-access-mh2jk\") pod \"auto-csr-approver-29555380-n8mtp\" (UID: \"69f54cc7-08e2-42c1-883d-316f1dac7621\") " pod="openshift-infra/auto-csr-approver-29555380-n8mtp" Mar 12 13:40:00 crc kubenswrapper[4778]: I0312 13:40:00.298880 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh2jk\" (UniqueName: \"kubernetes.io/projected/69f54cc7-08e2-42c1-883d-316f1dac7621-kube-api-access-mh2jk\") pod \"auto-csr-approver-29555380-n8mtp\" (UID: \"69f54cc7-08e2-42c1-883d-316f1dac7621\") " pod="openshift-infra/auto-csr-approver-29555380-n8mtp" Mar 12 13:40:00 crc kubenswrapper[4778]: I0312 13:40:00.326260 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh2jk\" (UniqueName: \"kubernetes.io/projected/69f54cc7-08e2-42c1-883d-316f1dac7621-kube-api-access-mh2jk\") pod \"auto-csr-approver-29555380-n8mtp\" (UID: \"69f54cc7-08e2-42c1-883d-316f1dac7621\") " 
pod="openshift-infra/auto-csr-approver-29555380-n8mtp" Mar 12 13:40:00 crc kubenswrapper[4778]: I0312 13:40:00.482264 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555380-n8mtp" Mar 12 13:40:00 crc kubenswrapper[4778]: I0312 13:40:00.959688 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555380-n8mtp"] Mar 12 13:40:01 crc kubenswrapper[4778]: I0312 13:40:01.283636 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555380-n8mtp" event={"ID":"69f54cc7-08e2-42c1-883d-316f1dac7621","Type":"ContainerStarted","Data":"4197c3d5790d7995c1ff07fcc4f70c668110a247508288640324f7eb413c8d5d"} Mar 12 13:40:04 crc kubenswrapper[4778]: I0312 13:40:04.324585 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555380-n8mtp" event={"ID":"69f54cc7-08e2-42c1-883d-316f1dac7621","Type":"ContainerStarted","Data":"625dea5df6820f4416903072a858eb0ac8d225248a71973001f9856768eaad43"} Mar 12 13:40:04 crc kubenswrapper[4778]: I0312 13:40:04.343600 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555380-n8mtp" podStartSLOduration=1.5175736149999999 podStartE2EDuration="4.343575935s" podCreationTimestamp="2026-03-12 13:40:00 +0000 UTC" firstStartedPulling="2026-03-12 13:40:00.964767631 +0000 UTC m=+1819.413463027" lastFinishedPulling="2026-03-12 13:40:03.790769951 +0000 UTC m=+1822.239465347" observedRunningTime="2026-03-12 13:40:04.336534065 +0000 UTC m=+1822.785229481" watchObservedRunningTime="2026-03-12 13:40:04.343575935 +0000 UTC m=+1822.792271331" Mar 12 13:40:05 crc kubenswrapper[4778]: I0312 13:40:05.335663 4778 generic.go:334] "Generic (PLEG): container finished" podID="69f54cc7-08e2-42c1-883d-316f1dac7621" containerID="625dea5df6820f4416903072a858eb0ac8d225248a71973001f9856768eaad43" exitCode=0 Mar 12 13:40:05 crc 
kubenswrapper[4778]: I0312 13:40:05.335723 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555380-n8mtp" event={"ID":"69f54cc7-08e2-42c1-883d-316f1dac7621","Type":"ContainerDied","Data":"625dea5df6820f4416903072a858eb0ac8d225248a71973001f9856768eaad43"} Mar 12 13:40:06 crc kubenswrapper[4778]: I0312 13:40:06.652799 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555380-n8mtp" Mar 12 13:40:06 crc kubenswrapper[4778]: I0312 13:40:06.733756 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mh2jk\" (UniqueName: \"kubernetes.io/projected/69f54cc7-08e2-42c1-883d-316f1dac7621-kube-api-access-mh2jk\") pod \"69f54cc7-08e2-42c1-883d-316f1dac7621\" (UID: \"69f54cc7-08e2-42c1-883d-316f1dac7621\") " Mar 12 13:40:06 crc kubenswrapper[4778]: I0312 13:40:06.739364 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69f54cc7-08e2-42c1-883d-316f1dac7621-kube-api-access-mh2jk" (OuterVolumeSpecName: "kube-api-access-mh2jk") pod "69f54cc7-08e2-42c1-883d-316f1dac7621" (UID: "69f54cc7-08e2-42c1-883d-316f1dac7621"). InnerVolumeSpecName "kube-api-access-mh2jk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:40:06 crc kubenswrapper[4778]: I0312 13:40:06.837711 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mh2jk\" (UniqueName: \"kubernetes.io/projected/69f54cc7-08e2-42c1-883d-316f1dac7621-kube-api-access-mh2jk\") on node \"crc\" DevicePath \"\"" Mar 12 13:40:07 crc kubenswrapper[4778]: I0312 13:40:07.254027 4778 scope.go:117] "RemoveContainer" containerID="fbdf0765f9c2ff5952a8a2a2b43d61ef771ac404cabeb86051f9ffe5a9fd882e" Mar 12 13:40:07 crc kubenswrapper[4778]: E0312 13:40:07.254304 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 13:40:07 crc kubenswrapper[4778]: I0312 13:40:07.355709 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555380-n8mtp" event={"ID":"69f54cc7-08e2-42c1-883d-316f1dac7621","Type":"ContainerDied","Data":"4197c3d5790d7995c1ff07fcc4f70c668110a247508288640324f7eb413c8d5d"} Mar 12 13:40:07 crc kubenswrapper[4778]: I0312 13:40:07.355762 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4197c3d5790d7995c1ff07fcc4f70c668110a247508288640324f7eb413c8d5d" Mar 12 13:40:07 crc kubenswrapper[4778]: I0312 13:40:07.355767 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555380-n8mtp" Mar 12 13:40:07 crc kubenswrapper[4778]: I0312 13:40:07.411295 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555374-lf8vj"] Mar 12 13:40:07 crc kubenswrapper[4778]: I0312 13:40:07.419734 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555374-lf8vj"] Mar 12 13:40:08 crc kubenswrapper[4778]: I0312 13:40:08.269634 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d627011-802e-4075-9c56-43373d4c368e" path="/var/lib/kubelet/pods/9d627011-802e-4075-9c56-43373d4c368e/volumes" Mar 12 13:40:16 crc kubenswrapper[4778]: I0312 13:40:16.498257 4778 scope.go:117] "RemoveContainer" containerID="d817d5a09b7856e71332e283d84fe3ea296ae040cb7e986cd73c433864a99c34" Mar 12 13:40:18 crc kubenswrapper[4778]: I0312 13:40:18.253993 4778 scope.go:117] "RemoveContainer" containerID="fbdf0765f9c2ff5952a8a2a2b43d61ef771ac404cabeb86051f9ffe5a9fd882e" Mar 12 13:40:18 crc kubenswrapper[4778]: E0312 13:40:18.254569 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 13:40:31 crc kubenswrapper[4778]: I0312 13:40:31.254052 4778 scope.go:117] "RemoveContainer" containerID="fbdf0765f9c2ff5952a8a2a2b43d61ef771ac404cabeb86051f9ffe5a9fd882e" Mar 12 13:40:31 crc kubenswrapper[4778]: E0312 13:40:31.254838 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 13:40:44 crc kubenswrapper[4778]: I0312 13:40:44.254497 4778 scope.go:117] "RemoveContainer" containerID="fbdf0765f9c2ff5952a8a2a2b43d61ef771ac404cabeb86051f9ffe5a9fd882e" Mar 12 13:40:44 crc kubenswrapper[4778]: E0312 13:40:44.255372 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 13:40:57 crc kubenswrapper[4778]: I0312 13:40:57.254587 4778 scope.go:117] "RemoveContainer" containerID="fbdf0765f9c2ff5952a8a2a2b43d61ef771ac404cabeb86051f9ffe5a9fd882e" Mar 12 13:40:57 crc kubenswrapper[4778]: E0312 13:40:57.255644 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 13:41:12 crc kubenswrapper[4778]: I0312 13:41:12.260041 4778 scope.go:117] "RemoveContainer" containerID="fbdf0765f9c2ff5952a8a2a2b43d61ef771ac404cabeb86051f9ffe5a9fd882e" Mar 12 13:41:12 crc kubenswrapper[4778]: E0312 13:41:12.260898 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 13:41:16 crc kubenswrapper[4778]: I0312 13:41:16.577396 4778 scope.go:117] "RemoveContainer" containerID="1e4e2a2aac1ba95c2fc03d3ae5822d197e179d60f0dbd976d4f6143a68eb2c2a" Mar 12 13:41:16 crc kubenswrapper[4778]: I0312 13:41:16.614162 4778 scope.go:117] "RemoveContainer" containerID="1ca532aa466af7c68cb8aa187e7cf3ea161e9610dcf97d902b18dad6b9250f81" Mar 12 13:41:16 crc kubenswrapper[4778]: I0312 13:41:16.662935 4778 scope.go:117] "RemoveContainer" containerID="8dce37445b314b16965ae024d78bbfd9bf5998d5da6305572acf12733671bc3d" Mar 12 13:41:16 crc kubenswrapper[4778]: I0312 13:41:16.704705 4778 scope.go:117] "RemoveContainer" containerID="c66167331bd74d9b577eb48b304f2f99e28d6904a5ae9cd088d4f17df80842e1" Mar 12 13:41:16 crc kubenswrapper[4778]: I0312 13:41:16.728144 4778 scope.go:117] "RemoveContainer" containerID="e6738e925b347d28a1e722ea04cdc7d88018005b75c56a3dec09b214b5752ae1" Mar 12 13:41:23 crc kubenswrapper[4778]: I0312 13:41:23.254630 4778 scope.go:117] "RemoveContainer" containerID="fbdf0765f9c2ff5952a8a2a2b43d61ef771ac404cabeb86051f9ffe5a9fd882e" Mar 12 13:41:23 crc kubenswrapper[4778]: E0312 13:41:23.255528 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 13:41:30 crc kubenswrapper[4778]: I0312 13:41:30.046269 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-gccjh"] Mar 12 13:41:30 
crc kubenswrapper[4778]: I0312 13:41:30.056301 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-gccjh"] Mar 12 13:41:30 crc kubenswrapper[4778]: I0312 13:41:30.263705 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc051b32-4b28-4011-9a00-49caa730f074" path="/var/lib/kubelet/pods/fc051b32-4b28-4011-9a00-49caa730f074/volumes" Mar 12 13:41:32 crc kubenswrapper[4778]: I0312 13:41:32.044407 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-886c-account-create-update-c7kqb"] Mar 12 13:41:32 crc kubenswrapper[4778]: I0312 13:41:32.055129 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-886c-account-create-update-c7kqb"] Mar 12 13:41:32 crc kubenswrapper[4778]: I0312 13:41:32.265473 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b329f80-bb88-4c5c-91eb-24394cdcc492" path="/var/lib/kubelet/pods/7b329f80-bb88-4c5c-91eb-24394cdcc492/volumes" Mar 12 13:41:34 crc kubenswrapper[4778]: I0312 13:41:34.255523 4778 scope.go:117] "RemoveContainer" containerID="fbdf0765f9c2ff5952a8a2a2b43d61ef771ac404cabeb86051f9ffe5a9fd882e" Mar 12 13:41:35 crc kubenswrapper[4778]: I0312 13:41:35.243494 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" event={"ID":"24438fc6-dab0-4a9e-8b97-2532da76d9cd","Type":"ContainerStarted","Data":"92d3dad2e98d7139cb748a76fe93295a7064a4a757626bc932a272018a133968"} Mar 12 13:41:36 crc kubenswrapper[4778]: I0312 13:41:36.040669 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-6816-account-create-update-574cj"] Mar 12 13:41:36 crc kubenswrapper[4778]: I0312 13:41:36.055026 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-79rjc"] Mar 12 13:41:36 crc kubenswrapper[4778]: I0312 13:41:36.070085 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/glance-db-create-hpkvd"] Mar 12 13:41:36 crc kubenswrapper[4778]: I0312 13:41:36.078107 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-79rjc"] Mar 12 13:41:36 crc kubenswrapper[4778]: I0312 13:41:36.087359 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-6816-account-create-update-574cj"] Mar 12 13:41:36 crc kubenswrapper[4778]: I0312 13:41:36.096310 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-hpkvd"] Mar 12 13:41:36 crc kubenswrapper[4778]: I0312 13:41:36.106014 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-3148-account-create-update-zkztc"] Mar 12 13:41:36 crc kubenswrapper[4778]: I0312 13:41:36.126348 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-3148-account-create-update-zkztc"] Mar 12 13:41:36 crc kubenswrapper[4778]: I0312 13:41:36.276170 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18cd7d9a-1f17-4797-a94f-4692b1180508" path="/var/lib/kubelet/pods/18cd7d9a-1f17-4797-a94f-4692b1180508/volumes" Mar 12 13:41:36 crc kubenswrapper[4778]: I0312 13:41:36.277457 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="280f8bcd-f8e0-451d-8c9c-b733f2b62a23" path="/var/lib/kubelet/pods/280f8bcd-f8e0-451d-8c9c-b733f2b62a23/volumes" Mar 12 13:41:36 crc kubenswrapper[4778]: I0312 13:41:36.278344 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d015b15d-96d2-4b95-9778-8f4175a840a1" path="/var/lib/kubelet/pods/d015b15d-96d2-4b95-9778-8f4175a840a1/volumes" Mar 12 13:41:36 crc kubenswrapper[4778]: I0312 13:41:36.279051 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e76971eb-34f0-4a33-b657-508e01eed5d1" path="/var/lib/kubelet/pods/e76971eb-34f0-4a33-b657-508e01eed5d1/volumes" Mar 12 13:41:46 crc kubenswrapper[4778]: I0312 13:41:46.038810 4778 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/root-account-create-update-7kt6z"] Mar 12 13:41:46 crc kubenswrapper[4778]: I0312 13:41:46.049435 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-7kt6z"] Mar 12 13:41:46 crc kubenswrapper[4778]: I0312 13:41:46.270141 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd5a0cd9-113c-4313-8d66-90487bd90cd3" path="/var/lib/kubelet/pods/dd5a0cd9-113c-4313-8d66-90487bd90cd3/volumes" Mar 12 13:42:00 crc kubenswrapper[4778]: I0312 13:42:00.149501 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555382-zbkfk"] Mar 12 13:42:00 crc kubenswrapper[4778]: E0312 13:42:00.150782 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69f54cc7-08e2-42c1-883d-316f1dac7621" containerName="oc" Mar 12 13:42:00 crc kubenswrapper[4778]: I0312 13:42:00.150800 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="69f54cc7-08e2-42c1-883d-316f1dac7621" containerName="oc" Mar 12 13:42:00 crc kubenswrapper[4778]: I0312 13:42:00.151115 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="69f54cc7-08e2-42c1-883d-316f1dac7621" containerName="oc" Mar 12 13:42:00 crc kubenswrapper[4778]: I0312 13:42:00.152036 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555382-zbkfk" Mar 12 13:42:00 crc kubenswrapper[4778]: I0312 13:42:00.154844 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 13:42:00 crc kubenswrapper[4778]: I0312 13:42:00.154900 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 13:42:00 crc kubenswrapper[4778]: I0312 13:42:00.154848 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 13:42:00 crc kubenswrapper[4778]: I0312 13:42:00.161861 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555382-zbkfk"] Mar 12 13:42:00 crc kubenswrapper[4778]: I0312 13:42:00.250643 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqvg4\" (UniqueName: \"kubernetes.io/projected/832c789c-468c-400b-8d55-3072443e85ec-kube-api-access-hqvg4\") pod \"auto-csr-approver-29555382-zbkfk\" (UID: \"832c789c-468c-400b-8d55-3072443e85ec\") " pod="openshift-infra/auto-csr-approver-29555382-zbkfk" Mar 12 13:42:00 crc kubenswrapper[4778]: I0312 13:42:00.352580 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqvg4\" (UniqueName: \"kubernetes.io/projected/832c789c-468c-400b-8d55-3072443e85ec-kube-api-access-hqvg4\") pod \"auto-csr-approver-29555382-zbkfk\" (UID: \"832c789c-468c-400b-8d55-3072443e85ec\") " pod="openshift-infra/auto-csr-approver-29555382-zbkfk" Mar 12 13:42:00 crc kubenswrapper[4778]: I0312 13:42:00.372025 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqvg4\" (UniqueName: \"kubernetes.io/projected/832c789c-468c-400b-8d55-3072443e85ec-kube-api-access-hqvg4\") pod \"auto-csr-approver-29555382-zbkfk\" (UID: \"832c789c-468c-400b-8d55-3072443e85ec\") " 
pod="openshift-infra/auto-csr-approver-29555382-zbkfk" Mar 12 13:42:00 crc kubenswrapper[4778]: I0312 13:42:00.474726 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555382-zbkfk" Mar 12 13:42:00 crc kubenswrapper[4778]: I0312 13:42:00.926542 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555382-zbkfk"] Mar 12 13:42:01 crc kubenswrapper[4778]: I0312 13:42:01.515614 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555382-zbkfk" event={"ID":"832c789c-468c-400b-8d55-3072443e85ec","Type":"ContainerStarted","Data":"315a9e7ef8d430b73d5cc5b023ba6aa6002c603c03613a82421a60d82ea4b39c"} Mar 12 13:42:03 crc kubenswrapper[4778]: I0312 13:42:03.534152 4778 generic.go:334] "Generic (PLEG): container finished" podID="832c789c-468c-400b-8d55-3072443e85ec" containerID="7785d6a0c6670e984508e3f9d5cc59f211b972f130207a3fed5c63411c140ddc" exitCode=0 Mar 12 13:42:03 crc kubenswrapper[4778]: I0312 13:42:03.534239 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555382-zbkfk" event={"ID":"832c789c-468c-400b-8d55-3072443e85ec","Type":"ContainerDied","Data":"7785d6a0c6670e984508e3f9d5cc59f211b972f130207a3fed5c63411c140ddc"} Mar 12 13:42:05 crc kubenswrapper[4778]: I0312 13:42:05.204516 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555382-zbkfk" Mar 12 13:42:05 crc kubenswrapper[4778]: I0312 13:42:05.369393 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqvg4\" (UniqueName: \"kubernetes.io/projected/832c789c-468c-400b-8d55-3072443e85ec-kube-api-access-hqvg4\") pod \"832c789c-468c-400b-8d55-3072443e85ec\" (UID: \"832c789c-468c-400b-8d55-3072443e85ec\") " Mar 12 13:42:05 crc kubenswrapper[4778]: I0312 13:42:05.391594 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/832c789c-468c-400b-8d55-3072443e85ec-kube-api-access-hqvg4" (OuterVolumeSpecName: "kube-api-access-hqvg4") pod "832c789c-468c-400b-8d55-3072443e85ec" (UID: "832c789c-468c-400b-8d55-3072443e85ec"). InnerVolumeSpecName "kube-api-access-hqvg4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:42:05 crc kubenswrapper[4778]: I0312 13:42:05.473078 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqvg4\" (UniqueName: \"kubernetes.io/projected/832c789c-468c-400b-8d55-3072443e85ec-kube-api-access-hqvg4\") on node \"crc\" DevicePath \"\"" Mar 12 13:42:05 crc kubenswrapper[4778]: I0312 13:42:05.876884 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555382-zbkfk" event={"ID":"832c789c-468c-400b-8d55-3072443e85ec","Type":"ContainerDied","Data":"315a9e7ef8d430b73d5cc5b023ba6aa6002c603c03613a82421a60d82ea4b39c"} Mar 12 13:42:05 crc kubenswrapper[4778]: I0312 13:42:05.876922 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="315a9e7ef8d430b73d5cc5b023ba6aa6002c603c03613a82421a60d82ea4b39c" Mar 12 13:42:05 crc kubenswrapper[4778]: I0312 13:42:05.876983 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555382-zbkfk" Mar 12 13:42:06 crc kubenswrapper[4778]: I0312 13:42:06.280938 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555376-2bdpv"] Mar 12 13:42:06 crc kubenswrapper[4778]: I0312 13:42:06.288654 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555376-2bdpv"] Mar 12 13:42:08 crc kubenswrapper[4778]: I0312 13:42:08.268339 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f74db3c-fec4-452d-bfd6-8db9f766e0bc" path="/var/lib/kubelet/pods/1f74db3c-fec4-452d-bfd6-8db9f766e0bc/volumes" Mar 12 13:42:11 crc kubenswrapper[4778]: I0312 13:42:11.929303 4778 generic.go:334] "Generic (PLEG): container finished" podID="b99627a8-43d8-4f7d-90f7-530eda3c2213" containerID="2be88402a7dbb5865b055bb3ee4db9ccaf014ad6b4e21a2044aee944e26732ea" exitCode=0 Mar 12 13:42:11 crc kubenswrapper[4778]: I0312 13:42:11.929376 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ntpnx" event={"ID":"b99627a8-43d8-4f7d-90f7-530eda3c2213","Type":"ContainerDied","Data":"2be88402a7dbb5865b055bb3ee4db9ccaf014ad6b4e21a2044aee944e26732ea"} Mar 12 13:42:12 crc kubenswrapper[4778]: I0312 13:42:12.062070 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-gxsm6"] Mar 12 13:42:12 crc kubenswrapper[4778]: I0312 13:42:12.074697 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-31ed-account-create-update-h8bhm"] Mar 12 13:42:12 crc kubenswrapper[4778]: I0312 13:42:12.087014 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-nh9xs"] Mar 12 13:42:12 crc kubenswrapper[4778]: I0312 13:42:12.099171 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-gxsm6"] Mar 12 13:42:12 crc kubenswrapper[4778]: I0312 13:42:12.111625 4778 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-nh9xs"] Mar 12 13:42:12 crc kubenswrapper[4778]: I0312 13:42:12.126640 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-31ed-account-create-update-h8bhm"] Mar 12 13:42:12 crc kubenswrapper[4778]: I0312 13:42:12.268415 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3800be73-3a09-42b6-8d01-592ccbc6aaa3" path="/var/lib/kubelet/pods/3800be73-3a09-42b6-8d01-592ccbc6aaa3/volumes" Mar 12 13:42:12 crc kubenswrapper[4778]: I0312 13:42:12.269158 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b694c81-3b07-45a1-9ca1-1e47e7430f1f" path="/var/lib/kubelet/pods/4b694c81-3b07-45a1-9ca1-1e47e7430f1f/volumes" Mar 12 13:42:12 crc kubenswrapper[4778]: I0312 13:42:12.269828 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79ff3988-1976-4049-8277-0acb36da44c5" path="/var/lib/kubelet/pods/79ff3988-1976-4049-8277-0acb36da44c5/volumes" Mar 12 13:42:13 crc kubenswrapper[4778]: I0312 13:42:13.348157 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ntpnx" Mar 12 13:42:13 crc kubenswrapper[4778]: I0312 13:42:13.451084 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b99627a8-43d8-4f7d-90f7-530eda3c2213-bootstrap-combined-ca-bundle\") pod \"b99627a8-43d8-4f7d-90f7-530eda3c2213\" (UID: \"b99627a8-43d8-4f7d-90f7-530eda3c2213\") " Mar 12 13:42:13 crc kubenswrapper[4778]: I0312 13:42:13.451166 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whqnl\" (UniqueName: \"kubernetes.io/projected/b99627a8-43d8-4f7d-90f7-530eda3c2213-kube-api-access-whqnl\") pod \"b99627a8-43d8-4f7d-90f7-530eda3c2213\" (UID: \"b99627a8-43d8-4f7d-90f7-530eda3c2213\") " Mar 12 13:42:13 crc kubenswrapper[4778]: I0312 13:42:13.451319 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b99627a8-43d8-4f7d-90f7-530eda3c2213-ssh-key-openstack-edpm-ipam\") pod \"b99627a8-43d8-4f7d-90f7-530eda3c2213\" (UID: \"b99627a8-43d8-4f7d-90f7-530eda3c2213\") " Mar 12 13:42:13 crc kubenswrapper[4778]: I0312 13:42:13.451355 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b99627a8-43d8-4f7d-90f7-530eda3c2213-inventory\") pod \"b99627a8-43d8-4f7d-90f7-530eda3c2213\" (UID: \"b99627a8-43d8-4f7d-90f7-530eda3c2213\") " Mar 12 13:42:13 crc kubenswrapper[4778]: I0312 13:42:13.458973 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b99627a8-43d8-4f7d-90f7-530eda3c2213-kube-api-access-whqnl" (OuterVolumeSpecName: "kube-api-access-whqnl") pod "b99627a8-43d8-4f7d-90f7-530eda3c2213" (UID: "b99627a8-43d8-4f7d-90f7-530eda3c2213"). InnerVolumeSpecName "kube-api-access-whqnl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:42:13 crc kubenswrapper[4778]: I0312 13:42:13.459595 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b99627a8-43d8-4f7d-90f7-530eda3c2213-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "b99627a8-43d8-4f7d-90f7-530eda3c2213" (UID: "b99627a8-43d8-4f7d-90f7-530eda3c2213"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:42:13 crc kubenswrapper[4778]: I0312 13:42:13.482452 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b99627a8-43d8-4f7d-90f7-530eda3c2213-inventory" (OuterVolumeSpecName: "inventory") pod "b99627a8-43d8-4f7d-90f7-530eda3c2213" (UID: "b99627a8-43d8-4f7d-90f7-530eda3c2213"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:42:13 crc kubenswrapper[4778]: I0312 13:42:13.485160 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b99627a8-43d8-4f7d-90f7-530eda3c2213-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b99627a8-43d8-4f7d-90f7-530eda3c2213" (UID: "b99627a8-43d8-4f7d-90f7-530eda3c2213"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:42:13 crc kubenswrapper[4778]: I0312 13:42:13.553770 4778 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b99627a8-43d8-4f7d-90f7-530eda3c2213-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 13:42:13 crc kubenswrapper[4778]: I0312 13:42:13.553806 4778 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b99627a8-43d8-4f7d-90f7-530eda3c2213-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 13:42:13 crc kubenswrapper[4778]: I0312 13:42:13.553815 4778 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b99627a8-43d8-4f7d-90f7-530eda3c2213-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:42:13 crc kubenswrapper[4778]: I0312 13:42:13.553823 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whqnl\" (UniqueName: \"kubernetes.io/projected/b99627a8-43d8-4f7d-90f7-530eda3c2213-kube-api-access-whqnl\") on node \"crc\" DevicePath \"\"" Mar 12 13:42:13 crc kubenswrapper[4778]: I0312 13:42:13.949531 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ntpnx" event={"ID":"b99627a8-43d8-4f7d-90f7-530eda3c2213","Type":"ContainerDied","Data":"dc6fa4d7e880c9ed1330cf9f9750b1850f8f0933c2e607fb21a3cab73809d93c"} Mar 12 13:42:13 crc kubenswrapper[4778]: I0312 13:42:13.949575 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-ntpnx" Mar 12 13:42:13 crc kubenswrapper[4778]: I0312 13:42:13.949580 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc6fa4d7e880c9ed1330cf9f9750b1850f8f0933c2e607fb21a3cab73809d93c" Mar 12 13:42:14 crc kubenswrapper[4778]: I0312 13:42:14.062998 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2xksx"] Mar 12 13:42:14 crc kubenswrapper[4778]: E0312 13:42:14.063746 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="832c789c-468c-400b-8d55-3072443e85ec" containerName="oc" Mar 12 13:42:14 crc kubenswrapper[4778]: I0312 13:42:14.063772 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="832c789c-468c-400b-8d55-3072443e85ec" containerName="oc" Mar 12 13:42:14 crc kubenswrapper[4778]: E0312 13:42:14.063819 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b99627a8-43d8-4f7d-90f7-530eda3c2213" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Mar 12 13:42:14 crc kubenswrapper[4778]: I0312 13:42:14.063831 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="b99627a8-43d8-4f7d-90f7-530eda3c2213" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Mar 12 13:42:14 crc kubenswrapper[4778]: I0312 13:42:14.064111 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="b99627a8-43d8-4f7d-90f7-530eda3c2213" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Mar 12 13:42:14 crc kubenswrapper[4778]: I0312 13:42:14.064148 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="832c789c-468c-400b-8d55-3072443e85ec" containerName="oc" Mar 12 13:42:14 crc kubenswrapper[4778]: I0312 13:42:14.065139 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2xksx" Mar 12 13:42:14 crc kubenswrapper[4778]: I0312 13:42:14.070653 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 13:42:14 crc kubenswrapper[4778]: I0312 13:42:14.071827 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 13:42:14 crc kubenswrapper[4778]: I0312 13:42:14.072354 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qn2vx" Mar 12 13:42:14 crc kubenswrapper[4778]: I0312 13:42:14.072492 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 13:42:14 crc kubenswrapper[4778]: I0312 13:42:14.097818 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2xksx"] Mar 12 13:42:14 crc kubenswrapper[4778]: I0312 13:42:14.166487 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96ba9a1b-ae5f-4b42-b8eb-1f0e3656ae61-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2xksx\" (UID: \"96ba9a1b-ae5f-4b42-b8eb-1f0e3656ae61\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2xksx" Mar 12 13:42:14 crc kubenswrapper[4778]: I0312 13:42:14.166938 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgvff\" (UniqueName: \"kubernetes.io/projected/96ba9a1b-ae5f-4b42-b8eb-1f0e3656ae61-kube-api-access-mgvff\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2xksx\" (UID: \"96ba9a1b-ae5f-4b42-b8eb-1f0e3656ae61\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2xksx" Mar 12 13:42:14 crc kubenswrapper[4778]: I0312 
13:42:14.167259 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96ba9a1b-ae5f-4b42-b8eb-1f0e3656ae61-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2xksx\" (UID: \"96ba9a1b-ae5f-4b42-b8eb-1f0e3656ae61\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2xksx" Mar 12 13:42:14 crc kubenswrapper[4778]: I0312 13:42:14.269628 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96ba9a1b-ae5f-4b42-b8eb-1f0e3656ae61-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2xksx\" (UID: \"96ba9a1b-ae5f-4b42-b8eb-1f0e3656ae61\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2xksx" Mar 12 13:42:14 crc kubenswrapper[4778]: I0312 13:42:14.269683 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgvff\" (UniqueName: \"kubernetes.io/projected/96ba9a1b-ae5f-4b42-b8eb-1f0e3656ae61-kube-api-access-mgvff\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2xksx\" (UID: \"96ba9a1b-ae5f-4b42-b8eb-1f0e3656ae61\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2xksx" Mar 12 13:42:14 crc kubenswrapper[4778]: I0312 13:42:14.269838 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96ba9a1b-ae5f-4b42-b8eb-1f0e3656ae61-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2xksx\" (UID: \"96ba9a1b-ae5f-4b42-b8eb-1f0e3656ae61\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2xksx" Mar 12 13:42:14 crc kubenswrapper[4778]: I0312 13:42:14.276006 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/96ba9a1b-ae5f-4b42-b8eb-1f0e3656ae61-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2xksx\" (UID: \"96ba9a1b-ae5f-4b42-b8eb-1f0e3656ae61\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2xksx" Mar 12 13:42:14 crc kubenswrapper[4778]: I0312 13:42:14.278971 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96ba9a1b-ae5f-4b42-b8eb-1f0e3656ae61-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2xksx\" (UID: \"96ba9a1b-ae5f-4b42-b8eb-1f0e3656ae61\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2xksx" Mar 12 13:42:14 crc kubenswrapper[4778]: I0312 13:42:14.291094 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgvff\" (UniqueName: \"kubernetes.io/projected/96ba9a1b-ae5f-4b42-b8eb-1f0e3656ae61-kube-api-access-mgvff\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2xksx\" (UID: \"96ba9a1b-ae5f-4b42-b8eb-1f0e3656ae61\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2xksx" Mar 12 13:42:14 crc kubenswrapper[4778]: I0312 13:42:14.397924 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2xksx" Mar 12 13:42:14 crc kubenswrapper[4778]: I0312 13:42:14.931806 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2xksx"] Mar 12 13:42:14 crc kubenswrapper[4778]: I0312 13:42:14.969368 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2xksx" event={"ID":"96ba9a1b-ae5f-4b42-b8eb-1f0e3656ae61","Type":"ContainerStarted","Data":"ed030c36635eddb22d286222cfae401d298d4b5f63f4f8d2c4b74f64fe09cc1a"} Mar 12 13:42:15 crc kubenswrapper[4778]: I0312 13:42:15.049516 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-thsh7"] Mar 12 13:42:15 crc kubenswrapper[4778]: I0312 13:42:15.059403 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-e25a-account-create-update-vs6zm"] Mar 12 13:42:15 crc kubenswrapper[4778]: I0312 13:42:15.071843 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-thsh7"] Mar 12 13:42:15 crc kubenswrapper[4778]: I0312 13:42:15.081193 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-2abd-account-create-update-chtfz"] Mar 12 13:42:15 crc kubenswrapper[4778]: I0312 13:42:15.089550 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-e25a-account-create-update-vs6zm"] Mar 12 13:42:15 crc kubenswrapper[4778]: I0312 13:42:15.097439 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-2abd-account-create-update-chtfz"] Mar 12 13:42:16 crc kubenswrapper[4778]: I0312 13:42:16.266900 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31f8bb53-a8a8-448f-8f42-349232e383ec" path="/var/lib/kubelet/pods/31f8bb53-a8a8-448f-8f42-349232e383ec/volumes" Mar 12 13:42:16 crc kubenswrapper[4778]: I0312 13:42:16.267679 4778 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="729468a8-fded-4564-96c8-471d3cf48825" path="/var/lib/kubelet/pods/729468a8-fded-4564-96c8-471d3cf48825/volumes" Mar 12 13:42:16 crc kubenswrapper[4778]: I0312 13:42:16.268323 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9793dfb5-c2a5-4dc1-993d-9e024a810ce8" path="/var/lib/kubelet/pods/9793dfb5-c2a5-4dc1-993d-9e024a810ce8/volumes" Mar 12 13:42:16 crc kubenswrapper[4778]: I0312 13:42:16.811339 4778 scope.go:117] "RemoveContainer" containerID="93602c5ae72cfd4f9a42c4921524905037c8077ce8260918d72d9601b072dd59" Mar 12 13:42:16 crc kubenswrapper[4778]: I0312 13:42:16.853877 4778 scope.go:117] "RemoveContainer" containerID="70fc2c631648b6cf05ce7c564c8a25d897ce94ea350c4d6a8a0ccacb6c5f16b4" Mar 12 13:42:16 crc kubenswrapper[4778]: I0312 13:42:16.894492 4778 scope.go:117] "RemoveContainer" containerID="af7a0409b1470d33d558b70c98a397f0b5c99782ac9578ab1f379f9cb685947f" Mar 12 13:42:16 crc kubenswrapper[4778]: I0312 13:42:16.944101 4778 scope.go:117] "RemoveContainer" containerID="d8bdc9c2c4e5e8d5384ac13e3814d6ad0bf996923ba03462051d4c078107d461" Mar 12 13:42:16 crc kubenswrapper[4778]: I0312 13:42:16.970722 4778 scope.go:117] "RemoveContainer" containerID="3bf3addaa75cf85838ea1739e9760ca68c0ed5921fd1bd5da9e4725715df9a99" Mar 12 13:42:16 crc kubenswrapper[4778]: I0312 13:42:16.998761 4778 scope.go:117] "RemoveContainer" containerID="2c45c4ddf823adba305999f51111b5e3abaff88105a2366fb93304b13b53f40d" Mar 12 13:42:17 crc kubenswrapper[4778]: I0312 13:42:17.020234 4778 scope.go:117] "RemoveContainer" containerID="a62186594073bc08d5194d8b9ce9a46d1a29b359b5ca56b7c0f8fed38f9c7470" Mar 12 13:42:17 crc kubenswrapper[4778]: I0312 13:42:17.050472 4778 scope.go:117] "RemoveContainer" containerID="5106184b767437cea31a6a61b3a1991b36587ddd28250ecc1207af703f368fda" Mar 12 13:42:17 crc kubenswrapper[4778]: I0312 13:42:17.054928 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2xksx" event={"ID":"96ba9a1b-ae5f-4b42-b8eb-1f0e3656ae61","Type":"ContainerStarted","Data":"07b3a86e7eae89aab9028737c42527be6e9de56b23d4964dfd0e2644aa5dd557"} Mar 12 13:42:17 crc kubenswrapper[4778]: I0312 13:42:17.080030 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2xksx" podStartSLOduration=1.952793172 podStartE2EDuration="3.080012558s" podCreationTimestamp="2026-03-12 13:42:14 +0000 UTC" firstStartedPulling="2026-03-12 13:42:14.942436625 +0000 UTC m=+1953.391132021" lastFinishedPulling="2026-03-12 13:42:16.069656011 +0000 UTC m=+1954.518351407" observedRunningTime="2026-03-12 13:42:17.07515684 +0000 UTC m=+1955.523852246" watchObservedRunningTime="2026-03-12 13:42:17.080012558 +0000 UTC m=+1955.528707954" Mar 12 13:42:17 crc kubenswrapper[4778]: I0312 13:42:17.092079 4778 scope.go:117] "RemoveContainer" containerID="8443f2894188b4c3d976d78d2d647409527ab07f04b215d8b647fc560059ba2f" Mar 12 13:42:17 crc kubenswrapper[4778]: I0312 13:42:17.160892 4778 scope.go:117] "RemoveContainer" containerID="f4bb8c6e00b5e03bcc01c6649d1104fc5ef38426458fa36f98588fb6167dbe07" Mar 12 13:42:17 crc kubenswrapper[4778]: I0312 13:42:17.215623 4778 scope.go:117] "RemoveContainer" containerID="13ffa46dd0ede6f8f4fd6e787f1d2948d8a5e96a8e47df52e40147817681f0f7" Mar 12 13:42:17 crc kubenswrapper[4778]: I0312 13:42:17.258988 4778 scope.go:117] "RemoveContainer" containerID="60b2242b65665faad21e5afc28edb1788f01dc784524abe26ac1b4cb9a5296a5" Mar 12 13:42:17 crc kubenswrapper[4778]: I0312 13:42:17.323002 4778 scope.go:117] "RemoveContainer" containerID="0e8b3287f4617d49763a5e13085485c7f1faa35a7b545d67c3db4b7ac7a3c06b" Mar 12 13:42:17 crc kubenswrapper[4778]: I0312 13:42:17.385568 4778 scope.go:117] "RemoveContainer" containerID="451301ebd2071510b670f3a924d5fcd2f28fbcc4aa60d4224906bca0e09aa5be" Mar 12 13:42:19 crc kubenswrapper[4778]: I0312 
13:42:19.029336 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-57cfm"] Mar 12 13:42:19 crc kubenswrapper[4778]: I0312 13:42:19.076500 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-57cfm"] Mar 12 13:42:20 crc kubenswrapper[4778]: I0312 13:42:20.263638 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec77eae6-4dac-4535-b0d3-98bd3422e4de" path="/var/lib/kubelet/pods/ec77eae6-4dac-4535-b0d3-98bd3422e4de/volumes" Mar 12 13:42:28 crc kubenswrapper[4778]: I0312 13:42:28.037667 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-xg6z4"] Mar 12 13:42:28 crc kubenswrapper[4778]: I0312 13:42:28.056696 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-xg6z4"] Mar 12 13:42:28 crc kubenswrapper[4778]: I0312 13:42:28.266619 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="befeb973-a1de-48f9-8de0-5559f75472dc" path="/var/lib/kubelet/pods/befeb973-a1de-48f9-8de0-5559f75472dc/volumes" Mar 12 13:42:51 crc kubenswrapper[4778]: I0312 13:42:51.060669 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-6cvgs"] Mar 12 13:42:51 crc kubenswrapper[4778]: I0312 13:42:51.072465 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-zr86r"] Mar 12 13:42:51 crc kubenswrapper[4778]: I0312 13:42:51.083674 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-zr86r"] Mar 12 13:42:51 crc kubenswrapper[4778]: I0312 13:42:51.096224 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-6cvgs"] Mar 12 13:42:52 crc kubenswrapper[4778]: I0312 13:42:52.268258 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76f8f940-670d-47a0-a90a-afd3aa37a726" path="/var/lib/kubelet/pods/76f8f940-670d-47a0-a90a-afd3aa37a726/volumes" Mar 12 13:42:52 crc 
kubenswrapper[4778]: I0312 13:42:52.270074 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="faeb9cb3-46ae-428f-8c0e-538a2e552072" path="/var/lib/kubelet/pods/faeb9cb3-46ae-428f-8c0e-538a2e552072/volumes" Mar 12 13:42:59 crc kubenswrapper[4778]: I0312 13:42:59.037933 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-56sfj"] Mar 12 13:42:59 crc kubenswrapper[4778]: I0312 13:42:59.047495 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-56sfj"] Mar 12 13:43:00 crc kubenswrapper[4778]: I0312 13:43:00.265394 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1af573ef-51c3-4bfc-8de6-eb1be8b75c76" path="/var/lib/kubelet/pods/1af573ef-51c3-4bfc-8de6-eb1be8b75c76/volumes" Mar 12 13:43:04 crc kubenswrapper[4778]: I0312 13:43:04.040989 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-p59s9"] Mar 12 13:43:04 crc kubenswrapper[4778]: I0312 13:43:04.051506 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-p59s9"] Mar 12 13:43:04 crc kubenswrapper[4778]: I0312 13:43:04.266585 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a682334f-73c0-4e38-8f95-e5de661319bb" path="/var/lib/kubelet/pods/a682334f-73c0-4e38-8f95-e5de661319bb/volumes" Mar 12 13:43:13 crc kubenswrapper[4778]: I0312 13:43:13.026243 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-d5pl9"] Mar 12 13:43:13 crc kubenswrapper[4778]: I0312 13:43:13.034449 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-d5pl9"] Mar 12 13:43:14 crc kubenswrapper[4778]: I0312 13:43:14.265576 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb110a1e-6281-437d-b857-eb79c4953e1a" path="/var/lib/kubelet/pods/bb110a1e-6281-437d-b857-eb79c4953e1a/volumes" Mar 12 13:43:17 crc kubenswrapper[4778]: I0312 13:43:17.709639 4778 
scope.go:117] "RemoveContainer" containerID="fc1fdc3b0586065e85920687a0b5a3f3a3005e79a719fda2a25493dca50c853e" Mar 12 13:43:17 crc kubenswrapper[4778]: I0312 13:43:17.747622 4778 scope.go:117] "RemoveContainer" containerID="80b9a94e51ace133a39bb4f360454c37e2be50602309d428d0792de3b24d2efc" Mar 12 13:43:17 crc kubenswrapper[4778]: I0312 13:43:17.782486 4778 scope.go:117] "RemoveContainer" containerID="58438369e99b6009fb9ed545548de66fcc857634b3821d960d6e5735646c9d5c" Mar 12 13:43:17 crc kubenswrapper[4778]: I0312 13:43:17.818207 4778 scope.go:117] "RemoveContainer" containerID="86b41f2ea1c3794ed3e1fc975ecb18420f64bbd7611743de1aa319532e575758" Mar 12 13:43:17 crc kubenswrapper[4778]: I0312 13:43:17.871476 4778 scope.go:117] "RemoveContainer" containerID="434f9dbc426c8bc5145f54de2b34c16cd91006660bd978fe7ad9311fc8579e69" Mar 12 13:43:17 crc kubenswrapper[4778]: I0312 13:43:17.912091 4778 scope.go:117] "RemoveContainer" containerID="0a63a4d47752d25e9f6c0d6aa9ed71121a4afe876250e6e10c1c1091bf2b8d8f" Mar 12 13:43:17 crc kubenswrapper[4778]: I0312 13:43:17.957125 4778 scope.go:117] "RemoveContainer" containerID="4711a6f852c8bf6a8fa62e985008d918b7971ec55784fb38d2f086199f1f3aee" Mar 12 13:43:18 crc kubenswrapper[4778]: I0312 13:43:18.008585 4778 scope.go:117] "RemoveContainer" containerID="5a74043e2f16e3024a4f2ed6f0c9502985ad493a8f1362a42f34265b2e50d313" Mar 12 13:43:18 crc kubenswrapper[4778]: I0312 13:43:18.036293 4778 scope.go:117] "RemoveContainer" containerID="710035f2fd1c6ce07427dd61579057ea7d418eb1c9532e9c2ad2d414dc76cbb9" Mar 12 13:43:58 crc kubenswrapper[4778]: I0312 13:43:58.557683 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 13:43:58 crc kubenswrapper[4778]: I0312 13:43:58.558225 4778 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 13:44:00 crc kubenswrapper[4778]: I0312 13:44:00.145100 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555384-znhr8"] Mar 12 13:44:00 crc kubenswrapper[4778]: I0312 13:44:00.147255 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555384-znhr8" Mar 12 13:44:00 crc kubenswrapper[4778]: I0312 13:44:00.150032 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 13:44:00 crc kubenswrapper[4778]: I0312 13:44:00.150071 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 13:44:00 crc kubenswrapper[4778]: I0312 13:44:00.150370 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 13:44:00 crc kubenswrapper[4778]: I0312 13:44:00.160397 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555384-znhr8"] Mar 12 13:44:00 crc kubenswrapper[4778]: I0312 13:44:00.252333 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs6fg\" (UniqueName: \"kubernetes.io/projected/70dc8f5a-da90-4090-b630-a6a7bd438f64-kube-api-access-fs6fg\") pod \"auto-csr-approver-29555384-znhr8\" (UID: \"70dc8f5a-da90-4090-b630-a6a7bd438f64\") " pod="openshift-infra/auto-csr-approver-29555384-znhr8" Mar 12 13:44:00 crc kubenswrapper[4778]: I0312 13:44:00.354991 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fs6fg\" (UniqueName: 
\"kubernetes.io/projected/70dc8f5a-da90-4090-b630-a6a7bd438f64-kube-api-access-fs6fg\") pod \"auto-csr-approver-29555384-znhr8\" (UID: \"70dc8f5a-da90-4090-b630-a6a7bd438f64\") " pod="openshift-infra/auto-csr-approver-29555384-znhr8" Mar 12 13:44:00 crc kubenswrapper[4778]: I0312 13:44:00.380032 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fs6fg\" (UniqueName: \"kubernetes.io/projected/70dc8f5a-da90-4090-b630-a6a7bd438f64-kube-api-access-fs6fg\") pod \"auto-csr-approver-29555384-znhr8\" (UID: \"70dc8f5a-da90-4090-b630-a6a7bd438f64\") " pod="openshift-infra/auto-csr-approver-29555384-znhr8" Mar 12 13:44:00 crc kubenswrapper[4778]: I0312 13:44:00.477140 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555384-znhr8" Mar 12 13:44:00 crc kubenswrapper[4778]: I0312 13:44:00.967098 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555384-znhr8"] Mar 12 13:44:00 crc kubenswrapper[4778]: I0312 13:44:00.971318 4778 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 13:44:01 crc kubenswrapper[4778]: I0312 13:44:01.050047 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-sckbb"] Mar 12 13:44:01 crc kubenswrapper[4778]: I0312 13:44:01.060730 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-sckbb"] Mar 12 13:44:01 crc kubenswrapper[4778]: I0312 13:44:01.971629 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555384-znhr8" event={"ID":"70dc8f5a-da90-4090-b630-a6a7bd438f64","Type":"ContainerStarted","Data":"cc8b1411f01c8677b6286c4be7cd084d142c4ecfab7deab72e77cb0b5ddcc743"} Mar 12 13:44:02 crc kubenswrapper[4778]: I0312 13:44:02.049915 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-94ac-account-create-update-rxvgg"] 
Mar 12 13:44:02 crc kubenswrapper[4778]: I0312 13:44:02.060438 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-2dh9w"] Mar 12 13:44:02 crc kubenswrapper[4778]: I0312 13:44:02.069168 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-94ac-account-create-update-rxvgg"] Mar 12 13:44:02 crc kubenswrapper[4778]: I0312 13:44:02.078117 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-2dh9w"] Mar 12 13:44:02 crc kubenswrapper[4778]: I0312 13:44:02.086542 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-dcf9-account-create-update-2rmjd"] Mar 12 13:44:02 crc kubenswrapper[4778]: I0312 13:44:02.094684 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-x8nht"] Mar 12 13:44:02 crc kubenswrapper[4778]: I0312 13:44:02.103102 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-7d0f-account-create-update-t2rrl"] Mar 12 13:44:02 crc kubenswrapper[4778]: I0312 13:44:02.111636 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-7d0f-account-create-update-t2rrl"] Mar 12 13:44:02 crc kubenswrapper[4778]: I0312 13:44:02.119480 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-x8nht"] Mar 12 13:44:02 crc kubenswrapper[4778]: I0312 13:44:02.126335 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-dcf9-account-create-update-2rmjd"] Mar 12 13:44:02 crc kubenswrapper[4778]: I0312 13:44:02.267832 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="068c02bc-1daf-4029-84f9-39a395d5de3e" path="/var/lib/kubelet/pods/068c02bc-1daf-4029-84f9-39a395d5de3e/volumes" Mar 12 13:44:02 crc kubenswrapper[4778]: I0312 13:44:02.269341 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="092c3556-0255-4e2f-b2c7-e22b8a3d8418" 
path="/var/lib/kubelet/pods/092c3556-0255-4e2f-b2c7-e22b8a3d8418/volumes" Mar 12 13:44:02 crc kubenswrapper[4778]: I0312 13:44:02.270058 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20ab681f-51c2-4723-b5b6-58c841185455" path="/var/lib/kubelet/pods/20ab681f-51c2-4723-b5b6-58c841185455/volumes" Mar 12 13:44:02 crc kubenswrapper[4778]: I0312 13:44:02.270756 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d11f6c3-3911-4a29-a65d-ef1f570d9b02" path="/var/lib/kubelet/pods/2d11f6c3-3911-4a29-a65d-ef1f570d9b02/volumes" Mar 12 13:44:02 crc kubenswrapper[4778]: I0312 13:44:02.275900 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ad4ff5d-b816-4bdd-97a7-8afd73afe583" path="/var/lib/kubelet/pods/4ad4ff5d-b816-4bdd-97a7-8afd73afe583/volumes" Mar 12 13:44:02 crc kubenswrapper[4778]: I0312 13:44:02.276633 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9dbbc5fa-b903-4296-a3af-75524920938d" path="/var/lib/kubelet/pods/9dbbc5fa-b903-4296-a3af-75524920938d/volumes" Mar 12 13:44:02 crc kubenswrapper[4778]: I0312 13:44:02.983458 4778 generic.go:334] "Generic (PLEG): container finished" podID="70dc8f5a-da90-4090-b630-a6a7bd438f64" containerID="e97aad250ae3960e7483df5290e0221b9fbbbe6a75ec4afcb92fd5c46ee60b01" exitCode=0 Mar 12 13:44:02 crc kubenswrapper[4778]: I0312 13:44:02.983510 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555384-znhr8" event={"ID":"70dc8f5a-da90-4090-b630-a6a7bd438f64","Type":"ContainerDied","Data":"e97aad250ae3960e7483df5290e0221b9fbbbe6a75ec4afcb92fd5c46ee60b01"} Mar 12 13:44:04 crc kubenswrapper[4778]: I0312 13:44:04.399235 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555384-znhr8" Mar 12 13:44:04 crc kubenswrapper[4778]: I0312 13:44:04.440004 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fs6fg\" (UniqueName: \"kubernetes.io/projected/70dc8f5a-da90-4090-b630-a6a7bd438f64-kube-api-access-fs6fg\") pod \"70dc8f5a-da90-4090-b630-a6a7bd438f64\" (UID: \"70dc8f5a-da90-4090-b630-a6a7bd438f64\") " Mar 12 13:44:04 crc kubenswrapper[4778]: I0312 13:44:04.446451 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70dc8f5a-da90-4090-b630-a6a7bd438f64-kube-api-access-fs6fg" (OuterVolumeSpecName: "kube-api-access-fs6fg") pod "70dc8f5a-da90-4090-b630-a6a7bd438f64" (UID: "70dc8f5a-da90-4090-b630-a6a7bd438f64"). InnerVolumeSpecName "kube-api-access-fs6fg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:44:04 crc kubenswrapper[4778]: I0312 13:44:04.542793 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fs6fg\" (UniqueName: \"kubernetes.io/projected/70dc8f5a-da90-4090-b630-a6a7bd438f64-kube-api-access-fs6fg\") on node \"crc\" DevicePath \"\"" Mar 12 13:44:05 crc kubenswrapper[4778]: I0312 13:44:05.005673 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555384-znhr8" event={"ID":"70dc8f5a-da90-4090-b630-a6a7bd438f64","Type":"ContainerDied","Data":"cc8b1411f01c8677b6286c4be7cd084d142c4ecfab7deab72e77cb0b5ddcc743"} Mar 12 13:44:05 crc kubenswrapper[4778]: I0312 13:44:05.005713 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555384-znhr8" Mar 12 13:44:05 crc kubenswrapper[4778]: I0312 13:44:05.005730 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc8b1411f01c8677b6286c4be7cd084d142c4ecfab7deab72e77cb0b5ddcc743" Mar 12 13:44:05 crc kubenswrapper[4778]: I0312 13:44:05.457417 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555378-7skl9"] Mar 12 13:44:05 crc kubenswrapper[4778]: I0312 13:44:05.467703 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555378-7skl9"] Mar 12 13:44:06 crc kubenswrapper[4778]: I0312 13:44:06.266015 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="446002fc-0307-4c07-8744-630e76bee9aa" path="/var/lib/kubelet/pods/446002fc-0307-4c07-8744-630e76bee9aa/volumes" Mar 12 13:44:10 crc kubenswrapper[4778]: I0312 13:44:10.043297 4778 generic.go:334] "Generic (PLEG): container finished" podID="96ba9a1b-ae5f-4b42-b8eb-1f0e3656ae61" containerID="07b3a86e7eae89aab9028737c42527be6e9de56b23d4964dfd0e2644aa5dd557" exitCode=0 Mar 12 13:44:10 crc kubenswrapper[4778]: I0312 13:44:10.043350 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2xksx" event={"ID":"96ba9a1b-ae5f-4b42-b8eb-1f0e3656ae61","Type":"ContainerDied","Data":"07b3a86e7eae89aab9028737c42527be6e9de56b23d4964dfd0e2644aa5dd557"} Mar 12 13:44:11 crc kubenswrapper[4778]: I0312 13:44:11.445142 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2xksx" Mar 12 13:44:11 crc kubenswrapper[4778]: I0312 13:44:11.582307 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96ba9a1b-ae5f-4b42-b8eb-1f0e3656ae61-inventory\") pod \"96ba9a1b-ae5f-4b42-b8eb-1f0e3656ae61\" (UID: \"96ba9a1b-ae5f-4b42-b8eb-1f0e3656ae61\") " Mar 12 13:44:11 crc kubenswrapper[4778]: I0312 13:44:11.582396 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgvff\" (UniqueName: \"kubernetes.io/projected/96ba9a1b-ae5f-4b42-b8eb-1f0e3656ae61-kube-api-access-mgvff\") pod \"96ba9a1b-ae5f-4b42-b8eb-1f0e3656ae61\" (UID: \"96ba9a1b-ae5f-4b42-b8eb-1f0e3656ae61\") " Mar 12 13:44:11 crc kubenswrapper[4778]: I0312 13:44:11.582528 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96ba9a1b-ae5f-4b42-b8eb-1f0e3656ae61-ssh-key-openstack-edpm-ipam\") pod \"96ba9a1b-ae5f-4b42-b8eb-1f0e3656ae61\" (UID: \"96ba9a1b-ae5f-4b42-b8eb-1f0e3656ae61\") " Mar 12 13:44:11 crc kubenswrapper[4778]: I0312 13:44:11.588530 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96ba9a1b-ae5f-4b42-b8eb-1f0e3656ae61-kube-api-access-mgvff" (OuterVolumeSpecName: "kube-api-access-mgvff") pod "96ba9a1b-ae5f-4b42-b8eb-1f0e3656ae61" (UID: "96ba9a1b-ae5f-4b42-b8eb-1f0e3656ae61"). InnerVolumeSpecName "kube-api-access-mgvff". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:44:11 crc kubenswrapper[4778]: I0312 13:44:11.612248 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96ba9a1b-ae5f-4b42-b8eb-1f0e3656ae61-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "96ba9a1b-ae5f-4b42-b8eb-1f0e3656ae61" (UID: "96ba9a1b-ae5f-4b42-b8eb-1f0e3656ae61"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:44:11 crc kubenswrapper[4778]: I0312 13:44:11.613362 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96ba9a1b-ae5f-4b42-b8eb-1f0e3656ae61-inventory" (OuterVolumeSpecName: "inventory") pod "96ba9a1b-ae5f-4b42-b8eb-1f0e3656ae61" (UID: "96ba9a1b-ae5f-4b42-b8eb-1f0e3656ae61"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:44:11 crc kubenswrapper[4778]: I0312 13:44:11.685443 4778 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96ba9a1b-ae5f-4b42-b8eb-1f0e3656ae61-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 13:44:11 crc kubenswrapper[4778]: I0312 13:44:11.685499 4778 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96ba9a1b-ae5f-4b42-b8eb-1f0e3656ae61-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 13:44:11 crc kubenswrapper[4778]: I0312 13:44:11.685512 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgvff\" (UniqueName: \"kubernetes.io/projected/96ba9a1b-ae5f-4b42-b8eb-1f0e3656ae61-kube-api-access-mgvff\") on node \"crc\" DevicePath \"\"" Mar 12 13:44:12 crc kubenswrapper[4778]: I0312 13:44:12.062107 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2xksx" 
event={"ID":"96ba9a1b-ae5f-4b42-b8eb-1f0e3656ae61","Type":"ContainerDied","Data":"ed030c36635eddb22d286222cfae401d298d4b5f63f4f8d2c4b74f64fe09cc1a"} Mar 12 13:44:12 crc kubenswrapper[4778]: I0312 13:44:12.062523 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed030c36635eddb22d286222cfae401d298d4b5f63f4f8d2c4b74f64fe09cc1a" Mar 12 13:44:12 crc kubenswrapper[4778]: I0312 13:44:12.062278 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2xksx" Mar 12 13:44:12 crc kubenswrapper[4778]: I0312 13:44:12.174660 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4szjl"] Mar 12 13:44:12 crc kubenswrapper[4778]: E0312 13:44:12.175126 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70dc8f5a-da90-4090-b630-a6a7bd438f64" containerName="oc" Mar 12 13:44:12 crc kubenswrapper[4778]: I0312 13:44:12.175150 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="70dc8f5a-da90-4090-b630-a6a7bd438f64" containerName="oc" Mar 12 13:44:12 crc kubenswrapper[4778]: E0312 13:44:12.175201 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96ba9a1b-ae5f-4b42-b8eb-1f0e3656ae61" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Mar 12 13:44:12 crc kubenswrapper[4778]: I0312 13:44:12.175213 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="96ba9a1b-ae5f-4b42-b8eb-1f0e3656ae61" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Mar 12 13:44:12 crc kubenswrapper[4778]: I0312 13:44:12.175424 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="96ba9a1b-ae5f-4b42-b8eb-1f0e3656ae61" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Mar 12 13:44:12 crc kubenswrapper[4778]: I0312 13:44:12.175463 4778 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="70dc8f5a-da90-4090-b630-a6a7bd438f64" containerName="oc" Mar 12 13:44:12 crc kubenswrapper[4778]: I0312 13:44:12.176248 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4szjl" Mar 12 13:44:12 crc kubenswrapper[4778]: I0312 13:44:12.178796 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qn2vx" Mar 12 13:44:12 crc kubenswrapper[4778]: I0312 13:44:12.180626 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 13:44:12 crc kubenswrapper[4778]: I0312 13:44:12.180848 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 13:44:12 crc kubenswrapper[4778]: I0312 13:44:12.182307 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 13:44:12 crc kubenswrapper[4778]: I0312 13:44:12.187693 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4szjl"] Mar 12 13:44:12 crc kubenswrapper[4778]: I0312 13:44:12.301275 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lntxg\" (UniqueName: \"kubernetes.io/projected/5c5541f3-fb44-476b-91c2-b07dffe50894-kube-api-access-lntxg\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4szjl\" (UID: \"5c5541f3-fb44-476b-91c2-b07dffe50894\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4szjl" Mar 12 13:44:12 crc kubenswrapper[4778]: I0312 13:44:12.301358 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5c5541f3-fb44-476b-91c2-b07dffe50894-inventory\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-4szjl\" (UID: \"5c5541f3-fb44-476b-91c2-b07dffe50894\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4szjl" Mar 12 13:44:12 crc kubenswrapper[4778]: I0312 13:44:12.301415 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5c5541f3-fb44-476b-91c2-b07dffe50894-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4szjl\" (UID: \"5c5541f3-fb44-476b-91c2-b07dffe50894\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4szjl" Mar 12 13:44:12 crc kubenswrapper[4778]: I0312 13:44:12.403234 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lntxg\" (UniqueName: \"kubernetes.io/projected/5c5541f3-fb44-476b-91c2-b07dffe50894-kube-api-access-lntxg\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4szjl\" (UID: \"5c5541f3-fb44-476b-91c2-b07dffe50894\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4szjl" Mar 12 13:44:12 crc kubenswrapper[4778]: I0312 13:44:12.403351 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5c5541f3-fb44-476b-91c2-b07dffe50894-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4szjl\" (UID: \"5c5541f3-fb44-476b-91c2-b07dffe50894\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4szjl" Mar 12 13:44:12 crc kubenswrapper[4778]: I0312 13:44:12.403440 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5c5541f3-fb44-476b-91c2-b07dffe50894-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4szjl\" (UID: \"5c5541f3-fb44-476b-91c2-b07dffe50894\") " 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4szjl" Mar 12 13:44:12 crc kubenswrapper[4778]: I0312 13:44:12.409474 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5c5541f3-fb44-476b-91c2-b07dffe50894-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4szjl\" (UID: \"5c5541f3-fb44-476b-91c2-b07dffe50894\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4szjl" Mar 12 13:44:12 crc kubenswrapper[4778]: I0312 13:44:12.409687 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5c5541f3-fb44-476b-91c2-b07dffe50894-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4szjl\" (UID: \"5c5541f3-fb44-476b-91c2-b07dffe50894\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4szjl" Mar 12 13:44:12 crc kubenswrapper[4778]: I0312 13:44:12.424985 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lntxg\" (UniqueName: \"kubernetes.io/projected/5c5541f3-fb44-476b-91c2-b07dffe50894-kube-api-access-lntxg\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4szjl\" (UID: \"5c5541f3-fb44-476b-91c2-b07dffe50894\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4szjl" Mar 12 13:44:12 crc kubenswrapper[4778]: I0312 13:44:12.502316 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4szjl" Mar 12 13:44:13 crc kubenswrapper[4778]: I0312 13:44:13.025521 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4szjl"] Mar 12 13:44:13 crc kubenswrapper[4778]: I0312 13:44:13.076931 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4szjl" event={"ID":"5c5541f3-fb44-476b-91c2-b07dffe50894","Type":"ContainerStarted","Data":"2a03fc2efe0354ad242b25364cbd2eba2b14ebdbcdcea201fdb32f5a13b5d430"} Mar 12 13:44:15 crc kubenswrapper[4778]: I0312 13:44:15.096485 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4szjl" event={"ID":"5c5541f3-fb44-476b-91c2-b07dffe50894","Type":"ContainerStarted","Data":"f8a3ca31df85cf9da8faf63b19eca49715e39783019bbe431b8fd2f9f7f72be3"} Mar 12 13:44:15 crc kubenswrapper[4778]: I0312 13:44:15.114964 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4szjl" podStartSLOduration=1.747757874 podStartE2EDuration="3.114945926s" podCreationTimestamp="2026-03-12 13:44:12 +0000 UTC" firstStartedPulling="2026-03-12 13:44:13.020108487 +0000 UTC m=+2071.468803883" lastFinishedPulling="2026-03-12 13:44:14.387296539 +0000 UTC m=+2072.835991935" observedRunningTime="2026-03-12 13:44:15.112155557 +0000 UTC m=+2073.560850953" watchObservedRunningTime="2026-03-12 13:44:15.114945926 +0000 UTC m=+2073.563641322" Mar 12 13:44:18 crc kubenswrapper[4778]: I0312 13:44:18.211846 4778 scope.go:117] "RemoveContainer" containerID="bc734e634b97b1a5646716a6fc635d874255724a3ef890cee0802c7190db7d7c" Mar 12 13:44:18 crc kubenswrapper[4778]: I0312 13:44:18.246443 4778 scope.go:117] "RemoveContainer" 
containerID="bba17f86be2a56502271ccc560c6167ec323fcd74423bccb2a6479d1508bc7e8" Mar 12 13:44:18 crc kubenswrapper[4778]: I0312 13:44:18.316776 4778 scope.go:117] "RemoveContainer" containerID="be1aecf0f9c3a392b6320f1bb26caafd070dc71ba9db9be7a31ee5daf79e1a2d" Mar 12 13:44:18 crc kubenswrapper[4778]: I0312 13:44:18.382582 4778 scope.go:117] "RemoveContainer" containerID="099862fee239f9d58b6485a586d53c0613281de24cf1629f41917394af426901" Mar 12 13:44:18 crc kubenswrapper[4778]: I0312 13:44:18.408616 4778 scope.go:117] "RemoveContainer" containerID="2919ec7bf1dcd65b4aaec3b3c75478bba66c6d492f7b5c0064c9c993485c3e21" Mar 12 13:44:18 crc kubenswrapper[4778]: I0312 13:44:18.466577 4778 scope.go:117] "RemoveContainer" containerID="e2a8d1e05ff7ff80a86b71f26e5fb5c7484878b8a9632420829088d85ad0fbaf" Mar 12 13:44:18 crc kubenswrapper[4778]: I0312 13:44:18.528281 4778 scope.go:117] "RemoveContainer" containerID="37e7dd5198914cc6a22b8658dd88edbdbdabb2bfe43c9c4d07a686c73a997ca2" Mar 12 13:44:19 crc kubenswrapper[4778]: I0312 13:44:19.660303 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cfmvx"] Mar 12 13:44:19 crc kubenswrapper[4778]: I0312 13:44:19.664589 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cfmvx" Mar 12 13:44:19 crc kubenswrapper[4778]: I0312 13:44:19.677936 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cfmvx"] Mar 12 13:44:19 crc kubenswrapper[4778]: I0312 13:44:19.755240 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8php4\" (UniqueName: \"kubernetes.io/projected/76d361e0-0808-41e0-a659-d9977bce86de-kube-api-access-8php4\") pod \"certified-operators-cfmvx\" (UID: \"76d361e0-0808-41e0-a659-d9977bce86de\") " pod="openshift-marketplace/certified-operators-cfmvx" Mar 12 13:44:19 crc kubenswrapper[4778]: I0312 13:44:19.755400 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76d361e0-0808-41e0-a659-d9977bce86de-catalog-content\") pod \"certified-operators-cfmvx\" (UID: \"76d361e0-0808-41e0-a659-d9977bce86de\") " pod="openshift-marketplace/certified-operators-cfmvx" Mar 12 13:44:19 crc kubenswrapper[4778]: I0312 13:44:19.755528 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76d361e0-0808-41e0-a659-d9977bce86de-utilities\") pod \"certified-operators-cfmvx\" (UID: \"76d361e0-0808-41e0-a659-d9977bce86de\") " pod="openshift-marketplace/certified-operators-cfmvx" Mar 12 13:44:19 crc kubenswrapper[4778]: I0312 13:44:19.858201 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76d361e0-0808-41e0-a659-d9977bce86de-utilities\") pod \"certified-operators-cfmvx\" (UID: \"76d361e0-0808-41e0-a659-d9977bce86de\") " pod="openshift-marketplace/certified-operators-cfmvx" Mar 12 13:44:19 crc kubenswrapper[4778]: I0312 13:44:19.858289 4778 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-8php4\" (UniqueName: \"kubernetes.io/projected/76d361e0-0808-41e0-a659-d9977bce86de-kube-api-access-8php4\") pod \"certified-operators-cfmvx\" (UID: \"76d361e0-0808-41e0-a659-d9977bce86de\") " pod="openshift-marketplace/certified-operators-cfmvx" Mar 12 13:44:19 crc kubenswrapper[4778]: I0312 13:44:19.858428 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76d361e0-0808-41e0-a659-d9977bce86de-catalog-content\") pod \"certified-operators-cfmvx\" (UID: \"76d361e0-0808-41e0-a659-d9977bce86de\") " pod="openshift-marketplace/certified-operators-cfmvx" Mar 12 13:44:19 crc kubenswrapper[4778]: I0312 13:44:19.858762 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76d361e0-0808-41e0-a659-d9977bce86de-utilities\") pod \"certified-operators-cfmvx\" (UID: \"76d361e0-0808-41e0-a659-d9977bce86de\") " pod="openshift-marketplace/certified-operators-cfmvx" Mar 12 13:44:19 crc kubenswrapper[4778]: I0312 13:44:19.859136 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76d361e0-0808-41e0-a659-d9977bce86de-catalog-content\") pod \"certified-operators-cfmvx\" (UID: \"76d361e0-0808-41e0-a659-d9977bce86de\") " pod="openshift-marketplace/certified-operators-cfmvx" Mar 12 13:44:19 crc kubenswrapper[4778]: I0312 13:44:19.878398 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8php4\" (UniqueName: \"kubernetes.io/projected/76d361e0-0808-41e0-a659-d9977bce86de-kube-api-access-8php4\") pod \"certified-operators-cfmvx\" (UID: \"76d361e0-0808-41e0-a659-d9977bce86de\") " pod="openshift-marketplace/certified-operators-cfmvx" Mar 12 13:44:19 crc kubenswrapper[4778]: I0312 13:44:19.992403 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cfmvx" Mar 12 13:44:20 crc kubenswrapper[4778]: I0312 13:44:20.322359 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cfmvx"] Mar 12 13:44:21 crc kubenswrapper[4778]: I0312 13:44:21.149991 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cfmvx" event={"ID":"76d361e0-0808-41e0-a659-d9977bce86de","Type":"ContainerStarted","Data":"af4ac1b2a11604b423a1cb99788d39c58084666f21c326e7b0bc198fdab855e6"} Mar 12 13:44:22 crc kubenswrapper[4778]: I0312 13:44:22.053092 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zbbfg"] Mar 12 13:44:22 crc kubenswrapper[4778]: I0312 13:44:22.055042 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zbbfg" Mar 12 13:44:22 crc kubenswrapper[4778]: I0312 13:44:22.066516 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zbbfg"] Mar 12 13:44:22 crc kubenswrapper[4778]: I0312 13:44:22.159385 4778 generic.go:334] "Generic (PLEG): container finished" podID="76d361e0-0808-41e0-a659-d9977bce86de" containerID="28bd951139caec3680a15c4aded1c0bca6f1af5a51137c86d0e3f15a3053684e" exitCode=0 Mar 12 13:44:22 crc kubenswrapper[4778]: I0312 13:44:22.159447 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cfmvx" event={"ID":"76d361e0-0808-41e0-a659-d9977bce86de","Type":"ContainerDied","Data":"28bd951139caec3680a15c4aded1c0bca6f1af5a51137c86d0e3f15a3053684e"} Mar 12 13:44:22 crc kubenswrapper[4778]: I0312 13:44:22.224769 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8cc55b1-e6ed-4790-886c-fabe5917bf27-utilities\") pod \"community-operators-zbbfg\" (UID: 
\"c8cc55b1-e6ed-4790-886c-fabe5917bf27\") " pod="openshift-marketplace/community-operators-zbbfg" Mar 12 13:44:22 crc kubenswrapper[4778]: I0312 13:44:22.224851 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d6fn\" (UniqueName: \"kubernetes.io/projected/c8cc55b1-e6ed-4790-886c-fabe5917bf27-kube-api-access-8d6fn\") pod \"community-operators-zbbfg\" (UID: \"c8cc55b1-e6ed-4790-886c-fabe5917bf27\") " pod="openshift-marketplace/community-operators-zbbfg" Mar 12 13:44:22 crc kubenswrapper[4778]: I0312 13:44:22.225342 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8cc55b1-e6ed-4790-886c-fabe5917bf27-catalog-content\") pod \"community-operators-zbbfg\" (UID: \"c8cc55b1-e6ed-4790-886c-fabe5917bf27\") " pod="openshift-marketplace/community-operators-zbbfg" Mar 12 13:44:22 crc kubenswrapper[4778]: I0312 13:44:22.327322 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8cc55b1-e6ed-4790-886c-fabe5917bf27-catalog-content\") pod \"community-operators-zbbfg\" (UID: \"c8cc55b1-e6ed-4790-886c-fabe5917bf27\") " pod="openshift-marketplace/community-operators-zbbfg" Mar 12 13:44:22 crc kubenswrapper[4778]: I0312 13:44:22.327644 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8cc55b1-e6ed-4790-886c-fabe5917bf27-utilities\") pod \"community-operators-zbbfg\" (UID: \"c8cc55b1-e6ed-4790-886c-fabe5917bf27\") " pod="openshift-marketplace/community-operators-zbbfg" Mar 12 13:44:22 crc kubenswrapper[4778]: I0312 13:44:22.327674 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8d6fn\" (UniqueName: \"kubernetes.io/projected/c8cc55b1-e6ed-4790-886c-fabe5917bf27-kube-api-access-8d6fn\") pod 
\"community-operators-zbbfg\" (UID: \"c8cc55b1-e6ed-4790-886c-fabe5917bf27\") " pod="openshift-marketplace/community-operators-zbbfg" Mar 12 13:44:22 crc kubenswrapper[4778]: I0312 13:44:22.327835 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8cc55b1-e6ed-4790-886c-fabe5917bf27-catalog-content\") pod \"community-operators-zbbfg\" (UID: \"c8cc55b1-e6ed-4790-886c-fabe5917bf27\") " pod="openshift-marketplace/community-operators-zbbfg" Mar 12 13:44:22 crc kubenswrapper[4778]: I0312 13:44:22.328117 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8cc55b1-e6ed-4790-886c-fabe5917bf27-utilities\") pod \"community-operators-zbbfg\" (UID: \"c8cc55b1-e6ed-4790-886c-fabe5917bf27\") " pod="openshift-marketplace/community-operators-zbbfg" Mar 12 13:44:22 crc kubenswrapper[4778]: I0312 13:44:22.356031 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8d6fn\" (UniqueName: \"kubernetes.io/projected/c8cc55b1-e6ed-4790-886c-fabe5917bf27-kube-api-access-8d6fn\") pod \"community-operators-zbbfg\" (UID: \"c8cc55b1-e6ed-4790-886c-fabe5917bf27\") " pod="openshift-marketplace/community-operators-zbbfg" Mar 12 13:44:22 crc kubenswrapper[4778]: I0312 13:44:22.376270 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zbbfg" Mar 12 13:44:22 crc kubenswrapper[4778]: I0312 13:44:22.660310 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tsbbw"] Mar 12 13:44:22 crc kubenswrapper[4778]: I0312 13:44:22.664134 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tsbbw" Mar 12 13:44:22 crc kubenswrapper[4778]: I0312 13:44:22.671813 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tsbbw"] Mar 12 13:44:22 crc kubenswrapper[4778]: I0312 13:44:22.843754 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f9b6c4c-ccc9-42ac-9d52-64690d25a4d6-utilities\") pod \"redhat-marketplace-tsbbw\" (UID: \"0f9b6c4c-ccc9-42ac-9d52-64690d25a4d6\") " pod="openshift-marketplace/redhat-marketplace-tsbbw" Mar 12 13:44:22 crc kubenswrapper[4778]: I0312 13:44:22.843998 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wnw4\" (UniqueName: \"kubernetes.io/projected/0f9b6c4c-ccc9-42ac-9d52-64690d25a4d6-kube-api-access-2wnw4\") pod \"redhat-marketplace-tsbbw\" (UID: \"0f9b6c4c-ccc9-42ac-9d52-64690d25a4d6\") " pod="openshift-marketplace/redhat-marketplace-tsbbw" Mar 12 13:44:22 crc kubenswrapper[4778]: I0312 13:44:22.844045 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f9b6c4c-ccc9-42ac-9d52-64690d25a4d6-catalog-content\") pod \"redhat-marketplace-tsbbw\" (UID: \"0f9b6c4c-ccc9-42ac-9d52-64690d25a4d6\") " pod="openshift-marketplace/redhat-marketplace-tsbbw" Mar 12 13:44:22 crc kubenswrapper[4778]: I0312 13:44:22.900975 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zbbfg"] Mar 12 13:44:22 crc kubenswrapper[4778]: I0312 13:44:22.946306 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f9b6c4c-ccc9-42ac-9d52-64690d25a4d6-utilities\") pod \"redhat-marketplace-tsbbw\" (UID: \"0f9b6c4c-ccc9-42ac-9d52-64690d25a4d6\") " 
pod="openshift-marketplace/redhat-marketplace-tsbbw" Mar 12 13:44:22 crc kubenswrapper[4778]: I0312 13:44:22.946486 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wnw4\" (UniqueName: \"kubernetes.io/projected/0f9b6c4c-ccc9-42ac-9d52-64690d25a4d6-kube-api-access-2wnw4\") pod \"redhat-marketplace-tsbbw\" (UID: \"0f9b6c4c-ccc9-42ac-9d52-64690d25a4d6\") " pod="openshift-marketplace/redhat-marketplace-tsbbw" Mar 12 13:44:22 crc kubenswrapper[4778]: I0312 13:44:22.946531 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f9b6c4c-ccc9-42ac-9d52-64690d25a4d6-catalog-content\") pod \"redhat-marketplace-tsbbw\" (UID: \"0f9b6c4c-ccc9-42ac-9d52-64690d25a4d6\") " pod="openshift-marketplace/redhat-marketplace-tsbbw" Mar 12 13:44:22 crc kubenswrapper[4778]: I0312 13:44:22.947101 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f9b6c4c-ccc9-42ac-9d52-64690d25a4d6-catalog-content\") pod \"redhat-marketplace-tsbbw\" (UID: \"0f9b6c4c-ccc9-42ac-9d52-64690d25a4d6\") " pod="openshift-marketplace/redhat-marketplace-tsbbw" Mar 12 13:44:22 crc kubenswrapper[4778]: I0312 13:44:22.947474 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f9b6c4c-ccc9-42ac-9d52-64690d25a4d6-utilities\") pod \"redhat-marketplace-tsbbw\" (UID: \"0f9b6c4c-ccc9-42ac-9d52-64690d25a4d6\") " pod="openshift-marketplace/redhat-marketplace-tsbbw" Mar 12 13:44:22 crc kubenswrapper[4778]: I0312 13:44:22.965304 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wnw4\" (UniqueName: \"kubernetes.io/projected/0f9b6c4c-ccc9-42ac-9d52-64690d25a4d6-kube-api-access-2wnw4\") pod \"redhat-marketplace-tsbbw\" (UID: \"0f9b6c4c-ccc9-42ac-9d52-64690d25a4d6\") " 
pod="openshift-marketplace/redhat-marketplace-tsbbw" Mar 12 13:44:23 crc kubenswrapper[4778]: I0312 13:44:23.001167 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tsbbw" Mar 12 13:44:23 crc kubenswrapper[4778]: I0312 13:44:23.187025 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zbbfg" event={"ID":"c8cc55b1-e6ed-4790-886c-fabe5917bf27","Type":"ContainerStarted","Data":"7cfbf75bc1bea8190b4fd8a7b4f36c4f8056d3512bf0a0494d17fb32c82abce1"} Mar 12 13:44:23 crc kubenswrapper[4778]: I0312 13:44:23.461543 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tsbbw"] Mar 12 13:44:23 crc kubenswrapper[4778]: W0312 13:44:23.465055 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f9b6c4c_ccc9_42ac_9d52_64690d25a4d6.slice/crio-c74f7a8b7ab0e3f37ef2ae936c5f539034e20dd9a81db59b3381c8571e8dbe74 WatchSource:0}: Error finding container c74f7a8b7ab0e3f37ef2ae936c5f539034e20dd9a81db59b3381c8571e8dbe74: Status 404 returned error can't find the container with id c74f7a8b7ab0e3f37ef2ae936c5f539034e20dd9a81db59b3381c8571e8dbe74 Mar 12 13:44:24 crc kubenswrapper[4778]: I0312 13:44:24.206556 4778 generic.go:334] "Generic (PLEG): container finished" podID="76d361e0-0808-41e0-a659-d9977bce86de" containerID="276ec13767567c5795991c6f0969324f5d851bfcaf87a8fb83fbe5e9808be690" exitCode=0 Mar 12 13:44:24 crc kubenswrapper[4778]: I0312 13:44:24.206784 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cfmvx" event={"ID":"76d361e0-0808-41e0-a659-d9977bce86de","Type":"ContainerDied","Data":"276ec13767567c5795991c6f0969324f5d851bfcaf87a8fb83fbe5e9808be690"} Mar 12 13:44:24 crc kubenswrapper[4778]: I0312 13:44:24.209862 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-tsbbw" event={"ID":"0f9b6c4c-ccc9-42ac-9d52-64690d25a4d6","Type":"ContainerStarted","Data":"c74f7a8b7ab0e3f37ef2ae936c5f539034e20dd9a81db59b3381c8571e8dbe74"} Mar 12 13:44:24 crc kubenswrapper[4778]: I0312 13:44:24.215972 4778 generic.go:334] "Generic (PLEG): container finished" podID="c8cc55b1-e6ed-4790-886c-fabe5917bf27" containerID="a44a31875240c27026c8d5b3562efaf0a4ac960ee6a568ff9dac9567138bfecd" exitCode=0 Mar 12 13:44:24 crc kubenswrapper[4778]: I0312 13:44:24.216390 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zbbfg" event={"ID":"c8cc55b1-e6ed-4790-886c-fabe5917bf27","Type":"ContainerDied","Data":"a44a31875240c27026c8d5b3562efaf0a4ac960ee6a568ff9dac9567138bfecd"} Mar 12 13:44:25 crc kubenswrapper[4778]: I0312 13:44:25.228947 4778 generic.go:334] "Generic (PLEG): container finished" podID="0f9b6c4c-ccc9-42ac-9d52-64690d25a4d6" containerID="b80555cb1dcab02445567c3e276216686d7719c0ca0475be3608a7a9c0f26718" exitCode=0 Mar 12 13:44:25 crc kubenswrapper[4778]: I0312 13:44:25.229083 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tsbbw" event={"ID":"0f9b6c4c-ccc9-42ac-9d52-64690d25a4d6","Type":"ContainerDied","Data":"b80555cb1dcab02445567c3e276216686d7719c0ca0475be3608a7a9c0f26718"} Mar 12 13:44:26 crc kubenswrapper[4778]: I0312 13:44:26.239229 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tsbbw" event={"ID":"0f9b6c4c-ccc9-42ac-9d52-64690d25a4d6","Type":"ContainerStarted","Data":"e4ef6232820924fc9bfa9e22782695ff3623174a4e886e71c18dbcccc1dd4b88"} Mar 12 13:44:26 crc kubenswrapper[4778]: I0312 13:44:26.242054 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cfmvx" 
event={"ID":"76d361e0-0808-41e0-a659-d9977bce86de","Type":"ContainerStarted","Data":"a4cf397b97c3a80100f006e2a53fcbacdbb24a4168b3b2d40704492acf9072a9"} Mar 12 13:44:26 crc kubenswrapper[4778]: I0312 13:44:26.275962 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cfmvx" podStartSLOduration=4.36772276 podStartE2EDuration="7.275939492s" podCreationTimestamp="2026-03-12 13:44:19 +0000 UTC" firstStartedPulling="2026-03-12 13:44:22.161218631 +0000 UTC m=+2080.609914027" lastFinishedPulling="2026-03-12 13:44:25.069435363 +0000 UTC m=+2083.518130759" observedRunningTime="2026-03-12 13:44:26.273952676 +0000 UTC m=+2084.722648092" watchObservedRunningTime="2026-03-12 13:44:26.275939492 +0000 UTC m=+2084.724634888" Mar 12 13:44:27 crc kubenswrapper[4778]: I0312 13:44:27.253502 4778 generic.go:334] "Generic (PLEG): container finished" podID="0f9b6c4c-ccc9-42ac-9d52-64690d25a4d6" containerID="e4ef6232820924fc9bfa9e22782695ff3623174a4e886e71c18dbcccc1dd4b88" exitCode=0 Mar 12 13:44:27 crc kubenswrapper[4778]: I0312 13:44:27.253564 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tsbbw" event={"ID":"0f9b6c4c-ccc9-42ac-9d52-64690d25a4d6","Type":"ContainerDied","Data":"e4ef6232820924fc9bfa9e22782695ff3623174a4e886e71c18dbcccc1dd4b88"} Mar 12 13:44:28 crc kubenswrapper[4778]: I0312 13:44:28.557952 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 13:44:28 crc kubenswrapper[4778]: I0312 13:44:28.558271 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 13:44:29 crc kubenswrapper[4778]: I0312 13:44:29.994367 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cfmvx" Mar 12 13:44:29 crc kubenswrapper[4778]: I0312 13:44:29.994728 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cfmvx" Mar 12 13:44:31 crc kubenswrapper[4778]: I0312 13:44:31.037053 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-6b6mv"] Mar 12 13:44:31 crc kubenswrapper[4778]: I0312 13:44:31.047497 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-6b6mv"] Mar 12 13:44:31 crc kubenswrapper[4778]: I0312 13:44:31.157633 4778 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-cfmvx" podUID="76d361e0-0808-41e0-a659-d9977bce86de" containerName="registry-server" probeResult="failure" output=< Mar 12 13:44:31 crc kubenswrapper[4778]: timeout: failed to connect service ":50051" within 1s Mar 12 13:44:31 crc kubenswrapper[4778]: > Mar 12 13:44:31 crc kubenswrapper[4778]: I0312 13:44:31.296083 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tsbbw" event={"ID":"0f9b6c4c-ccc9-42ac-9d52-64690d25a4d6","Type":"ContainerStarted","Data":"53b4b05091c277f65d760aedbdd673f748699fdc3c86d1184ac2f57494d75369"} Mar 12 13:44:31 crc kubenswrapper[4778]: I0312 13:44:31.299557 4778 generic.go:334] "Generic (PLEG): container finished" podID="c8cc55b1-e6ed-4790-886c-fabe5917bf27" containerID="bdf54c6d37ca16db7981b38aa8bdf481e8ce434ef1861261a6875f0a169c6607" exitCode=0 Mar 12 13:44:31 crc kubenswrapper[4778]: I0312 13:44:31.299652 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-zbbfg" event={"ID":"c8cc55b1-e6ed-4790-886c-fabe5917bf27","Type":"ContainerDied","Data":"bdf54c6d37ca16db7981b38aa8bdf481e8ce434ef1861261a6875f0a169c6607"} Mar 12 13:44:31 crc kubenswrapper[4778]: I0312 13:44:31.319713 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tsbbw" podStartSLOduration=4.233053285 podStartE2EDuration="9.319694152s" podCreationTimestamp="2026-03-12 13:44:22 +0000 UTC" firstStartedPulling="2026-03-12 13:44:25.234139002 +0000 UTC m=+2083.682834418" lastFinishedPulling="2026-03-12 13:44:30.320779889 +0000 UTC m=+2088.769475285" observedRunningTime="2026-03-12 13:44:31.317011326 +0000 UTC m=+2089.765706732" watchObservedRunningTime="2026-03-12 13:44:31.319694152 +0000 UTC m=+2089.768389548" Mar 12 13:44:32 crc kubenswrapper[4778]: I0312 13:44:32.266367 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe24691f-9019-44ec-85bf-b477c53f05ec" path="/var/lib/kubelet/pods/fe24691f-9019-44ec-85bf-b477c53f05ec/volumes" Mar 12 13:44:32 crc kubenswrapper[4778]: I0312 13:44:32.316569 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zbbfg" event={"ID":"c8cc55b1-e6ed-4790-886c-fabe5917bf27","Type":"ContainerStarted","Data":"ed078967e33cb44c74a365a9804f9a8509ee01d3f7a8039f9f7b8f3366ab7aae"} Mar 12 13:44:32 crc kubenswrapper[4778]: I0312 13:44:32.338860 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zbbfg" podStartSLOduration=3.8613881770000003 podStartE2EDuration="10.338837998s" podCreationTimestamp="2026-03-12 13:44:22 +0000 UTC" firstStartedPulling="2026-03-12 13:44:25.231752284 +0000 UTC m=+2083.680447690" lastFinishedPulling="2026-03-12 13:44:31.709202115 +0000 UTC m=+2090.157897511" observedRunningTime="2026-03-12 13:44:32.337675555 +0000 UTC m=+2090.786370951" 
watchObservedRunningTime="2026-03-12 13:44:32.338837998 +0000 UTC m=+2090.787533394" Mar 12 13:44:32 crc kubenswrapper[4778]: I0312 13:44:32.376817 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zbbfg" Mar 12 13:44:32 crc kubenswrapper[4778]: I0312 13:44:32.376882 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zbbfg" Mar 12 13:44:33 crc kubenswrapper[4778]: I0312 13:44:33.002040 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tsbbw" Mar 12 13:44:33 crc kubenswrapper[4778]: I0312 13:44:33.002290 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tsbbw" Mar 12 13:44:33 crc kubenswrapper[4778]: I0312 13:44:33.425515 4778 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-zbbfg" podUID="c8cc55b1-e6ed-4790-886c-fabe5917bf27" containerName="registry-server" probeResult="failure" output=< Mar 12 13:44:33 crc kubenswrapper[4778]: timeout: failed to connect service ":50051" within 1s Mar 12 13:44:33 crc kubenswrapper[4778]: > Mar 12 13:44:34 crc kubenswrapper[4778]: I0312 13:44:34.046864 4778 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-tsbbw" podUID="0f9b6c4c-ccc9-42ac-9d52-64690d25a4d6" containerName="registry-server" probeResult="failure" output=< Mar 12 13:44:34 crc kubenswrapper[4778]: timeout: failed to connect service ":50051" within 1s Mar 12 13:44:34 crc kubenswrapper[4778]: > Mar 12 13:44:41 crc kubenswrapper[4778]: I0312 13:44:41.067413 4778 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-cfmvx" podUID="76d361e0-0808-41e0-a659-d9977bce86de" containerName="registry-server" probeResult="failure" output=< Mar 12 13:44:41 crc kubenswrapper[4778]: 
timeout: failed to connect service ":50051" within 1s Mar 12 13:44:41 crc kubenswrapper[4778]: > Mar 12 13:44:42 crc kubenswrapper[4778]: I0312 13:44:42.431132 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zbbfg" Mar 12 13:44:42 crc kubenswrapper[4778]: I0312 13:44:42.480366 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zbbfg" Mar 12 13:44:42 crc kubenswrapper[4778]: I0312 13:44:42.574119 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zbbfg"] Mar 12 13:44:42 crc kubenswrapper[4778]: I0312 13:44:42.670238 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-scbxn"] Mar 12 13:44:42 crc kubenswrapper[4778]: I0312 13:44:42.670544 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-scbxn" podUID="f2f91915-3841-4662-88e4-82a22df0b131" containerName="registry-server" containerID="cri-o://b5fb35881f73fed40f0d045e20b298d8e653b8e77b271e499bac524ab74b5a76" gracePeriod=2 Mar 12 13:44:43 crc kubenswrapper[4778]: I0312 13:44:43.216699 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-scbxn" Mar 12 13:44:43 crc kubenswrapper[4778]: I0312 13:44:43.317933 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2f91915-3841-4662-88e4-82a22df0b131-catalog-content\") pod \"f2f91915-3841-4662-88e4-82a22df0b131\" (UID: \"f2f91915-3841-4662-88e4-82a22df0b131\") " Mar 12 13:44:43 crc kubenswrapper[4778]: I0312 13:44:43.318074 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpxbz\" (UniqueName: \"kubernetes.io/projected/f2f91915-3841-4662-88e4-82a22df0b131-kube-api-access-xpxbz\") pod \"f2f91915-3841-4662-88e4-82a22df0b131\" (UID: \"f2f91915-3841-4662-88e4-82a22df0b131\") " Mar 12 13:44:43 crc kubenswrapper[4778]: I0312 13:44:43.318166 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2f91915-3841-4662-88e4-82a22df0b131-utilities\") pod \"f2f91915-3841-4662-88e4-82a22df0b131\" (UID: \"f2f91915-3841-4662-88e4-82a22df0b131\") " Mar 12 13:44:43 crc kubenswrapper[4778]: I0312 13:44:43.319495 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2f91915-3841-4662-88e4-82a22df0b131-utilities" (OuterVolumeSpecName: "utilities") pod "f2f91915-3841-4662-88e4-82a22df0b131" (UID: "f2f91915-3841-4662-88e4-82a22df0b131"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:44:43 crc kubenswrapper[4778]: I0312 13:44:43.339441 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2f91915-3841-4662-88e4-82a22df0b131-kube-api-access-xpxbz" (OuterVolumeSpecName: "kube-api-access-xpxbz") pod "f2f91915-3841-4662-88e4-82a22df0b131" (UID: "f2f91915-3841-4662-88e4-82a22df0b131"). InnerVolumeSpecName "kube-api-access-xpxbz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:44:43 crc kubenswrapper[4778]: I0312 13:44:43.424446 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpxbz\" (UniqueName: \"kubernetes.io/projected/f2f91915-3841-4662-88e4-82a22df0b131-kube-api-access-xpxbz\") on node \"crc\" DevicePath \"\"" Mar 12 13:44:43 crc kubenswrapper[4778]: I0312 13:44:43.424489 4778 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2f91915-3841-4662-88e4-82a22df0b131-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 13:44:43 crc kubenswrapper[4778]: I0312 13:44:43.431342 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2f91915-3841-4662-88e4-82a22df0b131-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f2f91915-3841-4662-88e4-82a22df0b131" (UID: "f2f91915-3841-4662-88e4-82a22df0b131"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:44:43 crc kubenswrapper[4778]: I0312 13:44:43.475396 4778 generic.go:334] "Generic (PLEG): container finished" podID="f2f91915-3841-4662-88e4-82a22df0b131" containerID="b5fb35881f73fed40f0d045e20b298d8e653b8e77b271e499bac524ab74b5a76" exitCode=0 Mar 12 13:44:43 crc kubenswrapper[4778]: I0312 13:44:43.476310 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-scbxn" Mar 12 13:44:43 crc kubenswrapper[4778]: I0312 13:44:43.476550 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-scbxn" event={"ID":"f2f91915-3841-4662-88e4-82a22df0b131","Type":"ContainerDied","Data":"b5fb35881f73fed40f0d045e20b298d8e653b8e77b271e499bac524ab74b5a76"} Mar 12 13:44:43 crc kubenswrapper[4778]: I0312 13:44:43.476590 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-scbxn" event={"ID":"f2f91915-3841-4662-88e4-82a22df0b131","Type":"ContainerDied","Data":"a19d957b7ce97a07ce6c0132cc4944c7bac635fad459f95aeb77803a9db2f905"} Mar 12 13:44:43 crc kubenswrapper[4778]: I0312 13:44:43.476611 4778 scope.go:117] "RemoveContainer" containerID="b5fb35881f73fed40f0d045e20b298d8e653b8e77b271e499bac524ab74b5a76" Mar 12 13:44:43 crc kubenswrapper[4778]: I0312 13:44:43.513313 4778 scope.go:117] "RemoveContainer" containerID="814289091d5196e8cb90a27db5c8b7b0001e258e89f3f752a355b1aba2fbc07e" Mar 12 13:44:43 crc kubenswrapper[4778]: I0312 13:44:43.528466 4778 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2f91915-3841-4662-88e4-82a22df0b131-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 13:44:43 crc kubenswrapper[4778]: I0312 13:44:43.547794 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-scbxn"] Mar 12 13:44:43 crc kubenswrapper[4778]: I0312 13:44:43.560857 4778 scope.go:117] "RemoveContainer" containerID="10d3561207dda57207dda1824471a89613d2f3cdc7e885ebdb851a3821e79c62" Mar 12 13:44:43 crc kubenswrapper[4778]: I0312 13:44:43.567616 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-scbxn"] Mar 12 13:44:43 crc kubenswrapper[4778]: I0312 13:44:43.628626 4778 scope.go:117] "RemoveContainer" 
containerID="b5fb35881f73fed40f0d045e20b298d8e653b8e77b271e499bac524ab74b5a76" Mar 12 13:44:43 crc kubenswrapper[4778]: E0312 13:44:43.629328 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5fb35881f73fed40f0d045e20b298d8e653b8e77b271e499bac524ab74b5a76\": container with ID starting with b5fb35881f73fed40f0d045e20b298d8e653b8e77b271e499bac524ab74b5a76 not found: ID does not exist" containerID="b5fb35881f73fed40f0d045e20b298d8e653b8e77b271e499bac524ab74b5a76" Mar 12 13:44:43 crc kubenswrapper[4778]: I0312 13:44:43.629372 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5fb35881f73fed40f0d045e20b298d8e653b8e77b271e499bac524ab74b5a76"} err="failed to get container status \"b5fb35881f73fed40f0d045e20b298d8e653b8e77b271e499bac524ab74b5a76\": rpc error: code = NotFound desc = could not find container \"b5fb35881f73fed40f0d045e20b298d8e653b8e77b271e499bac524ab74b5a76\": container with ID starting with b5fb35881f73fed40f0d045e20b298d8e653b8e77b271e499bac524ab74b5a76 not found: ID does not exist" Mar 12 13:44:43 crc kubenswrapper[4778]: I0312 13:44:43.629401 4778 scope.go:117] "RemoveContainer" containerID="814289091d5196e8cb90a27db5c8b7b0001e258e89f3f752a355b1aba2fbc07e" Mar 12 13:44:43 crc kubenswrapper[4778]: E0312 13:44:43.631416 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"814289091d5196e8cb90a27db5c8b7b0001e258e89f3f752a355b1aba2fbc07e\": container with ID starting with 814289091d5196e8cb90a27db5c8b7b0001e258e89f3f752a355b1aba2fbc07e not found: ID does not exist" containerID="814289091d5196e8cb90a27db5c8b7b0001e258e89f3f752a355b1aba2fbc07e" Mar 12 13:44:43 crc kubenswrapper[4778]: I0312 13:44:43.631462 4778 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"814289091d5196e8cb90a27db5c8b7b0001e258e89f3f752a355b1aba2fbc07e"} err="failed to get container status \"814289091d5196e8cb90a27db5c8b7b0001e258e89f3f752a355b1aba2fbc07e\": rpc error: code = NotFound desc = could not find container \"814289091d5196e8cb90a27db5c8b7b0001e258e89f3f752a355b1aba2fbc07e\": container with ID starting with 814289091d5196e8cb90a27db5c8b7b0001e258e89f3f752a355b1aba2fbc07e not found: ID does not exist" Mar 12 13:44:43 crc kubenswrapper[4778]: I0312 13:44:43.631495 4778 scope.go:117] "RemoveContainer" containerID="10d3561207dda57207dda1824471a89613d2f3cdc7e885ebdb851a3821e79c62" Mar 12 13:44:43 crc kubenswrapper[4778]: E0312 13:44:43.632342 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10d3561207dda57207dda1824471a89613d2f3cdc7e885ebdb851a3821e79c62\": container with ID starting with 10d3561207dda57207dda1824471a89613d2f3cdc7e885ebdb851a3821e79c62 not found: ID does not exist" containerID="10d3561207dda57207dda1824471a89613d2f3cdc7e885ebdb851a3821e79c62" Mar 12 13:44:43 crc kubenswrapper[4778]: I0312 13:44:43.632371 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10d3561207dda57207dda1824471a89613d2f3cdc7e885ebdb851a3821e79c62"} err="failed to get container status \"10d3561207dda57207dda1824471a89613d2f3cdc7e885ebdb851a3821e79c62\": rpc error: code = NotFound desc = could not find container \"10d3561207dda57207dda1824471a89613d2f3cdc7e885ebdb851a3821e79c62\": container with ID starting with 10d3561207dda57207dda1824471a89613d2f3cdc7e885ebdb851a3821e79c62 not found: ID does not exist" Mar 12 13:44:44 crc kubenswrapper[4778]: I0312 13:44:44.145761 4778 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-tsbbw" podUID="0f9b6c4c-ccc9-42ac-9d52-64690d25a4d6" containerName="registry-server" probeResult="failure" output=< Mar 12 13:44:44 crc 
kubenswrapper[4778]: timeout: failed to connect service ":50051" within 1s Mar 12 13:44:44 crc kubenswrapper[4778]: > Mar 12 13:44:44 crc kubenswrapper[4778]: I0312 13:44:44.266767 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2f91915-3841-4662-88e4-82a22df0b131" path="/var/lib/kubelet/pods/f2f91915-3841-4662-88e4-82a22df0b131/volumes" Mar 12 13:44:50 crc kubenswrapper[4778]: I0312 13:44:50.046574 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cfmvx" Mar 12 13:44:50 crc kubenswrapper[4778]: I0312 13:44:50.138794 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cfmvx" Mar 12 13:44:50 crc kubenswrapper[4778]: I0312 13:44:50.859999 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cfmvx"] Mar 12 13:44:51 crc kubenswrapper[4778]: I0312 13:44:51.543156 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cfmvx" podUID="76d361e0-0808-41e0-a659-d9977bce86de" containerName="registry-server" containerID="cri-o://a4cf397b97c3a80100f006e2a53fcbacdbb24a4168b3b2d40704492acf9072a9" gracePeriod=2 Mar 12 13:44:51 crc kubenswrapper[4778]: I0312 13:44:51.994760 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cfmvx" Mar 12 13:44:52 crc kubenswrapper[4778]: I0312 13:44:52.110270 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76d361e0-0808-41e0-a659-d9977bce86de-catalog-content\") pod \"76d361e0-0808-41e0-a659-d9977bce86de\" (UID: \"76d361e0-0808-41e0-a659-d9977bce86de\") " Mar 12 13:44:52 crc kubenswrapper[4778]: I0312 13:44:52.110312 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76d361e0-0808-41e0-a659-d9977bce86de-utilities\") pod \"76d361e0-0808-41e0-a659-d9977bce86de\" (UID: \"76d361e0-0808-41e0-a659-d9977bce86de\") " Mar 12 13:44:52 crc kubenswrapper[4778]: I0312 13:44:52.110441 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8php4\" (UniqueName: \"kubernetes.io/projected/76d361e0-0808-41e0-a659-d9977bce86de-kube-api-access-8php4\") pod \"76d361e0-0808-41e0-a659-d9977bce86de\" (UID: \"76d361e0-0808-41e0-a659-d9977bce86de\") " Mar 12 13:44:52 crc kubenswrapper[4778]: I0312 13:44:52.111297 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76d361e0-0808-41e0-a659-d9977bce86de-utilities" (OuterVolumeSpecName: "utilities") pod "76d361e0-0808-41e0-a659-d9977bce86de" (UID: "76d361e0-0808-41e0-a659-d9977bce86de"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:44:52 crc kubenswrapper[4778]: I0312 13:44:52.115693 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76d361e0-0808-41e0-a659-d9977bce86de-kube-api-access-8php4" (OuterVolumeSpecName: "kube-api-access-8php4") pod "76d361e0-0808-41e0-a659-d9977bce86de" (UID: "76d361e0-0808-41e0-a659-d9977bce86de"). InnerVolumeSpecName "kube-api-access-8php4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:44:52 crc kubenswrapper[4778]: I0312 13:44:52.163849 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76d361e0-0808-41e0-a659-d9977bce86de-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "76d361e0-0808-41e0-a659-d9977bce86de" (UID: "76d361e0-0808-41e0-a659-d9977bce86de"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:44:52 crc kubenswrapper[4778]: I0312 13:44:52.212311 4778 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76d361e0-0808-41e0-a659-d9977bce86de-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 13:44:52 crc kubenswrapper[4778]: I0312 13:44:52.212353 4778 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76d361e0-0808-41e0-a659-d9977bce86de-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 13:44:52 crc kubenswrapper[4778]: I0312 13:44:52.212368 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8php4\" (UniqueName: \"kubernetes.io/projected/76d361e0-0808-41e0-a659-d9977bce86de-kube-api-access-8php4\") on node \"crc\" DevicePath \"\"" Mar 12 13:44:52 crc kubenswrapper[4778]: I0312 13:44:52.552974 4778 generic.go:334] "Generic (PLEG): container finished" podID="76d361e0-0808-41e0-a659-d9977bce86de" containerID="a4cf397b97c3a80100f006e2a53fcbacdbb24a4168b3b2d40704492acf9072a9" exitCode=0 Mar 12 13:44:52 crc kubenswrapper[4778]: I0312 13:44:52.553048 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cfmvx" Mar 12 13:44:52 crc kubenswrapper[4778]: I0312 13:44:52.553038 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cfmvx" event={"ID":"76d361e0-0808-41e0-a659-d9977bce86de","Type":"ContainerDied","Data":"a4cf397b97c3a80100f006e2a53fcbacdbb24a4168b3b2d40704492acf9072a9"} Mar 12 13:44:52 crc kubenswrapper[4778]: I0312 13:44:52.553389 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cfmvx" event={"ID":"76d361e0-0808-41e0-a659-d9977bce86de","Type":"ContainerDied","Data":"af4ac1b2a11604b423a1cb99788d39c58084666f21c326e7b0bc198fdab855e6"} Mar 12 13:44:52 crc kubenswrapper[4778]: I0312 13:44:52.553407 4778 scope.go:117] "RemoveContainer" containerID="a4cf397b97c3a80100f006e2a53fcbacdbb24a4168b3b2d40704492acf9072a9" Mar 12 13:44:52 crc kubenswrapper[4778]: I0312 13:44:52.576170 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cfmvx"] Mar 12 13:44:52 crc kubenswrapper[4778]: I0312 13:44:52.580073 4778 scope.go:117] "RemoveContainer" containerID="276ec13767567c5795991c6f0969324f5d851bfcaf87a8fb83fbe5e9808be690" Mar 12 13:44:52 crc kubenswrapper[4778]: I0312 13:44:52.584825 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cfmvx"] Mar 12 13:44:52 crc kubenswrapper[4778]: I0312 13:44:52.604941 4778 scope.go:117] "RemoveContainer" containerID="28bd951139caec3680a15c4aded1c0bca6f1af5a51137c86d0e3f15a3053684e" Mar 12 13:44:52 crc kubenswrapper[4778]: I0312 13:44:52.646147 4778 scope.go:117] "RemoveContainer" containerID="a4cf397b97c3a80100f006e2a53fcbacdbb24a4168b3b2d40704492acf9072a9" Mar 12 13:44:52 crc kubenswrapper[4778]: E0312 13:44:52.646724 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"a4cf397b97c3a80100f006e2a53fcbacdbb24a4168b3b2d40704492acf9072a9\": container with ID starting with a4cf397b97c3a80100f006e2a53fcbacdbb24a4168b3b2d40704492acf9072a9 not found: ID does not exist" containerID="a4cf397b97c3a80100f006e2a53fcbacdbb24a4168b3b2d40704492acf9072a9" Mar 12 13:44:52 crc kubenswrapper[4778]: I0312 13:44:52.646770 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4cf397b97c3a80100f006e2a53fcbacdbb24a4168b3b2d40704492acf9072a9"} err="failed to get container status \"a4cf397b97c3a80100f006e2a53fcbacdbb24a4168b3b2d40704492acf9072a9\": rpc error: code = NotFound desc = could not find container \"a4cf397b97c3a80100f006e2a53fcbacdbb24a4168b3b2d40704492acf9072a9\": container with ID starting with a4cf397b97c3a80100f006e2a53fcbacdbb24a4168b3b2d40704492acf9072a9 not found: ID does not exist" Mar 12 13:44:52 crc kubenswrapper[4778]: I0312 13:44:52.646798 4778 scope.go:117] "RemoveContainer" containerID="276ec13767567c5795991c6f0969324f5d851bfcaf87a8fb83fbe5e9808be690" Mar 12 13:44:52 crc kubenswrapper[4778]: E0312 13:44:52.647206 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"276ec13767567c5795991c6f0969324f5d851bfcaf87a8fb83fbe5e9808be690\": container with ID starting with 276ec13767567c5795991c6f0969324f5d851bfcaf87a8fb83fbe5e9808be690 not found: ID does not exist" containerID="276ec13767567c5795991c6f0969324f5d851bfcaf87a8fb83fbe5e9808be690" Mar 12 13:44:52 crc kubenswrapper[4778]: I0312 13:44:52.647241 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"276ec13767567c5795991c6f0969324f5d851bfcaf87a8fb83fbe5e9808be690"} err="failed to get container status \"276ec13767567c5795991c6f0969324f5d851bfcaf87a8fb83fbe5e9808be690\": rpc error: code = NotFound desc = could not find container \"276ec13767567c5795991c6f0969324f5d851bfcaf87a8fb83fbe5e9808be690\": container with ID 
starting with 276ec13767567c5795991c6f0969324f5d851bfcaf87a8fb83fbe5e9808be690 not found: ID does not exist" Mar 12 13:44:52 crc kubenswrapper[4778]: I0312 13:44:52.647259 4778 scope.go:117] "RemoveContainer" containerID="28bd951139caec3680a15c4aded1c0bca6f1af5a51137c86d0e3f15a3053684e" Mar 12 13:44:52 crc kubenswrapper[4778]: E0312 13:44:52.647563 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28bd951139caec3680a15c4aded1c0bca6f1af5a51137c86d0e3f15a3053684e\": container with ID starting with 28bd951139caec3680a15c4aded1c0bca6f1af5a51137c86d0e3f15a3053684e not found: ID does not exist" containerID="28bd951139caec3680a15c4aded1c0bca6f1af5a51137c86d0e3f15a3053684e" Mar 12 13:44:52 crc kubenswrapper[4778]: I0312 13:44:52.647591 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28bd951139caec3680a15c4aded1c0bca6f1af5a51137c86d0e3f15a3053684e"} err="failed to get container status \"28bd951139caec3680a15c4aded1c0bca6f1af5a51137c86d0e3f15a3053684e\": rpc error: code = NotFound desc = could not find container \"28bd951139caec3680a15c4aded1c0bca6f1af5a51137c86d0e3f15a3053684e\": container with ID starting with 28bd951139caec3680a15c4aded1c0bca6f1af5a51137c86d0e3f15a3053684e not found: ID does not exist" Mar 12 13:44:53 crc kubenswrapper[4778]: I0312 13:44:53.047613 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tsbbw" Mar 12 13:44:53 crc kubenswrapper[4778]: I0312 13:44:53.093561 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tsbbw" Mar 12 13:44:54 crc kubenswrapper[4778]: I0312 13:44:54.062396 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-qqx6r"] Mar 12 13:44:54 crc kubenswrapper[4778]: I0312 13:44:54.079207 4778 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/nova-cell0-cell-mapping-qqx6r"] Mar 12 13:44:54 crc kubenswrapper[4778]: I0312 13:44:54.263787 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76d361e0-0808-41e0-a659-d9977bce86de" path="/var/lib/kubelet/pods/76d361e0-0808-41e0-a659-d9977bce86de/volumes" Mar 12 13:44:54 crc kubenswrapper[4778]: I0312 13:44:54.264485 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98a74774-1415-43d1-b278-bead87ab4385" path="/var/lib/kubelet/pods/98a74774-1415-43d1-b278-bead87ab4385/volumes" Mar 12 13:44:55 crc kubenswrapper[4778]: I0312 13:44:55.027388 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7dlt6"] Mar 12 13:44:55 crc kubenswrapper[4778]: I0312 13:44:55.037374 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7dlt6"] Mar 12 13:44:56 crc kubenswrapper[4778]: I0312 13:44:56.272484 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58dfb2fb-928e-46de-90dd-481c91a7727c" path="/var/lib/kubelet/pods/58dfb2fb-928e-46de-90dd-481c91a7727c/volumes" Mar 12 13:44:56 crc kubenswrapper[4778]: I0312 13:44:56.279983 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tsbbw"] Mar 12 13:44:56 crc kubenswrapper[4778]: I0312 13:44:56.280286 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tsbbw" podUID="0f9b6c4c-ccc9-42ac-9d52-64690d25a4d6" containerName="registry-server" containerID="cri-o://53b4b05091c277f65d760aedbdd673f748699fdc3c86d1184ac2f57494d75369" gracePeriod=2 Mar 12 13:44:56 crc kubenswrapper[4778]: I0312 13:44:56.596300 4778 generic.go:334] "Generic (PLEG): container finished" podID="0f9b6c4c-ccc9-42ac-9d52-64690d25a4d6" containerID="53b4b05091c277f65d760aedbdd673f748699fdc3c86d1184ac2f57494d75369" exitCode=0 Mar 12 13:44:56 crc kubenswrapper[4778]: I0312 
13:44:56.596349 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tsbbw" event={"ID":"0f9b6c4c-ccc9-42ac-9d52-64690d25a4d6","Type":"ContainerDied","Data":"53b4b05091c277f65d760aedbdd673f748699fdc3c86d1184ac2f57494d75369"} Mar 12 13:44:56 crc kubenswrapper[4778]: I0312 13:44:56.750273 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tsbbw" Mar 12 13:44:56 crc kubenswrapper[4778]: I0312 13:44:56.800064 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f9b6c4c-ccc9-42ac-9d52-64690d25a4d6-utilities\") pod \"0f9b6c4c-ccc9-42ac-9d52-64690d25a4d6\" (UID: \"0f9b6c4c-ccc9-42ac-9d52-64690d25a4d6\") " Mar 12 13:44:56 crc kubenswrapper[4778]: I0312 13:44:56.800280 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wnw4\" (UniqueName: \"kubernetes.io/projected/0f9b6c4c-ccc9-42ac-9d52-64690d25a4d6-kube-api-access-2wnw4\") pod \"0f9b6c4c-ccc9-42ac-9d52-64690d25a4d6\" (UID: \"0f9b6c4c-ccc9-42ac-9d52-64690d25a4d6\") " Mar 12 13:44:56 crc kubenswrapper[4778]: I0312 13:44:56.800392 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f9b6c4c-ccc9-42ac-9d52-64690d25a4d6-catalog-content\") pod \"0f9b6c4c-ccc9-42ac-9d52-64690d25a4d6\" (UID: \"0f9b6c4c-ccc9-42ac-9d52-64690d25a4d6\") " Mar 12 13:44:56 crc kubenswrapper[4778]: I0312 13:44:56.801074 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f9b6c4c-ccc9-42ac-9d52-64690d25a4d6-utilities" (OuterVolumeSpecName: "utilities") pod "0f9b6c4c-ccc9-42ac-9d52-64690d25a4d6" (UID: "0f9b6c4c-ccc9-42ac-9d52-64690d25a4d6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:44:56 crc kubenswrapper[4778]: I0312 13:44:56.819394 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f9b6c4c-ccc9-42ac-9d52-64690d25a4d6-kube-api-access-2wnw4" (OuterVolumeSpecName: "kube-api-access-2wnw4") pod "0f9b6c4c-ccc9-42ac-9d52-64690d25a4d6" (UID: "0f9b6c4c-ccc9-42ac-9d52-64690d25a4d6"). InnerVolumeSpecName "kube-api-access-2wnw4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:44:56 crc kubenswrapper[4778]: I0312 13:44:56.829109 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f9b6c4c-ccc9-42ac-9d52-64690d25a4d6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0f9b6c4c-ccc9-42ac-9d52-64690d25a4d6" (UID: "0f9b6c4c-ccc9-42ac-9d52-64690d25a4d6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:44:56 crc kubenswrapper[4778]: I0312 13:44:56.902991 4778 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f9b6c4c-ccc9-42ac-9d52-64690d25a4d6-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 13:44:56 crc kubenswrapper[4778]: I0312 13:44:56.903258 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wnw4\" (UniqueName: \"kubernetes.io/projected/0f9b6c4c-ccc9-42ac-9d52-64690d25a4d6-kube-api-access-2wnw4\") on node \"crc\" DevicePath \"\"" Mar 12 13:44:56 crc kubenswrapper[4778]: I0312 13:44:56.903392 4778 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f9b6c4c-ccc9-42ac-9d52-64690d25a4d6-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 13:44:57 crc kubenswrapper[4778]: I0312 13:44:57.606143 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tsbbw" 
event={"ID":"0f9b6c4c-ccc9-42ac-9d52-64690d25a4d6","Type":"ContainerDied","Data":"c74f7a8b7ab0e3f37ef2ae936c5f539034e20dd9a81db59b3381c8571e8dbe74"} Mar 12 13:44:57 crc kubenswrapper[4778]: I0312 13:44:57.606468 4778 scope.go:117] "RemoveContainer" containerID="53b4b05091c277f65d760aedbdd673f748699fdc3c86d1184ac2f57494d75369" Mar 12 13:44:57 crc kubenswrapper[4778]: I0312 13:44:57.606242 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tsbbw" Mar 12 13:44:57 crc kubenswrapper[4778]: I0312 13:44:57.629506 4778 scope.go:117] "RemoveContainer" containerID="e4ef6232820924fc9bfa9e22782695ff3623174a4e886e71c18dbcccc1dd4b88" Mar 12 13:44:57 crc kubenswrapper[4778]: I0312 13:44:57.641558 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tsbbw"] Mar 12 13:44:57 crc kubenswrapper[4778]: I0312 13:44:57.652676 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tsbbw"] Mar 12 13:44:57 crc kubenswrapper[4778]: I0312 13:44:57.676414 4778 scope.go:117] "RemoveContainer" containerID="b80555cb1dcab02445567c3e276216686d7719c0ca0475be3608a7a9c0f26718" Mar 12 13:44:58 crc kubenswrapper[4778]: I0312 13:44:58.269662 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f9b6c4c-ccc9-42ac-9d52-64690d25a4d6" path="/var/lib/kubelet/pods/0f9b6c4c-ccc9-42ac-9d52-64690d25a4d6/volumes" Mar 12 13:44:58 crc kubenswrapper[4778]: I0312 13:44:58.557798 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 13:44:58 crc kubenswrapper[4778]: I0312 13:44:58.558255 4778 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 13:44:58 crc kubenswrapper[4778]: I0312 13:44:58.558399 4778 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" Mar 12 13:44:58 crc kubenswrapper[4778]: I0312 13:44:58.559401 4778 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"92d3dad2e98d7139cb748a76fe93295a7064a4a757626bc932a272018a133968"} pod="openshift-machine-config-operator/machine-config-daemon-2qx88" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 13:44:58 crc kubenswrapper[4778]: I0312 13:44:58.559584 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" containerID="cri-o://92d3dad2e98d7139cb748a76fe93295a7064a4a757626bc932a272018a133968" gracePeriod=600 Mar 12 13:44:59 crc kubenswrapper[4778]: I0312 13:44:59.628337 4778 generic.go:334] "Generic (PLEG): container finished" podID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerID="92d3dad2e98d7139cb748a76fe93295a7064a4a757626bc932a272018a133968" exitCode=0 Mar 12 13:44:59 crc kubenswrapper[4778]: I0312 13:44:59.628433 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" event={"ID":"24438fc6-dab0-4a9e-8b97-2532da76d9cd","Type":"ContainerDied","Data":"92d3dad2e98d7139cb748a76fe93295a7064a4a757626bc932a272018a133968"} Mar 12 13:44:59 crc kubenswrapper[4778]: I0312 13:44:59.628855 4778 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" event={"ID":"24438fc6-dab0-4a9e-8b97-2532da76d9cd","Type":"ContainerStarted","Data":"5d7d3c0b73016a8d7ee117c8146ea559fc88bdaa58f9d10b5498b859a6d9fa8f"} Mar 12 13:44:59 crc kubenswrapper[4778]: I0312 13:44:59.628880 4778 scope.go:117] "RemoveContainer" containerID="fbdf0765f9c2ff5952a8a2a2b43d61ef771ac404cabeb86051f9ffe5a9fd882e" Mar 12 13:45:00 crc kubenswrapper[4778]: I0312 13:45:00.150378 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555385-qwzwh"] Mar 12 13:45:00 crc kubenswrapper[4778]: E0312 13:45:00.151138 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76d361e0-0808-41e0-a659-d9977bce86de" containerName="extract-content" Mar 12 13:45:00 crc kubenswrapper[4778]: I0312 13:45:00.151153 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="76d361e0-0808-41e0-a659-d9977bce86de" containerName="extract-content" Mar 12 13:45:00 crc kubenswrapper[4778]: E0312 13:45:00.151166 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f9b6c4c-ccc9-42ac-9d52-64690d25a4d6" containerName="extract-content" Mar 12 13:45:00 crc kubenswrapper[4778]: I0312 13:45:00.151173 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f9b6c4c-ccc9-42ac-9d52-64690d25a4d6" containerName="extract-content" Mar 12 13:45:00 crc kubenswrapper[4778]: E0312 13:45:00.151203 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f9b6c4c-ccc9-42ac-9d52-64690d25a4d6" containerName="registry-server" Mar 12 13:45:00 crc kubenswrapper[4778]: I0312 13:45:00.151211 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f9b6c4c-ccc9-42ac-9d52-64690d25a4d6" containerName="registry-server" Mar 12 13:45:00 crc kubenswrapper[4778]: E0312 13:45:00.151223 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76d361e0-0808-41e0-a659-d9977bce86de" 
containerName="registry-server" Mar 12 13:45:00 crc kubenswrapper[4778]: I0312 13:45:00.151228 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="76d361e0-0808-41e0-a659-d9977bce86de" containerName="registry-server" Mar 12 13:45:00 crc kubenswrapper[4778]: E0312 13:45:00.151247 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2f91915-3841-4662-88e4-82a22df0b131" containerName="registry-server" Mar 12 13:45:00 crc kubenswrapper[4778]: I0312 13:45:00.151253 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2f91915-3841-4662-88e4-82a22df0b131" containerName="registry-server" Mar 12 13:45:00 crc kubenswrapper[4778]: E0312 13:45:00.151263 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2f91915-3841-4662-88e4-82a22df0b131" containerName="extract-utilities" Mar 12 13:45:00 crc kubenswrapper[4778]: I0312 13:45:00.151269 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2f91915-3841-4662-88e4-82a22df0b131" containerName="extract-utilities" Mar 12 13:45:00 crc kubenswrapper[4778]: E0312 13:45:00.151278 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76d361e0-0808-41e0-a659-d9977bce86de" containerName="extract-utilities" Mar 12 13:45:00 crc kubenswrapper[4778]: I0312 13:45:00.151292 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="76d361e0-0808-41e0-a659-d9977bce86de" containerName="extract-utilities" Mar 12 13:45:00 crc kubenswrapper[4778]: E0312 13:45:00.151301 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2f91915-3841-4662-88e4-82a22df0b131" containerName="extract-content" Mar 12 13:45:00 crc kubenswrapper[4778]: I0312 13:45:00.151308 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2f91915-3841-4662-88e4-82a22df0b131" containerName="extract-content" Mar 12 13:45:00 crc kubenswrapper[4778]: E0312 13:45:00.151326 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f9b6c4c-ccc9-42ac-9d52-64690d25a4d6" 
containerName="extract-utilities" Mar 12 13:45:00 crc kubenswrapper[4778]: I0312 13:45:00.151332 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f9b6c4c-ccc9-42ac-9d52-64690d25a4d6" containerName="extract-utilities" Mar 12 13:45:00 crc kubenswrapper[4778]: I0312 13:45:00.151505 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="76d361e0-0808-41e0-a659-d9977bce86de" containerName="registry-server" Mar 12 13:45:00 crc kubenswrapper[4778]: I0312 13:45:00.151520 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2f91915-3841-4662-88e4-82a22df0b131" containerName="registry-server" Mar 12 13:45:00 crc kubenswrapper[4778]: I0312 13:45:00.151533 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f9b6c4c-ccc9-42ac-9d52-64690d25a4d6" containerName="registry-server" Mar 12 13:45:00 crc kubenswrapper[4778]: I0312 13:45:00.152142 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555385-qwzwh" Mar 12 13:45:00 crc kubenswrapper[4778]: I0312 13:45:00.155328 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 12 13:45:00 crc kubenswrapper[4778]: I0312 13:45:00.159310 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 12 13:45:00 crc kubenswrapper[4778]: I0312 13:45:00.165500 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555385-qwzwh"] Mar 12 13:45:00 crc kubenswrapper[4778]: I0312 13:45:00.168812 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/76005d52-2d02-4a1e-89dd-c050a66fe667-secret-volume\") pod \"collect-profiles-29555385-qwzwh\" (UID: 
\"76005d52-2d02-4a1e-89dd-c050a66fe667\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555385-qwzwh" Mar 12 13:45:00 crc kubenswrapper[4778]: I0312 13:45:00.168886 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqdwc\" (UniqueName: \"kubernetes.io/projected/76005d52-2d02-4a1e-89dd-c050a66fe667-kube-api-access-rqdwc\") pod \"collect-profiles-29555385-qwzwh\" (UID: \"76005d52-2d02-4a1e-89dd-c050a66fe667\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555385-qwzwh" Mar 12 13:45:00 crc kubenswrapper[4778]: I0312 13:45:00.168937 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76005d52-2d02-4a1e-89dd-c050a66fe667-config-volume\") pod \"collect-profiles-29555385-qwzwh\" (UID: \"76005d52-2d02-4a1e-89dd-c050a66fe667\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555385-qwzwh" Mar 12 13:45:00 crc kubenswrapper[4778]: I0312 13:45:00.271095 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/76005d52-2d02-4a1e-89dd-c050a66fe667-secret-volume\") pod \"collect-profiles-29555385-qwzwh\" (UID: \"76005d52-2d02-4a1e-89dd-c050a66fe667\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555385-qwzwh" Mar 12 13:45:00 crc kubenswrapper[4778]: I0312 13:45:00.271241 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqdwc\" (UniqueName: \"kubernetes.io/projected/76005d52-2d02-4a1e-89dd-c050a66fe667-kube-api-access-rqdwc\") pod \"collect-profiles-29555385-qwzwh\" (UID: \"76005d52-2d02-4a1e-89dd-c050a66fe667\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555385-qwzwh" Mar 12 13:45:00 crc kubenswrapper[4778]: I0312 13:45:00.271324 4778 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76005d52-2d02-4a1e-89dd-c050a66fe667-config-volume\") pod \"collect-profiles-29555385-qwzwh\" (UID: \"76005d52-2d02-4a1e-89dd-c050a66fe667\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555385-qwzwh" Mar 12 13:45:00 crc kubenswrapper[4778]: I0312 13:45:00.272237 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76005d52-2d02-4a1e-89dd-c050a66fe667-config-volume\") pod \"collect-profiles-29555385-qwzwh\" (UID: \"76005d52-2d02-4a1e-89dd-c050a66fe667\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555385-qwzwh" Mar 12 13:45:00 crc kubenswrapper[4778]: I0312 13:45:00.281863 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/76005d52-2d02-4a1e-89dd-c050a66fe667-secret-volume\") pod \"collect-profiles-29555385-qwzwh\" (UID: \"76005d52-2d02-4a1e-89dd-c050a66fe667\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555385-qwzwh" Mar 12 13:45:00 crc kubenswrapper[4778]: I0312 13:45:00.293960 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqdwc\" (UniqueName: \"kubernetes.io/projected/76005d52-2d02-4a1e-89dd-c050a66fe667-kube-api-access-rqdwc\") pod \"collect-profiles-29555385-qwzwh\" (UID: \"76005d52-2d02-4a1e-89dd-c050a66fe667\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555385-qwzwh" Mar 12 13:45:00 crc kubenswrapper[4778]: I0312 13:45:00.473472 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555385-qwzwh" Mar 12 13:45:01 crc kubenswrapper[4778]: I0312 13:45:01.074843 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555385-qwzwh"] Mar 12 13:45:01 crc kubenswrapper[4778]: E0312 13:45:01.531587 4778 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76005d52_2d02_4a1e_89dd_c050a66fe667.slice/crio-conmon-501e76905e9d2ff1f1e87040184d63ca0f219b530ef232d95f1fa4250e5ab145.scope\": RecentStats: unable to find data in memory cache]" Mar 12 13:45:01 crc kubenswrapper[4778]: I0312 13:45:01.654841 4778 generic.go:334] "Generic (PLEG): container finished" podID="76005d52-2d02-4a1e-89dd-c050a66fe667" containerID="501e76905e9d2ff1f1e87040184d63ca0f219b530ef232d95f1fa4250e5ab145" exitCode=0 Mar 12 13:45:01 crc kubenswrapper[4778]: I0312 13:45:01.654910 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555385-qwzwh" event={"ID":"76005d52-2d02-4a1e-89dd-c050a66fe667","Type":"ContainerDied","Data":"501e76905e9d2ff1f1e87040184d63ca0f219b530ef232d95f1fa4250e5ab145"} Mar 12 13:45:01 crc kubenswrapper[4778]: I0312 13:45:01.654941 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555385-qwzwh" event={"ID":"76005d52-2d02-4a1e-89dd-c050a66fe667","Type":"ContainerStarted","Data":"924e154bac07decf2e0d0ef1ee072b8e965f02c42cadb7e7817f33bcd904899a"} Mar 12 13:45:03 crc kubenswrapper[4778]: I0312 13:45:03.018614 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555385-qwzwh" Mar 12 13:45:03 crc kubenswrapper[4778]: I0312 13:45:03.130745 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76005d52-2d02-4a1e-89dd-c050a66fe667-config-volume\") pod \"76005d52-2d02-4a1e-89dd-c050a66fe667\" (UID: \"76005d52-2d02-4a1e-89dd-c050a66fe667\") " Mar 12 13:45:03 crc kubenswrapper[4778]: I0312 13:45:03.130872 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/76005d52-2d02-4a1e-89dd-c050a66fe667-secret-volume\") pod \"76005d52-2d02-4a1e-89dd-c050a66fe667\" (UID: \"76005d52-2d02-4a1e-89dd-c050a66fe667\") " Mar 12 13:45:03 crc kubenswrapper[4778]: I0312 13:45:03.131001 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqdwc\" (UniqueName: \"kubernetes.io/projected/76005d52-2d02-4a1e-89dd-c050a66fe667-kube-api-access-rqdwc\") pod \"76005d52-2d02-4a1e-89dd-c050a66fe667\" (UID: \"76005d52-2d02-4a1e-89dd-c050a66fe667\") " Mar 12 13:45:03 crc kubenswrapper[4778]: I0312 13:45:03.131754 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76005d52-2d02-4a1e-89dd-c050a66fe667-config-volume" (OuterVolumeSpecName: "config-volume") pod "76005d52-2d02-4a1e-89dd-c050a66fe667" (UID: "76005d52-2d02-4a1e-89dd-c050a66fe667"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:45:03 crc kubenswrapper[4778]: I0312 13:45:03.132304 4778 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76005d52-2d02-4a1e-89dd-c050a66fe667-config-volume\") on node \"crc\" DevicePath \"\"" Mar 12 13:45:03 crc kubenswrapper[4778]: I0312 13:45:03.139042 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76005d52-2d02-4a1e-89dd-c050a66fe667-kube-api-access-rqdwc" (OuterVolumeSpecName: "kube-api-access-rqdwc") pod "76005d52-2d02-4a1e-89dd-c050a66fe667" (UID: "76005d52-2d02-4a1e-89dd-c050a66fe667"). InnerVolumeSpecName "kube-api-access-rqdwc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:45:03 crc kubenswrapper[4778]: I0312 13:45:03.139237 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76005d52-2d02-4a1e-89dd-c050a66fe667-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "76005d52-2d02-4a1e-89dd-c050a66fe667" (UID: "76005d52-2d02-4a1e-89dd-c050a66fe667"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:45:03 crc kubenswrapper[4778]: I0312 13:45:03.234651 4778 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/76005d52-2d02-4a1e-89dd-c050a66fe667-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 12 13:45:03 crc kubenswrapper[4778]: I0312 13:45:03.234703 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqdwc\" (UniqueName: \"kubernetes.io/projected/76005d52-2d02-4a1e-89dd-c050a66fe667-kube-api-access-rqdwc\") on node \"crc\" DevicePath \"\"" Mar 12 13:45:03 crc kubenswrapper[4778]: I0312 13:45:03.675898 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555385-qwzwh" Mar 12 13:45:03 crc kubenswrapper[4778]: I0312 13:45:03.675828 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555385-qwzwh" event={"ID":"76005d52-2d02-4a1e-89dd-c050a66fe667","Type":"ContainerDied","Data":"924e154bac07decf2e0d0ef1ee072b8e965f02c42cadb7e7817f33bcd904899a"} Mar 12 13:45:03 crc kubenswrapper[4778]: I0312 13:45:03.676341 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="924e154bac07decf2e0d0ef1ee072b8e965f02c42cadb7e7817f33bcd904899a" Mar 12 13:45:04 crc kubenswrapper[4778]: I0312 13:45:04.091309 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555340-7tvjm"] Mar 12 13:45:04 crc kubenswrapper[4778]: I0312 13:45:04.099415 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555340-7tvjm"] Mar 12 13:45:04 crc kubenswrapper[4778]: I0312 13:45:04.267135 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a240fd7b-5854-4548-a847-e5590111964b" path="/var/lib/kubelet/pods/a240fd7b-5854-4548-a847-e5590111964b/volumes" Mar 12 13:45:18 crc kubenswrapper[4778]: I0312 13:45:18.653693 4778 scope.go:117] "RemoveContainer" containerID="638395848d77320f6f4d74ca6334a62beda4c18b92408c089881a124597a1418" Mar 12 13:45:18 crc kubenswrapper[4778]: I0312 13:45:18.721630 4778 scope.go:117] "RemoveContainer" containerID="a7c208f5185dc692f0ec8df98f6bb0b7b464e0a056d454057e864768b033e299" Mar 12 13:45:18 crc kubenswrapper[4778]: I0312 13:45:18.787430 4778 scope.go:117] "RemoveContainer" containerID="e3b15e2b52f4e1dd648d1cbcdd4c757ead8e48ae1ed5c998744e64dfa8993e67" Mar 12 13:45:18 crc kubenswrapper[4778]: I0312 13:45:18.874659 4778 scope.go:117] "RemoveContainer" 
containerID="6e9a4135f2199a3918c9a565e1055b2ed771be6904f7c3aed074108524b55a58" Mar 12 13:45:30 crc kubenswrapper[4778]: I0312 13:45:30.935212 4778 generic.go:334] "Generic (PLEG): container finished" podID="5c5541f3-fb44-476b-91c2-b07dffe50894" containerID="f8a3ca31df85cf9da8faf63b19eca49715e39783019bbe431b8fd2f9f7f72be3" exitCode=0 Mar 12 13:45:30 crc kubenswrapper[4778]: I0312 13:45:30.935549 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4szjl" event={"ID":"5c5541f3-fb44-476b-91c2-b07dffe50894","Type":"ContainerDied","Data":"f8a3ca31df85cf9da8faf63b19eca49715e39783019bbe431b8fd2f9f7f72be3"} Mar 12 13:45:32 crc kubenswrapper[4778]: I0312 13:45:32.308602 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4szjl" Mar 12 13:45:32 crc kubenswrapper[4778]: I0312 13:45:32.418968 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5c5541f3-fb44-476b-91c2-b07dffe50894-inventory\") pod \"5c5541f3-fb44-476b-91c2-b07dffe50894\" (UID: \"5c5541f3-fb44-476b-91c2-b07dffe50894\") " Mar 12 13:45:32 crc kubenswrapper[4778]: I0312 13:45:32.419095 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5c5541f3-fb44-476b-91c2-b07dffe50894-ssh-key-openstack-edpm-ipam\") pod \"5c5541f3-fb44-476b-91c2-b07dffe50894\" (UID: \"5c5541f3-fb44-476b-91c2-b07dffe50894\") " Mar 12 13:45:32 crc kubenswrapper[4778]: I0312 13:45:32.419243 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lntxg\" (UniqueName: \"kubernetes.io/projected/5c5541f3-fb44-476b-91c2-b07dffe50894-kube-api-access-lntxg\") pod \"5c5541f3-fb44-476b-91c2-b07dffe50894\" (UID: \"5c5541f3-fb44-476b-91c2-b07dffe50894\") " Mar 12 
13:45:32 crc kubenswrapper[4778]: I0312 13:45:32.429381 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c5541f3-fb44-476b-91c2-b07dffe50894-kube-api-access-lntxg" (OuterVolumeSpecName: "kube-api-access-lntxg") pod "5c5541f3-fb44-476b-91c2-b07dffe50894" (UID: "5c5541f3-fb44-476b-91c2-b07dffe50894"). InnerVolumeSpecName "kube-api-access-lntxg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:45:32 crc kubenswrapper[4778]: I0312 13:45:32.447502 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c5541f3-fb44-476b-91c2-b07dffe50894-inventory" (OuterVolumeSpecName: "inventory") pod "5c5541f3-fb44-476b-91c2-b07dffe50894" (UID: "5c5541f3-fb44-476b-91c2-b07dffe50894"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:45:32 crc kubenswrapper[4778]: I0312 13:45:32.448459 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c5541f3-fb44-476b-91c2-b07dffe50894-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5c5541f3-fb44-476b-91c2-b07dffe50894" (UID: "5c5541f3-fb44-476b-91c2-b07dffe50894"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:45:32 crc kubenswrapper[4778]: I0312 13:45:32.521549 4778 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5c5541f3-fb44-476b-91c2-b07dffe50894-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 13:45:32 crc kubenswrapper[4778]: I0312 13:45:32.521590 4778 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5c5541f3-fb44-476b-91c2-b07dffe50894-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 13:45:32 crc kubenswrapper[4778]: I0312 13:45:32.521603 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lntxg\" (UniqueName: \"kubernetes.io/projected/5c5541f3-fb44-476b-91c2-b07dffe50894-kube-api-access-lntxg\") on node \"crc\" DevicePath \"\"" Mar 12 13:45:32 crc kubenswrapper[4778]: I0312 13:45:32.956413 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4szjl" event={"ID":"5c5541f3-fb44-476b-91c2-b07dffe50894","Type":"ContainerDied","Data":"2a03fc2efe0354ad242b25364cbd2eba2b14ebdbcdcea201fdb32f5a13b5d430"} Mar 12 13:45:32 crc kubenswrapper[4778]: I0312 13:45:32.956468 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a03fc2efe0354ad242b25364cbd2eba2b14ebdbcdcea201fdb32f5a13b5d430" Mar 12 13:45:32 crc kubenswrapper[4778]: I0312 13:45:32.956521 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4szjl" Mar 12 13:45:33 crc kubenswrapper[4778]: I0312 13:45:33.073722 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9glvr"] Mar 12 13:45:33 crc kubenswrapper[4778]: E0312 13:45:33.074308 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c5541f3-fb44-476b-91c2-b07dffe50894" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Mar 12 13:45:33 crc kubenswrapper[4778]: I0312 13:45:33.074332 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c5541f3-fb44-476b-91c2-b07dffe50894" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Mar 12 13:45:33 crc kubenswrapper[4778]: E0312 13:45:33.074350 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76005d52-2d02-4a1e-89dd-c050a66fe667" containerName="collect-profiles" Mar 12 13:45:33 crc kubenswrapper[4778]: I0312 13:45:33.074359 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="76005d52-2d02-4a1e-89dd-c050a66fe667" containerName="collect-profiles" Mar 12 13:45:33 crc kubenswrapper[4778]: I0312 13:45:33.074582 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c5541f3-fb44-476b-91c2-b07dffe50894" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Mar 12 13:45:33 crc kubenswrapper[4778]: I0312 13:45:33.074603 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="76005d52-2d02-4a1e-89dd-c050a66fe667" containerName="collect-profiles" Mar 12 13:45:33 crc kubenswrapper[4778]: I0312 13:45:33.075385 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9glvr" Mar 12 13:45:33 crc kubenswrapper[4778]: I0312 13:45:33.084742 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9glvr"] Mar 12 13:45:33 crc kubenswrapper[4778]: I0312 13:45:33.088549 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 13:45:33 crc kubenswrapper[4778]: I0312 13:45:33.088645 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 13:45:33 crc kubenswrapper[4778]: I0312 13:45:33.088855 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 13:45:33 crc kubenswrapper[4778]: I0312 13:45:33.088557 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qn2vx" Mar 12 13:45:33 crc kubenswrapper[4778]: I0312 13:45:33.132015 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/41583476-38cd-4c0d-a05a-96ddc5b330ca-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9glvr\" (UID: \"41583476-38cd-4c0d-a05a-96ddc5b330ca\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9glvr" Mar 12 13:45:33 crc kubenswrapper[4778]: I0312 13:45:33.132362 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr45c\" (UniqueName: \"kubernetes.io/projected/41583476-38cd-4c0d-a05a-96ddc5b330ca-kube-api-access-gr45c\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9glvr\" (UID: \"41583476-38cd-4c0d-a05a-96ddc5b330ca\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9glvr" Mar 12 
13:45:33 crc kubenswrapper[4778]: I0312 13:45:33.132571 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/41583476-38cd-4c0d-a05a-96ddc5b330ca-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9glvr\" (UID: \"41583476-38cd-4c0d-a05a-96ddc5b330ca\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9glvr" Mar 12 13:45:33 crc kubenswrapper[4778]: I0312 13:45:33.233941 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/41583476-38cd-4c0d-a05a-96ddc5b330ca-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9glvr\" (UID: \"41583476-38cd-4c0d-a05a-96ddc5b330ca\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9glvr" Mar 12 13:45:33 crc kubenswrapper[4778]: I0312 13:45:33.234017 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gr45c\" (UniqueName: \"kubernetes.io/projected/41583476-38cd-4c0d-a05a-96ddc5b330ca-kube-api-access-gr45c\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9glvr\" (UID: \"41583476-38cd-4c0d-a05a-96ddc5b330ca\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9glvr" Mar 12 13:45:33 crc kubenswrapper[4778]: I0312 13:45:33.234082 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/41583476-38cd-4c0d-a05a-96ddc5b330ca-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9glvr\" (UID: \"41583476-38cd-4c0d-a05a-96ddc5b330ca\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9glvr" Mar 12 13:45:33 crc kubenswrapper[4778]: I0312 13:45:33.240235 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/41583476-38cd-4c0d-a05a-96ddc5b330ca-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9glvr\" (UID: \"41583476-38cd-4c0d-a05a-96ddc5b330ca\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9glvr" Mar 12 13:45:33 crc kubenswrapper[4778]: I0312 13:45:33.241852 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/41583476-38cd-4c0d-a05a-96ddc5b330ca-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9glvr\" (UID: \"41583476-38cd-4c0d-a05a-96ddc5b330ca\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9glvr" Mar 12 13:45:33 crc kubenswrapper[4778]: I0312 13:45:33.255974 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gr45c\" (UniqueName: \"kubernetes.io/projected/41583476-38cd-4c0d-a05a-96ddc5b330ca-kube-api-access-gr45c\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9glvr\" (UID: \"41583476-38cd-4c0d-a05a-96ddc5b330ca\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9glvr" Mar 12 13:45:33 crc kubenswrapper[4778]: I0312 13:45:33.391555 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9glvr" Mar 12 13:45:33 crc kubenswrapper[4778]: I0312 13:45:33.897112 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9glvr"] Mar 12 13:45:33 crc kubenswrapper[4778]: I0312 13:45:33.964240 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9glvr" event={"ID":"41583476-38cd-4c0d-a05a-96ddc5b330ca","Type":"ContainerStarted","Data":"359850f324feb41d74844acf8dd8291a291310bc46c176a33198262535f0d3da"} Mar 12 13:45:35 crc kubenswrapper[4778]: I0312 13:45:35.981855 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9glvr" event={"ID":"41583476-38cd-4c0d-a05a-96ddc5b330ca","Type":"ContainerStarted","Data":"d3aa4c129131d7684c185c26239b44c165d321201a6c5511e96be838b7de91e5"} Mar 12 13:45:36 crc kubenswrapper[4778]: I0312 13:45:36.008298 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9glvr" podStartSLOduration=1.287300717 podStartE2EDuration="3.008278015s" podCreationTimestamp="2026-03-12 13:45:33 +0000 UTC" firstStartedPulling="2026-03-12 13:45:33.903453796 +0000 UTC m=+2152.352149202" lastFinishedPulling="2026-03-12 13:45:35.624431104 +0000 UTC m=+2154.073126500" observedRunningTime="2026-03-12 13:45:35.996522931 +0000 UTC m=+2154.445218347" watchObservedRunningTime="2026-03-12 13:45:36.008278015 +0000 UTC m=+2154.456973411" Mar 12 13:45:39 crc kubenswrapper[4778]: I0312 13:45:39.059284 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-9xw6b"] Mar 12 13:45:39 crc kubenswrapper[4778]: I0312 13:45:39.070495 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-9xw6b"] Mar 12 13:45:40 crc 
kubenswrapper[4778]: I0312 13:45:40.266442 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eaa0985c-3171-40c5-8e5c-ab82a9fa6fc3" path="/var/lib/kubelet/pods/eaa0985c-3171-40c5-8e5c-ab82a9fa6fc3/volumes" Mar 12 13:45:45 crc kubenswrapper[4778]: I0312 13:45:45.456460 4778 generic.go:334] "Generic (PLEG): container finished" podID="41583476-38cd-4c0d-a05a-96ddc5b330ca" containerID="d3aa4c129131d7684c185c26239b44c165d321201a6c5511e96be838b7de91e5" exitCode=0 Mar 12 13:45:45 crc kubenswrapper[4778]: I0312 13:45:45.456594 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9glvr" event={"ID":"41583476-38cd-4c0d-a05a-96ddc5b330ca","Type":"ContainerDied","Data":"d3aa4c129131d7684c185c26239b44c165d321201a6c5511e96be838b7de91e5"} Mar 12 13:45:46 crc kubenswrapper[4778]: I0312 13:45:46.903209 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9glvr" Mar 12 13:45:46 crc kubenswrapper[4778]: I0312 13:45:46.991212 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/41583476-38cd-4c0d-a05a-96ddc5b330ca-ssh-key-openstack-edpm-ipam\") pod \"41583476-38cd-4c0d-a05a-96ddc5b330ca\" (UID: \"41583476-38cd-4c0d-a05a-96ddc5b330ca\") " Mar 12 13:45:46 crc kubenswrapper[4778]: I0312 13:45:46.991781 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/41583476-38cd-4c0d-a05a-96ddc5b330ca-inventory\") pod \"41583476-38cd-4c0d-a05a-96ddc5b330ca\" (UID: \"41583476-38cd-4c0d-a05a-96ddc5b330ca\") " Mar 12 13:45:46 crc kubenswrapper[4778]: I0312 13:45:46.991905 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gr45c\" (UniqueName: 
\"kubernetes.io/projected/41583476-38cd-4c0d-a05a-96ddc5b330ca-kube-api-access-gr45c\") pod \"41583476-38cd-4c0d-a05a-96ddc5b330ca\" (UID: \"41583476-38cd-4c0d-a05a-96ddc5b330ca\") " Mar 12 13:45:47 crc kubenswrapper[4778]: I0312 13:45:46.999872 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41583476-38cd-4c0d-a05a-96ddc5b330ca-kube-api-access-gr45c" (OuterVolumeSpecName: "kube-api-access-gr45c") pod "41583476-38cd-4c0d-a05a-96ddc5b330ca" (UID: "41583476-38cd-4c0d-a05a-96ddc5b330ca"). InnerVolumeSpecName "kube-api-access-gr45c". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:45:47 crc kubenswrapper[4778]: I0312 13:45:47.029814 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41583476-38cd-4c0d-a05a-96ddc5b330ca-inventory" (OuterVolumeSpecName: "inventory") pod "41583476-38cd-4c0d-a05a-96ddc5b330ca" (UID: "41583476-38cd-4c0d-a05a-96ddc5b330ca"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:45:47 crc kubenswrapper[4778]: I0312 13:45:47.033378 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41583476-38cd-4c0d-a05a-96ddc5b330ca-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "41583476-38cd-4c0d-a05a-96ddc5b330ca" (UID: "41583476-38cd-4c0d-a05a-96ddc5b330ca"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:45:47 crc kubenswrapper[4778]: I0312 13:45:47.096433 4778 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/41583476-38cd-4c0d-a05a-96ddc5b330ca-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 13:45:47 crc kubenswrapper[4778]: I0312 13:45:47.096553 4778 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/41583476-38cd-4c0d-a05a-96ddc5b330ca-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 13:45:47 crc kubenswrapper[4778]: I0312 13:45:47.096571 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gr45c\" (UniqueName: \"kubernetes.io/projected/41583476-38cd-4c0d-a05a-96ddc5b330ca-kube-api-access-gr45c\") on node \"crc\" DevicePath \"\"" Mar 12 13:45:47 crc kubenswrapper[4778]: I0312 13:45:47.475617 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9glvr" event={"ID":"41583476-38cd-4c0d-a05a-96ddc5b330ca","Type":"ContainerDied","Data":"359850f324feb41d74844acf8dd8291a291310bc46c176a33198262535f0d3da"} Mar 12 13:45:47 crc kubenswrapper[4778]: I0312 13:45:47.475669 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="359850f324feb41d74844acf8dd8291a291310bc46c176a33198262535f0d3da" Mar 12 13:45:47 crc kubenswrapper[4778]: I0312 13:45:47.475732 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9glvr" Mar 12 13:45:47 crc kubenswrapper[4778]: I0312 13:45:47.573049 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-g252n"] Mar 12 13:45:47 crc kubenswrapper[4778]: E0312 13:45:47.574156 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41583476-38cd-4c0d-a05a-96ddc5b330ca" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Mar 12 13:45:47 crc kubenswrapper[4778]: I0312 13:45:47.574317 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="41583476-38cd-4c0d-a05a-96ddc5b330ca" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Mar 12 13:45:47 crc kubenswrapper[4778]: I0312 13:45:47.574626 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="41583476-38cd-4c0d-a05a-96ddc5b330ca" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Mar 12 13:45:47 crc kubenswrapper[4778]: I0312 13:45:47.575563 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g252n" Mar 12 13:45:47 crc kubenswrapper[4778]: I0312 13:45:47.578441 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 13:45:47 crc kubenswrapper[4778]: I0312 13:45:47.578834 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 13:45:47 crc kubenswrapper[4778]: I0312 13:45:47.580363 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 13:45:47 crc kubenswrapper[4778]: I0312 13:45:47.580798 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qn2vx" Mar 12 13:45:47 crc kubenswrapper[4778]: I0312 13:45:47.587392 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-g252n"] Mar 12 13:45:47 crc kubenswrapper[4778]: I0312 13:45:47.707493 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/29f8609b-4a3b-42ba-9450-a2b633bb4c2c-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-g252n\" (UID: \"29f8609b-4a3b-42ba-9450-a2b633bb4c2c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g252n" Mar 12 13:45:47 crc kubenswrapper[4778]: I0312 13:45:47.707604 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/29f8609b-4a3b-42ba-9450-a2b633bb4c2c-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-g252n\" (UID: \"29f8609b-4a3b-42ba-9450-a2b633bb4c2c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g252n" Mar 12 13:45:47 crc kubenswrapper[4778]: I0312 13:45:47.707837 4778 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zh8c\" (UniqueName: \"kubernetes.io/projected/29f8609b-4a3b-42ba-9450-a2b633bb4c2c-kube-api-access-5zh8c\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-g252n\" (UID: \"29f8609b-4a3b-42ba-9450-a2b633bb4c2c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g252n" Mar 12 13:45:47 crc kubenswrapper[4778]: I0312 13:45:47.809632 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zh8c\" (UniqueName: \"kubernetes.io/projected/29f8609b-4a3b-42ba-9450-a2b633bb4c2c-kube-api-access-5zh8c\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-g252n\" (UID: \"29f8609b-4a3b-42ba-9450-a2b633bb4c2c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g252n" Mar 12 13:45:47 crc kubenswrapper[4778]: I0312 13:45:47.809732 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/29f8609b-4a3b-42ba-9450-a2b633bb4c2c-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-g252n\" (UID: \"29f8609b-4a3b-42ba-9450-a2b633bb4c2c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g252n" Mar 12 13:45:47 crc kubenswrapper[4778]: I0312 13:45:47.809778 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/29f8609b-4a3b-42ba-9450-a2b633bb4c2c-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-g252n\" (UID: \"29f8609b-4a3b-42ba-9450-a2b633bb4c2c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g252n" Mar 12 13:45:47 crc kubenswrapper[4778]: I0312 13:45:47.813555 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/29f8609b-4a3b-42ba-9450-a2b633bb4c2c-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-g252n\" (UID: \"29f8609b-4a3b-42ba-9450-a2b633bb4c2c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g252n" Mar 12 13:45:47 crc kubenswrapper[4778]: I0312 13:45:47.814168 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/29f8609b-4a3b-42ba-9450-a2b633bb4c2c-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-g252n\" (UID: \"29f8609b-4a3b-42ba-9450-a2b633bb4c2c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g252n" Mar 12 13:45:47 crc kubenswrapper[4778]: I0312 13:45:47.830642 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zh8c\" (UniqueName: \"kubernetes.io/projected/29f8609b-4a3b-42ba-9450-a2b633bb4c2c-kube-api-access-5zh8c\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-g252n\" (UID: \"29f8609b-4a3b-42ba-9450-a2b633bb4c2c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g252n" Mar 12 13:45:47 crc kubenswrapper[4778]: I0312 13:45:47.907154 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g252n" Mar 12 13:45:48 crc kubenswrapper[4778]: I0312 13:45:48.474436 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-g252n"] Mar 12 13:45:48 crc kubenswrapper[4778]: W0312 13:45:48.481210 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29f8609b_4a3b_42ba_9450_a2b633bb4c2c.slice/crio-73a22b7b256ee030de16c8af304a9e966b8c86513de32e0f1f1efbf8d69bce4e WatchSource:0}: Error finding container 73a22b7b256ee030de16c8af304a9e966b8c86513de32e0f1f1efbf8d69bce4e: Status 404 returned error can't find the container with id 73a22b7b256ee030de16c8af304a9e966b8c86513de32e0f1f1efbf8d69bce4e Mar 12 13:45:49 crc kubenswrapper[4778]: I0312 13:45:49.491766 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g252n" event={"ID":"29f8609b-4a3b-42ba-9450-a2b633bb4c2c","Type":"ContainerStarted","Data":"6d041d9afe88fdec32af7da1355c78c05ba3b4da3265df6c00f4f7416ed27298"} Mar 12 13:45:49 crc kubenswrapper[4778]: I0312 13:45:49.492341 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g252n" event={"ID":"29f8609b-4a3b-42ba-9450-a2b633bb4c2c","Type":"ContainerStarted","Data":"73a22b7b256ee030de16c8af304a9e966b8c86513de32e0f1f1efbf8d69bce4e"} Mar 12 13:45:50 crc kubenswrapper[4778]: I0312 13:45:50.524987 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g252n" podStartSLOduration=2.928433387 podStartE2EDuration="3.524969979s" podCreationTimestamp="2026-03-12 13:45:47 +0000 UTC" firstStartedPulling="2026-03-12 13:45:48.483989564 +0000 UTC m=+2166.932684960" lastFinishedPulling="2026-03-12 13:45:49.080526156 +0000 UTC m=+2167.529221552" 
observedRunningTime="2026-03-12 13:45:50.51584531 +0000 UTC m=+2168.964540696" watchObservedRunningTime="2026-03-12 13:45:50.524969979 +0000 UTC m=+2168.973665375" Mar 12 13:46:00 crc kubenswrapper[4778]: I0312 13:46:00.146135 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555386-vjswk"] Mar 12 13:46:00 crc kubenswrapper[4778]: I0312 13:46:00.149410 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555386-vjswk" Mar 12 13:46:00 crc kubenswrapper[4778]: I0312 13:46:00.153959 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 13:46:00 crc kubenswrapper[4778]: I0312 13:46:00.154284 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 13:46:00 crc kubenswrapper[4778]: I0312 13:46:00.155340 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 13:46:00 crc kubenswrapper[4778]: I0312 13:46:00.155980 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555386-vjswk"] Mar 12 13:46:00 crc kubenswrapper[4778]: I0312 13:46:00.245759 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8gz7\" (UniqueName: \"kubernetes.io/projected/f55c85e9-4cb7-4ac4-bc3d-c37217b4abf6-kube-api-access-c8gz7\") pod \"auto-csr-approver-29555386-vjswk\" (UID: \"f55c85e9-4cb7-4ac4-bc3d-c37217b4abf6\") " pod="openshift-infra/auto-csr-approver-29555386-vjswk" Mar 12 13:46:00 crc kubenswrapper[4778]: I0312 13:46:00.349676 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8gz7\" (UniqueName: \"kubernetes.io/projected/f55c85e9-4cb7-4ac4-bc3d-c37217b4abf6-kube-api-access-c8gz7\") pod \"auto-csr-approver-29555386-vjswk\" (UID: 
\"f55c85e9-4cb7-4ac4-bc3d-c37217b4abf6\") " pod="openshift-infra/auto-csr-approver-29555386-vjswk" Mar 12 13:46:00 crc kubenswrapper[4778]: I0312 13:46:00.371021 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8gz7\" (UniqueName: \"kubernetes.io/projected/f55c85e9-4cb7-4ac4-bc3d-c37217b4abf6-kube-api-access-c8gz7\") pod \"auto-csr-approver-29555386-vjswk\" (UID: \"f55c85e9-4cb7-4ac4-bc3d-c37217b4abf6\") " pod="openshift-infra/auto-csr-approver-29555386-vjswk" Mar 12 13:46:00 crc kubenswrapper[4778]: I0312 13:46:00.487096 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555386-vjswk" Mar 12 13:46:00 crc kubenswrapper[4778]: I0312 13:46:00.937861 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555386-vjswk"] Mar 12 13:46:00 crc kubenswrapper[4778]: W0312 13:46:00.944009 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf55c85e9_4cb7_4ac4_bc3d_c37217b4abf6.slice/crio-615af91b8c037a610414459977df85a0b58a194b765f6201a453a8ec1331a26e WatchSource:0}: Error finding container 615af91b8c037a610414459977df85a0b58a194b765f6201a453a8ec1331a26e: Status 404 returned error can't find the container with id 615af91b8c037a610414459977df85a0b58a194b765f6201a453a8ec1331a26e Mar 12 13:46:01 crc kubenswrapper[4778]: I0312 13:46:01.591845 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555386-vjswk" event={"ID":"f55c85e9-4cb7-4ac4-bc3d-c37217b4abf6","Type":"ContainerStarted","Data":"615af91b8c037a610414459977df85a0b58a194b765f6201a453a8ec1331a26e"} Mar 12 13:46:04 crc kubenswrapper[4778]: I0312 13:46:04.935978 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-l8q2d"] Mar 12 13:46:04 crc kubenswrapper[4778]: I0312 13:46:04.939061 4778 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l8q2d" Mar 12 13:46:04 crc kubenswrapper[4778]: I0312 13:46:04.946353 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l8q2d"] Mar 12 13:46:04 crc kubenswrapper[4778]: I0312 13:46:04.983278 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af355b7f-362d-4f00-96fb-07f77590df88-utilities\") pod \"redhat-operators-l8q2d\" (UID: \"af355b7f-362d-4f00-96fb-07f77590df88\") " pod="openshift-marketplace/redhat-operators-l8q2d" Mar 12 13:46:04 crc kubenswrapper[4778]: I0312 13:46:04.983328 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af355b7f-362d-4f00-96fb-07f77590df88-catalog-content\") pod \"redhat-operators-l8q2d\" (UID: \"af355b7f-362d-4f00-96fb-07f77590df88\") " pod="openshift-marketplace/redhat-operators-l8q2d" Mar 12 13:46:04 crc kubenswrapper[4778]: I0312 13:46:04.983617 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfslw\" (UniqueName: \"kubernetes.io/projected/af355b7f-362d-4f00-96fb-07f77590df88-kube-api-access-gfslw\") pod \"redhat-operators-l8q2d\" (UID: \"af355b7f-362d-4f00-96fb-07f77590df88\") " pod="openshift-marketplace/redhat-operators-l8q2d" Mar 12 13:46:05 crc kubenswrapper[4778]: I0312 13:46:05.021506 4778 generic.go:334] "Generic (PLEG): container finished" podID="f55c85e9-4cb7-4ac4-bc3d-c37217b4abf6" containerID="2e424e585231dad361491fa12a9a787f83d6973879b6b45159764198bbcf5877" exitCode=0 Mar 12 13:46:05 crc kubenswrapper[4778]: I0312 13:46:05.021564 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555386-vjswk" 
event={"ID":"f55c85e9-4cb7-4ac4-bc3d-c37217b4abf6","Type":"ContainerDied","Data":"2e424e585231dad361491fa12a9a787f83d6973879b6b45159764198bbcf5877"} Mar 12 13:46:05 crc kubenswrapper[4778]: I0312 13:46:05.085693 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af355b7f-362d-4f00-96fb-07f77590df88-utilities\") pod \"redhat-operators-l8q2d\" (UID: \"af355b7f-362d-4f00-96fb-07f77590df88\") " pod="openshift-marketplace/redhat-operators-l8q2d" Mar 12 13:46:05 crc kubenswrapper[4778]: I0312 13:46:05.085779 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af355b7f-362d-4f00-96fb-07f77590df88-catalog-content\") pod \"redhat-operators-l8q2d\" (UID: \"af355b7f-362d-4f00-96fb-07f77590df88\") " pod="openshift-marketplace/redhat-operators-l8q2d" Mar 12 13:46:05 crc kubenswrapper[4778]: I0312 13:46:05.085856 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfslw\" (UniqueName: \"kubernetes.io/projected/af355b7f-362d-4f00-96fb-07f77590df88-kube-api-access-gfslw\") pod \"redhat-operators-l8q2d\" (UID: \"af355b7f-362d-4f00-96fb-07f77590df88\") " pod="openshift-marketplace/redhat-operators-l8q2d" Mar 12 13:46:05 crc kubenswrapper[4778]: I0312 13:46:05.086430 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af355b7f-362d-4f00-96fb-07f77590df88-utilities\") pod \"redhat-operators-l8q2d\" (UID: \"af355b7f-362d-4f00-96fb-07f77590df88\") " pod="openshift-marketplace/redhat-operators-l8q2d" Mar 12 13:46:05 crc kubenswrapper[4778]: I0312 13:46:05.086722 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af355b7f-362d-4f00-96fb-07f77590df88-catalog-content\") pod \"redhat-operators-l8q2d\" (UID: 
\"af355b7f-362d-4f00-96fb-07f77590df88\") " pod="openshift-marketplace/redhat-operators-l8q2d" Mar 12 13:46:05 crc kubenswrapper[4778]: I0312 13:46:05.111700 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfslw\" (UniqueName: \"kubernetes.io/projected/af355b7f-362d-4f00-96fb-07f77590df88-kube-api-access-gfslw\") pod \"redhat-operators-l8q2d\" (UID: \"af355b7f-362d-4f00-96fb-07f77590df88\") " pod="openshift-marketplace/redhat-operators-l8q2d" Mar 12 13:46:05 crc kubenswrapper[4778]: I0312 13:46:05.255038 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l8q2d" Mar 12 13:46:05 crc kubenswrapper[4778]: I0312 13:46:05.718033 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l8q2d"] Mar 12 13:46:06 crc kubenswrapper[4778]: I0312 13:46:06.030563 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l8q2d" event={"ID":"af355b7f-362d-4f00-96fb-07f77590df88","Type":"ContainerStarted","Data":"d774e6cca00651044cc9a3c3d37f2a22c2a44d3dee64303589589539c636c16f"} Mar 12 13:46:06 crc kubenswrapper[4778]: I0312 13:46:06.030611 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l8q2d" event={"ID":"af355b7f-362d-4f00-96fb-07f77590df88","Type":"ContainerStarted","Data":"32b7530aa6acd71d36db41ee270d32aa3aa01b7f38534e4287b54333a34c61e6"} Mar 12 13:46:06 crc kubenswrapper[4778]: I0312 13:46:06.428240 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555386-vjswk" Mar 12 13:46:06 crc kubenswrapper[4778]: I0312 13:46:06.616258 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8gz7\" (UniqueName: \"kubernetes.io/projected/f55c85e9-4cb7-4ac4-bc3d-c37217b4abf6-kube-api-access-c8gz7\") pod \"f55c85e9-4cb7-4ac4-bc3d-c37217b4abf6\" (UID: \"f55c85e9-4cb7-4ac4-bc3d-c37217b4abf6\") " Mar 12 13:46:06 crc kubenswrapper[4778]: I0312 13:46:06.624264 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f55c85e9-4cb7-4ac4-bc3d-c37217b4abf6-kube-api-access-c8gz7" (OuterVolumeSpecName: "kube-api-access-c8gz7") pod "f55c85e9-4cb7-4ac4-bc3d-c37217b4abf6" (UID: "f55c85e9-4cb7-4ac4-bc3d-c37217b4abf6"). InnerVolumeSpecName "kube-api-access-c8gz7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:46:06 crc kubenswrapper[4778]: I0312 13:46:06.719518 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8gz7\" (UniqueName: \"kubernetes.io/projected/f55c85e9-4cb7-4ac4-bc3d-c37217b4abf6-kube-api-access-c8gz7\") on node \"crc\" DevicePath \"\"" Mar 12 13:46:07 crc kubenswrapper[4778]: I0312 13:46:07.042062 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555386-vjswk" Mar 12 13:46:07 crc kubenswrapper[4778]: I0312 13:46:07.042056 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555386-vjswk" event={"ID":"f55c85e9-4cb7-4ac4-bc3d-c37217b4abf6","Type":"ContainerDied","Data":"615af91b8c037a610414459977df85a0b58a194b765f6201a453a8ec1331a26e"} Mar 12 13:46:07 crc kubenswrapper[4778]: I0312 13:46:07.042235 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="615af91b8c037a610414459977df85a0b58a194b765f6201a453a8ec1331a26e" Mar 12 13:46:07 crc kubenswrapper[4778]: I0312 13:46:07.059979 4778 generic.go:334] "Generic (PLEG): container finished" podID="af355b7f-362d-4f00-96fb-07f77590df88" containerID="d774e6cca00651044cc9a3c3d37f2a22c2a44d3dee64303589589539c636c16f" exitCode=0 Mar 12 13:46:07 crc kubenswrapper[4778]: I0312 13:46:07.060043 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l8q2d" event={"ID":"af355b7f-362d-4f00-96fb-07f77590df88","Type":"ContainerDied","Data":"d774e6cca00651044cc9a3c3d37f2a22c2a44d3dee64303589589539c636c16f"} Mar 12 13:46:07 crc kubenswrapper[4778]: I0312 13:46:07.515801 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555380-n8mtp"] Mar 12 13:46:07 crc kubenswrapper[4778]: I0312 13:46:07.525952 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555380-n8mtp"] Mar 12 13:46:08 crc kubenswrapper[4778]: I0312 13:46:08.265803 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69f54cc7-08e2-42c1-883d-316f1dac7621" path="/var/lib/kubelet/pods/69f54cc7-08e2-42c1-883d-316f1dac7621/volumes" Mar 12 13:46:09 crc kubenswrapper[4778]: I0312 13:46:09.096564 4778 generic.go:334] "Generic (PLEG): container finished" podID="af355b7f-362d-4f00-96fb-07f77590df88" 
containerID="d99e4622ec7401bc478f28e049daf1fe07e501d51d99bc26762c1751e79641f1" exitCode=0 Mar 12 13:46:09 crc kubenswrapper[4778]: I0312 13:46:09.096621 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l8q2d" event={"ID":"af355b7f-362d-4f00-96fb-07f77590df88","Type":"ContainerDied","Data":"d99e4622ec7401bc478f28e049daf1fe07e501d51d99bc26762c1751e79641f1"} Mar 12 13:46:11 crc kubenswrapper[4778]: I0312 13:46:11.121264 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l8q2d" event={"ID":"af355b7f-362d-4f00-96fb-07f77590df88","Type":"ContainerStarted","Data":"7eb94c616cc863bfc286172d8cd2e64a1dfd8876e97f1d5ec12fdbf103ee91e6"} Mar 12 13:46:11 crc kubenswrapper[4778]: I0312 13:46:11.142309 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-l8q2d" podStartSLOduration=4.071254563 podStartE2EDuration="7.142292192s" podCreationTimestamp="2026-03-12 13:46:04 +0000 UTC" firstStartedPulling="2026-03-12 13:46:07.06236947 +0000 UTC m=+2185.511064886" lastFinishedPulling="2026-03-12 13:46:10.133407119 +0000 UTC m=+2188.582102515" observedRunningTime="2026-03-12 13:46:11.141674794 +0000 UTC m=+2189.590370200" watchObservedRunningTime="2026-03-12 13:46:11.142292192 +0000 UTC m=+2189.590987588" Mar 12 13:46:15 crc kubenswrapper[4778]: I0312 13:46:15.255966 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-l8q2d" Mar 12 13:46:15 crc kubenswrapper[4778]: I0312 13:46:15.256677 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-l8q2d" Mar 12 13:46:16 crc kubenswrapper[4778]: I0312 13:46:16.306113 4778 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l8q2d" podUID="af355b7f-362d-4f00-96fb-07f77590df88" containerName="registry-server" 
probeResult="failure" output=< Mar 12 13:46:16 crc kubenswrapper[4778]: timeout: failed to connect service ":50051" within 1s Mar 12 13:46:16 crc kubenswrapper[4778]: > Mar 12 13:46:19 crc kubenswrapper[4778]: I0312 13:46:19.020594 4778 scope.go:117] "RemoveContainer" containerID="a3547232ddc46df5ded5cc24fff2ec3e7c8bb4fb4c52277d66e27c319ec41995" Mar 12 13:46:19 crc kubenswrapper[4778]: I0312 13:46:19.068513 4778 scope.go:117] "RemoveContainer" containerID="625dea5df6820f4416903072a858eb0ac8d225248a71973001f9856768eaad43" Mar 12 13:46:25 crc kubenswrapper[4778]: I0312 13:46:25.309924 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-l8q2d" Mar 12 13:46:25 crc kubenswrapper[4778]: I0312 13:46:25.367215 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-l8q2d" Mar 12 13:46:25 crc kubenswrapper[4778]: I0312 13:46:25.555069 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l8q2d"] Mar 12 13:46:27 crc kubenswrapper[4778]: I0312 13:46:27.250358 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-l8q2d" podUID="af355b7f-362d-4f00-96fb-07f77590df88" containerName="registry-server" containerID="cri-o://7eb94c616cc863bfc286172d8cd2e64a1dfd8876e97f1d5ec12fdbf103ee91e6" gracePeriod=2 Mar 12 13:46:27 crc kubenswrapper[4778]: I0312 13:46:27.770054 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l8q2d" Mar 12 13:46:27 crc kubenswrapper[4778]: I0312 13:46:27.856958 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfslw\" (UniqueName: \"kubernetes.io/projected/af355b7f-362d-4f00-96fb-07f77590df88-kube-api-access-gfslw\") pod \"af355b7f-362d-4f00-96fb-07f77590df88\" (UID: \"af355b7f-362d-4f00-96fb-07f77590df88\") " Mar 12 13:46:27 crc kubenswrapper[4778]: I0312 13:46:27.857143 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af355b7f-362d-4f00-96fb-07f77590df88-catalog-content\") pod \"af355b7f-362d-4f00-96fb-07f77590df88\" (UID: \"af355b7f-362d-4f00-96fb-07f77590df88\") " Mar 12 13:46:27 crc kubenswrapper[4778]: I0312 13:46:27.857258 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af355b7f-362d-4f00-96fb-07f77590df88-utilities\") pod \"af355b7f-362d-4f00-96fb-07f77590df88\" (UID: \"af355b7f-362d-4f00-96fb-07f77590df88\") " Mar 12 13:46:27 crc kubenswrapper[4778]: I0312 13:46:27.858329 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af355b7f-362d-4f00-96fb-07f77590df88-utilities" (OuterVolumeSpecName: "utilities") pod "af355b7f-362d-4f00-96fb-07f77590df88" (UID: "af355b7f-362d-4f00-96fb-07f77590df88"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:46:27 crc kubenswrapper[4778]: I0312 13:46:27.873948 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af355b7f-362d-4f00-96fb-07f77590df88-kube-api-access-gfslw" (OuterVolumeSpecName: "kube-api-access-gfslw") pod "af355b7f-362d-4f00-96fb-07f77590df88" (UID: "af355b7f-362d-4f00-96fb-07f77590df88"). InnerVolumeSpecName "kube-api-access-gfslw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:46:27 crc kubenswrapper[4778]: I0312 13:46:27.959444 4778 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af355b7f-362d-4f00-96fb-07f77590df88-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 13:46:27 crc kubenswrapper[4778]: I0312 13:46:27.959483 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfslw\" (UniqueName: \"kubernetes.io/projected/af355b7f-362d-4f00-96fb-07f77590df88-kube-api-access-gfslw\") on node \"crc\" DevicePath \"\"" Mar 12 13:46:27 crc kubenswrapper[4778]: I0312 13:46:27.993617 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af355b7f-362d-4f00-96fb-07f77590df88-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "af355b7f-362d-4f00-96fb-07f77590df88" (UID: "af355b7f-362d-4f00-96fb-07f77590df88"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:46:28 crc kubenswrapper[4778]: I0312 13:46:28.061949 4778 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af355b7f-362d-4f00-96fb-07f77590df88-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 13:46:28 crc kubenswrapper[4778]: I0312 13:46:28.267661 4778 generic.go:334] "Generic (PLEG): container finished" podID="af355b7f-362d-4f00-96fb-07f77590df88" containerID="7eb94c616cc863bfc286172d8cd2e64a1dfd8876e97f1d5ec12fdbf103ee91e6" exitCode=0 Mar 12 13:46:28 crc kubenswrapper[4778]: I0312 13:46:28.267940 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l8q2d" Mar 12 13:46:28 crc kubenswrapper[4778]: I0312 13:46:28.274946 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l8q2d" event={"ID":"af355b7f-362d-4f00-96fb-07f77590df88","Type":"ContainerDied","Data":"7eb94c616cc863bfc286172d8cd2e64a1dfd8876e97f1d5ec12fdbf103ee91e6"} Mar 12 13:46:28 crc kubenswrapper[4778]: I0312 13:46:28.275009 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l8q2d" event={"ID":"af355b7f-362d-4f00-96fb-07f77590df88","Type":"ContainerDied","Data":"32b7530aa6acd71d36db41ee270d32aa3aa01b7f38534e4287b54333a34c61e6"} Mar 12 13:46:28 crc kubenswrapper[4778]: I0312 13:46:28.275032 4778 scope.go:117] "RemoveContainer" containerID="7eb94c616cc863bfc286172d8cd2e64a1dfd8876e97f1d5ec12fdbf103ee91e6" Mar 12 13:46:28 crc kubenswrapper[4778]: I0312 13:46:28.299259 4778 scope.go:117] "RemoveContainer" containerID="d99e4622ec7401bc478f28e049daf1fe07e501d51d99bc26762c1751e79641f1" Mar 12 13:46:28 crc kubenswrapper[4778]: I0312 13:46:28.317778 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l8q2d"] Mar 12 13:46:28 crc kubenswrapper[4778]: I0312 13:46:28.326620 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-l8q2d"] Mar 12 13:46:28 crc kubenswrapper[4778]: I0312 13:46:28.339320 4778 scope.go:117] "RemoveContainer" containerID="d774e6cca00651044cc9a3c3d37f2a22c2a44d3dee64303589589539c636c16f" Mar 12 13:46:28 crc kubenswrapper[4778]: I0312 13:46:28.376478 4778 scope.go:117] "RemoveContainer" containerID="7eb94c616cc863bfc286172d8cd2e64a1dfd8876e97f1d5ec12fdbf103ee91e6" Mar 12 13:46:28 crc kubenswrapper[4778]: E0312 13:46:28.377127 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"7eb94c616cc863bfc286172d8cd2e64a1dfd8876e97f1d5ec12fdbf103ee91e6\": container with ID starting with 7eb94c616cc863bfc286172d8cd2e64a1dfd8876e97f1d5ec12fdbf103ee91e6 not found: ID does not exist" containerID="7eb94c616cc863bfc286172d8cd2e64a1dfd8876e97f1d5ec12fdbf103ee91e6" Mar 12 13:46:28 crc kubenswrapper[4778]: I0312 13:46:28.377195 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7eb94c616cc863bfc286172d8cd2e64a1dfd8876e97f1d5ec12fdbf103ee91e6"} err="failed to get container status \"7eb94c616cc863bfc286172d8cd2e64a1dfd8876e97f1d5ec12fdbf103ee91e6\": rpc error: code = NotFound desc = could not find container \"7eb94c616cc863bfc286172d8cd2e64a1dfd8876e97f1d5ec12fdbf103ee91e6\": container with ID starting with 7eb94c616cc863bfc286172d8cd2e64a1dfd8876e97f1d5ec12fdbf103ee91e6 not found: ID does not exist" Mar 12 13:46:28 crc kubenswrapper[4778]: I0312 13:46:28.377222 4778 scope.go:117] "RemoveContainer" containerID="d99e4622ec7401bc478f28e049daf1fe07e501d51d99bc26762c1751e79641f1" Mar 12 13:46:28 crc kubenswrapper[4778]: E0312 13:46:28.377805 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d99e4622ec7401bc478f28e049daf1fe07e501d51d99bc26762c1751e79641f1\": container with ID starting with d99e4622ec7401bc478f28e049daf1fe07e501d51d99bc26762c1751e79641f1 not found: ID does not exist" containerID="d99e4622ec7401bc478f28e049daf1fe07e501d51d99bc26762c1751e79641f1" Mar 12 13:46:28 crc kubenswrapper[4778]: I0312 13:46:28.377894 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d99e4622ec7401bc478f28e049daf1fe07e501d51d99bc26762c1751e79641f1"} err="failed to get container status \"d99e4622ec7401bc478f28e049daf1fe07e501d51d99bc26762c1751e79641f1\": rpc error: code = NotFound desc = could not find container \"d99e4622ec7401bc478f28e049daf1fe07e501d51d99bc26762c1751e79641f1\": container with ID 
starting with d99e4622ec7401bc478f28e049daf1fe07e501d51d99bc26762c1751e79641f1 not found: ID does not exist" Mar 12 13:46:28 crc kubenswrapper[4778]: I0312 13:46:28.377945 4778 scope.go:117] "RemoveContainer" containerID="d774e6cca00651044cc9a3c3d37f2a22c2a44d3dee64303589589539c636c16f" Mar 12 13:46:28 crc kubenswrapper[4778]: E0312 13:46:28.378554 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d774e6cca00651044cc9a3c3d37f2a22c2a44d3dee64303589589539c636c16f\": container with ID starting with d774e6cca00651044cc9a3c3d37f2a22c2a44d3dee64303589589539c636c16f not found: ID does not exist" containerID="d774e6cca00651044cc9a3c3d37f2a22c2a44d3dee64303589589539c636c16f" Mar 12 13:46:28 crc kubenswrapper[4778]: I0312 13:46:28.378592 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d774e6cca00651044cc9a3c3d37f2a22c2a44d3dee64303589589539c636c16f"} err="failed to get container status \"d774e6cca00651044cc9a3c3d37f2a22c2a44d3dee64303589589539c636c16f\": rpc error: code = NotFound desc = could not find container \"d774e6cca00651044cc9a3c3d37f2a22c2a44d3dee64303589589539c636c16f\": container with ID starting with d774e6cca00651044cc9a3c3d37f2a22c2a44d3dee64303589589539c636c16f not found: ID does not exist" Mar 12 13:46:30 crc kubenswrapper[4778]: I0312 13:46:30.264628 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af355b7f-362d-4f00-96fb-07f77590df88" path="/var/lib/kubelet/pods/af355b7f-362d-4f00-96fb-07f77590df88/volumes" Mar 12 13:46:33 crc kubenswrapper[4778]: I0312 13:46:33.323104 4778 generic.go:334] "Generic (PLEG): container finished" podID="29f8609b-4a3b-42ba-9450-a2b633bb4c2c" containerID="6d041d9afe88fdec32af7da1355c78c05ba3b4da3265df6c00f4f7416ed27298" exitCode=0 Mar 12 13:46:33 crc kubenswrapper[4778]: I0312 13:46:33.323151 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g252n" event={"ID":"29f8609b-4a3b-42ba-9450-a2b633bb4c2c","Type":"ContainerDied","Data":"6d041d9afe88fdec32af7da1355c78c05ba3b4da3265df6c00f4f7416ed27298"} Mar 12 13:46:34 crc kubenswrapper[4778]: I0312 13:46:34.832104 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g252n" Mar 12 13:46:34 crc kubenswrapper[4778]: I0312 13:46:34.920824 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/29f8609b-4a3b-42ba-9450-a2b633bb4c2c-ssh-key-openstack-edpm-ipam\") pod \"29f8609b-4a3b-42ba-9450-a2b633bb4c2c\" (UID: \"29f8609b-4a3b-42ba-9450-a2b633bb4c2c\") " Mar 12 13:46:34 crc kubenswrapper[4778]: I0312 13:46:34.920986 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zh8c\" (UniqueName: \"kubernetes.io/projected/29f8609b-4a3b-42ba-9450-a2b633bb4c2c-kube-api-access-5zh8c\") pod \"29f8609b-4a3b-42ba-9450-a2b633bb4c2c\" (UID: \"29f8609b-4a3b-42ba-9450-a2b633bb4c2c\") " Mar 12 13:46:34 crc kubenswrapper[4778]: I0312 13:46:34.921015 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/29f8609b-4a3b-42ba-9450-a2b633bb4c2c-inventory\") pod \"29f8609b-4a3b-42ba-9450-a2b633bb4c2c\" (UID: \"29f8609b-4a3b-42ba-9450-a2b633bb4c2c\") " Mar 12 13:46:34 crc kubenswrapper[4778]: I0312 13:46:34.928887 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29f8609b-4a3b-42ba-9450-a2b633bb4c2c-kube-api-access-5zh8c" (OuterVolumeSpecName: "kube-api-access-5zh8c") pod "29f8609b-4a3b-42ba-9450-a2b633bb4c2c" (UID: "29f8609b-4a3b-42ba-9450-a2b633bb4c2c"). InnerVolumeSpecName "kube-api-access-5zh8c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:46:34 crc kubenswrapper[4778]: I0312 13:46:34.950957 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29f8609b-4a3b-42ba-9450-a2b633bb4c2c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "29f8609b-4a3b-42ba-9450-a2b633bb4c2c" (UID: "29f8609b-4a3b-42ba-9450-a2b633bb4c2c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:46:34 crc kubenswrapper[4778]: I0312 13:46:34.957501 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29f8609b-4a3b-42ba-9450-a2b633bb4c2c-inventory" (OuterVolumeSpecName: "inventory") pod "29f8609b-4a3b-42ba-9450-a2b633bb4c2c" (UID: "29f8609b-4a3b-42ba-9450-a2b633bb4c2c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:46:35 crc kubenswrapper[4778]: I0312 13:46:35.024267 4778 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/29f8609b-4a3b-42ba-9450-a2b633bb4c2c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 13:46:35 crc kubenswrapper[4778]: I0312 13:46:35.024303 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zh8c\" (UniqueName: \"kubernetes.io/projected/29f8609b-4a3b-42ba-9450-a2b633bb4c2c-kube-api-access-5zh8c\") on node \"crc\" DevicePath \"\"" Mar 12 13:46:35 crc kubenswrapper[4778]: I0312 13:46:35.024315 4778 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/29f8609b-4a3b-42ba-9450-a2b633bb4c2c-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 13:46:35 crc kubenswrapper[4778]: I0312 13:46:35.339848 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g252n" 
event={"ID":"29f8609b-4a3b-42ba-9450-a2b633bb4c2c","Type":"ContainerDied","Data":"73a22b7b256ee030de16c8af304a9e966b8c86513de32e0f1f1efbf8d69bce4e"} Mar 12 13:46:35 crc kubenswrapper[4778]: I0312 13:46:35.339902 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73a22b7b256ee030de16c8af304a9e966b8c86513de32e0f1f1efbf8d69bce4e" Mar 12 13:46:35 crc kubenswrapper[4778]: I0312 13:46:35.339994 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-g252n" Mar 12 13:46:35 crc kubenswrapper[4778]: I0312 13:46:35.425167 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jg9z6"] Mar 12 13:46:35 crc kubenswrapper[4778]: E0312 13:46:35.426588 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f55c85e9-4cb7-4ac4-bc3d-c37217b4abf6" containerName="oc" Mar 12 13:46:35 crc kubenswrapper[4778]: I0312 13:46:35.426677 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="f55c85e9-4cb7-4ac4-bc3d-c37217b4abf6" containerName="oc" Mar 12 13:46:35 crc kubenswrapper[4778]: E0312 13:46:35.426750 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af355b7f-362d-4f00-96fb-07f77590df88" containerName="registry-server" Mar 12 13:46:35 crc kubenswrapper[4778]: I0312 13:46:35.426815 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="af355b7f-362d-4f00-96fb-07f77590df88" containerName="registry-server" Mar 12 13:46:35 crc kubenswrapper[4778]: E0312 13:46:35.426884 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29f8609b-4a3b-42ba-9450-a2b633bb4c2c" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Mar 12 13:46:35 crc kubenswrapper[4778]: I0312 13:46:35.426942 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="29f8609b-4a3b-42ba-9450-a2b633bb4c2c" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Mar 
12 13:46:35 crc kubenswrapper[4778]: E0312 13:46:35.427013 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af355b7f-362d-4f00-96fb-07f77590df88" containerName="extract-utilities" Mar 12 13:46:35 crc kubenswrapper[4778]: I0312 13:46:35.427071 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="af355b7f-362d-4f00-96fb-07f77590df88" containerName="extract-utilities" Mar 12 13:46:35 crc kubenswrapper[4778]: E0312 13:46:35.427144 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af355b7f-362d-4f00-96fb-07f77590df88" containerName="extract-content" Mar 12 13:46:35 crc kubenswrapper[4778]: I0312 13:46:35.427223 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="af355b7f-362d-4f00-96fb-07f77590df88" containerName="extract-content" Mar 12 13:46:35 crc kubenswrapper[4778]: I0312 13:46:35.427465 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="af355b7f-362d-4f00-96fb-07f77590df88" containerName="registry-server" Mar 12 13:46:35 crc kubenswrapper[4778]: I0312 13:46:35.427539 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="f55c85e9-4cb7-4ac4-bc3d-c37217b4abf6" containerName="oc" Mar 12 13:46:35 crc kubenswrapper[4778]: I0312 13:46:35.427613 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="29f8609b-4a3b-42ba-9450-a2b633bb4c2c" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Mar 12 13:46:35 crc kubenswrapper[4778]: I0312 13:46:35.428290 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jg9z6" Mar 12 13:46:35 crc kubenswrapper[4778]: I0312 13:46:35.436303 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 13:46:35 crc kubenswrapper[4778]: I0312 13:46:35.436593 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 13:46:35 crc kubenswrapper[4778]: I0312 13:46:35.437989 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 13:46:35 crc kubenswrapper[4778]: I0312 13:46:35.439448 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jg9z6"] Mar 12 13:46:35 crc kubenswrapper[4778]: I0312 13:46:35.445964 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qn2vx" Mar 12 13:46:35 crc kubenswrapper[4778]: I0312 13:46:35.539956 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/36bb4acd-fab3-4998-a8cd-a6ebcc800fc8-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jg9z6\" (UID: \"36bb4acd-fab3-4998-a8cd-a6ebcc800fc8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jg9z6" Mar 12 13:46:35 crc kubenswrapper[4778]: I0312 13:46:35.540044 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/36bb4acd-fab3-4998-a8cd-a6ebcc800fc8-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jg9z6\" (UID: \"36bb4acd-fab3-4998-a8cd-a6ebcc800fc8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jg9z6" Mar 12 13:46:35 crc kubenswrapper[4778]: I0312 13:46:35.540089 
4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m74f9\" (UniqueName: \"kubernetes.io/projected/36bb4acd-fab3-4998-a8cd-a6ebcc800fc8-kube-api-access-m74f9\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jg9z6\" (UID: \"36bb4acd-fab3-4998-a8cd-a6ebcc800fc8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jg9z6" Mar 12 13:46:35 crc kubenswrapper[4778]: I0312 13:46:35.642401 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/36bb4acd-fab3-4998-a8cd-a6ebcc800fc8-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jg9z6\" (UID: \"36bb4acd-fab3-4998-a8cd-a6ebcc800fc8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jg9z6" Mar 12 13:46:35 crc kubenswrapper[4778]: I0312 13:46:35.642804 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/36bb4acd-fab3-4998-a8cd-a6ebcc800fc8-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jg9z6\" (UID: \"36bb4acd-fab3-4998-a8cd-a6ebcc800fc8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jg9z6" Mar 12 13:46:35 crc kubenswrapper[4778]: I0312 13:46:35.642879 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m74f9\" (UniqueName: \"kubernetes.io/projected/36bb4acd-fab3-4998-a8cd-a6ebcc800fc8-kube-api-access-m74f9\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jg9z6\" (UID: \"36bb4acd-fab3-4998-a8cd-a6ebcc800fc8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jg9z6" Mar 12 13:46:35 crc kubenswrapper[4778]: I0312 13:46:35.648672 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/36bb4acd-fab3-4998-a8cd-a6ebcc800fc8-inventory\") pod 
\"configure-os-edpm-deployment-openstack-edpm-ipam-jg9z6\" (UID: \"36bb4acd-fab3-4998-a8cd-a6ebcc800fc8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jg9z6" Mar 12 13:46:35 crc kubenswrapper[4778]: I0312 13:46:35.649073 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/36bb4acd-fab3-4998-a8cd-a6ebcc800fc8-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jg9z6\" (UID: \"36bb4acd-fab3-4998-a8cd-a6ebcc800fc8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jg9z6" Mar 12 13:46:35 crc kubenswrapper[4778]: I0312 13:46:35.657992 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m74f9\" (UniqueName: \"kubernetes.io/projected/36bb4acd-fab3-4998-a8cd-a6ebcc800fc8-kube-api-access-m74f9\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-jg9z6\" (UID: \"36bb4acd-fab3-4998-a8cd-a6ebcc800fc8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jg9z6" Mar 12 13:46:35 crc kubenswrapper[4778]: I0312 13:46:35.749701 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jg9z6" Mar 12 13:46:36 crc kubenswrapper[4778]: I0312 13:46:36.281441 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jg9z6"] Mar 12 13:46:36 crc kubenswrapper[4778]: I0312 13:46:36.350300 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jg9z6" event={"ID":"36bb4acd-fab3-4998-a8cd-a6ebcc800fc8","Type":"ContainerStarted","Data":"cf09e698453bb9cd997120f8269dfad3a83af17a3258f80df0e30f8d276af66c"} Mar 12 13:46:37 crc kubenswrapper[4778]: I0312 13:46:37.358900 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jg9z6" event={"ID":"36bb4acd-fab3-4998-a8cd-a6ebcc800fc8","Type":"ContainerStarted","Data":"e50c9beda337639076d5173ca63e16ac54e3a0b42a8b327eda05f2300a28b3fe"} Mar 12 13:46:37 crc kubenswrapper[4778]: I0312 13:46:37.382537 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jg9z6" podStartSLOduration=1.8789279159999999 podStartE2EDuration="2.382517518s" podCreationTimestamp="2026-03-12 13:46:35 +0000 UTC" firstStartedPulling="2026-03-12 13:46:36.287096448 +0000 UTC m=+2214.735791834" lastFinishedPulling="2026-03-12 13:46:36.79068604 +0000 UTC m=+2215.239381436" observedRunningTime="2026-03-12 13:46:37.376072354 +0000 UTC m=+2215.824767750" watchObservedRunningTime="2026-03-12 13:46:37.382517518 +0000 UTC m=+2215.831212914" Mar 12 13:46:58 crc kubenswrapper[4778]: I0312 13:46:58.557972 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 13:46:58 crc 
kubenswrapper[4778]: I0312 13:46:58.559328 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 13:47:25 crc kubenswrapper[4778]: I0312 13:47:25.572827 4778 generic.go:334] "Generic (PLEG): container finished" podID="36bb4acd-fab3-4998-a8cd-a6ebcc800fc8" containerID="e50c9beda337639076d5173ca63e16ac54e3a0b42a8b327eda05f2300a28b3fe" exitCode=0 Mar 12 13:47:25 crc kubenswrapper[4778]: I0312 13:47:25.572917 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jg9z6" event={"ID":"36bb4acd-fab3-4998-a8cd-a6ebcc800fc8","Type":"ContainerDied","Data":"e50c9beda337639076d5173ca63e16ac54e3a0b42a8b327eda05f2300a28b3fe"} Mar 12 13:47:27 crc kubenswrapper[4778]: I0312 13:47:27.008325 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jg9z6" Mar 12 13:47:27 crc kubenswrapper[4778]: I0312 13:47:27.141400 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m74f9\" (UniqueName: \"kubernetes.io/projected/36bb4acd-fab3-4998-a8cd-a6ebcc800fc8-kube-api-access-m74f9\") pod \"36bb4acd-fab3-4998-a8cd-a6ebcc800fc8\" (UID: \"36bb4acd-fab3-4998-a8cd-a6ebcc800fc8\") " Mar 12 13:47:27 crc kubenswrapper[4778]: I0312 13:47:27.141600 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/36bb4acd-fab3-4998-a8cd-a6ebcc800fc8-inventory\") pod \"36bb4acd-fab3-4998-a8cd-a6ebcc800fc8\" (UID: \"36bb4acd-fab3-4998-a8cd-a6ebcc800fc8\") " Mar 12 13:47:27 crc kubenswrapper[4778]: I0312 13:47:27.141754 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/36bb4acd-fab3-4998-a8cd-a6ebcc800fc8-ssh-key-openstack-edpm-ipam\") pod \"36bb4acd-fab3-4998-a8cd-a6ebcc800fc8\" (UID: \"36bb4acd-fab3-4998-a8cd-a6ebcc800fc8\") " Mar 12 13:47:27 crc kubenswrapper[4778]: I0312 13:47:27.150774 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36bb4acd-fab3-4998-a8cd-a6ebcc800fc8-kube-api-access-m74f9" (OuterVolumeSpecName: "kube-api-access-m74f9") pod "36bb4acd-fab3-4998-a8cd-a6ebcc800fc8" (UID: "36bb4acd-fab3-4998-a8cd-a6ebcc800fc8"). InnerVolumeSpecName "kube-api-access-m74f9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:47:27 crc kubenswrapper[4778]: I0312 13:47:27.172025 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36bb4acd-fab3-4998-a8cd-a6ebcc800fc8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "36bb4acd-fab3-4998-a8cd-a6ebcc800fc8" (UID: "36bb4acd-fab3-4998-a8cd-a6ebcc800fc8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:47:27 crc kubenswrapper[4778]: I0312 13:47:27.172505 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36bb4acd-fab3-4998-a8cd-a6ebcc800fc8-inventory" (OuterVolumeSpecName: "inventory") pod "36bb4acd-fab3-4998-a8cd-a6ebcc800fc8" (UID: "36bb4acd-fab3-4998-a8cd-a6ebcc800fc8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:47:27 crc kubenswrapper[4778]: I0312 13:47:27.243600 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m74f9\" (UniqueName: \"kubernetes.io/projected/36bb4acd-fab3-4998-a8cd-a6ebcc800fc8-kube-api-access-m74f9\") on node \"crc\" DevicePath \"\"" Mar 12 13:47:27 crc kubenswrapper[4778]: I0312 13:47:27.243641 4778 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/36bb4acd-fab3-4998-a8cd-a6ebcc800fc8-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 13:47:27 crc kubenswrapper[4778]: I0312 13:47:27.243654 4778 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/36bb4acd-fab3-4998-a8cd-a6ebcc800fc8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 13:47:27 crc kubenswrapper[4778]: I0312 13:47:27.593418 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jg9z6" 
event={"ID":"36bb4acd-fab3-4998-a8cd-a6ebcc800fc8","Type":"ContainerDied","Data":"cf09e698453bb9cd997120f8269dfad3a83af17a3258f80df0e30f8d276af66c"} Mar 12 13:47:27 crc kubenswrapper[4778]: I0312 13:47:27.593512 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf09e698453bb9cd997120f8269dfad3a83af17a3258f80df0e30f8d276af66c" Mar 12 13:47:27 crc kubenswrapper[4778]: I0312 13:47:27.593439 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-jg9z6" Mar 12 13:47:27 crc kubenswrapper[4778]: I0312 13:47:27.694886 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-8mmjm"] Mar 12 13:47:27 crc kubenswrapper[4778]: E0312 13:47:27.695355 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36bb4acd-fab3-4998-a8cd-a6ebcc800fc8" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Mar 12 13:47:27 crc kubenswrapper[4778]: I0312 13:47:27.695371 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="36bb4acd-fab3-4998-a8cd-a6ebcc800fc8" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Mar 12 13:47:27 crc kubenswrapper[4778]: I0312 13:47:27.695539 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="36bb4acd-fab3-4998-a8cd-a6ebcc800fc8" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Mar 12 13:47:27 crc kubenswrapper[4778]: I0312 13:47:27.696160 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-8mmjm" Mar 12 13:47:27 crc kubenswrapper[4778]: I0312 13:47:27.699147 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qn2vx" Mar 12 13:47:27 crc kubenswrapper[4778]: I0312 13:47:27.699336 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 13:47:27 crc kubenswrapper[4778]: I0312 13:47:27.699458 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 13:47:27 crc kubenswrapper[4778]: I0312 13:47:27.699686 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 13:47:27 crc kubenswrapper[4778]: I0312 13:47:27.714236 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-8mmjm"] Mar 12 13:47:27 crc kubenswrapper[4778]: I0312 13:47:27.856328 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c993b33e-6c36-4524-864a-65da461a8e0c-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-8mmjm\" (UID: \"c993b33e-6c36-4524-864a-65da461a8e0c\") " pod="openstack/ssh-known-hosts-edpm-deployment-8mmjm" Mar 12 13:47:27 crc kubenswrapper[4778]: I0312 13:47:27.856725 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c993b33e-6c36-4524-864a-65da461a8e0c-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-8mmjm\" (UID: \"c993b33e-6c36-4524-864a-65da461a8e0c\") " pod="openstack/ssh-known-hosts-edpm-deployment-8mmjm" Mar 12 13:47:27 crc kubenswrapper[4778]: I0312 13:47:27.857122 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-mxkkp\" (UniqueName: \"kubernetes.io/projected/c993b33e-6c36-4524-864a-65da461a8e0c-kube-api-access-mxkkp\") pod \"ssh-known-hosts-edpm-deployment-8mmjm\" (UID: \"c993b33e-6c36-4524-864a-65da461a8e0c\") " pod="openstack/ssh-known-hosts-edpm-deployment-8mmjm" Mar 12 13:47:27 crc kubenswrapper[4778]: I0312 13:47:27.959083 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxkkp\" (UniqueName: \"kubernetes.io/projected/c993b33e-6c36-4524-864a-65da461a8e0c-kube-api-access-mxkkp\") pod \"ssh-known-hosts-edpm-deployment-8mmjm\" (UID: \"c993b33e-6c36-4524-864a-65da461a8e0c\") " pod="openstack/ssh-known-hosts-edpm-deployment-8mmjm" Mar 12 13:47:27 crc kubenswrapper[4778]: I0312 13:47:27.959217 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c993b33e-6c36-4524-864a-65da461a8e0c-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-8mmjm\" (UID: \"c993b33e-6c36-4524-864a-65da461a8e0c\") " pod="openstack/ssh-known-hosts-edpm-deployment-8mmjm" Mar 12 13:47:27 crc kubenswrapper[4778]: I0312 13:47:27.959291 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c993b33e-6c36-4524-864a-65da461a8e0c-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-8mmjm\" (UID: \"c993b33e-6c36-4524-864a-65da461a8e0c\") " pod="openstack/ssh-known-hosts-edpm-deployment-8mmjm" Mar 12 13:47:27 crc kubenswrapper[4778]: I0312 13:47:27.965489 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c993b33e-6c36-4524-864a-65da461a8e0c-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-8mmjm\" (UID: \"c993b33e-6c36-4524-864a-65da461a8e0c\") " pod="openstack/ssh-known-hosts-edpm-deployment-8mmjm" Mar 12 13:47:27 crc kubenswrapper[4778]: I0312 13:47:27.966953 4778 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c993b33e-6c36-4524-864a-65da461a8e0c-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-8mmjm\" (UID: \"c993b33e-6c36-4524-864a-65da461a8e0c\") " pod="openstack/ssh-known-hosts-edpm-deployment-8mmjm" Mar 12 13:47:27 crc kubenswrapper[4778]: I0312 13:47:27.979211 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxkkp\" (UniqueName: \"kubernetes.io/projected/c993b33e-6c36-4524-864a-65da461a8e0c-kube-api-access-mxkkp\") pod \"ssh-known-hosts-edpm-deployment-8mmjm\" (UID: \"c993b33e-6c36-4524-864a-65da461a8e0c\") " pod="openstack/ssh-known-hosts-edpm-deployment-8mmjm" Mar 12 13:47:28 crc kubenswrapper[4778]: I0312 13:47:28.015397 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-8mmjm" Mar 12 13:47:28 crc kubenswrapper[4778]: I0312 13:47:28.526104 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-8mmjm"] Mar 12 13:47:28 crc kubenswrapper[4778]: I0312 13:47:28.557618 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 13:47:28 crc kubenswrapper[4778]: I0312 13:47:28.557695 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 13:47:28 crc kubenswrapper[4778]: I0312 13:47:28.601891 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ssh-known-hosts-edpm-deployment-8mmjm" event={"ID":"c993b33e-6c36-4524-864a-65da461a8e0c","Type":"ContainerStarted","Data":"40a927d39fc02c94f89eec5982f76353c83245af8433d34ae91ec6311691714a"} Mar 12 13:47:29 crc kubenswrapper[4778]: I0312 13:47:29.614076 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-8mmjm" event={"ID":"c993b33e-6c36-4524-864a-65da461a8e0c","Type":"ContainerStarted","Data":"4db62d3ee6fc5f306e1af4017f76ced458e8adef04cfa2db62bf3d06afae11ed"} Mar 12 13:47:29 crc kubenswrapper[4778]: I0312 13:47:29.636458 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-8mmjm" podStartSLOduration=2.052265158 podStartE2EDuration="2.636437029s" podCreationTimestamp="2026-03-12 13:47:27 +0000 UTC" firstStartedPulling="2026-03-12 13:47:28.533944187 +0000 UTC m=+2266.982639593" lastFinishedPulling="2026-03-12 13:47:29.118116068 +0000 UTC m=+2267.566811464" observedRunningTime="2026-03-12 13:47:29.628175914 +0000 UTC m=+2268.076871310" watchObservedRunningTime="2026-03-12 13:47:29.636437029 +0000 UTC m=+2268.085132415" Mar 12 13:47:36 crc kubenswrapper[4778]: I0312 13:47:36.676581 4778 generic.go:334] "Generic (PLEG): container finished" podID="c993b33e-6c36-4524-864a-65da461a8e0c" containerID="4db62d3ee6fc5f306e1af4017f76ced458e8adef04cfa2db62bf3d06afae11ed" exitCode=0 Mar 12 13:47:36 crc kubenswrapper[4778]: I0312 13:47:36.676708 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-8mmjm" event={"ID":"c993b33e-6c36-4524-864a-65da461a8e0c","Type":"ContainerDied","Data":"4db62d3ee6fc5f306e1af4017f76ced458e8adef04cfa2db62bf3d06afae11ed"} Mar 12 13:47:38 crc kubenswrapper[4778]: I0312 13:47:38.146072 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-8mmjm" Mar 12 13:47:38 crc kubenswrapper[4778]: I0312 13:47:38.272145 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxkkp\" (UniqueName: \"kubernetes.io/projected/c993b33e-6c36-4524-864a-65da461a8e0c-kube-api-access-mxkkp\") pod \"c993b33e-6c36-4524-864a-65da461a8e0c\" (UID: \"c993b33e-6c36-4524-864a-65da461a8e0c\") " Mar 12 13:47:38 crc kubenswrapper[4778]: I0312 13:47:38.272313 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c993b33e-6c36-4524-864a-65da461a8e0c-ssh-key-openstack-edpm-ipam\") pod \"c993b33e-6c36-4524-864a-65da461a8e0c\" (UID: \"c993b33e-6c36-4524-864a-65da461a8e0c\") " Mar 12 13:47:38 crc kubenswrapper[4778]: I0312 13:47:38.272358 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c993b33e-6c36-4524-864a-65da461a8e0c-inventory-0\") pod \"c993b33e-6c36-4524-864a-65da461a8e0c\" (UID: \"c993b33e-6c36-4524-864a-65da461a8e0c\") " Mar 12 13:47:38 crc kubenswrapper[4778]: I0312 13:47:38.277269 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c993b33e-6c36-4524-864a-65da461a8e0c-kube-api-access-mxkkp" (OuterVolumeSpecName: "kube-api-access-mxkkp") pod "c993b33e-6c36-4524-864a-65da461a8e0c" (UID: "c993b33e-6c36-4524-864a-65da461a8e0c"). InnerVolumeSpecName "kube-api-access-mxkkp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:47:38 crc kubenswrapper[4778]: I0312 13:47:38.299026 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c993b33e-6c36-4524-864a-65da461a8e0c-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "c993b33e-6c36-4524-864a-65da461a8e0c" (UID: "c993b33e-6c36-4524-864a-65da461a8e0c"). 
InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:47:38 crc kubenswrapper[4778]: I0312 13:47:38.304111 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c993b33e-6c36-4524-864a-65da461a8e0c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c993b33e-6c36-4524-864a-65da461a8e0c" (UID: "c993b33e-6c36-4524-864a-65da461a8e0c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:47:38 crc kubenswrapper[4778]: I0312 13:47:38.375682 4778 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c993b33e-6c36-4524-864a-65da461a8e0c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 13:47:38 crc kubenswrapper[4778]: I0312 13:47:38.375734 4778 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c993b33e-6c36-4524-864a-65da461a8e0c-inventory-0\") on node \"crc\" DevicePath \"\"" Mar 12 13:47:38 crc kubenswrapper[4778]: I0312 13:47:38.375749 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxkkp\" (UniqueName: \"kubernetes.io/projected/c993b33e-6c36-4524-864a-65da461a8e0c-kube-api-access-mxkkp\") on node \"crc\" DevicePath \"\"" Mar 12 13:47:38 crc kubenswrapper[4778]: I0312 13:47:38.694950 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-8mmjm" event={"ID":"c993b33e-6c36-4524-864a-65da461a8e0c","Type":"ContainerDied","Data":"40a927d39fc02c94f89eec5982f76353c83245af8433d34ae91ec6311691714a"} Mar 12 13:47:38 crc kubenswrapper[4778]: I0312 13:47:38.694992 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40a927d39fc02c94f89eec5982f76353c83245af8433d34ae91ec6311691714a" Mar 12 13:47:38 crc kubenswrapper[4778]: I0312 13:47:38.695047 
4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-8mmjm" Mar 12 13:47:38 crc kubenswrapper[4778]: I0312 13:47:38.781414 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-gt58t"] Mar 12 13:47:38 crc kubenswrapper[4778]: E0312 13:47:38.782145 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c993b33e-6c36-4524-864a-65da461a8e0c" containerName="ssh-known-hosts-edpm-deployment" Mar 12 13:47:38 crc kubenswrapper[4778]: I0312 13:47:38.782169 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="c993b33e-6c36-4524-864a-65da461a8e0c" containerName="ssh-known-hosts-edpm-deployment" Mar 12 13:47:38 crc kubenswrapper[4778]: I0312 13:47:38.782441 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="c993b33e-6c36-4524-864a-65da461a8e0c" containerName="ssh-known-hosts-edpm-deployment" Mar 12 13:47:38 crc kubenswrapper[4778]: I0312 13:47:38.783421 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gt58t" Mar 12 13:47:38 crc kubenswrapper[4778]: I0312 13:47:38.785747 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qn2vx" Mar 12 13:47:38 crc kubenswrapper[4778]: I0312 13:47:38.786006 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 13:47:38 crc kubenswrapper[4778]: I0312 13:47:38.788373 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 13:47:38 crc kubenswrapper[4778]: I0312 13:47:38.788563 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 13:47:38 crc kubenswrapper[4778]: I0312 13:47:38.790617 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-gt58t"] Mar 12 13:47:38 crc kubenswrapper[4778]: I0312 13:47:38.987649 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b0bb06df-44bb-4939-9492-a6ad3d6b5368-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-gt58t\" (UID: \"b0bb06df-44bb-4939-9492-a6ad3d6b5368\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gt58t" Mar 12 13:47:38 crc kubenswrapper[4778]: I0312 13:47:38.987963 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b0bb06df-44bb-4939-9492-a6ad3d6b5368-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-gt58t\" (UID: \"b0bb06df-44bb-4939-9492-a6ad3d6b5368\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gt58t" Mar 12 13:47:38 crc kubenswrapper[4778]: I0312 13:47:38.988007 4778 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrmsd\" (UniqueName: \"kubernetes.io/projected/b0bb06df-44bb-4939-9492-a6ad3d6b5368-kube-api-access-lrmsd\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-gt58t\" (UID: \"b0bb06df-44bb-4939-9492-a6ad3d6b5368\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gt58t" Mar 12 13:47:39 crc kubenswrapper[4778]: I0312 13:47:39.090032 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b0bb06df-44bb-4939-9492-a6ad3d6b5368-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-gt58t\" (UID: \"b0bb06df-44bb-4939-9492-a6ad3d6b5368\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gt58t" Mar 12 13:47:39 crc kubenswrapper[4778]: I0312 13:47:39.090089 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b0bb06df-44bb-4939-9492-a6ad3d6b5368-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-gt58t\" (UID: \"b0bb06df-44bb-4939-9492-a6ad3d6b5368\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gt58t" Mar 12 13:47:39 crc kubenswrapper[4778]: I0312 13:47:39.090123 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrmsd\" (UniqueName: \"kubernetes.io/projected/b0bb06df-44bb-4939-9492-a6ad3d6b5368-kube-api-access-lrmsd\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-gt58t\" (UID: \"b0bb06df-44bb-4939-9492-a6ad3d6b5368\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gt58t" Mar 12 13:47:39 crc kubenswrapper[4778]: I0312 13:47:39.094091 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b0bb06df-44bb-4939-9492-a6ad3d6b5368-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-gt58t\" (UID: 
\"b0bb06df-44bb-4939-9492-a6ad3d6b5368\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gt58t" Mar 12 13:47:39 crc kubenswrapper[4778]: I0312 13:47:39.094486 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b0bb06df-44bb-4939-9492-a6ad3d6b5368-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-gt58t\" (UID: \"b0bb06df-44bb-4939-9492-a6ad3d6b5368\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gt58t" Mar 12 13:47:39 crc kubenswrapper[4778]: I0312 13:47:39.110230 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrmsd\" (UniqueName: \"kubernetes.io/projected/b0bb06df-44bb-4939-9492-a6ad3d6b5368-kube-api-access-lrmsd\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-gt58t\" (UID: \"b0bb06df-44bb-4939-9492-a6ad3d6b5368\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gt58t" Mar 12 13:47:39 crc kubenswrapper[4778]: I0312 13:47:39.401812 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gt58t" Mar 12 13:47:39 crc kubenswrapper[4778]: I0312 13:47:39.917054 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-gt58t"] Mar 12 13:47:40 crc kubenswrapper[4778]: I0312 13:47:40.718787 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gt58t" event={"ID":"b0bb06df-44bb-4939-9492-a6ad3d6b5368","Type":"ContainerStarted","Data":"4406f329a8888ee9cb9a4e349401d009362172ceafbfcf53efeaa42f370ffb2e"} Mar 12 13:47:41 crc kubenswrapper[4778]: I0312 13:47:41.729018 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gt58t" event={"ID":"b0bb06df-44bb-4939-9492-a6ad3d6b5368","Type":"ContainerStarted","Data":"2e5a4191c8140186b288301cee49d0ac609d54636f3611c2d402c382f0805fb8"} Mar 12 13:47:41 crc kubenswrapper[4778]: I0312 13:47:41.745766 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gt58t" podStartSLOduration=3.157034611 podStartE2EDuration="3.745745831s" podCreationTimestamp="2026-03-12 13:47:38 +0000 UTC" firstStartedPulling="2026-03-12 13:47:39.925609377 +0000 UTC m=+2278.374304773" lastFinishedPulling="2026-03-12 13:47:40.514320597 +0000 UTC m=+2278.963015993" observedRunningTime="2026-03-12 13:47:41.744072493 +0000 UTC m=+2280.192767899" watchObservedRunningTime="2026-03-12 13:47:41.745745831 +0000 UTC m=+2280.194441227" Mar 12 13:47:48 crc kubenswrapper[4778]: I0312 13:47:48.791824 4778 generic.go:334] "Generic (PLEG): container finished" podID="b0bb06df-44bb-4939-9492-a6ad3d6b5368" containerID="2e5a4191c8140186b288301cee49d0ac609d54636f3611c2d402c382f0805fb8" exitCode=0 Mar 12 13:47:48 crc kubenswrapper[4778]: I0312 13:47:48.791916 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gt58t" event={"ID":"b0bb06df-44bb-4939-9492-a6ad3d6b5368","Type":"ContainerDied","Data":"2e5a4191c8140186b288301cee49d0ac609d54636f3611c2d402c382f0805fb8"} Mar 12 13:47:50 crc kubenswrapper[4778]: I0312 13:47:50.228268 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gt58t" Mar 12 13:47:50 crc kubenswrapper[4778]: I0312 13:47:50.424080 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b0bb06df-44bb-4939-9492-a6ad3d6b5368-inventory\") pod \"b0bb06df-44bb-4939-9492-a6ad3d6b5368\" (UID: \"b0bb06df-44bb-4939-9492-a6ad3d6b5368\") " Mar 12 13:47:50 crc kubenswrapper[4778]: I0312 13:47:50.424588 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b0bb06df-44bb-4939-9492-a6ad3d6b5368-ssh-key-openstack-edpm-ipam\") pod \"b0bb06df-44bb-4939-9492-a6ad3d6b5368\" (UID: \"b0bb06df-44bb-4939-9492-a6ad3d6b5368\") " Mar 12 13:47:50 crc kubenswrapper[4778]: I0312 13:47:50.424628 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrmsd\" (UniqueName: \"kubernetes.io/projected/b0bb06df-44bb-4939-9492-a6ad3d6b5368-kube-api-access-lrmsd\") pod \"b0bb06df-44bb-4939-9492-a6ad3d6b5368\" (UID: \"b0bb06df-44bb-4939-9492-a6ad3d6b5368\") " Mar 12 13:47:50 crc kubenswrapper[4778]: I0312 13:47:50.431214 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0bb06df-44bb-4939-9492-a6ad3d6b5368-kube-api-access-lrmsd" (OuterVolumeSpecName: "kube-api-access-lrmsd") pod "b0bb06df-44bb-4939-9492-a6ad3d6b5368" (UID: "b0bb06df-44bb-4939-9492-a6ad3d6b5368"). InnerVolumeSpecName "kube-api-access-lrmsd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:47:50 crc kubenswrapper[4778]: I0312 13:47:50.454455 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0bb06df-44bb-4939-9492-a6ad3d6b5368-inventory" (OuterVolumeSpecName: "inventory") pod "b0bb06df-44bb-4939-9492-a6ad3d6b5368" (UID: "b0bb06df-44bb-4939-9492-a6ad3d6b5368"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:47:50 crc kubenswrapper[4778]: I0312 13:47:50.459619 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0bb06df-44bb-4939-9492-a6ad3d6b5368-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b0bb06df-44bb-4939-9492-a6ad3d6b5368" (UID: "b0bb06df-44bb-4939-9492-a6ad3d6b5368"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:47:50 crc kubenswrapper[4778]: I0312 13:47:50.529432 4778 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b0bb06df-44bb-4939-9492-a6ad3d6b5368-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 13:47:50 crc kubenswrapper[4778]: I0312 13:47:50.529468 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrmsd\" (UniqueName: \"kubernetes.io/projected/b0bb06df-44bb-4939-9492-a6ad3d6b5368-kube-api-access-lrmsd\") on node \"crc\" DevicePath \"\"" Mar 12 13:47:50 crc kubenswrapper[4778]: I0312 13:47:50.529477 4778 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b0bb06df-44bb-4939-9492-a6ad3d6b5368-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 13:47:50 crc kubenswrapper[4778]: I0312 13:47:50.817777 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gt58t" 
event={"ID":"b0bb06df-44bb-4939-9492-a6ad3d6b5368","Type":"ContainerDied","Data":"4406f329a8888ee9cb9a4e349401d009362172ceafbfcf53efeaa42f370ffb2e"} Mar 12 13:47:50 crc kubenswrapper[4778]: I0312 13:47:50.817844 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4406f329a8888ee9cb9a4e349401d009362172ceafbfcf53efeaa42f370ffb2e" Mar 12 13:47:50 crc kubenswrapper[4778]: I0312 13:47:50.817855 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-gt58t" Mar 12 13:47:50 crc kubenswrapper[4778]: I0312 13:47:50.914076 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wcdkc"] Mar 12 13:47:50 crc kubenswrapper[4778]: E0312 13:47:50.914778 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0bb06df-44bb-4939-9492-a6ad3d6b5368" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Mar 12 13:47:50 crc kubenswrapper[4778]: I0312 13:47:50.914808 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0bb06df-44bb-4939-9492-a6ad3d6b5368" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Mar 12 13:47:50 crc kubenswrapper[4778]: I0312 13:47:50.915051 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0bb06df-44bb-4939-9492-a6ad3d6b5368" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Mar 12 13:47:50 crc kubenswrapper[4778]: I0312 13:47:50.915904 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wcdkc" Mar 12 13:47:50 crc kubenswrapper[4778]: I0312 13:47:50.918279 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 13:47:50 crc kubenswrapper[4778]: I0312 13:47:50.919455 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qn2vx" Mar 12 13:47:50 crc kubenswrapper[4778]: I0312 13:47:50.919611 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 13:47:50 crc kubenswrapper[4778]: I0312 13:47:50.919649 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 13:47:50 crc kubenswrapper[4778]: I0312 13:47:50.925619 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wcdkc"] Mar 12 13:47:51 crc kubenswrapper[4778]: I0312 13:47:51.039046 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43a3ffe4-8b64-4e26-b63a-5254a986e4a4-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wcdkc\" (UID: \"43a3ffe4-8b64-4e26-b63a-5254a986e4a4\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wcdkc" Mar 12 13:47:51 crc kubenswrapper[4778]: I0312 13:47:51.039444 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqrq9\" (UniqueName: \"kubernetes.io/projected/43a3ffe4-8b64-4e26-b63a-5254a986e4a4-kube-api-access-rqrq9\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wcdkc\" (UID: \"43a3ffe4-8b64-4e26-b63a-5254a986e4a4\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wcdkc" Mar 12 13:47:51 crc kubenswrapper[4778]: I0312 13:47:51.039481 4778 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/43a3ffe4-8b64-4e26-b63a-5254a986e4a4-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wcdkc\" (UID: \"43a3ffe4-8b64-4e26-b63a-5254a986e4a4\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wcdkc" Mar 12 13:47:51 crc kubenswrapper[4778]: I0312 13:47:51.140778 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43a3ffe4-8b64-4e26-b63a-5254a986e4a4-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wcdkc\" (UID: \"43a3ffe4-8b64-4e26-b63a-5254a986e4a4\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wcdkc" Mar 12 13:47:51 crc kubenswrapper[4778]: I0312 13:47:51.140840 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqrq9\" (UniqueName: \"kubernetes.io/projected/43a3ffe4-8b64-4e26-b63a-5254a986e4a4-kube-api-access-rqrq9\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wcdkc\" (UID: \"43a3ffe4-8b64-4e26-b63a-5254a986e4a4\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wcdkc" Mar 12 13:47:51 crc kubenswrapper[4778]: I0312 13:47:51.140864 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/43a3ffe4-8b64-4e26-b63a-5254a986e4a4-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wcdkc\" (UID: \"43a3ffe4-8b64-4e26-b63a-5254a986e4a4\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wcdkc" Mar 12 13:47:51 crc kubenswrapper[4778]: I0312 13:47:51.145440 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/43a3ffe4-8b64-4e26-b63a-5254a986e4a4-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wcdkc\" (UID: \"43a3ffe4-8b64-4e26-b63a-5254a986e4a4\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wcdkc" Mar 12 13:47:51 crc kubenswrapper[4778]: I0312 13:47:51.146614 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43a3ffe4-8b64-4e26-b63a-5254a986e4a4-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wcdkc\" (UID: \"43a3ffe4-8b64-4e26-b63a-5254a986e4a4\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wcdkc" Mar 12 13:47:51 crc kubenswrapper[4778]: I0312 13:47:51.159385 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqrq9\" (UniqueName: \"kubernetes.io/projected/43a3ffe4-8b64-4e26-b63a-5254a986e4a4-kube-api-access-rqrq9\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wcdkc\" (UID: \"43a3ffe4-8b64-4e26-b63a-5254a986e4a4\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wcdkc" Mar 12 13:47:51 crc kubenswrapper[4778]: I0312 13:47:51.237446 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wcdkc" Mar 12 13:47:51 crc kubenswrapper[4778]: I0312 13:47:51.738839 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wcdkc"] Mar 12 13:47:51 crc kubenswrapper[4778]: I0312 13:47:51.828136 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wcdkc" event={"ID":"43a3ffe4-8b64-4e26-b63a-5254a986e4a4","Type":"ContainerStarted","Data":"feca20715b50a556b5694815d65a26e8fe73431d82cde071b3fb63cd519e73aa"} Mar 12 13:47:52 crc kubenswrapper[4778]: I0312 13:47:52.839418 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wcdkc" event={"ID":"43a3ffe4-8b64-4e26-b63a-5254a986e4a4","Type":"ContainerStarted","Data":"7e12694d9161e0f60ee30d919973a8d39ab3d9a3f7092129e17f862f3f4116b9"} Mar 12 13:47:52 crc kubenswrapper[4778]: I0312 13:47:52.858671 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wcdkc" podStartSLOduration=2.3163973909999998 podStartE2EDuration="2.858651852s" podCreationTimestamp="2026-03-12 13:47:50 +0000 UTC" firstStartedPulling="2026-03-12 13:47:51.739015674 +0000 UTC m=+2290.187711070" lastFinishedPulling="2026-03-12 13:47:52.281270135 +0000 UTC m=+2290.729965531" observedRunningTime="2026-03-12 13:47:52.855152573 +0000 UTC m=+2291.303847979" watchObservedRunningTime="2026-03-12 13:47:52.858651852 +0000 UTC m=+2291.307347248" Mar 12 13:47:58 crc kubenswrapper[4778]: I0312 13:47:58.558585 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 13:47:58 crc 
kubenswrapper[4778]: I0312 13:47:58.559192 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 13:47:58 crc kubenswrapper[4778]: I0312 13:47:58.559274 4778 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" Mar 12 13:47:58 crc kubenswrapper[4778]: I0312 13:47:58.560293 4778 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5d7d3c0b73016a8d7ee117c8146ea559fc88bdaa58f9d10b5498b859a6d9fa8f"} pod="openshift-machine-config-operator/machine-config-daemon-2qx88" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 13:47:58 crc kubenswrapper[4778]: I0312 13:47:58.560370 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" containerID="cri-o://5d7d3c0b73016a8d7ee117c8146ea559fc88bdaa58f9d10b5498b859a6d9fa8f" gracePeriod=600 Mar 12 13:47:58 crc kubenswrapper[4778]: E0312 13:47:58.683345 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 13:47:58 crc kubenswrapper[4778]: I0312 13:47:58.891032 4778 generic.go:334] "Generic 
(PLEG): container finished" podID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerID="5d7d3c0b73016a8d7ee117c8146ea559fc88bdaa58f9d10b5498b859a6d9fa8f" exitCode=0 Mar 12 13:47:58 crc kubenswrapper[4778]: I0312 13:47:58.891100 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" event={"ID":"24438fc6-dab0-4a9e-8b97-2532da76d9cd","Type":"ContainerDied","Data":"5d7d3c0b73016a8d7ee117c8146ea559fc88bdaa58f9d10b5498b859a6d9fa8f"} Mar 12 13:47:58 crc kubenswrapper[4778]: I0312 13:47:58.891467 4778 scope.go:117] "RemoveContainer" containerID="92d3dad2e98d7139cb748a76fe93295a7064a4a757626bc932a272018a133968" Mar 12 13:47:58 crc kubenswrapper[4778]: I0312 13:47:58.892132 4778 scope.go:117] "RemoveContainer" containerID="5d7d3c0b73016a8d7ee117c8146ea559fc88bdaa58f9d10b5498b859a6d9fa8f" Mar 12 13:47:58 crc kubenswrapper[4778]: E0312 13:47:58.892493 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 13:48:00 crc kubenswrapper[4778]: I0312 13:48:00.132379 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555388-5mnjh"] Mar 12 13:48:00 crc kubenswrapper[4778]: I0312 13:48:00.134788 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555388-5mnjh" Mar 12 13:48:00 crc kubenswrapper[4778]: I0312 13:48:00.137293 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 13:48:00 crc kubenswrapper[4778]: I0312 13:48:00.139702 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 13:48:00 crc kubenswrapper[4778]: I0312 13:48:00.140146 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 13:48:00 crc kubenswrapper[4778]: I0312 13:48:00.155314 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555388-5mnjh"] Mar 12 13:48:00 crc kubenswrapper[4778]: I0312 13:48:00.207990 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqhkb\" (UniqueName: \"kubernetes.io/projected/110071e6-5231-434c-af16-87b68a3d0c8f-kube-api-access-pqhkb\") pod \"auto-csr-approver-29555388-5mnjh\" (UID: \"110071e6-5231-434c-af16-87b68a3d0c8f\") " pod="openshift-infra/auto-csr-approver-29555388-5mnjh" Mar 12 13:48:00 crc kubenswrapper[4778]: I0312 13:48:00.310285 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqhkb\" (UniqueName: \"kubernetes.io/projected/110071e6-5231-434c-af16-87b68a3d0c8f-kube-api-access-pqhkb\") pod \"auto-csr-approver-29555388-5mnjh\" (UID: \"110071e6-5231-434c-af16-87b68a3d0c8f\") " pod="openshift-infra/auto-csr-approver-29555388-5mnjh" Mar 12 13:48:00 crc kubenswrapper[4778]: I0312 13:48:00.331375 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqhkb\" (UniqueName: \"kubernetes.io/projected/110071e6-5231-434c-af16-87b68a3d0c8f-kube-api-access-pqhkb\") pod \"auto-csr-approver-29555388-5mnjh\" (UID: \"110071e6-5231-434c-af16-87b68a3d0c8f\") " 
pod="openshift-infra/auto-csr-approver-29555388-5mnjh" Mar 12 13:48:00 crc kubenswrapper[4778]: I0312 13:48:00.457982 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555388-5mnjh" Mar 12 13:48:00 crc kubenswrapper[4778]: I0312 13:48:00.891998 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555388-5mnjh"] Mar 12 13:48:00 crc kubenswrapper[4778]: I0312 13:48:00.912559 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555388-5mnjh" event={"ID":"110071e6-5231-434c-af16-87b68a3d0c8f","Type":"ContainerStarted","Data":"f9bbce7ec9ea3b75de4f987d83a468acfd1a280c283c8c43f14d3ba69def0c98"} Mar 12 13:48:01 crc kubenswrapper[4778]: I0312 13:48:01.921713 4778 generic.go:334] "Generic (PLEG): container finished" podID="43a3ffe4-8b64-4e26-b63a-5254a986e4a4" containerID="7e12694d9161e0f60ee30d919973a8d39ab3d9a3f7092129e17f862f3f4116b9" exitCode=0 Mar 12 13:48:01 crc kubenswrapper[4778]: I0312 13:48:01.921805 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wcdkc" event={"ID":"43a3ffe4-8b64-4e26-b63a-5254a986e4a4","Type":"ContainerDied","Data":"7e12694d9161e0f60ee30d919973a8d39ab3d9a3f7092129e17f862f3f4116b9"} Mar 12 13:48:02 crc kubenswrapper[4778]: I0312 13:48:02.932108 4778 generic.go:334] "Generic (PLEG): container finished" podID="110071e6-5231-434c-af16-87b68a3d0c8f" containerID="2b6df075041c6c1583e329716aacbd0c53d1a64cadc9905cc6ddb1e1bd9b676d" exitCode=0 Mar 12 13:48:02 crc kubenswrapper[4778]: I0312 13:48:02.932260 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555388-5mnjh" event={"ID":"110071e6-5231-434c-af16-87b68a3d0c8f","Type":"ContainerDied","Data":"2b6df075041c6c1583e329716aacbd0c53d1a64cadc9905cc6ddb1e1bd9b676d"} Mar 12 13:48:03 crc kubenswrapper[4778]: I0312 13:48:03.364155 4778 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wcdkc" Mar 12 13:48:03 crc kubenswrapper[4778]: I0312 13:48:03.483785 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43a3ffe4-8b64-4e26-b63a-5254a986e4a4-inventory\") pod \"43a3ffe4-8b64-4e26-b63a-5254a986e4a4\" (UID: \"43a3ffe4-8b64-4e26-b63a-5254a986e4a4\") " Mar 12 13:48:03 crc kubenswrapper[4778]: I0312 13:48:03.483881 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqrq9\" (UniqueName: \"kubernetes.io/projected/43a3ffe4-8b64-4e26-b63a-5254a986e4a4-kube-api-access-rqrq9\") pod \"43a3ffe4-8b64-4e26-b63a-5254a986e4a4\" (UID: \"43a3ffe4-8b64-4e26-b63a-5254a986e4a4\") " Mar 12 13:48:03 crc kubenswrapper[4778]: I0312 13:48:03.483971 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/43a3ffe4-8b64-4e26-b63a-5254a986e4a4-ssh-key-openstack-edpm-ipam\") pod \"43a3ffe4-8b64-4e26-b63a-5254a986e4a4\" (UID: \"43a3ffe4-8b64-4e26-b63a-5254a986e4a4\") " Mar 12 13:48:03 crc kubenswrapper[4778]: I0312 13:48:03.492001 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43a3ffe4-8b64-4e26-b63a-5254a986e4a4-kube-api-access-rqrq9" (OuterVolumeSpecName: "kube-api-access-rqrq9") pod "43a3ffe4-8b64-4e26-b63a-5254a986e4a4" (UID: "43a3ffe4-8b64-4e26-b63a-5254a986e4a4"). InnerVolumeSpecName "kube-api-access-rqrq9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:48:03 crc kubenswrapper[4778]: I0312 13:48:03.515252 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43a3ffe4-8b64-4e26-b63a-5254a986e4a4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "43a3ffe4-8b64-4e26-b63a-5254a986e4a4" (UID: "43a3ffe4-8b64-4e26-b63a-5254a986e4a4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:48:03 crc kubenswrapper[4778]: I0312 13:48:03.526717 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43a3ffe4-8b64-4e26-b63a-5254a986e4a4-inventory" (OuterVolumeSpecName: "inventory") pod "43a3ffe4-8b64-4e26-b63a-5254a986e4a4" (UID: "43a3ffe4-8b64-4e26-b63a-5254a986e4a4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:48:03 crc kubenswrapper[4778]: I0312 13:48:03.586564 4778 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43a3ffe4-8b64-4e26-b63a-5254a986e4a4-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 13:48:03 crc kubenswrapper[4778]: I0312 13:48:03.586592 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqrq9\" (UniqueName: \"kubernetes.io/projected/43a3ffe4-8b64-4e26-b63a-5254a986e4a4-kube-api-access-rqrq9\") on node \"crc\" DevicePath \"\"" Mar 12 13:48:03 crc kubenswrapper[4778]: I0312 13:48:03.586605 4778 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/43a3ffe4-8b64-4e26-b63a-5254a986e4a4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 13:48:03 crc kubenswrapper[4778]: I0312 13:48:03.946827 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wcdkc" 
event={"ID":"43a3ffe4-8b64-4e26-b63a-5254a986e4a4","Type":"ContainerDied","Data":"feca20715b50a556b5694815d65a26e8fe73431d82cde071b3fb63cd519e73aa"} Mar 12 13:48:03 crc kubenswrapper[4778]: I0312 13:48:03.946879 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="feca20715b50a556b5694815d65a26e8fe73431d82cde071b3fb63cd519e73aa" Mar 12 13:48:03 crc kubenswrapper[4778]: I0312 13:48:03.946842 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wcdkc" Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.075697 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bngcx"] Mar 12 13:48:04 crc kubenswrapper[4778]: E0312 13:48:04.076298 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43a3ffe4-8b64-4e26-b63a-5254a986e4a4" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.076323 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="43a3ffe4-8b64-4e26-b63a-5254a986e4a4" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.076607 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="43a3ffe4-8b64-4e26-b63a-5254a986e4a4" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.077489 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bngcx" Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.083484 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.083718 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.084099 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.084215 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-custom-default-certs-0" Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.084577 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.084708 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.084874 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qn2vx" Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.084986 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.095413 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bngcx"] Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.201762 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/f69e6cfe-f7c2-4127-b4df-710725c52227-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bngcx\" (UID: \"f69e6cfe-f7c2-4127-b4df-710725c52227\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bngcx" Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.201939 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69e6cfe-f7c2-4127-b4df-710725c52227-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bngcx\" (UID: \"f69e6cfe-f7c2-4127-b4df-710725c52227\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bngcx" Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.202097 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69e6cfe-f7c2-4127-b4df-710725c52227-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bngcx\" (UID: \"f69e6cfe-f7c2-4127-b4df-710725c52227\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bngcx" Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.202197 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f69e6cfe-f7c2-4127-b4df-710725c52227-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bngcx\" (UID: \"f69e6cfe-f7c2-4127-b4df-710725c52227\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bngcx" Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.202233 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f69e6cfe-f7c2-4127-b4df-710725c52227-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bngcx\" (UID: \"f69e6cfe-f7c2-4127-b4df-710725c52227\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bngcx" Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.202317 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69e6cfe-f7c2-4127-b4df-710725c52227-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bngcx\" (UID: \"f69e6cfe-f7c2-4127-b4df-710725c52227\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bngcx" Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.202645 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69e6cfe-f7c2-4127-b4df-710725c52227-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bngcx\" (UID: \"f69e6cfe-f7c2-4127-b4df-710725c52227\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bngcx" Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.202679 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-custom-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f69e6cfe-f7c2-4127-b4df-710725c52227-openstack-edpm-ipam-neutron-metadata-custom-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bngcx\" (UID: \"f69e6cfe-f7c2-4127-b4df-710725c52227\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bngcx" Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.202825 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f69e6cfe-f7c2-4127-b4df-710725c52227-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bngcx\" (UID: \"f69e6cfe-f7c2-4127-b4df-710725c52227\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bngcx" Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.202868 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f69e6cfe-f7c2-4127-b4df-710725c52227-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bngcx\" (UID: \"f69e6cfe-f7c2-4127-b4df-710725c52227\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bngcx" Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.202955 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-custom-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69e6cfe-f7c2-4127-b4df-710725c52227-neutron-metadata-custom-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bngcx\" (UID: \"f69e6cfe-f7c2-4127-b4df-710725c52227\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bngcx" Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.202981 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f69e6cfe-f7c2-4127-b4df-710725c52227-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bngcx\" (UID: \"f69e6cfe-f7c2-4127-b4df-710725c52227\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bngcx" Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.203075 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-bl8pb\" (UniqueName: \"kubernetes.io/projected/f69e6cfe-f7c2-4127-b4df-710725c52227-kube-api-access-bl8pb\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bngcx\" (UID: \"f69e6cfe-f7c2-4127-b4df-710725c52227\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bngcx" Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.203103 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69e6cfe-f7c2-4127-b4df-710725c52227-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bngcx\" (UID: \"f69e6cfe-f7c2-4127-b4df-710725c52227\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bngcx" Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.305102 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f69e6cfe-f7c2-4127-b4df-710725c52227-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bngcx\" (UID: \"f69e6cfe-f7c2-4127-b4df-710725c52227\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bngcx" Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.306013 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f69e6cfe-f7c2-4127-b4df-710725c52227-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bngcx\" (UID: \"f69e6cfe-f7c2-4127-b4df-710725c52227\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bngcx" Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.306058 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-custom-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/f69e6cfe-f7c2-4127-b4df-710725c52227-neutron-metadata-custom-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bngcx\" (UID: \"f69e6cfe-f7c2-4127-b4df-710725c52227\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bngcx" Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.306078 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f69e6cfe-f7c2-4127-b4df-710725c52227-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bngcx\" (UID: \"f69e6cfe-f7c2-4127-b4df-710725c52227\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bngcx" Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.306118 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bl8pb\" (UniqueName: \"kubernetes.io/projected/f69e6cfe-f7c2-4127-b4df-710725c52227-kube-api-access-bl8pb\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bngcx\" (UID: \"f69e6cfe-f7c2-4127-b4df-710725c52227\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bngcx" Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.306136 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69e6cfe-f7c2-4127-b4df-710725c52227-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bngcx\" (UID: \"f69e6cfe-f7c2-4127-b4df-710725c52227\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bngcx" Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.306235 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f69e6cfe-f7c2-4127-b4df-710725c52227-openstack-edpm-ipam-libvirt-default-certs-0\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-bngcx\" (UID: \"f69e6cfe-f7c2-4127-b4df-710725c52227\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bngcx" Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.306270 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69e6cfe-f7c2-4127-b4df-710725c52227-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bngcx\" (UID: \"f69e6cfe-f7c2-4127-b4df-710725c52227\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bngcx" Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.306314 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69e6cfe-f7c2-4127-b4df-710725c52227-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bngcx\" (UID: \"f69e6cfe-f7c2-4127-b4df-710725c52227\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bngcx" Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.306342 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f69e6cfe-f7c2-4127-b4df-710725c52227-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bngcx\" (UID: \"f69e6cfe-f7c2-4127-b4df-710725c52227\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bngcx" Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.306360 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69e6cfe-f7c2-4127-b4df-710725c52227-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bngcx\" (UID: \"f69e6cfe-f7c2-4127-b4df-710725c52227\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bngcx" Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.306389 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69e6cfe-f7c2-4127-b4df-710725c52227-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bngcx\" (UID: \"f69e6cfe-f7c2-4127-b4df-710725c52227\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bngcx" Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.306451 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-custom-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f69e6cfe-f7c2-4127-b4df-710725c52227-openstack-edpm-ipam-neutron-metadata-custom-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bngcx\" (UID: \"f69e6cfe-f7c2-4127-b4df-710725c52227\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bngcx" Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.306470 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69e6cfe-f7c2-4127-b4df-710725c52227-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bngcx\" (UID: \"f69e6cfe-f7c2-4127-b4df-710725c52227\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bngcx" Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.311655 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f69e6cfe-f7c2-4127-b4df-710725c52227-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bngcx\" (UID: \"f69e6cfe-f7c2-4127-b4df-710725c52227\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bngcx" Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.311836 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f69e6cfe-f7c2-4127-b4df-710725c52227-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bngcx\" (UID: \"f69e6cfe-f7c2-4127-b4df-710725c52227\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bngcx" Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.311976 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f69e6cfe-f7c2-4127-b4df-710725c52227-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bngcx\" (UID: \"f69e6cfe-f7c2-4127-b4df-710725c52227\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bngcx" Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.311283 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69e6cfe-f7c2-4127-b4df-710725c52227-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bngcx\" (UID: \"f69e6cfe-f7c2-4127-b4df-710725c52227\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bngcx" Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.312755 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69e6cfe-f7c2-4127-b4df-710725c52227-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bngcx\" (UID: \"f69e6cfe-f7c2-4127-b4df-710725c52227\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bngcx" Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.312898 4778 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69e6cfe-f7c2-4127-b4df-710725c52227-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bngcx\" (UID: \"f69e6cfe-f7c2-4127-b4df-710725c52227\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bngcx" Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.313387 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-custom-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69e6cfe-f7c2-4127-b4df-710725c52227-neutron-metadata-custom-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bngcx\" (UID: \"f69e6cfe-f7c2-4127-b4df-710725c52227\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bngcx" Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.313858 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69e6cfe-f7c2-4127-b4df-710725c52227-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bngcx\" (UID: \"f69e6cfe-f7c2-4127-b4df-710725c52227\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bngcx" Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.314565 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-custom-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f69e6cfe-f7c2-4127-b4df-710725c52227-openstack-edpm-ipam-neutron-metadata-custom-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bngcx\" (UID: \"f69e6cfe-f7c2-4127-b4df-710725c52227\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bngcx" Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.315264 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/f69e6cfe-f7c2-4127-b4df-710725c52227-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bngcx\" (UID: \"f69e6cfe-f7c2-4127-b4df-710725c52227\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bngcx" Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.315912 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69e6cfe-f7c2-4127-b4df-710725c52227-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bngcx\" (UID: \"f69e6cfe-f7c2-4127-b4df-710725c52227\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bngcx" Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.316127 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f69e6cfe-f7c2-4127-b4df-710725c52227-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bngcx\" (UID: \"f69e6cfe-f7c2-4127-b4df-710725c52227\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bngcx" Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.325917 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f69e6cfe-f7c2-4127-b4df-710725c52227-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bngcx\" (UID: \"f69e6cfe-f7c2-4127-b4df-710725c52227\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bngcx" Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.329153 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bl8pb\" (UniqueName: \"kubernetes.io/projected/f69e6cfe-f7c2-4127-b4df-710725c52227-kube-api-access-bl8pb\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-bngcx\" (UID: \"f69e6cfe-f7c2-4127-b4df-710725c52227\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bngcx" Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.407697 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bngcx" Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.421943 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555388-5mnjh" Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.511696 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqhkb\" (UniqueName: \"kubernetes.io/projected/110071e6-5231-434c-af16-87b68a3d0c8f-kube-api-access-pqhkb\") pod \"110071e6-5231-434c-af16-87b68a3d0c8f\" (UID: \"110071e6-5231-434c-af16-87b68a3d0c8f\") " Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.516054 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/110071e6-5231-434c-af16-87b68a3d0c8f-kube-api-access-pqhkb" (OuterVolumeSpecName: "kube-api-access-pqhkb") pod "110071e6-5231-434c-af16-87b68a3d0c8f" (UID: "110071e6-5231-434c-af16-87b68a3d0c8f"). InnerVolumeSpecName "kube-api-access-pqhkb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.614099 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqhkb\" (UniqueName: \"kubernetes.io/projected/110071e6-5231-434c-af16-87b68a3d0c8f-kube-api-access-pqhkb\") on node \"crc\" DevicePath \"\"" Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.933541 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bngcx"] Mar 12 13:48:04 crc kubenswrapper[4778]: W0312 13:48:04.934855 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf69e6cfe_f7c2_4127_b4df_710725c52227.slice/crio-201137bf7718cef671d660ab8f7e18f0a7ed8f9c84776ebbb382aa1672d08d37 WatchSource:0}: Error finding container 201137bf7718cef671d660ab8f7e18f0a7ed8f9c84776ebbb382aa1672d08d37: Status 404 returned error can't find the container with id 201137bf7718cef671d660ab8f7e18f0a7ed8f9c84776ebbb382aa1672d08d37 Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.965468 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555388-5mnjh" event={"ID":"110071e6-5231-434c-af16-87b68a3d0c8f","Type":"ContainerDied","Data":"f9bbce7ec9ea3b75de4f987d83a468acfd1a280c283c8c43f14d3ba69def0c98"} Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.965518 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9bbce7ec9ea3b75de4f987d83a468acfd1a280c283c8c43f14d3ba69def0c98" Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.965515 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555388-5mnjh" Mar 12 13:48:04 crc kubenswrapper[4778]: I0312 13:48:04.966469 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bngcx" event={"ID":"f69e6cfe-f7c2-4127-b4df-710725c52227","Type":"ContainerStarted","Data":"201137bf7718cef671d660ab8f7e18f0a7ed8f9c84776ebbb382aa1672d08d37"} Mar 12 13:48:05 crc kubenswrapper[4778]: I0312 13:48:05.550697 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555382-zbkfk"] Mar 12 13:48:05 crc kubenswrapper[4778]: I0312 13:48:05.559804 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555382-zbkfk"] Mar 12 13:48:05 crc kubenswrapper[4778]: I0312 13:48:05.989067 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bngcx" event={"ID":"f69e6cfe-f7c2-4127-b4df-710725c52227","Type":"ContainerStarted","Data":"1b6058478ca276578c95fa23d5a23fc397088e6b19d0e6a8d4aa362015a60ea0"} Mar 12 13:48:06 crc kubenswrapper[4778]: I0312 13:48:06.025127 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bngcx" podStartSLOduration=1.399275345 podStartE2EDuration="2.025098249s" podCreationTimestamp="2026-03-12 13:48:04 +0000 UTC" firstStartedPulling="2026-03-12 13:48:04.938693224 +0000 UTC m=+2303.387388620" lastFinishedPulling="2026-03-12 13:48:05.564516128 +0000 UTC m=+2304.013211524" observedRunningTime="2026-03-12 13:48:06.018322137 +0000 UTC m=+2304.467017553" watchObservedRunningTime="2026-03-12 13:48:06.025098249 +0000 UTC m=+2304.473793645" Mar 12 13:48:06 crc kubenswrapper[4778]: I0312 13:48:06.277140 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="832c789c-468c-400b-8d55-3072443e85ec" 
path="/var/lib/kubelet/pods/832c789c-468c-400b-8d55-3072443e85ec/volumes" Mar 12 13:48:13 crc kubenswrapper[4778]: I0312 13:48:13.254827 4778 scope.go:117] "RemoveContainer" containerID="5d7d3c0b73016a8d7ee117c8146ea559fc88bdaa58f9d10b5498b859a6d9fa8f" Mar 12 13:48:13 crc kubenswrapper[4778]: E0312 13:48:13.255524 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 13:48:19 crc kubenswrapper[4778]: I0312 13:48:19.198826 4778 scope.go:117] "RemoveContainer" containerID="7785d6a0c6670e984508e3f9d5cc59f211b972f130207a3fed5c63411c140ddc" Mar 12 13:48:26 crc kubenswrapper[4778]: I0312 13:48:26.255021 4778 scope.go:117] "RemoveContainer" containerID="5d7d3c0b73016a8d7ee117c8146ea559fc88bdaa58f9d10b5498b859a6d9fa8f" Mar 12 13:48:26 crc kubenswrapper[4778]: E0312 13:48:26.255594 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 13:48:37 crc kubenswrapper[4778]: I0312 13:48:37.254576 4778 scope.go:117] "RemoveContainer" containerID="5d7d3c0b73016a8d7ee117c8146ea559fc88bdaa58f9d10b5498b859a6d9fa8f" Mar 12 13:48:37 crc kubenswrapper[4778]: E0312 13:48:37.255433 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 13:48:42 crc kubenswrapper[4778]: I0312 13:48:42.594536 4778 generic.go:334] "Generic (PLEG): container finished" podID="f69e6cfe-f7c2-4127-b4df-710725c52227" containerID="1b6058478ca276578c95fa23d5a23fc397088e6b19d0e6a8d4aa362015a60ea0" exitCode=0 Mar 12 13:48:42 crc kubenswrapper[4778]: I0312 13:48:42.594582 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bngcx" event={"ID":"f69e6cfe-f7c2-4127-b4df-710725c52227","Type":"ContainerDied","Data":"1b6058478ca276578c95fa23d5a23fc397088e6b19d0e6a8d4aa362015a60ea0"} Mar 12 13:48:44 crc kubenswrapper[4778]: I0312 13:48:44.046057 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bngcx" Mar 12 13:48:44 crc kubenswrapper[4778]: I0312 13:48:44.085710 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f69e6cfe-f7c2-4127-b4df-710725c52227-ssh-key-openstack-edpm-ipam\") pod \"f69e6cfe-f7c2-4127-b4df-710725c52227\" (UID: \"f69e6cfe-f7c2-4127-b4df-710725c52227\") " Mar 12 13:48:44 crc kubenswrapper[4778]: I0312 13:48:44.085985 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69e6cfe-f7c2-4127-b4df-710725c52227-repo-setup-combined-ca-bundle\") pod \"f69e6cfe-f7c2-4127-b4df-710725c52227\" (UID: \"f69e6cfe-f7c2-4127-b4df-710725c52227\") " Mar 12 13:48:44 crc kubenswrapper[4778]: I0312 13:48:44.086091 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"inventory\" (UniqueName: \"kubernetes.io/secret/f69e6cfe-f7c2-4127-b4df-710725c52227-inventory\") pod \"f69e6cfe-f7c2-4127-b4df-710725c52227\" (UID: \"f69e6cfe-f7c2-4127-b4df-710725c52227\") " Mar 12 13:48:44 crc kubenswrapper[4778]: I0312 13:48:44.086204 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69e6cfe-f7c2-4127-b4df-710725c52227-bootstrap-combined-ca-bundle\") pod \"f69e6cfe-f7c2-4127-b4df-710725c52227\" (UID: \"f69e6cfe-f7c2-4127-b4df-710725c52227\") " Mar 12 13:48:44 crc kubenswrapper[4778]: I0312 13:48:44.086290 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f69e6cfe-f7c2-4127-b4df-710725c52227-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"f69e6cfe-f7c2-4127-b4df-710725c52227\" (UID: \"f69e6cfe-f7c2-4127-b4df-710725c52227\") " Mar 12 13:48:44 crc kubenswrapper[4778]: I0312 13:48:44.086367 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-custom-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f69e6cfe-f7c2-4127-b4df-710725c52227-openstack-edpm-ipam-neutron-metadata-custom-default-certs-0\") pod \"f69e6cfe-f7c2-4127-b4df-710725c52227\" (UID: \"f69e6cfe-f7c2-4127-b4df-710725c52227\") " Mar 12 13:48:44 crc kubenswrapper[4778]: I0312 13:48:44.086444 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69e6cfe-f7c2-4127-b4df-710725c52227-telemetry-combined-ca-bundle\") pod \"f69e6cfe-f7c2-4127-b4df-710725c52227\" (UID: \"f69e6cfe-f7c2-4127-b4df-710725c52227\") " Mar 12 13:48:44 crc kubenswrapper[4778]: I0312 13:48:44.086534 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f69e6cfe-f7c2-4127-b4df-710725c52227-nova-combined-ca-bundle\") pod \"f69e6cfe-f7c2-4127-b4df-710725c52227\" (UID: \"f69e6cfe-f7c2-4127-b4df-710725c52227\") " Mar 12 13:48:44 crc kubenswrapper[4778]: I0312 13:48:44.086628 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bl8pb\" (UniqueName: \"kubernetes.io/projected/f69e6cfe-f7c2-4127-b4df-710725c52227-kube-api-access-bl8pb\") pod \"f69e6cfe-f7c2-4127-b4df-710725c52227\" (UID: \"f69e6cfe-f7c2-4127-b4df-710725c52227\") " Mar 12 13:48:44 crc kubenswrapper[4778]: I0312 13:48:44.086741 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-custom-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69e6cfe-f7c2-4127-b4df-710725c52227-neutron-metadata-custom-combined-ca-bundle\") pod \"f69e6cfe-f7c2-4127-b4df-710725c52227\" (UID: \"f69e6cfe-f7c2-4127-b4df-710725c52227\") " Mar 12 13:48:44 crc kubenswrapper[4778]: I0312 13:48:44.086815 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69e6cfe-f7c2-4127-b4df-710725c52227-libvirt-combined-ca-bundle\") pod \"f69e6cfe-f7c2-4127-b4df-710725c52227\" (UID: \"f69e6cfe-f7c2-4127-b4df-710725c52227\") " Mar 12 13:48:44 crc kubenswrapper[4778]: I0312 13:48:44.086891 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f69e6cfe-f7c2-4127-b4df-710725c52227-openstack-edpm-ipam-ovn-default-certs-0\") pod \"f69e6cfe-f7c2-4127-b4df-710725c52227\" (UID: \"f69e6cfe-f7c2-4127-b4df-710725c52227\") " Mar 12 13:48:44 crc kubenswrapper[4778]: I0312 13:48:44.094061 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f69e6cfe-f7c2-4127-b4df-710725c52227-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: 
"repo-setup-combined-ca-bundle") pod "f69e6cfe-f7c2-4127-b4df-710725c52227" (UID: "f69e6cfe-f7c2-4127-b4df-710725c52227"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:48:44 crc kubenswrapper[4778]: I0312 13:48:44.097302 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f69e6cfe-f7c2-4127-b4df-710725c52227-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "f69e6cfe-f7c2-4127-b4df-710725c52227" (UID: "f69e6cfe-f7c2-4127-b4df-710725c52227"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:48:44 crc kubenswrapper[4778]: I0312 13:48:44.097509 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f69e6cfe-f7c2-4127-b4df-710725c52227-openstack-edpm-ipam-neutron-metadata-custom-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-custom-default-certs-0") pod "f69e6cfe-f7c2-4127-b4df-710725c52227" (UID: "f69e6cfe-f7c2-4127-b4df-710725c52227"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-custom-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:48:44 crc kubenswrapper[4778]: I0312 13:48:44.097552 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f69e6cfe-f7c2-4127-b4df-710725c52227-kube-api-access-bl8pb" (OuterVolumeSpecName: "kube-api-access-bl8pb") pod "f69e6cfe-f7c2-4127-b4df-710725c52227" (UID: "f69e6cfe-f7c2-4127-b4df-710725c52227"). InnerVolumeSpecName "kube-api-access-bl8pb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:48:44 crc kubenswrapper[4778]: I0312 13:48:44.098088 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f69e6cfe-f7c2-4127-b4df-710725c52227-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "f69e6cfe-f7c2-4127-b4df-710725c52227" (UID: "f69e6cfe-f7c2-4127-b4df-710725c52227"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:48:44 crc kubenswrapper[4778]: I0312 13:48:44.098165 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f69e6cfe-f7c2-4127-b4df-710725c52227-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "f69e6cfe-f7c2-4127-b4df-710725c52227" (UID: "f69e6cfe-f7c2-4127-b4df-710725c52227"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:48:44 crc kubenswrapper[4778]: I0312 13:48:44.099526 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f69e6cfe-f7c2-4127-b4df-710725c52227-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "f69e6cfe-f7c2-4127-b4df-710725c52227" (UID: "f69e6cfe-f7c2-4127-b4df-710725c52227"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:48:44 crc kubenswrapper[4778]: I0312 13:48:44.100511 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f69e6cfe-f7c2-4127-b4df-710725c52227-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "f69e6cfe-f7c2-4127-b4df-710725c52227" (UID: "f69e6cfe-f7c2-4127-b4df-710725c52227"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:48:44 crc kubenswrapper[4778]: I0312 13:48:44.100607 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f69e6cfe-f7c2-4127-b4df-710725c52227-neutron-metadata-custom-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-custom-combined-ca-bundle") pod "f69e6cfe-f7c2-4127-b4df-710725c52227" (UID: "f69e6cfe-f7c2-4127-b4df-710725c52227"). InnerVolumeSpecName "neutron-metadata-custom-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:48:44 crc kubenswrapper[4778]: I0312 13:48:44.101338 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f69e6cfe-f7c2-4127-b4df-710725c52227-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "f69e6cfe-f7c2-4127-b4df-710725c52227" (UID: "f69e6cfe-f7c2-4127-b4df-710725c52227"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:48:44 crc kubenswrapper[4778]: I0312 13:48:44.128112 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f69e6cfe-f7c2-4127-b4df-710725c52227-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f69e6cfe-f7c2-4127-b4df-710725c52227" (UID: "f69e6cfe-f7c2-4127-b4df-710725c52227"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:48:44 crc kubenswrapper[4778]: I0312 13:48:44.128593 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f69e6cfe-f7c2-4127-b4df-710725c52227-inventory" (OuterVolumeSpecName: "inventory") pod "f69e6cfe-f7c2-4127-b4df-710725c52227" (UID: "f69e6cfe-f7c2-4127-b4df-710725c52227"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:48:44 crc kubenswrapper[4778]: I0312 13:48:44.188006 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f69e6cfe-f7c2-4127-b4df-710725c52227-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"f69e6cfe-f7c2-4127-b4df-710725c52227\" (UID: \"f69e6cfe-f7c2-4127-b4df-710725c52227\") " Mar 12 13:48:44 crc kubenswrapper[4778]: I0312 13:48:44.188390 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69e6cfe-f7c2-4127-b4df-710725c52227-ovn-combined-ca-bundle\") pod \"f69e6cfe-f7c2-4127-b4df-710725c52227\" (UID: \"f69e6cfe-f7c2-4127-b4df-710725c52227\") " Mar 12 13:48:44 crc kubenswrapper[4778]: I0312 13:48:44.188776 4778 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f69e6cfe-f7c2-4127-b4df-710725c52227-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 13:48:44 crc kubenswrapper[4778]: I0312 13:48:44.188802 4778 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69e6cfe-f7c2-4127-b4df-710725c52227-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:48:44 crc kubenswrapper[4778]: I0312 13:48:44.188817 4778 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f69e6cfe-f7c2-4127-b4df-710725c52227-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Mar 12 13:48:44 crc kubenswrapper[4778]: I0312 13:48:44.188832 4778 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-custom-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/f69e6cfe-f7c2-4127-b4df-710725c52227-openstack-edpm-ipam-neutron-metadata-custom-default-certs-0\") on node \"crc\" DevicePath \"\"" Mar 12 13:48:44 crc kubenswrapper[4778]: I0312 13:48:44.188847 4778 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69e6cfe-f7c2-4127-b4df-710725c52227-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:48:44 crc kubenswrapper[4778]: I0312 13:48:44.188861 4778 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69e6cfe-f7c2-4127-b4df-710725c52227-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:48:44 crc kubenswrapper[4778]: I0312 13:48:44.188871 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bl8pb\" (UniqueName: \"kubernetes.io/projected/f69e6cfe-f7c2-4127-b4df-710725c52227-kube-api-access-bl8pb\") on node \"crc\" DevicePath \"\"" Mar 12 13:48:44 crc kubenswrapper[4778]: I0312 13:48:44.188882 4778 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69e6cfe-f7c2-4127-b4df-710725c52227-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:48:44 crc kubenswrapper[4778]: I0312 13:48:44.188893 4778 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f69e6cfe-f7c2-4127-b4df-710725c52227-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Mar 12 13:48:44 crc kubenswrapper[4778]: I0312 13:48:44.188908 4778 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-custom-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69e6cfe-f7c2-4127-b4df-710725c52227-neutron-metadata-custom-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:48:44 crc kubenswrapper[4778]: I0312 
13:48:44.188919 4778 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f69e6cfe-f7c2-4127-b4df-710725c52227-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 13:48:44 crc kubenswrapper[4778]: I0312 13:48:44.188929 4778 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69e6cfe-f7c2-4127-b4df-710725c52227-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:48:44 crc kubenswrapper[4778]: I0312 13:48:44.191511 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f69e6cfe-f7c2-4127-b4df-710725c52227-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "f69e6cfe-f7c2-4127-b4df-710725c52227" (UID: "f69e6cfe-f7c2-4127-b4df-710725c52227"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:48:44 crc kubenswrapper[4778]: I0312 13:48:44.191832 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f69e6cfe-f7c2-4127-b4df-710725c52227-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "f69e6cfe-f7c2-4127-b4df-710725c52227" (UID: "f69e6cfe-f7c2-4127-b4df-710725c52227"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:48:44 crc kubenswrapper[4778]: I0312 13:48:44.291991 4778 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f69e6cfe-f7c2-4127-b4df-710725c52227-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Mar 12 13:48:44 crc kubenswrapper[4778]: I0312 13:48:44.292041 4778 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69e6cfe-f7c2-4127-b4df-710725c52227-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:48:44 crc kubenswrapper[4778]: I0312 13:48:44.802343 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bngcx" event={"ID":"f69e6cfe-f7c2-4127-b4df-710725c52227","Type":"ContainerDied","Data":"201137bf7718cef671d660ab8f7e18f0a7ed8f9c84776ebbb382aa1672d08d37"} Mar 12 13:48:44 crc kubenswrapper[4778]: I0312 13:48:44.802387 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="201137bf7718cef671d660ab8f7e18f0a7ed8f9c84776ebbb382aa1672d08d37" Mar 12 13:48:44 crc kubenswrapper[4778]: I0312 13:48:44.802478 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bngcx" Mar 12 13:48:44 crc kubenswrapper[4778]: I0312 13:48:44.978988 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-9lbdq"] Mar 12 13:48:44 crc kubenswrapper[4778]: E0312 13:48:44.979441 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="110071e6-5231-434c-af16-87b68a3d0c8f" containerName="oc" Mar 12 13:48:44 crc kubenswrapper[4778]: I0312 13:48:44.979464 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="110071e6-5231-434c-af16-87b68a3d0c8f" containerName="oc" Mar 12 13:48:44 crc kubenswrapper[4778]: E0312 13:48:44.979487 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f69e6cfe-f7c2-4127-b4df-710725c52227" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Mar 12 13:48:44 crc kubenswrapper[4778]: I0312 13:48:44.979499 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="f69e6cfe-f7c2-4127-b4df-710725c52227" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Mar 12 13:48:44 crc kubenswrapper[4778]: I0312 13:48:44.979757 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="110071e6-5231-434c-af16-87b68a3d0c8f" containerName="oc" Mar 12 13:48:44 crc kubenswrapper[4778]: I0312 13:48:44.979798 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="f69e6cfe-f7c2-4127-b4df-710725c52227" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Mar 12 13:48:44 crc kubenswrapper[4778]: I0312 13:48:44.980662 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9lbdq" Mar 12 13:48:44 crc kubenswrapper[4778]: I0312 13:48:44.983649 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 13:48:44 crc kubenswrapper[4778]: I0312 13:48:44.984005 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Mar 12 13:48:44 crc kubenswrapper[4778]: I0312 13:48:44.984499 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 13:48:44 crc kubenswrapper[4778]: I0312 13:48:44.984604 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 13:48:44 crc kubenswrapper[4778]: I0312 13:48:44.985197 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qn2vx" Mar 12 13:48:44 crc kubenswrapper[4778]: I0312 13:48:44.995301 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-9lbdq"] Mar 12 13:48:45 crc kubenswrapper[4778]: I0312 13:48:45.088436 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3c0a2200-506d-4ac3-b08c-9b3156c9e573-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9lbdq\" (UID: \"3c0a2200-506d-4ac3-b08c-9b3156c9e573\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9lbdq" Mar 12 13:48:45 crc kubenswrapper[4778]: I0312 13:48:45.088685 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wbdc\" (UniqueName: \"kubernetes.io/projected/3c0a2200-506d-4ac3-b08c-9b3156c9e573-kube-api-access-9wbdc\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9lbdq\" (UID: \"3c0a2200-506d-4ac3-b08c-9b3156c9e573\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9lbdq" Mar 12 13:48:45 crc kubenswrapper[4778]: I0312 13:48:45.088840 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/3c0a2200-506d-4ac3-b08c-9b3156c9e573-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9lbdq\" (UID: \"3c0a2200-506d-4ac3-b08c-9b3156c9e573\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9lbdq" Mar 12 13:48:45 crc kubenswrapper[4778]: I0312 13:48:45.088905 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3c0a2200-506d-4ac3-b08c-9b3156c9e573-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9lbdq\" (UID: \"3c0a2200-506d-4ac3-b08c-9b3156c9e573\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9lbdq" Mar 12 13:48:45 crc kubenswrapper[4778]: I0312 13:48:45.088940 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c0a2200-506d-4ac3-b08c-9b3156c9e573-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9lbdq\" (UID: \"3c0a2200-506d-4ac3-b08c-9b3156c9e573\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9lbdq" Mar 12 13:48:45 crc kubenswrapper[4778]: I0312 13:48:45.190641 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3c0a2200-506d-4ac3-b08c-9b3156c9e573-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9lbdq\" (UID: \"3c0a2200-506d-4ac3-b08c-9b3156c9e573\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9lbdq" Mar 12 13:48:45 crc kubenswrapper[4778]: I0312 13:48:45.190739 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-9wbdc\" (UniqueName: \"kubernetes.io/projected/3c0a2200-506d-4ac3-b08c-9b3156c9e573-kube-api-access-9wbdc\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9lbdq\" (UID: \"3c0a2200-506d-4ac3-b08c-9b3156c9e573\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9lbdq" Mar 12 13:48:45 crc kubenswrapper[4778]: I0312 13:48:45.190815 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/3c0a2200-506d-4ac3-b08c-9b3156c9e573-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9lbdq\" (UID: \"3c0a2200-506d-4ac3-b08c-9b3156c9e573\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9lbdq" Mar 12 13:48:45 crc kubenswrapper[4778]: I0312 13:48:45.190846 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3c0a2200-506d-4ac3-b08c-9b3156c9e573-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9lbdq\" (UID: \"3c0a2200-506d-4ac3-b08c-9b3156c9e573\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9lbdq" Mar 12 13:48:45 crc kubenswrapper[4778]: I0312 13:48:45.190875 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c0a2200-506d-4ac3-b08c-9b3156c9e573-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9lbdq\" (UID: \"3c0a2200-506d-4ac3-b08c-9b3156c9e573\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9lbdq" Mar 12 13:48:45 crc kubenswrapper[4778]: I0312 13:48:45.191881 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/3c0a2200-506d-4ac3-b08c-9b3156c9e573-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9lbdq\" (UID: \"3c0a2200-506d-4ac3-b08c-9b3156c9e573\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9lbdq" Mar 12 13:48:45 crc kubenswrapper[4778]: I0312 13:48:45.195306 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3c0a2200-506d-4ac3-b08c-9b3156c9e573-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9lbdq\" (UID: \"3c0a2200-506d-4ac3-b08c-9b3156c9e573\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9lbdq" Mar 12 13:48:45 crc kubenswrapper[4778]: I0312 13:48:45.195548 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c0a2200-506d-4ac3-b08c-9b3156c9e573-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9lbdq\" (UID: \"3c0a2200-506d-4ac3-b08c-9b3156c9e573\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9lbdq" Mar 12 13:48:45 crc kubenswrapper[4778]: I0312 13:48:45.210233 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3c0a2200-506d-4ac3-b08c-9b3156c9e573-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9lbdq\" (UID: \"3c0a2200-506d-4ac3-b08c-9b3156c9e573\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9lbdq" Mar 12 13:48:45 crc kubenswrapper[4778]: I0312 13:48:45.211502 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wbdc\" (UniqueName: \"kubernetes.io/projected/3c0a2200-506d-4ac3-b08c-9b3156c9e573-kube-api-access-9wbdc\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9lbdq\" (UID: \"3c0a2200-506d-4ac3-b08c-9b3156c9e573\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9lbdq" Mar 12 13:48:45 crc kubenswrapper[4778]: I0312 13:48:45.304590 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9lbdq" Mar 12 13:48:46 crc kubenswrapper[4778]: I0312 13:48:46.108618 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-9lbdq"] Mar 12 13:48:47 crc kubenswrapper[4778]: I0312 13:48:47.063411 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9lbdq" event={"ID":"3c0a2200-506d-4ac3-b08c-9b3156c9e573","Type":"ContainerStarted","Data":"9284931a0ee9d2beed15173491cfa2f871463185d1221d834da82de0b7e7f86f"} Mar 12 13:48:48 crc kubenswrapper[4778]: I0312 13:48:48.255212 4778 scope.go:117] "RemoveContainer" containerID="5d7d3c0b73016a8d7ee117c8146ea559fc88bdaa58f9d10b5498b859a6d9fa8f" Mar 12 13:48:48 crc kubenswrapper[4778]: E0312 13:48:48.255840 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 13:48:48 crc kubenswrapper[4778]: I0312 13:48:48.269927 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9lbdq" event={"ID":"3c0a2200-506d-4ac3-b08c-9b3156c9e573","Type":"ContainerStarted","Data":"034471fb3d1e6422e07aa4976640757d8f19ab32e4556431d2f747552958f007"} Mar 12 13:48:48 crc kubenswrapper[4778]: I0312 13:48:48.291348 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9lbdq" podStartSLOduration=3.422887011 podStartE2EDuration="4.291323284s" podCreationTimestamp="2026-03-12 13:48:44 +0000 UTC" firstStartedPulling="2026-03-12 13:48:46.114847401 +0000 UTC 
m=+2344.563542797" lastFinishedPulling="2026-03-12 13:48:46.983283674 +0000 UTC m=+2345.431979070" observedRunningTime="2026-03-12 13:48:48.284217322 +0000 UTC m=+2346.732912718" watchObservedRunningTime="2026-03-12 13:48:48.291323284 +0000 UTC m=+2346.740018680" Mar 12 13:49:01 crc kubenswrapper[4778]: I0312 13:49:01.254083 4778 scope.go:117] "RemoveContainer" containerID="5d7d3c0b73016a8d7ee117c8146ea559fc88bdaa58f9d10b5498b859a6d9fa8f" Mar 12 13:49:01 crc kubenswrapper[4778]: E0312 13:49:01.255047 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 13:49:12 crc kubenswrapper[4778]: I0312 13:49:12.253916 4778 scope.go:117] "RemoveContainer" containerID="5d7d3c0b73016a8d7ee117c8146ea559fc88bdaa58f9d10b5498b859a6d9fa8f" Mar 12 13:49:12 crc kubenswrapper[4778]: E0312 13:49:12.254839 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 13:49:25 crc kubenswrapper[4778]: I0312 13:49:25.254445 4778 scope.go:117] "RemoveContainer" containerID="5d7d3c0b73016a8d7ee117c8146ea559fc88bdaa58f9d10b5498b859a6d9fa8f" Mar 12 13:49:25 crc kubenswrapper[4778]: E0312 13:49:25.255417 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 13:49:40 crc kubenswrapper[4778]: I0312 13:49:40.254075 4778 scope.go:117] "RemoveContainer" containerID="5d7d3c0b73016a8d7ee117c8146ea559fc88bdaa58f9d10b5498b859a6d9fa8f" Mar 12 13:49:40 crc kubenswrapper[4778]: E0312 13:49:40.254906 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 13:49:54 crc kubenswrapper[4778]: I0312 13:49:54.254281 4778 scope.go:117] "RemoveContainer" containerID="5d7d3c0b73016a8d7ee117c8146ea559fc88bdaa58f9d10b5498b859a6d9fa8f" Mar 12 13:49:54 crc kubenswrapper[4778]: E0312 13:49:54.254967 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 13:49:54 crc kubenswrapper[4778]: I0312 13:49:54.270083 4778 generic.go:334] "Generic (PLEG): container finished" podID="3c0a2200-506d-4ac3-b08c-9b3156c9e573" containerID="034471fb3d1e6422e07aa4976640757d8f19ab32e4556431d2f747552958f007" exitCode=0 Mar 12 13:49:54 crc kubenswrapper[4778]: I0312 13:49:54.270123 4778 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9lbdq" event={"ID":"3c0a2200-506d-4ac3-b08c-9b3156c9e573","Type":"ContainerDied","Data":"034471fb3d1e6422e07aa4976640757d8f19ab32e4556431d2f747552958f007"} Mar 12 13:49:55 crc kubenswrapper[4778]: I0312 13:49:55.725484 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9lbdq" Mar 12 13:49:55 crc kubenswrapper[4778]: I0312 13:49:55.869738 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3c0a2200-506d-4ac3-b08c-9b3156c9e573-inventory\") pod \"3c0a2200-506d-4ac3-b08c-9b3156c9e573\" (UID: \"3c0a2200-506d-4ac3-b08c-9b3156c9e573\") " Mar 12 13:49:55 crc kubenswrapper[4778]: I0312 13:49:55.869852 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/3c0a2200-506d-4ac3-b08c-9b3156c9e573-ovncontroller-config-0\") pod \"3c0a2200-506d-4ac3-b08c-9b3156c9e573\" (UID: \"3c0a2200-506d-4ac3-b08c-9b3156c9e573\") " Mar 12 13:49:55 crc kubenswrapper[4778]: I0312 13:49:55.869963 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c0a2200-506d-4ac3-b08c-9b3156c9e573-ovn-combined-ca-bundle\") pod \"3c0a2200-506d-4ac3-b08c-9b3156c9e573\" (UID: \"3c0a2200-506d-4ac3-b08c-9b3156c9e573\") " Mar 12 13:49:55 crc kubenswrapper[4778]: I0312 13:49:55.870023 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3c0a2200-506d-4ac3-b08c-9b3156c9e573-ssh-key-openstack-edpm-ipam\") pod \"3c0a2200-506d-4ac3-b08c-9b3156c9e573\" (UID: \"3c0a2200-506d-4ac3-b08c-9b3156c9e573\") " Mar 12 13:49:55 crc kubenswrapper[4778]: I0312 13:49:55.870043 4778 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-9wbdc\" (UniqueName: \"kubernetes.io/projected/3c0a2200-506d-4ac3-b08c-9b3156c9e573-kube-api-access-9wbdc\") pod \"3c0a2200-506d-4ac3-b08c-9b3156c9e573\" (UID: \"3c0a2200-506d-4ac3-b08c-9b3156c9e573\") " Mar 12 13:49:55 crc kubenswrapper[4778]: I0312 13:49:55.876847 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c0a2200-506d-4ac3-b08c-9b3156c9e573-kube-api-access-9wbdc" (OuterVolumeSpecName: "kube-api-access-9wbdc") pod "3c0a2200-506d-4ac3-b08c-9b3156c9e573" (UID: "3c0a2200-506d-4ac3-b08c-9b3156c9e573"). InnerVolumeSpecName "kube-api-access-9wbdc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:49:55 crc kubenswrapper[4778]: I0312 13:49:55.881493 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c0a2200-506d-4ac3-b08c-9b3156c9e573-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "3c0a2200-506d-4ac3-b08c-9b3156c9e573" (UID: "3c0a2200-506d-4ac3-b08c-9b3156c9e573"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:49:55 crc kubenswrapper[4778]: I0312 13:49:55.902843 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c0a2200-506d-4ac3-b08c-9b3156c9e573-inventory" (OuterVolumeSpecName: "inventory") pod "3c0a2200-506d-4ac3-b08c-9b3156c9e573" (UID: "3c0a2200-506d-4ac3-b08c-9b3156c9e573"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:49:55 crc kubenswrapper[4778]: I0312 13:49:55.904088 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c0a2200-506d-4ac3-b08c-9b3156c9e573-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3c0a2200-506d-4ac3-b08c-9b3156c9e573" (UID: "3c0a2200-506d-4ac3-b08c-9b3156c9e573"). 
InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:49:55 crc kubenswrapper[4778]: I0312 13:49:55.906031 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c0a2200-506d-4ac3-b08c-9b3156c9e573-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "3c0a2200-506d-4ac3-b08c-9b3156c9e573" (UID: "3c0a2200-506d-4ac3-b08c-9b3156c9e573"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:49:55 crc kubenswrapper[4778]: I0312 13:49:55.972799 4778 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3c0a2200-506d-4ac3-b08c-9b3156c9e573-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 13:49:55 crc kubenswrapper[4778]: I0312 13:49:55.972990 4778 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/3c0a2200-506d-4ac3-b08c-9b3156c9e573-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Mar 12 13:49:55 crc kubenswrapper[4778]: I0312 13:49:55.973067 4778 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c0a2200-506d-4ac3-b08c-9b3156c9e573-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:49:55 crc kubenswrapper[4778]: I0312 13:49:55.973127 4778 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3c0a2200-506d-4ac3-b08c-9b3156c9e573-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 13:49:55 crc kubenswrapper[4778]: I0312 13:49:55.973203 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wbdc\" (UniqueName: \"kubernetes.io/projected/3c0a2200-506d-4ac3-b08c-9b3156c9e573-kube-api-access-9wbdc\") on node \"crc\" DevicePath \"\"" Mar 12 13:49:56 crc 
kubenswrapper[4778]: I0312 13:49:56.308866 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9lbdq" event={"ID":"3c0a2200-506d-4ac3-b08c-9b3156c9e573","Type":"ContainerDied","Data":"9284931a0ee9d2beed15173491cfa2f871463185d1221d834da82de0b7e7f86f"} Mar 12 13:49:56 crc kubenswrapper[4778]: I0312 13:49:56.308912 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9284931a0ee9d2beed15173491cfa2f871463185d1221d834da82de0b7e7f86f" Mar 12 13:49:56 crc kubenswrapper[4778]: I0312 13:49:56.308914 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9lbdq" Mar 12 13:49:56 crc kubenswrapper[4778]: I0312 13:49:56.384736 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-custom-edpm-deployment-openstack-edpm-ipawlfsg"] Mar 12 13:49:56 crc kubenswrapper[4778]: E0312 13:49:56.385155 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c0a2200-506d-4ac3-b08c-9b3156c9e573" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Mar 12 13:49:56 crc kubenswrapper[4778]: I0312 13:49:56.385175 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c0a2200-506d-4ac3-b08c-9b3156c9e573" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Mar 12 13:49:56 crc kubenswrapper[4778]: I0312 13:49:56.385433 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c0a2200-506d-4ac3-b08c-9b3156c9e573" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Mar 12 13:49:56 crc kubenswrapper[4778]: I0312 13:49:56.386118 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-custom-edpm-deployment-openstack-edpm-ipawlfsg" Mar 12 13:49:56 crc kubenswrapper[4778]: I0312 13:49:56.476494 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 13:49:56 crc kubenswrapper[4778]: I0312 13:49:56.476810 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 13:49:56 crc kubenswrapper[4778]: I0312 13:49:56.476957 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qn2vx" Mar 12 13:49:56 crc kubenswrapper[4778]: I0312 13:49:56.477137 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-metadata-neutron-config" Mar 12 13:49:56 crc kubenswrapper[4778]: I0312 13:49:56.477298 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 13:49:56 crc kubenswrapper[4778]: I0312 13:49:56.477430 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Mar 12 13:49:56 crc kubenswrapper[4778]: I0312 13:49:56.499504 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-custom-edpm-deployment-openstack-edpm-ipawlfsg"] Mar 12 13:49:56 crc kubenswrapper[4778]: I0312 13:49:56.585860 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5cc410de-5b42-44d1-8b29-37161475730e-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-custom-edpm-deployment-openstack-edpm-ipawlfsg\" (UID: \"5cc410de-5b42-44d1-8b29-37161475730e\") " pod="openstack/neutron-metadata-custom-edpm-deployment-openstack-edpm-ipawlfsg" Mar 12 13:49:56 crc kubenswrapper[4778]: I0312 13:49:56.586039 4778 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8wvt\" (UniqueName: \"kubernetes.io/projected/5cc410de-5b42-44d1-8b29-37161475730e-kube-api-access-g8wvt\") pod \"neutron-metadata-custom-edpm-deployment-openstack-edpm-ipawlfsg\" (UID: \"5cc410de-5b42-44d1-8b29-37161475730e\") " pod="openstack/neutron-metadata-custom-edpm-deployment-openstack-edpm-ipawlfsg" Mar 12 13:49:56 crc kubenswrapper[4778]: I0312 13:49:56.586348 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5cc410de-5b42-44d1-8b29-37161475730e-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-custom-edpm-deployment-openstack-edpm-ipawlfsg\" (UID: \"5cc410de-5b42-44d1-8b29-37161475730e\") " pod="openstack/neutron-metadata-custom-edpm-deployment-openstack-edpm-ipawlfsg" Mar 12 13:49:56 crc kubenswrapper[4778]: I0312 13:49:56.586482 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-custom-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cc410de-5b42-44d1-8b29-37161475730e-neutron-metadata-custom-combined-ca-bundle\") pod \"neutron-metadata-custom-edpm-deployment-openstack-edpm-ipawlfsg\" (UID: \"5cc410de-5b42-44d1-8b29-37161475730e\") " pod="openstack/neutron-metadata-custom-edpm-deployment-openstack-edpm-ipawlfsg" Mar 12 13:49:56 crc kubenswrapper[4778]: I0312 13:49:56.586789 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5cc410de-5b42-44d1-8b29-37161475730e-inventory\") pod \"neutron-metadata-custom-edpm-deployment-openstack-edpm-ipawlfsg\" (UID: \"5cc410de-5b42-44d1-8b29-37161475730e\") " pod="openstack/neutron-metadata-custom-edpm-deployment-openstack-edpm-ipawlfsg" Mar 12 13:49:56 crc kubenswrapper[4778]: I0312 13:49:56.586913 4778 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5cc410de-5b42-44d1-8b29-37161475730e-nova-cell1-metadata-neutron-config-0\") pod \"neutron-metadata-custom-edpm-deployment-openstack-edpm-ipawlfsg\" (UID: \"5cc410de-5b42-44d1-8b29-37161475730e\") " pod="openstack/neutron-metadata-custom-edpm-deployment-openstack-edpm-ipawlfsg" Mar 12 13:49:56 crc kubenswrapper[4778]: I0312 13:49:56.689233 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8wvt\" (UniqueName: \"kubernetes.io/projected/5cc410de-5b42-44d1-8b29-37161475730e-kube-api-access-g8wvt\") pod \"neutron-metadata-custom-edpm-deployment-openstack-edpm-ipawlfsg\" (UID: \"5cc410de-5b42-44d1-8b29-37161475730e\") " pod="openstack/neutron-metadata-custom-edpm-deployment-openstack-edpm-ipawlfsg" Mar 12 13:49:56 crc kubenswrapper[4778]: I0312 13:49:56.689357 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5cc410de-5b42-44d1-8b29-37161475730e-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-custom-edpm-deployment-openstack-edpm-ipawlfsg\" (UID: \"5cc410de-5b42-44d1-8b29-37161475730e\") " pod="openstack/neutron-metadata-custom-edpm-deployment-openstack-edpm-ipawlfsg" Mar 12 13:49:56 crc kubenswrapper[4778]: I0312 13:49:56.689396 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-custom-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cc410de-5b42-44d1-8b29-37161475730e-neutron-metadata-custom-combined-ca-bundle\") pod \"neutron-metadata-custom-edpm-deployment-openstack-edpm-ipawlfsg\" (UID: \"5cc410de-5b42-44d1-8b29-37161475730e\") " pod="openstack/neutron-metadata-custom-edpm-deployment-openstack-edpm-ipawlfsg" Mar 12 13:49:56 crc kubenswrapper[4778]: I0312 
13:49:56.689551 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5cc410de-5b42-44d1-8b29-37161475730e-inventory\") pod \"neutron-metadata-custom-edpm-deployment-openstack-edpm-ipawlfsg\" (UID: \"5cc410de-5b42-44d1-8b29-37161475730e\") " pod="openstack/neutron-metadata-custom-edpm-deployment-openstack-edpm-ipawlfsg" Mar 12 13:49:56 crc kubenswrapper[4778]: I0312 13:49:56.689615 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5cc410de-5b42-44d1-8b29-37161475730e-nova-cell1-metadata-neutron-config-0\") pod \"neutron-metadata-custom-edpm-deployment-openstack-edpm-ipawlfsg\" (UID: \"5cc410de-5b42-44d1-8b29-37161475730e\") " pod="openstack/neutron-metadata-custom-edpm-deployment-openstack-edpm-ipawlfsg" Mar 12 13:49:56 crc kubenswrapper[4778]: I0312 13:49:56.689679 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5cc410de-5b42-44d1-8b29-37161475730e-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-custom-edpm-deployment-openstack-edpm-ipawlfsg\" (UID: \"5cc410de-5b42-44d1-8b29-37161475730e\") " pod="openstack/neutron-metadata-custom-edpm-deployment-openstack-edpm-ipawlfsg" Mar 12 13:49:56 crc kubenswrapper[4778]: I0312 13:49:56.695097 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5cc410de-5b42-44d1-8b29-37161475730e-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-custom-edpm-deployment-openstack-edpm-ipawlfsg\" (UID: \"5cc410de-5b42-44d1-8b29-37161475730e\") " pod="openstack/neutron-metadata-custom-edpm-deployment-openstack-edpm-ipawlfsg" Mar 12 13:49:56 crc kubenswrapper[4778]: I0312 13:49:56.695444 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"nova-cell1-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5cc410de-5b42-44d1-8b29-37161475730e-nova-cell1-metadata-neutron-config-0\") pod \"neutron-metadata-custom-edpm-deployment-openstack-edpm-ipawlfsg\" (UID: \"5cc410de-5b42-44d1-8b29-37161475730e\") " pod="openstack/neutron-metadata-custom-edpm-deployment-openstack-edpm-ipawlfsg" Mar 12 13:49:56 crc kubenswrapper[4778]: I0312 13:49:56.695922 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-custom-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cc410de-5b42-44d1-8b29-37161475730e-neutron-metadata-custom-combined-ca-bundle\") pod \"neutron-metadata-custom-edpm-deployment-openstack-edpm-ipawlfsg\" (UID: \"5cc410de-5b42-44d1-8b29-37161475730e\") " pod="openstack/neutron-metadata-custom-edpm-deployment-openstack-edpm-ipawlfsg" Mar 12 13:49:56 crc kubenswrapper[4778]: I0312 13:49:56.698253 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5cc410de-5b42-44d1-8b29-37161475730e-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-custom-edpm-deployment-openstack-edpm-ipawlfsg\" (UID: \"5cc410de-5b42-44d1-8b29-37161475730e\") " pod="openstack/neutron-metadata-custom-edpm-deployment-openstack-edpm-ipawlfsg" Mar 12 13:49:56 crc kubenswrapper[4778]: I0312 13:49:56.704781 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5cc410de-5b42-44d1-8b29-37161475730e-inventory\") pod \"neutron-metadata-custom-edpm-deployment-openstack-edpm-ipawlfsg\" (UID: \"5cc410de-5b42-44d1-8b29-37161475730e\") " pod="openstack/neutron-metadata-custom-edpm-deployment-openstack-edpm-ipawlfsg" Mar 12 13:49:56 crc kubenswrapper[4778]: I0312 13:49:56.709793 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8wvt\" (UniqueName: 
\"kubernetes.io/projected/5cc410de-5b42-44d1-8b29-37161475730e-kube-api-access-g8wvt\") pod \"neutron-metadata-custom-edpm-deployment-openstack-edpm-ipawlfsg\" (UID: \"5cc410de-5b42-44d1-8b29-37161475730e\") " pod="openstack/neutron-metadata-custom-edpm-deployment-openstack-edpm-ipawlfsg" Mar 12 13:49:56 crc kubenswrapper[4778]: I0312 13:49:56.797816 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-custom-edpm-deployment-openstack-edpm-ipawlfsg" Mar 12 13:49:57 crc kubenswrapper[4778]: I0312 13:49:57.372841 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-custom-edpm-deployment-openstack-edpm-ipawlfsg"] Mar 12 13:49:57 crc kubenswrapper[4778]: I0312 13:49:57.377311 4778 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 13:49:58 crc kubenswrapper[4778]: I0312 13:49:58.330123 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-custom-edpm-deployment-openstack-edpm-ipawlfsg" event={"ID":"5cc410de-5b42-44d1-8b29-37161475730e","Type":"ContainerStarted","Data":"8d307b414f96fb01f36bf7a3a773e62c07543e84217803c38e669173f219bc57"} Mar 12 13:49:58 crc kubenswrapper[4778]: I0312 13:49:58.330588 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-custom-edpm-deployment-openstack-edpm-ipawlfsg" event={"ID":"5cc410de-5b42-44d1-8b29-37161475730e","Type":"ContainerStarted","Data":"df0c491452d065b345698565a278b9fb265300119cb4f52d617e573795f6237b"} Mar 12 13:50:00 crc kubenswrapper[4778]: I0312 13:50:00.132740 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-custom-edpm-deployment-openstack-edpm-ipawlfsg" podStartSLOduration=3.540319285 podStartE2EDuration="4.132715181s" podCreationTimestamp="2026-03-12 13:49:56 +0000 UTC" firstStartedPulling="2026-03-12 13:49:57.377045658 +0000 UTC m=+2415.825741054" 
lastFinishedPulling="2026-03-12 13:49:57.969441564 +0000 UTC m=+2416.418136950" observedRunningTime="2026-03-12 13:49:58.354427549 +0000 UTC m=+2416.803122945" watchObservedRunningTime="2026-03-12 13:50:00.132715181 +0000 UTC m=+2418.581410577" Mar 12 13:50:00 crc kubenswrapper[4778]: I0312 13:50:00.136310 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555390-dml9r"] Mar 12 13:50:00 crc kubenswrapper[4778]: I0312 13:50:00.138404 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555390-dml9r" Mar 12 13:50:00 crc kubenswrapper[4778]: I0312 13:50:00.141163 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 13:50:00 crc kubenswrapper[4778]: I0312 13:50:00.143534 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 13:50:00 crc kubenswrapper[4778]: I0312 13:50:00.152327 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 13:50:00 crc kubenswrapper[4778]: I0312 13:50:00.154136 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555390-dml9r"] Mar 12 13:50:00 crc kubenswrapper[4778]: I0312 13:50:00.162010 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgll9\" (UniqueName: \"kubernetes.io/projected/8b911b49-7b0f-48ef-9626-cd43d308d596-kube-api-access-fgll9\") pod \"auto-csr-approver-29555390-dml9r\" (UID: \"8b911b49-7b0f-48ef-9626-cd43d308d596\") " pod="openshift-infra/auto-csr-approver-29555390-dml9r" Mar 12 13:50:00 crc kubenswrapper[4778]: I0312 13:50:00.264406 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgll9\" (UniqueName: 
\"kubernetes.io/projected/8b911b49-7b0f-48ef-9626-cd43d308d596-kube-api-access-fgll9\") pod \"auto-csr-approver-29555390-dml9r\" (UID: \"8b911b49-7b0f-48ef-9626-cd43d308d596\") " pod="openshift-infra/auto-csr-approver-29555390-dml9r" Mar 12 13:50:00 crc kubenswrapper[4778]: I0312 13:50:00.293687 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgll9\" (UniqueName: \"kubernetes.io/projected/8b911b49-7b0f-48ef-9626-cd43d308d596-kube-api-access-fgll9\") pod \"auto-csr-approver-29555390-dml9r\" (UID: \"8b911b49-7b0f-48ef-9626-cd43d308d596\") " pod="openshift-infra/auto-csr-approver-29555390-dml9r" Mar 12 13:50:00 crc kubenswrapper[4778]: I0312 13:50:00.466900 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555390-dml9r" Mar 12 13:50:00 crc kubenswrapper[4778]: W0312 13:50:00.926763 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b911b49_7b0f_48ef_9626_cd43d308d596.slice/crio-2e2a7da2c9fca297f720655456df247f407a38064448b8b331c8699bf971e8a8 WatchSource:0}: Error finding container 2e2a7da2c9fca297f720655456df247f407a38064448b8b331c8699bf971e8a8: Status 404 returned error can't find the container with id 2e2a7da2c9fca297f720655456df247f407a38064448b8b331c8699bf971e8a8 Mar 12 13:50:00 crc kubenswrapper[4778]: I0312 13:50:00.927034 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555390-dml9r"] Mar 12 13:50:01 crc kubenswrapper[4778]: I0312 13:50:01.354843 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555390-dml9r" event={"ID":"8b911b49-7b0f-48ef-9626-cd43d308d596","Type":"ContainerStarted","Data":"2e2a7da2c9fca297f720655456df247f407a38064448b8b331c8699bf971e8a8"} Mar 12 13:50:03 crc kubenswrapper[4778]: I0312 13:50:03.374257 4778 generic.go:334] "Generic (PLEG): container finished" 
podID="8b911b49-7b0f-48ef-9626-cd43d308d596" containerID="63aaec5f1f507e8a81d7498ca66c0663cdb9bde37e98025da0e464b4ce6c885e" exitCode=0 Mar 12 13:50:03 crc kubenswrapper[4778]: I0312 13:50:03.374303 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555390-dml9r" event={"ID":"8b911b49-7b0f-48ef-9626-cd43d308d596","Type":"ContainerDied","Data":"63aaec5f1f507e8a81d7498ca66c0663cdb9bde37e98025da0e464b4ce6c885e"} Mar 12 13:50:04 crc kubenswrapper[4778]: I0312 13:50:04.700444 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555390-dml9r" Mar 12 13:50:04 crc kubenswrapper[4778]: I0312 13:50:04.846585 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgll9\" (UniqueName: \"kubernetes.io/projected/8b911b49-7b0f-48ef-9626-cd43d308d596-kube-api-access-fgll9\") pod \"8b911b49-7b0f-48ef-9626-cd43d308d596\" (UID: \"8b911b49-7b0f-48ef-9626-cd43d308d596\") " Mar 12 13:50:04 crc kubenswrapper[4778]: I0312 13:50:04.854434 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b911b49-7b0f-48ef-9626-cd43d308d596-kube-api-access-fgll9" (OuterVolumeSpecName: "kube-api-access-fgll9") pod "8b911b49-7b0f-48ef-9626-cd43d308d596" (UID: "8b911b49-7b0f-48ef-9626-cd43d308d596"). InnerVolumeSpecName "kube-api-access-fgll9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:50:04 crc kubenswrapper[4778]: I0312 13:50:04.949670 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgll9\" (UniqueName: \"kubernetes.io/projected/8b911b49-7b0f-48ef-9626-cd43d308d596-kube-api-access-fgll9\") on node \"crc\" DevicePath \"\"" Mar 12 13:50:05 crc kubenswrapper[4778]: I0312 13:50:05.393504 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555390-dml9r" event={"ID":"8b911b49-7b0f-48ef-9626-cd43d308d596","Type":"ContainerDied","Data":"2e2a7da2c9fca297f720655456df247f407a38064448b8b331c8699bf971e8a8"} Mar 12 13:50:05 crc kubenswrapper[4778]: I0312 13:50:05.393551 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e2a7da2c9fca297f720655456df247f407a38064448b8b331c8699bf971e8a8" Mar 12 13:50:05 crc kubenswrapper[4778]: I0312 13:50:05.393585 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555390-dml9r" Mar 12 13:50:05 crc kubenswrapper[4778]: I0312 13:50:05.774574 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555384-znhr8"] Mar 12 13:50:05 crc kubenswrapper[4778]: I0312 13:50:05.784599 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555384-znhr8"] Mar 12 13:50:06 crc kubenswrapper[4778]: I0312 13:50:06.264643 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70dc8f5a-da90-4090-b630-a6a7bd438f64" path="/var/lib/kubelet/pods/70dc8f5a-da90-4090-b630-a6a7bd438f64/volumes" Mar 12 13:50:07 crc kubenswrapper[4778]: I0312 13:50:07.254263 4778 scope.go:117] "RemoveContainer" containerID="5d7d3c0b73016a8d7ee117c8146ea559fc88bdaa58f9d10b5498b859a6d9fa8f" Mar 12 13:50:07 crc kubenswrapper[4778]: E0312 13:50:07.254628 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 13:50:19 crc kubenswrapper[4778]: I0312 13:50:19.553426 4778 scope.go:117] "RemoveContainer" containerID="e97aad250ae3960e7483df5290e0221b9fbbbe6a75ec4afcb92fd5c46ee60b01" Mar 12 13:50:21 crc kubenswrapper[4778]: I0312 13:50:21.254553 4778 scope.go:117] "RemoveContainer" containerID="5d7d3c0b73016a8d7ee117c8146ea559fc88bdaa58f9d10b5498b859a6d9fa8f" Mar 12 13:50:21 crc kubenswrapper[4778]: E0312 13:50:21.255400 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 13:50:33 crc kubenswrapper[4778]: I0312 13:50:33.253749 4778 scope.go:117] "RemoveContainer" containerID="5d7d3c0b73016a8d7ee117c8146ea559fc88bdaa58f9d10b5498b859a6d9fa8f" Mar 12 13:50:33 crc kubenswrapper[4778]: E0312 13:50:33.254350 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 13:50:47 crc kubenswrapper[4778]: I0312 13:50:47.254054 4778 scope.go:117] "RemoveContainer" 
containerID="5d7d3c0b73016a8d7ee117c8146ea559fc88bdaa58f9d10b5498b859a6d9fa8f" Mar 12 13:50:47 crc kubenswrapper[4778]: E0312 13:50:47.254882 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 13:50:51 crc kubenswrapper[4778]: I0312 13:50:51.385801 4778 generic.go:334] "Generic (PLEG): container finished" podID="5cc410de-5b42-44d1-8b29-37161475730e" containerID="8d307b414f96fb01f36bf7a3a773e62c07543e84217803c38e669173f219bc57" exitCode=0 Mar 12 13:50:51 crc kubenswrapper[4778]: I0312 13:50:51.385897 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-custom-edpm-deployment-openstack-edpm-ipawlfsg" event={"ID":"5cc410de-5b42-44d1-8b29-37161475730e","Type":"ContainerDied","Data":"8d307b414f96fb01f36bf7a3a773e62c07543e84217803c38e669173f219bc57"} Mar 12 13:50:52 crc kubenswrapper[4778]: I0312 13:50:52.835639 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-custom-edpm-deployment-openstack-edpm-ipawlfsg" Mar 12 13:50:52 crc kubenswrapper[4778]: I0312 13:50:52.948058 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5cc410de-5b42-44d1-8b29-37161475730e-inventory\") pod \"5cc410de-5b42-44d1-8b29-37161475730e\" (UID: \"5cc410de-5b42-44d1-8b29-37161475730e\") " Mar 12 13:50:52 crc kubenswrapper[4778]: I0312 13:50:52.948176 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5cc410de-5b42-44d1-8b29-37161475730e-nova-cell1-metadata-neutron-config-0\") pod \"5cc410de-5b42-44d1-8b29-37161475730e\" (UID: \"5cc410de-5b42-44d1-8b29-37161475730e\") " Mar 12 13:50:52 crc kubenswrapper[4778]: I0312 13:50:52.948226 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5cc410de-5b42-44d1-8b29-37161475730e-neutron-ovn-metadata-agent-neutron-config-0\") pod \"5cc410de-5b42-44d1-8b29-37161475730e\" (UID: \"5cc410de-5b42-44d1-8b29-37161475730e\") " Mar 12 13:50:52 crc kubenswrapper[4778]: I0312 13:50:52.948328 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-custom-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cc410de-5b42-44d1-8b29-37161475730e-neutron-metadata-custom-combined-ca-bundle\") pod \"5cc410de-5b42-44d1-8b29-37161475730e\" (UID: \"5cc410de-5b42-44d1-8b29-37161475730e\") " Mar 12 13:50:52 crc kubenswrapper[4778]: I0312 13:50:52.948372 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5cc410de-5b42-44d1-8b29-37161475730e-ssh-key-openstack-edpm-ipam\") pod \"5cc410de-5b42-44d1-8b29-37161475730e\" (UID: 
\"5cc410de-5b42-44d1-8b29-37161475730e\") " Mar 12 13:50:52 crc kubenswrapper[4778]: I0312 13:50:52.948406 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8wvt\" (UniqueName: \"kubernetes.io/projected/5cc410de-5b42-44d1-8b29-37161475730e-kube-api-access-g8wvt\") pod \"5cc410de-5b42-44d1-8b29-37161475730e\" (UID: \"5cc410de-5b42-44d1-8b29-37161475730e\") " Mar 12 13:50:52 crc kubenswrapper[4778]: I0312 13:50:52.954065 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cc410de-5b42-44d1-8b29-37161475730e-neutron-metadata-custom-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-custom-combined-ca-bundle") pod "5cc410de-5b42-44d1-8b29-37161475730e" (UID: "5cc410de-5b42-44d1-8b29-37161475730e"). InnerVolumeSpecName "neutron-metadata-custom-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:50:52 crc kubenswrapper[4778]: I0312 13:50:52.954618 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cc410de-5b42-44d1-8b29-37161475730e-kube-api-access-g8wvt" (OuterVolumeSpecName: "kube-api-access-g8wvt") pod "5cc410de-5b42-44d1-8b29-37161475730e" (UID: "5cc410de-5b42-44d1-8b29-37161475730e"). InnerVolumeSpecName "kube-api-access-g8wvt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:50:52 crc kubenswrapper[4778]: I0312 13:50:52.975806 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cc410de-5b42-44d1-8b29-37161475730e-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "5cc410de-5b42-44d1-8b29-37161475730e" (UID: "5cc410de-5b42-44d1-8b29-37161475730e"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:50:52 crc kubenswrapper[4778]: I0312 13:50:52.976144 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cc410de-5b42-44d1-8b29-37161475730e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5cc410de-5b42-44d1-8b29-37161475730e" (UID: "5cc410de-5b42-44d1-8b29-37161475730e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:50:52 crc kubenswrapper[4778]: I0312 13:50:52.982428 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cc410de-5b42-44d1-8b29-37161475730e-inventory" (OuterVolumeSpecName: "inventory") pod "5cc410de-5b42-44d1-8b29-37161475730e" (UID: "5cc410de-5b42-44d1-8b29-37161475730e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:50:52 crc kubenswrapper[4778]: I0312 13:50:52.982868 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cc410de-5b42-44d1-8b29-37161475730e-nova-cell1-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-cell1-metadata-neutron-config-0") pod "5cc410de-5b42-44d1-8b29-37161475730e" (UID: "5cc410de-5b42-44d1-8b29-37161475730e"). InnerVolumeSpecName "nova-cell1-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:50:53 crc kubenswrapper[4778]: I0312 13:50:53.051419 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8wvt\" (UniqueName: \"kubernetes.io/projected/5cc410de-5b42-44d1-8b29-37161475730e-kube-api-access-g8wvt\") on node \"crc\" DevicePath \"\"" Mar 12 13:50:53 crc kubenswrapper[4778]: I0312 13:50:53.051478 4778 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5cc410de-5b42-44d1-8b29-37161475730e-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 13:50:53 crc kubenswrapper[4778]: I0312 13:50:53.051497 4778 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5cc410de-5b42-44d1-8b29-37161475730e-nova-cell1-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Mar 12 13:50:53 crc kubenswrapper[4778]: I0312 13:50:53.051518 4778 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5cc410de-5b42-44d1-8b29-37161475730e-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Mar 12 13:50:53 crc kubenswrapper[4778]: I0312 13:50:53.051535 4778 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-custom-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cc410de-5b42-44d1-8b29-37161475730e-neutron-metadata-custom-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:50:53 crc kubenswrapper[4778]: I0312 13:50:53.051549 4778 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5cc410de-5b42-44d1-8b29-37161475730e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 13:50:53 crc kubenswrapper[4778]: I0312 13:50:53.401024 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-custom-edpm-deployment-openstack-edpm-ipawlfsg" event={"ID":"5cc410de-5b42-44d1-8b29-37161475730e","Type":"ContainerDied","Data":"df0c491452d065b345698565a278b9fb265300119cb4f52d617e573795f6237b"} Mar 12 13:50:53 crc kubenswrapper[4778]: I0312 13:50:53.401070 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df0c491452d065b345698565a278b9fb265300119cb4f52d617e573795f6237b" Mar 12 13:50:53 crc kubenswrapper[4778]: I0312 13:50:53.401146 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-custom-edpm-deployment-openstack-edpm-ipawlfsg" Mar 12 13:50:53 crc kubenswrapper[4778]: I0312 13:50:53.583800 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4m9w8"] Mar 12 13:50:53 crc kubenswrapper[4778]: E0312 13:50:53.584221 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cc410de-5b42-44d1-8b29-37161475730e" containerName="neutron-metadata-custom-edpm-deployment-openstack-edpm-ipam" Mar 12 13:50:53 crc kubenswrapper[4778]: I0312 13:50:53.584237 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cc410de-5b42-44d1-8b29-37161475730e" containerName="neutron-metadata-custom-edpm-deployment-openstack-edpm-ipam" Mar 12 13:50:53 crc kubenswrapper[4778]: E0312 13:50:53.584270 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b911b49-7b0f-48ef-9626-cd43d308d596" containerName="oc" Mar 12 13:50:53 crc kubenswrapper[4778]: I0312 13:50:53.584278 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b911b49-7b0f-48ef-9626-cd43d308d596" containerName="oc" Mar 12 13:50:53 crc kubenswrapper[4778]: I0312 13:50:53.584456 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cc410de-5b42-44d1-8b29-37161475730e" containerName="neutron-metadata-custom-edpm-deployment-openstack-edpm-ipam" Mar 12 13:50:53 crc kubenswrapper[4778]: 
I0312 13:50:53.584476 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b911b49-7b0f-48ef-9626-cd43d308d596" containerName="oc" Mar 12 13:50:53 crc kubenswrapper[4778]: I0312 13:50:53.585066 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4m9w8" Mar 12 13:50:53 crc kubenswrapper[4778]: I0312 13:50:53.587903 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 13:50:53 crc kubenswrapper[4778]: I0312 13:50:53.588054 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 13:50:53 crc kubenswrapper[4778]: I0312 13:50:53.588121 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 13:50:53 crc kubenswrapper[4778]: I0312 13:50:53.589161 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Mar 12 13:50:53 crc kubenswrapper[4778]: I0312 13:50:53.591065 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qn2vx" Mar 12 13:50:53 crc kubenswrapper[4778]: I0312 13:50:53.624228 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4m9w8"] Mar 12 13:50:53 crc kubenswrapper[4778]: I0312 13:50:53.763365 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8713b951-b516-42bd-9286-4343e5bcc955-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4m9w8\" (UID: \"8713b951-b516-42bd-9286-4343e5bcc955\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4m9w8" Mar 12 13:50:53 crc kubenswrapper[4778]: I0312 13:50:53.763427 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8713b951-b516-42bd-9286-4343e5bcc955-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4m9w8\" (UID: \"8713b951-b516-42bd-9286-4343e5bcc955\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4m9w8" Mar 12 13:50:53 crc kubenswrapper[4778]: I0312 13:50:53.763700 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8713b951-b516-42bd-9286-4343e5bcc955-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4m9w8\" (UID: \"8713b951-b516-42bd-9286-4343e5bcc955\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4m9w8" Mar 12 13:50:53 crc kubenswrapper[4778]: I0312 13:50:53.763769 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42btt\" (UniqueName: \"kubernetes.io/projected/8713b951-b516-42bd-9286-4343e5bcc955-kube-api-access-42btt\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4m9w8\" (UID: \"8713b951-b516-42bd-9286-4343e5bcc955\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4m9w8" Mar 12 13:50:53 crc kubenswrapper[4778]: I0312 13:50:53.763803 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8713b951-b516-42bd-9286-4343e5bcc955-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4m9w8\" (UID: \"8713b951-b516-42bd-9286-4343e5bcc955\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4m9w8" Mar 12 13:50:53 crc kubenswrapper[4778]: I0312 13:50:53.868379 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8713b951-b516-42bd-9286-4343e5bcc955-inventory\") pod 
\"libvirt-edpm-deployment-openstack-edpm-ipam-4m9w8\" (UID: \"8713b951-b516-42bd-9286-4343e5bcc955\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4m9w8" Mar 12 13:50:53 crc kubenswrapper[4778]: I0312 13:50:53.868450 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8713b951-b516-42bd-9286-4343e5bcc955-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4m9w8\" (UID: \"8713b951-b516-42bd-9286-4343e5bcc955\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4m9w8" Mar 12 13:50:53 crc kubenswrapper[4778]: I0312 13:50:53.868531 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8713b951-b516-42bd-9286-4343e5bcc955-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4m9w8\" (UID: \"8713b951-b516-42bd-9286-4343e5bcc955\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4m9w8" Mar 12 13:50:53 crc kubenswrapper[4778]: I0312 13:50:53.868555 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42btt\" (UniqueName: \"kubernetes.io/projected/8713b951-b516-42bd-9286-4343e5bcc955-kube-api-access-42btt\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4m9w8\" (UID: \"8713b951-b516-42bd-9286-4343e5bcc955\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4m9w8" Mar 12 13:50:53 crc kubenswrapper[4778]: I0312 13:50:53.868578 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8713b951-b516-42bd-9286-4343e5bcc955-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4m9w8\" (UID: \"8713b951-b516-42bd-9286-4343e5bcc955\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4m9w8" Mar 12 13:50:53 crc kubenswrapper[4778]: I0312 
13:50:53.873451 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8713b951-b516-42bd-9286-4343e5bcc955-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4m9w8\" (UID: \"8713b951-b516-42bd-9286-4343e5bcc955\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4m9w8" Mar 12 13:50:53 crc kubenswrapper[4778]: I0312 13:50:53.873901 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8713b951-b516-42bd-9286-4343e5bcc955-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4m9w8\" (UID: \"8713b951-b516-42bd-9286-4343e5bcc955\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4m9w8" Mar 12 13:50:53 crc kubenswrapper[4778]: I0312 13:50:53.877032 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8713b951-b516-42bd-9286-4343e5bcc955-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4m9w8\" (UID: \"8713b951-b516-42bd-9286-4343e5bcc955\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4m9w8" Mar 12 13:50:53 crc kubenswrapper[4778]: I0312 13:50:53.882977 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8713b951-b516-42bd-9286-4343e5bcc955-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4m9w8\" (UID: \"8713b951-b516-42bd-9286-4343e5bcc955\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4m9w8" Mar 12 13:50:53 crc kubenswrapper[4778]: I0312 13:50:53.893893 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42btt\" (UniqueName: \"kubernetes.io/projected/8713b951-b516-42bd-9286-4343e5bcc955-kube-api-access-42btt\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4m9w8\" 
(UID: \"8713b951-b516-42bd-9286-4343e5bcc955\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4m9w8" Mar 12 13:50:53 crc kubenswrapper[4778]: I0312 13:50:53.907570 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4m9w8" Mar 12 13:50:54 crc kubenswrapper[4778]: I0312 13:50:54.450962 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4m9w8"] Mar 12 13:50:55 crc kubenswrapper[4778]: I0312 13:50:55.420237 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4m9w8" event={"ID":"8713b951-b516-42bd-9286-4343e5bcc955","Type":"ContainerStarted","Data":"a4247422875ccc1a942b0fe9bbe7105ae4cc94d3cc420523932f239263b637f2"} Mar 12 13:50:55 crc kubenswrapper[4778]: I0312 13:50:55.420824 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4m9w8" event={"ID":"8713b951-b516-42bd-9286-4343e5bcc955","Type":"ContainerStarted","Data":"cceb4d2cb4de9b629a148def19221cfb5724a46254ab30fcc858bc4f7a667a5f"} Mar 12 13:50:55 crc kubenswrapper[4778]: I0312 13:50:55.451022 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4m9w8" podStartSLOduration=1.968408094 podStartE2EDuration="2.450997943s" podCreationTimestamp="2026-03-12 13:50:53 +0000 UTC" firstStartedPulling="2026-03-12 13:50:54.452425869 +0000 UTC m=+2472.901121265" lastFinishedPulling="2026-03-12 13:50:54.935015718 +0000 UTC m=+2473.383711114" observedRunningTime="2026-03-12 13:50:55.440501765 +0000 UTC m=+2473.889197171" watchObservedRunningTime="2026-03-12 13:50:55.450997943 +0000 UTC m=+2473.899693339" Mar 12 13:50:58 crc kubenswrapper[4778]: I0312 13:50:58.254522 4778 scope.go:117] "RemoveContainer" 
containerID="5d7d3c0b73016a8d7ee117c8146ea559fc88bdaa58f9d10b5498b859a6d9fa8f" Mar 12 13:50:58 crc kubenswrapper[4778]: E0312 13:50:58.255053 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 13:51:09 crc kubenswrapper[4778]: I0312 13:51:09.254114 4778 scope.go:117] "RemoveContainer" containerID="5d7d3c0b73016a8d7ee117c8146ea559fc88bdaa58f9d10b5498b859a6d9fa8f" Mar 12 13:51:09 crc kubenswrapper[4778]: E0312 13:51:09.256764 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 13:51:23 crc kubenswrapper[4778]: I0312 13:51:23.253920 4778 scope.go:117] "RemoveContainer" containerID="5d7d3c0b73016a8d7ee117c8146ea559fc88bdaa58f9d10b5498b859a6d9fa8f" Mar 12 13:51:23 crc kubenswrapper[4778]: E0312 13:51:23.254706 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 13:51:38 crc kubenswrapper[4778]: I0312 13:51:38.253783 4778 scope.go:117] 
"RemoveContainer" containerID="5d7d3c0b73016a8d7ee117c8146ea559fc88bdaa58f9d10b5498b859a6d9fa8f" Mar 12 13:51:38 crc kubenswrapper[4778]: E0312 13:51:38.255564 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 13:51:50 crc kubenswrapper[4778]: I0312 13:51:50.274388 4778 scope.go:117] "RemoveContainer" containerID="5d7d3c0b73016a8d7ee117c8146ea559fc88bdaa58f9d10b5498b859a6d9fa8f" Mar 12 13:51:50 crc kubenswrapper[4778]: E0312 13:51:50.275296 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 13:52:00 crc kubenswrapper[4778]: I0312 13:52:00.176490 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555392-wg78w"] Mar 12 13:52:00 crc kubenswrapper[4778]: I0312 13:52:00.179136 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555392-wg78w" Mar 12 13:52:00 crc kubenswrapper[4778]: I0312 13:52:00.181618 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 13:52:00 crc kubenswrapper[4778]: I0312 13:52:00.182497 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 13:52:00 crc kubenswrapper[4778]: I0312 13:52:00.183945 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 13:52:00 crc kubenswrapper[4778]: I0312 13:52:00.197161 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjqkg\" (UniqueName: \"kubernetes.io/projected/0a0b5070-03d8-45fe-8148-c39a9b560fbb-kube-api-access-hjqkg\") pod \"auto-csr-approver-29555392-wg78w\" (UID: \"0a0b5070-03d8-45fe-8148-c39a9b560fbb\") " pod="openshift-infra/auto-csr-approver-29555392-wg78w" Mar 12 13:52:00 crc kubenswrapper[4778]: I0312 13:52:00.200394 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555392-wg78w"] Mar 12 13:52:00 crc kubenswrapper[4778]: I0312 13:52:00.298974 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjqkg\" (UniqueName: \"kubernetes.io/projected/0a0b5070-03d8-45fe-8148-c39a9b560fbb-kube-api-access-hjqkg\") pod \"auto-csr-approver-29555392-wg78w\" (UID: \"0a0b5070-03d8-45fe-8148-c39a9b560fbb\") " pod="openshift-infra/auto-csr-approver-29555392-wg78w" Mar 12 13:52:00 crc kubenswrapper[4778]: I0312 13:52:00.323907 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjqkg\" (UniqueName: \"kubernetes.io/projected/0a0b5070-03d8-45fe-8148-c39a9b560fbb-kube-api-access-hjqkg\") pod \"auto-csr-approver-29555392-wg78w\" (UID: \"0a0b5070-03d8-45fe-8148-c39a9b560fbb\") " 
pod="openshift-infra/auto-csr-approver-29555392-wg78w" Mar 12 13:52:00 crc kubenswrapper[4778]: I0312 13:52:00.503749 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555392-wg78w" Mar 12 13:52:00 crc kubenswrapper[4778]: I0312 13:52:00.952958 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555392-wg78w"] Mar 12 13:52:01 crc kubenswrapper[4778]: I0312 13:52:01.120319 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555392-wg78w" event={"ID":"0a0b5070-03d8-45fe-8148-c39a9b560fbb","Type":"ContainerStarted","Data":"754978825e789f510db4cf26367293fc9868be6072bc39dc3e4bce6fab2beb06"} Mar 12 13:52:04 crc kubenswrapper[4778]: I0312 13:52:04.254271 4778 scope.go:117] "RemoveContainer" containerID="5d7d3c0b73016a8d7ee117c8146ea559fc88bdaa58f9d10b5498b859a6d9fa8f" Mar 12 13:52:04 crc kubenswrapper[4778]: E0312 13:52:04.255057 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 13:52:05 crc kubenswrapper[4778]: I0312 13:52:05.184531 4778 generic.go:334] "Generic (PLEG): container finished" podID="0a0b5070-03d8-45fe-8148-c39a9b560fbb" containerID="1f9b06fe647c9c9d52674fc3e58e1c9d5c930036da2b4f235a350fc83217496f" exitCode=0 Mar 12 13:52:05 crc kubenswrapper[4778]: I0312 13:52:05.184818 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555392-wg78w" event={"ID":"0a0b5070-03d8-45fe-8148-c39a9b560fbb","Type":"ContainerDied","Data":"1f9b06fe647c9c9d52674fc3e58e1c9d5c930036da2b4f235a350fc83217496f"} 
Mar 12 13:52:06 crc kubenswrapper[4778]: I0312 13:52:06.505048 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555392-wg78w" Mar 12 13:52:06 crc kubenswrapper[4778]: I0312 13:52:06.648350 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjqkg\" (UniqueName: \"kubernetes.io/projected/0a0b5070-03d8-45fe-8148-c39a9b560fbb-kube-api-access-hjqkg\") pod \"0a0b5070-03d8-45fe-8148-c39a9b560fbb\" (UID: \"0a0b5070-03d8-45fe-8148-c39a9b560fbb\") " Mar 12 13:52:06 crc kubenswrapper[4778]: I0312 13:52:06.657216 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a0b5070-03d8-45fe-8148-c39a9b560fbb-kube-api-access-hjqkg" (OuterVolumeSpecName: "kube-api-access-hjqkg") pod "0a0b5070-03d8-45fe-8148-c39a9b560fbb" (UID: "0a0b5070-03d8-45fe-8148-c39a9b560fbb"). InnerVolumeSpecName "kube-api-access-hjqkg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:52:06 crc kubenswrapper[4778]: I0312 13:52:06.750843 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjqkg\" (UniqueName: \"kubernetes.io/projected/0a0b5070-03d8-45fe-8148-c39a9b560fbb-kube-api-access-hjqkg\") on node \"crc\" DevicePath \"\"" Mar 12 13:52:07 crc kubenswrapper[4778]: I0312 13:52:07.202568 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555392-wg78w" event={"ID":"0a0b5070-03d8-45fe-8148-c39a9b560fbb","Type":"ContainerDied","Data":"754978825e789f510db4cf26367293fc9868be6072bc39dc3e4bce6fab2beb06"} Mar 12 13:52:07 crc kubenswrapper[4778]: I0312 13:52:07.202632 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="754978825e789f510db4cf26367293fc9868be6072bc39dc3e4bce6fab2beb06" Mar 12 13:52:07 crc kubenswrapper[4778]: I0312 13:52:07.202642 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555392-wg78w" Mar 12 13:52:07 crc kubenswrapper[4778]: I0312 13:52:07.580305 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555386-vjswk"] Mar 12 13:52:07 crc kubenswrapper[4778]: I0312 13:52:07.588654 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555386-vjswk"] Mar 12 13:52:08 crc kubenswrapper[4778]: I0312 13:52:08.267583 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f55c85e9-4cb7-4ac4-bc3d-c37217b4abf6" path="/var/lib/kubelet/pods/f55c85e9-4cb7-4ac4-bc3d-c37217b4abf6/volumes" Mar 12 13:52:15 crc kubenswrapper[4778]: I0312 13:52:15.253902 4778 scope.go:117] "RemoveContainer" containerID="5d7d3c0b73016a8d7ee117c8146ea559fc88bdaa58f9d10b5498b859a6d9fa8f" Mar 12 13:52:15 crc kubenswrapper[4778]: E0312 13:52:15.254769 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 13:52:19 crc kubenswrapper[4778]: I0312 13:52:19.713797 4778 scope.go:117] "RemoveContainer" containerID="2e424e585231dad361491fa12a9a787f83d6973879b6b45159764198bbcf5877" Mar 12 13:52:26 crc kubenswrapper[4778]: I0312 13:52:26.253808 4778 scope.go:117] "RemoveContainer" containerID="5d7d3c0b73016a8d7ee117c8146ea559fc88bdaa58f9d10b5498b859a6d9fa8f" Mar 12 13:52:26 crc kubenswrapper[4778]: E0312 13:52:26.254534 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 13:52:40 crc kubenswrapper[4778]: I0312 13:52:40.254327 4778 scope.go:117] "RemoveContainer" containerID="5d7d3c0b73016a8d7ee117c8146ea559fc88bdaa58f9d10b5498b859a6d9fa8f" Mar 12 13:52:40 crc kubenswrapper[4778]: E0312 13:52:40.255473 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 13:52:54 crc kubenswrapper[4778]: I0312 13:52:54.253724 4778 scope.go:117] "RemoveContainer" containerID="5d7d3c0b73016a8d7ee117c8146ea559fc88bdaa58f9d10b5498b859a6d9fa8f" Mar 12 13:52:54 crc kubenswrapper[4778]: E0312 13:52:54.254434 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 13:53:09 crc kubenswrapper[4778]: I0312 13:53:09.253986 4778 scope.go:117] "RemoveContainer" containerID="5d7d3c0b73016a8d7ee117c8146ea559fc88bdaa58f9d10b5498b859a6d9fa8f" Mar 12 13:53:09 crc kubenswrapper[4778]: I0312 13:53:09.738404 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" 
event={"ID":"24438fc6-dab0-4a9e-8b97-2532da76d9cd","Type":"ContainerStarted","Data":"e20e6fa2d381e3ff917a0f6074e27521c909a7932045eacfc15c005ed843cb93"} Mar 12 13:54:00 crc kubenswrapper[4778]: I0312 13:54:00.150542 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555394-7f7nf"] Mar 12 13:54:00 crc kubenswrapper[4778]: E0312 13:54:00.151578 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a0b5070-03d8-45fe-8148-c39a9b560fbb" containerName="oc" Mar 12 13:54:00 crc kubenswrapper[4778]: I0312 13:54:00.151594 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a0b5070-03d8-45fe-8148-c39a9b560fbb" containerName="oc" Mar 12 13:54:00 crc kubenswrapper[4778]: I0312 13:54:00.151837 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a0b5070-03d8-45fe-8148-c39a9b560fbb" containerName="oc" Mar 12 13:54:00 crc kubenswrapper[4778]: I0312 13:54:00.152634 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555394-7f7nf" Mar 12 13:54:00 crc kubenswrapper[4778]: I0312 13:54:00.154694 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 13:54:00 crc kubenswrapper[4778]: I0312 13:54:00.154753 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 13:54:00 crc kubenswrapper[4778]: I0312 13:54:00.155420 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 13:54:00 crc kubenswrapper[4778]: I0312 13:54:00.162292 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555394-7f7nf"] Mar 12 13:54:00 crc kubenswrapper[4778]: I0312 13:54:00.220290 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww9dv\" (UniqueName: 
\"kubernetes.io/projected/65fbb68a-57a2-40bf-9149-6cfe13fe147c-kube-api-access-ww9dv\") pod \"auto-csr-approver-29555394-7f7nf\" (UID: \"65fbb68a-57a2-40bf-9149-6cfe13fe147c\") " pod="openshift-infra/auto-csr-approver-29555394-7f7nf" Mar 12 13:54:00 crc kubenswrapper[4778]: I0312 13:54:00.322799 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ww9dv\" (UniqueName: \"kubernetes.io/projected/65fbb68a-57a2-40bf-9149-6cfe13fe147c-kube-api-access-ww9dv\") pod \"auto-csr-approver-29555394-7f7nf\" (UID: \"65fbb68a-57a2-40bf-9149-6cfe13fe147c\") " pod="openshift-infra/auto-csr-approver-29555394-7f7nf" Mar 12 13:54:00 crc kubenswrapper[4778]: I0312 13:54:00.349125 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ww9dv\" (UniqueName: \"kubernetes.io/projected/65fbb68a-57a2-40bf-9149-6cfe13fe147c-kube-api-access-ww9dv\") pod \"auto-csr-approver-29555394-7f7nf\" (UID: \"65fbb68a-57a2-40bf-9149-6cfe13fe147c\") " pod="openshift-infra/auto-csr-approver-29555394-7f7nf" Mar 12 13:54:00 crc kubenswrapper[4778]: I0312 13:54:00.479548 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555394-7f7nf" Mar 12 13:54:00 crc kubenswrapper[4778]: I0312 13:54:00.932120 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555394-7f7nf"] Mar 12 13:54:00 crc kubenswrapper[4778]: W0312 13:54:00.938485 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65fbb68a_57a2_40bf_9149_6cfe13fe147c.slice/crio-0cb9aa3c58c77f6af33863cdd5012dbf15ab7d672bd63a4b37baf0edea4c0df3 WatchSource:0}: Error finding container 0cb9aa3c58c77f6af33863cdd5012dbf15ab7d672bd63a4b37baf0edea4c0df3: Status 404 returned error can't find the container with id 0cb9aa3c58c77f6af33863cdd5012dbf15ab7d672bd63a4b37baf0edea4c0df3 Mar 12 13:54:01 crc kubenswrapper[4778]: I0312 13:54:01.172423 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555394-7f7nf" event={"ID":"65fbb68a-57a2-40bf-9149-6cfe13fe147c","Type":"ContainerStarted","Data":"0cb9aa3c58c77f6af33863cdd5012dbf15ab7d672bd63a4b37baf0edea4c0df3"} Mar 12 13:54:03 crc kubenswrapper[4778]: I0312 13:54:03.192373 4778 generic.go:334] "Generic (PLEG): container finished" podID="65fbb68a-57a2-40bf-9149-6cfe13fe147c" containerID="b93a8a130b5f9b7d0852157c6942677a4b8f445ae1cc7062b429977ab9491779" exitCode=0 Mar 12 13:54:03 crc kubenswrapper[4778]: I0312 13:54:03.192449 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555394-7f7nf" event={"ID":"65fbb68a-57a2-40bf-9149-6cfe13fe147c","Type":"ContainerDied","Data":"b93a8a130b5f9b7d0852157c6942677a4b8f445ae1cc7062b429977ab9491779"} Mar 12 13:54:04 crc kubenswrapper[4778]: I0312 13:54:04.560559 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555394-7f7nf" Mar 12 13:54:04 crc kubenswrapper[4778]: I0312 13:54:04.706091 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ww9dv\" (UniqueName: \"kubernetes.io/projected/65fbb68a-57a2-40bf-9149-6cfe13fe147c-kube-api-access-ww9dv\") pod \"65fbb68a-57a2-40bf-9149-6cfe13fe147c\" (UID: \"65fbb68a-57a2-40bf-9149-6cfe13fe147c\") " Mar 12 13:54:04 crc kubenswrapper[4778]: I0312 13:54:04.711896 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65fbb68a-57a2-40bf-9149-6cfe13fe147c-kube-api-access-ww9dv" (OuterVolumeSpecName: "kube-api-access-ww9dv") pod "65fbb68a-57a2-40bf-9149-6cfe13fe147c" (UID: "65fbb68a-57a2-40bf-9149-6cfe13fe147c"). InnerVolumeSpecName "kube-api-access-ww9dv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:54:04 crc kubenswrapper[4778]: I0312 13:54:04.808306 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ww9dv\" (UniqueName: \"kubernetes.io/projected/65fbb68a-57a2-40bf-9149-6cfe13fe147c-kube-api-access-ww9dv\") on node \"crc\" DevicePath \"\"" Mar 12 13:54:05 crc kubenswrapper[4778]: I0312 13:54:05.214033 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555394-7f7nf" event={"ID":"65fbb68a-57a2-40bf-9149-6cfe13fe147c","Type":"ContainerDied","Data":"0cb9aa3c58c77f6af33863cdd5012dbf15ab7d672bd63a4b37baf0edea4c0df3"} Mar 12 13:54:05 crc kubenswrapper[4778]: I0312 13:54:05.214581 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0cb9aa3c58c77f6af33863cdd5012dbf15ab7d672bd63a4b37baf0edea4c0df3" Mar 12 13:54:05 crc kubenswrapper[4778]: I0312 13:54:05.214094 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555394-7f7nf" Mar 12 13:54:05 crc kubenswrapper[4778]: I0312 13:54:05.630875 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555388-5mnjh"] Mar 12 13:54:05 crc kubenswrapper[4778]: I0312 13:54:05.638892 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555388-5mnjh"] Mar 12 13:54:06 crc kubenswrapper[4778]: I0312 13:54:06.265742 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="110071e6-5231-434c-af16-87b68a3d0c8f" path="/var/lib/kubelet/pods/110071e6-5231-434c-af16-87b68a3d0c8f/volumes" Mar 12 13:54:19 crc kubenswrapper[4778]: I0312 13:54:19.846047 4778 scope.go:117] "RemoveContainer" containerID="2b6df075041c6c1583e329716aacbd0c53d1a64cadc9905cc6ddb1e1bd9b676d" Mar 12 13:54:40 crc kubenswrapper[4778]: I0312 13:54:40.972078 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dr7tv"] Mar 12 13:54:40 crc kubenswrapper[4778]: E0312 13:54:40.973327 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65fbb68a-57a2-40bf-9149-6cfe13fe147c" containerName="oc" Mar 12 13:54:40 crc kubenswrapper[4778]: I0312 13:54:40.973355 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="65fbb68a-57a2-40bf-9149-6cfe13fe147c" containerName="oc" Mar 12 13:54:40 crc kubenswrapper[4778]: I0312 13:54:40.973612 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="65fbb68a-57a2-40bf-9149-6cfe13fe147c" containerName="oc" Mar 12 13:54:40 crc kubenswrapper[4778]: I0312 13:54:40.975556 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dr7tv" Mar 12 13:54:40 crc kubenswrapper[4778]: I0312 13:54:40.985126 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dr7tv"] Mar 12 13:54:41 crc kubenswrapper[4778]: I0312 13:54:41.151666 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxdhb\" (UniqueName: \"kubernetes.io/projected/2175642e-200d-49e4-b07c-e594a50dec28-kube-api-access-zxdhb\") pod \"certified-operators-dr7tv\" (UID: \"2175642e-200d-49e4-b07c-e594a50dec28\") " pod="openshift-marketplace/certified-operators-dr7tv" Mar 12 13:54:41 crc kubenswrapper[4778]: I0312 13:54:41.151883 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2175642e-200d-49e4-b07c-e594a50dec28-utilities\") pod \"certified-operators-dr7tv\" (UID: \"2175642e-200d-49e4-b07c-e594a50dec28\") " pod="openshift-marketplace/certified-operators-dr7tv" Mar 12 13:54:41 crc kubenswrapper[4778]: I0312 13:54:41.151923 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2175642e-200d-49e4-b07c-e594a50dec28-catalog-content\") pod \"certified-operators-dr7tv\" (UID: \"2175642e-200d-49e4-b07c-e594a50dec28\") " pod="openshift-marketplace/certified-operators-dr7tv" Mar 12 13:54:41 crc kubenswrapper[4778]: I0312 13:54:41.254069 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxdhb\" (UniqueName: \"kubernetes.io/projected/2175642e-200d-49e4-b07c-e594a50dec28-kube-api-access-zxdhb\") pod \"certified-operators-dr7tv\" (UID: \"2175642e-200d-49e4-b07c-e594a50dec28\") " pod="openshift-marketplace/certified-operators-dr7tv" Mar 12 13:54:41 crc kubenswrapper[4778]: I0312 13:54:41.254279 4778 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2175642e-200d-49e4-b07c-e594a50dec28-utilities\") pod \"certified-operators-dr7tv\" (UID: \"2175642e-200d-49e4-b07c-e594a50dec28\") " pod="openshift-marketplace/certified-operators-dr7tv" Mar 12 13:54:41 crc kubenswrapper[4778]: I0312 13:54:41.254314 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2175642e-200d-49e4-b07c-e594a50dec28-catalog-content\") pod \"certified-operators-dr7tv\" (UID: \"2175642e-200d-49e4-b07c-e594a50dec28\") " pod="openshift-marketplace/certified-operators-dr7tv" Mar 12 13:54:41 crc kubenswrapper[4778]: I0312 13:54:41.254848 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2175642e-200d-49e4-b07c-e594a50dec28-catalog-content\") pod \"certified-operators-dr7tv\" (UID: \"2175642e-200d-49e4-b07c-e594a50dec28\") " pod="openshift-marketplace/certified-operators-dr7tv" Mar 12 13:54:41 crc kubenswrapper[4778]: I0312 13:54:41.254859 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2175642e-200d-49e4-b07c-e594a50dec28-utilities\") pod \"certified-operators-dr7tv\" (UID: \"2175642e-200d-49e4-b07c-e594a50dec28\") " pod="openshift-marketplace/certified-operators-dr7tv" Mar 12 13:54:41 crc kubenswrapper[4778]: I0312 13:54:41.283337 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxdhb\" (UniqueName: \"kubernetes.io/projected/2175642e-200d-49e4-b07c-e594a50dec28-kube-api-access-zxdhb\") pod \"certified-operators-dr7tv\" (UID: \"2175642e-200d-49e4-b07c-e594a50dec28\") " pod="openshift-marketplace/certified-operators-dr7tv" Mar 12 13:54:41 crc kubenswrapper[4778]: I0312 13:54:41.298996 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dr7tv" Mar 12 13:54:41 crc kubenswrapper[4778]: I0312 13:54:41.972821 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dr7tv"] Mar 12 13:54:42 crc kubenswrapper[4778]: I0312 13:54:42.527683 4778 generic.go:334] "Generic (PLEG): container finished" podID="2175642e-200d-49e4-b07c-e594a50dec28" containerID="78f31326cd9cdc9aaf40af7d920b0b345a573b27f1b4f379a64419fc772025c3" exitCode=0 Mar 12 13:54:42 crc kubenswrapper[4778]: I0312 13:54:42.527746 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dr7tv" event={"ID":"2175642e-200d-49e4-b07c-e594a50dec28","Type":"ContainerDied","Data":"78f31326cd9cdc9aaf40af7d920b0b345a573b27f1b4f379a64419fc772025c3"} Mar 12 13:54:42 crc kubenswrapper[4778]: I0312 13:54:42.527932 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dr7tv" event={"ID":"2175642e-200d-49e4-b07c-e594a50dec28","Type":"ContainerStarted","Data":"4b66228125468deb0c32f8bf58c76651899d1e61cf9f3d4b448151fde53f1bc3"} Mar 12 13:54:43 crc kubenswrapper[4778]: I0312 13:54:43.546209 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dr7tv" event={"ID":"2175642e-200d-49e4-b07c-e594a50dec28","Type":"ContainerStarted","Data":"a9205572a7471ec7794f4549cb1ca055cf820789a0c4982c3e93ca87112c9e46"} Mar 12 13:54:44 crc kubenswrapper[4778]: I0312 13:54:44.563529 4778 generic.go:334] "Generic (PLEG): container finished" podID="2175642e-200d-49e4-b07c-e594a50dec28" containerID="a9205572a7471ec7794f4549cb1ca055cf820789a0c4982c3e93ca87112c9e46" exitCode=0 Mar 12 13:54:44 crc kubenswrapper[4778]: I0312 13:54:44.563581 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dr7tv" 
event={"ID":"2175642e-200d-49e4-b07c-e594a50dec28","Type":"ContainerDied","Data":"a9205572a7471ec7794f4549cb1ca055cf820789a0c4982c3e93ca87112c9e46"} Mar 12 13:54:45 crc kubenswrapper[4778]: I0312 13:54:45.573826 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dr7tv" event={"ID":"2175642e-200d-49e4-b07c-e594a50dec28","Type":"ContainerStarted","Data":"0205168eb943737b001c1df62f66ef855d9c4f763c3adaea0c470e9c1ddc2f70"} Mar 12 13:54:45 crc kubenswrapper[4778]: I0312 13:54:45.603635 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dr7tv" podStartSLOduration=3.132173988 podStartE2EDuration="5.603611248s" podCreationTimestamp="2026-03-12 13:54:40 +0000 UTC" firstStartedPulling="2026-03-12 13:54:42.529545341 +0000 UTC m=+2700.978240737" lastFinishedPulling="2026-03-12 13:54:45.000982581 +0000 UTC m=+2703.449677997" observedRunningTime="2026-03-12 13:54:45.597212816 +0000 UTC m=+2704.045908232" watchObservedRunningTime="2026-03-12 13:54:45.603611248 +0000 UTC m=+2704.052306644" Mar 12 13:54:46 crc kubenswrapper[4778]: I0312 13:54:46.952011 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gq2rs"] Mar 12 13:54:46 crc kubenswrapper[4778]: I0312 13:54:46.954558 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gq2rs" Mar 12 13:54:46 crc kubenswrapper[4778]: I0312 13:54:46.970357 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gq2rs"] Mar 12 13:54:47 crc kubenswrapper[4778]: I0312 13:54:47.060930 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1408ca57-ea2c-414f-93af-8c6b930c1fe4-catalog-content\") pod \"community-operators-gq2rs\" (UID: \"1408ca57-ea2c-414f-93af-8c6b930c1fe4\") " pod="openshift-marketplace/community-operators-gq2rs" Mar 12 13:54:47 crc kubenswrapper[4778]: I0312 13:54:47.061053 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1408ca57-ea2c-414f-93af-8c6b930c1fe4-utilities\") pod \"community-operators-gq2rs\" (UID: \"1408ca57-ea2c-414f-93af-8c6b930c1fe4\") " pod="openshift-marketplace/community-operators-gq2rs" Mar 12 13:54:47 crc kubenswrapper[4778]: I0312 13:54:47.061082 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt2rb\" (UniqueName: \"kubernetes.io/projected/1408ca57-ea2c-414f-93af-8c6b930c1fe4-kube-api-access-xt2rb\") pod \"community-operators-gq2rs\" (UID: \"1408ca57-ea2c-414f-93af-8c6b930c1fe4\") " pod="openshift-marketplace/community-operators-gq2rs" Mar 12 13:54:47 crc kubenswrapper[4778]: I0312 13:54:47.162944 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1408ca57-ea2c-414f-93af-8c6b930c1fe4-catalog-content\") pod \"community-operators-gq2rs\" (UID: \"1408ca57-ea2c-414f-93af-8c6b930c1fe4\") " pod="openshift-marketplace/community-operators-gq2rs" Mar 12 13:54:47 crc kubenswrapper[4778]: I0312 13:54:47.163067 4778 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1408ca57-ea2c-414f-93af-8c6b930c1fe4-utilities\") pod \"community-operators-gq2rs\" (UID: \"1408ca57-ea2c-414f-93af-8c6b930c1fe4\") " pod="openshift-marketplace/community-operators-gq2rs" Mar 12 13:54:47 crc kubenswrapper[4778]: I0312 13:54:47.163093 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xt2rb\" (UniqueName: \"kubernetes.io/projected/1408ca57-ea2c-414f-93af-8c6b930c1fe4-kube-api-access-xt2rb\") pod \"community-operators-gq2rs\" (UID: \"1408ca57-ea2c-414f-93af-8c6b930c1fe4\") " pod="openshift-marketplace/community-operators-gq2rs" Mar 12 13:54:47 crc kubenswrapper[4778]: I0312 13:54:47.163587 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1408ca57-ea2c-414f-93af-8c6b930c1fe4-catalog-content\") pod \"community-operators-gq2rs\" (UID: \"1408ca57-ea2c-414f-93af-8c6b930c1fe4\") " pod="openshift-marketplace/community-operators-gq2rs" Mar 12 13:54:47 crc kubenswrapper[4778]: I0312 13:54:47.163623 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1408ca57-ea2c-414f-93af-8c6b930c1fe4-utilities\") pod \"community-operators-gq2rs\" (UID: \"1408ca57-ea2c-414f-93af-8c6b930c1fe4\") " pod="openshift-marketplace/community-operators-gq2rs" Mar 12 13:54:47 crc kubenswrapper[4778]: I0312 13:54:47.186227 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xt2rb\" (UniqueName: \"kubernetes.io/projected/1408ca57-ea2c-414f-93af-8c6b930c1fe4-kube-api-access-xt2rb\") pod \"community-operators-gq2rs\" (UID: \"1408ca57-ea2c-414f-93af-8c6b930c1fe4\") " pod="openshift-marketplace/community-operators-gq2rs" Mar 12 13:54:47 crc kubenswrapper[4778]: I0312 13:54:47.302032 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gq2rs" Mar 12 13:54:47 crc kubenswrapper[4778]: I0312 13:54:47.874743 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gq2rs"] Mar 12 13:54:47 crc kubenswrapper[4778]: W0312 13:54:47.875724 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1408ca57_ea2c_414f_93af_8c6b930c1fe4.slice/crio-e3dfe06f339548a2b7f51a541535fbe107a48486dc81c829d0a16d0d9b1044de WatchSource:0}: Error finding container e3dfe06f339548a2b7f51a541535fbe107a48486dc81c829d0a16d0d9b1044de: Status 404 returned error can't find the container with id e3dfe06f339548a2b7f51a541535fbe107a48486dc81c829d0a16d0d9b1044de Mar 12 13:54:48 crc kubenswrapper[4778]: I0312 13:54:48.600650 4778 generic.go:334] "Generic (PLEG): container finished" podID="1408ca57-ea2c-414f-93af-8c6b930c1fe4" containerID="bebe7cef5bff75eca18b4a61abf22de1f6ce00043a7457a97f1dd61e0e7a224a" exitCode=0 Mar 12 13:54:48 crc kubenswrapper[4778]: I0312 13:54:48.601070 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gq2rs" event={"ID":"1408ca57-ea2c-414f-93af-8c6b930c1fe4","Type":"ContainerDied","Data":"bebe7cef5bff75eca18b4a61abf22de1f6ce00043a7457a97f1dd61e0e7a224a"} Mar 12 13:54:48 crc kubenswrapper[4778]: I0312 13:54:48.601100 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gq2rs" event={"ID":"1408ca57-ea2c-414f-93af-8c6b930c1fe4","Type":"ContainerStarted","Data":"e3dfe06f339548a2b7f51a541535fbe107a48486dc81c829d0a16d0d9b1044de"} Mar 12 13:54:48 crc kubenswrapper[4778]: I0312 13:54:48.617162 4778 generic.go:334] "Generic (PLEG): container finished" podID="8713b951-b516-42bd-9286-4343e5bcc955" containerID="a4247422875ccc1a942b0fe9bbe7105ae4cc94d3cc420523932f239263b637f2" exitCode=0 Mar 12 13:54:48 crc kubenswrapper[4778]: I0312 
13:54:48.617287 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4m9w8" event={"ID":"8713b951-b516-42bd-9286-4343e5bcc955","Type":"ContainerDied","Data":"a4247422875ccc1a942b0fe9bbe7105ae4cc94d3cc420523932f239263b637f2"} Mar 12 13:54:49 crc kubenswrapper[4778]: I0312 13:54:49.627773 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gq2rs" event={"ID":"1408ca57-ea2c-414f-93af-8c6b930c1fe4","Type":"ContainerStarted","Data":"da490793edbc377cb06911d757533645c04fa6ea49dc72e732862ebbe646049c"} Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.073283 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4m9w8" Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.223653 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42btt\" (UniqueName: \"kubernetes.io/projected/8713b951-b516-42bd-9286-4343e5bcc955-kube-api-access-42btt\") pod \"8713b951-b516-42bd-9286-4343e5bcc955\" (UID: \"8713b951-b516-42bd-9286-4343e5bcc955\") " Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.223969 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8713b951-b516-42bd-9286-4343e5bcc955-inventory\") pod \"8713b951-b516-42bd-9286-4343e5bcc955\" (UID: \"8713b951-b516-42bd-9286-4343e5bcc955\") " Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.224031 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8713b951-b516-42bd-9286-4343e5bcc955-ssh-key-openstack-edpm-ipam\") pod \"8713b951-b516-42bd-9286-4343e5bcc955\" (UID: \"8713b951-b516-42bd-9286-4343e5bcc955\") " Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.224065 4778 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8713b951-b516-42bd-9286-4343e5bcc955-libvirt-combined-ca-bundle\") pod \"8713b951-b516-42bd-9286-4343e5bcc955\" (UID: \"8713b951-b516-42bd-9286-4343e5bcc955\") " Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.224146 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8713b951-b516-42bd-9286-4343e5bcc955-libvirt-secret-0\") pod \"8713b951-b516-42bd-9286-4343e5bcc955\" (UID: \"8713b951-b516-42bd-9286-4343e5bcc955\") " Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.228973 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8713b951-b516-42bd-9286-4343e5bcc955-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "8713b951-b516-42bd-9286-4343e5bcc955" (UID: "8713b951-b516-42bd-9286-4343e5bcc955"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.237683 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8713b951-b516-42bd-9286-4343e5bcc955-kube-api-access-42btt" (OuterVolumeSpecName: "kube-api-access-42btt") pod "8713b951-b516-42bd-9286-4343e5bcc955" (UID: "8713b951-b516-42bd-9286-4343e5bcc955"). InnerVolumeSpecName "kube-api-access-42btt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.252379 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8713b951-b516-42bd-9286-4343e5bcc955-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8713b951-b516-42bd-9286-4343e5bcc955" (UID: "8713b951-b516-42bd-9286-4343e5bcc955"). 
InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.254366 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8713b951-b516-42bd-9286-4343e5bcc955-inventory" (OuterVolumeSpecName: "inventory") pod "8713b951-b516-42bd-9286-4343e5bcc955" (UID: "8713b951-b516-42bd-9286-4343e5bcc955"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.268377 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8713b951-b516-42bd-9286-4343e5bcc955-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "8713b951-b516-42bd-9286-4343e5bcc955" (UID: "8713b951-b516-42bd-9286-4343e5bcc955"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.326502 4778 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8713b951-b516-42bd-9286-4343e5bcc955-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.326542 4778 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8713b951-b516-42bd-9286-4343e5bcc955-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.326555 4778 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8713b951-b516-42bd-9286-4343e5bcc955-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.326565 4778 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: 
\"kubernetes.io/secret/8713b951-b516-42bd-9286-4343e5bcc955-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.326573 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42btt\" (UniqueName: \"kubernetes.io/projected/8713b951-b516-42bd-9286-4343e5bcc955-kube-api-access-42btt\") on node \"crc\" DevicePath \"\"" Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.636259 4778 generic.go:334] "Generic (PLEG): container finished" podID="1408ca57-ea2c-414f-93af-8c6b930c1fe4" containerID="da490793edbc377cb06911d757533645c04fa6ea49dc72e732862ebbe646049c" exitCode=0 Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.636301 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gq2rs" event={"ID":"1408ca57-ea2c-414f-93af-8c6b930c1fe4","Type":"ContainerDied","Data":"da490793edbc377cb06911d757533645c04fa6ea49dc72e732862ebbe646049c"} Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.637503 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4m9w8" event={"ID":"8713b951-b516-42bd-9286-4343e5bcc955","Type":"ContainerDied","Data":"cceb4d2cb4de9b629a148def19221cfb5724a46254ab30fcc858bc4f7a667a5f"} Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.637536 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cceb4d2cb4de9b629a148def19221cfb5724a46254ab30fcc858bc4f7a667a5f" Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.637553 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4m9w8" Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.739614 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-5tw6s"] Mar 12 13:54:50 crc kubenswrapper[4778]: E0312 13:54:50.740050 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8713b951-b516-42bd-9286-4343e5bcc955" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.740066 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="8713b951-b516-42bd-9286-4343e5bcc955" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.740248 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="8713b951-b516-42bd-9286-4343e5bcc955" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.740881 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5tw6s" Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.744654 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.744685 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.744781 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.745276 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qn2vx" Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.745320 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.746808 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.749876 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.757593 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-5tw6s"] Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.835238 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5tw6s\" (UID: \"6ed77f87-e6b2-4c7a-8b0e-003106200dc8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5tw6s" Mar 12 13:54:50 crc kubenswrapper[4778]: 
I0312 13:54:50.835552 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5tw6s\" (UID: \"6ed77f87-e6b2-4c7a-8b0e-003106200dc8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5tw6s" Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.835660 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5tw6s\" (UID: \"6ed77f87-e6b2-4c7a-8b0e-003106200dc8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5tw6s" Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.835854 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5tw6s\" (UID: \"6ed77f87-e6b2-4c7a-8b0e-003106200dc8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5tw6s" Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.835969 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5tw6s\" (UID: \"6ed77f87-e6b2-4c7a-8b0e-003106200dc8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5tw6s" Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.836070 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: 
\"kubernetes.io/secret/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5tw6s\" (UID: \"6ed77f87-e6b2-4c7a-8b0e-003106200dc8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5tw6s" Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.836156 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5tw6s\" (UID: \"6ed77f87-e6b2-4c7a-8b0e-003106200dc8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5tw6s" Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.836397 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5tw6s\" (UID: \"6ed77f87-e6b2-4c7a-8b0e-003106200dc8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5tw6s" Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.836496 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5tw6s\" (UID: \"6ed77f87-e6b2-4c7a-8b0e-003106200dc8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5tw6s" Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.836533 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp6ls\" (UniqueName: \"kubernetes.io/projected/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-kube-api-access-fp6ls\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5tw6s\" (UID: 
\"6ed77f87-e6b2-4c7a-8b0e-003106200dc8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5tw6s" Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.836593 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5tw6s\" (UID: \"6ed77f87-e6b2-4c7a-8b0e-003106200dc8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5tw6s" Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.938722 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5tw6s\" (UID: \"6ed77f87-e6b2-4c7a-8b0e-003106200dc8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5tw6s" Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.939423 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5tw6s\" (UID: \"6ed77f87-e6b2-4c7a-8b0e-003106200dc8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5tw6s" Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.939457 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5tw6s\" (UID: \"6ed77f87-e6b2-4c7a-8b0e-003106200dc8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5tw6s" Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.939473 4778 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5tw6s\" (UID: \"6ed77f87-e6b2-4c7a-8b0e-003106200dc8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5tw6s" Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.939500 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5tw6s\" (UID: \"6ed77f87-e6b2-4c7a-8b0e-003106200dc8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5tw6s" Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.939526 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5tw6s\" (UID: \"6ed77f87-e6b2-4c7a-8b0e-003106200dc8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5tw6s" Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.939546 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fp6ls\" (UniqueName: \"kubernetes.io/projected/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-kube-api-access-fp6ls\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5tw6s\" (UID: \"6ed77f87-e6b2-4c7a-8b0e-003106200dc8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5tw6s" Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.939579 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5tw6s\" 
(UID: \"6ed77f87-e6b2-4c7a-8b0e-003106200dc8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5tw6s" Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.939611 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5tw6s\" (UID: \"6ed77f87-e6b2-4c7a-8b0e-003106200dc8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5tw6s" Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.939668 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5tw6s\" (UID: \"6ed77f87-e6b2-4c7a-8b0e-003106200dc8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5tw6s" Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.939689 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5tw6s\" (UID: \"6ed77f87-e6b2-4c7a-8b0e-003106200dc8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5tw6s" Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.940733 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5tw6s\" (UID: \"6ed77f87-e6b2-4c7a-8b0e-003106200dc8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5tw6s" Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.943728 4778 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5tw6s\" (UID: \"6ed77f87-e6b2-4c7a-8b0e-003106200dc8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5tw6s" Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.944252 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5tw6s\" (UID: \"6ed77f87-e6b2-4c7a-8b0e-003106200dc8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5tw6s" Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.945742 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5tw6s\" (UID: \"6ed77f87-e6b2-4c7a-8b0e-003106200dc8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5tw6s" Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.946935 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5tw6s\" (UID: \"6ed77f87-e6b2-4c7a-8b0e-003106200dc8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5tw6s" Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.947243 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5tw6s\" (UID: 
\"6ed77f87-e6b2-4c7a-8b0e-003106200dc8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5tw6s" Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.949641 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5tw6s\" (UID: \"6ed77f87-e6b2-4c7a-8b0e-003106200dc8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5tw6s" Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.950833 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5tw6s\" (UID: \"6ed77f87-e6b2-4c7a-8b0e-003106200dc8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5tw6s" Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.952505 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5tw6s\" (UID: \"6ed77f87-e6b2-4c7a-8b0e-003106200dc8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5tw6s" Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.961601 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5tw6s\" (UID: \"6ed77f87-e6b2-4c7a-8b0e-003106200dc8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5tw6s" Mar 12 13:54:50 crc kubenswrapper[4778]: I0312 13:54:50.963334 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fp6ls\" (UniqueName: 
\"kubernetes.io/projected/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-kube-api-access-fp6ls\") pod \"nova-edpm-deployment-openstack-edpm-ipam-5tw6s\" (UID: \"6ed77f87-e6b2-4c7a-8b0e-003106200dc8\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5tw6s" Mar 12 13:54:51 crc kubenswrapper[4778]: I0312 13:54:51.062245 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5tw6s" Mar 12 13:54:51 crc kubenswrapper[4778]: I0312 13:54:51.299660 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dr7tv" Mar 12 13:54:51 crc kubenswrapper[4778]: I0312 13:54:51.299704 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dr7tv" Mar 12 13:54:51 crc kubenswrapper[4778]: I0312 13:54:51.366475 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dr7tv" Mar 12 13:54:51 crc kubenswrapper[4778]: I0312 13:54:51.582668 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-5tw6s"] Mar 12 13:54:51 crc kubenswrapper[4778]: W0312 13:54:51.587638 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ed77f87_e6b2_4c7a_8b0e_003106200dc8.slice/crio-1617b96c98df28869b5a069f5b74bb8126ce4a98898565a1251dadc01020d162 WatchSource:0}: Error finding container 1617b96c98df28869b5a069f5b74bb8126ce4a98898565a1251dadc01020d162: Status 404 returned error can't find the container with id 1617b96c98df28869b5a069f5b74bb8126ce4a98898565a1251dadc01020d162 Mar 12 13:54:51 crc kubenswrapper[4778]: I0312 13:54:51.648240 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gq2rs" 
event={"ID":"1408ca57-ea2c-414f-93af-8c6b930c1fe4","Type":"ContainerStarted","Data":"11a3098d454804d45f104a9f35c5a7438c4227caaaef5bf8fde9f94039e4ae00"} Mar 12 13:54:51 crc kubenswrapper[4778]: I0312 13:54:51.649949 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5tw6s" event={"ID":"6ed77f87-e6b2-4c7a-8b0e-003106200dc8","Type":"ContainerStarted","Data":"1617b96c98df28869b5a069f5b74bb8126ce4a98898565a1251dadc01020d162"} Mar 12 13:54:51 crc kubenswrapper[4778]: I0312 13:54:51.671046 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gq2rs" podStartSLOduration=2.916142089 podStartE2EDuration="5.67102827s" podCreationTimestamp="2026-03-12 13:54:46 +0000 UTC" firstStartedPulling="2026-03-12 13:54:48.609215401 +0000 UTC m=+2707.057910797" lastFinishedPulling="2026-03-12 13:54:51.364101582 +0000 UTC m=+2709.812796978" observedRunningTime="2026-03-12 13:54:51.666291926 +0000 UTC m=+2710.114987322" watchObservedRunningTime="2026-03-12 13:54:51.67102827 +0000 UTC m=+2710.119723666" Mar 12 13:54:51 crc kubenswrapper[4778]: I0312 13:54:51.699520 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dr7tv" Mar 12 13:54:52 crc kubenswrapper[4778]: I0312 13:54:52.674312 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5tw6s" event={"ID":"6ed77f87-e6b2-4c7a-8b0e-003106200dc8","Type":"ContainerStarted","Data":"e2a35e751ce79cb5226fd46ca73472f5cd7c47201c7bf749d5ffd3dae25fcc72"} Mar 12 13:54:52 crc kubenswrapper[4778]: I0312 13:54:52.696858 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5tw6s" podStartSLOduration=1.943705015 podStartE2EDuration="2.696837788s" podCreationTimestamp="2026-03-12 13:54:50 +0000 UTC" firstStartedPulling="2026-03-12 
13:54:51.592638143 +0000 UTC m=+2710.041333539" lastFinishedPulling="2026-03-12 13:54:52.345770916 +0000 UTC m=+2710.794466312" observedRunningTime="2026-03-12 13:54:52.695590843 +0000 UTC m=+2711.144286259" watchObservedRunningTime="2026-03-12 13:54:52.696837788 +0000 UTC m=+2711.145533194" Mar 12 13:54:53 crc kubenswrapper[4778]: I0312 13:54:53.742036 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dr7tv"] Mar 12 13:54:53 crc kubenswrapper[4778]: I0312 13:54:53.742302 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dr7tv" podUID="2175642e-200d-49e4-b07c-e594a50dec28" containerName="registry-server" containerID="cri-o://0205168eb943737b001c1df62f66ef855d9c4f763c3adaea0c470e9c1ddc2f70" gracePeriod=2 Mar 12 13:54:54 crc kubenswrapper[4778]: I0312 13:54:54.680395 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dr7tv" Mar 12 13:54:54 crc kubenswrapper[4778]: I0312 13:54:54.712537 4778 generic.go:334] "Generic (PLEG): container finished" podID="2175642e-200d-49e4-b07c-e594a50dec28" containerID="0205168eb943737b001c1df62f66ef855d9c4f763c3adaea0c470e9c1ddc2f70" exitCode=0 Mar 12 13:54:54 crc kubenswrapper[4778]: I0312 13:54:54.712931 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dr7tv" event={"ID":"2175642e-200d-49e4-b07c-e594a50dec28","Type":"ContainerDied","Data":"0205168eb943737b001c1df62f66ef855d9c4f763c3adaea0c470e9c1ddc2f70"} Mar 12 13:54:54 crc kubenswrapper[4778]: I0312 13:54:54.712981 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dr7tv" event={"ID":"2175642e-200d-49e4-b07c-e594a50dec28","Type":"ContainerDied","Data":"4b66228125468deb0c32f8bf58c76651899d1e61cf9f3d4b448151fde53f1bc3"} Mar 12 13:54:54 crc kubenswrapper[4778]: I0312 13:54:54.713035 
4778 scope.go:117] "RemoveContainer" containerID="0205168eb943737b001c1df62f66ef855d9c4f763c3adaea0c470e9c1ddc2f70" Mar 12 13:54:54 crc kubenswrapper[4778]: I0312 13:54:54.713414 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dr7tv" Mar 12 13:54:54 crc kubenswrapper[4778]: I0312 13:54:54.755147 4778 scope.go:117] "RemoveContainer" containerID="a9205572a7471ec7794f4549cb1ca055cf820789a0c4982c3e93ca87112c9e46" Mar 12 13:54:54 crc kubenswrapper[4778]: I0312 13:54:54.774590 4778 scope.go:117] "RemoveContainer" containerID="78f31326cd9cdc9aaf40af7d920b0b345a573b27f1b4f379a64419fc772025c3" Mar 12 13:54:54 crc kubenswrapper[4778]: I0312 13:54:54.819948 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxdhb\" (UniqueName: \"kubernetes.io/projected/2175642e-200d-49e4-b07c-e594a50dec28-kube-api-access-zxdhb\") pod \"2175642e-200d-49e4-b07c-e594a50dec28\" (UID: \"2175642e-200d-49e4-b07c-e594a50dec28\") " Mar 12 13:54:54 crc kubenswrapper[4778]: I0312 13:54:54.820145 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2175642e-200d-49e4-b07c-e594a50dec28-utilities\") pod \"2175642e-200d-49e4-b07c-e594a50dec28\" (UID: \"2175642e-200d-49e4-b07c-e594a50dec28\") " Mar 12 13:54:54 crc kubenswrapper[4778]: I0312 13:54:54.820257 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2175642e-200d-49e4-b07c-e594a50dec28-catalog-content\") pod \"2175642e-200d-49e4-b07c-e594a50dec28\" (UID: \"2175642e-200d-49e4-b07c-e594a50dec28\") " Mar 12 13:54:54 crc kubenswrapper[4778]: I0312 13:54:54.823158 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2175642e-200d-49e4-b07c-e594a50dec28-utilities" (OuterVolumeSpecName: "utilities") pod 
"2175642e-200d-49e4-b07c-e594a50dec28" (UID: "2175642e-200d-49e4-b07c-e594a50dec28"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:54:54 crc kubenswrapper[4778]: I0312 13:54:54.829734 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2175642e-200d-49e4-b07c-e594a50dec28-kube-api-access-zxdhb" (OuterVolumeSpecName: "kube-api-access-zxdhb") pod "2175642e-200d-49e4-b07c-e594a50dec28" (UID: "2175642e-200d-49e4-b07c-e594a50dec28"). InnerVolumeSpecName "kube-api-access-zxdhb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:54:54 crc kubenswrapper[4778]: I0312 13:54:54.833625 4778 scope.go:117] "RemoveContainer" containerID="0205168eb943737b001c1df62f66ef855d9c4f763c3adaea0c470e9c1ddc2f70" Mar 12 13:54:54 crc kubenswrapper[4778]: E0312 13:54:54.834287 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0205168eb943737b001c1df62f66ef855d9c4f763c3adaea0c470e9c1ddc2f70\": container with ID starting with 0205168eb943737b001c1df62f66ef855d9c4f763c3adaea0c470e9c1ddc2f70 not found: ID does not exist" containerID="0205168eb943737b001c1df62f66ef855d9c4f763c3adaea0c470e9c1ddc2f70" Mar 12 13:54:54 crc kubenswrapper[4778]: I0312 13:54:54.834339 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0205168eb943737b001c1df62f66ef855d9c4f763c3adaea0c470e9c1ddc2f70"} err="failed to get container status \"0205168eb943737b001c1df62f66ef855d9c4f763c3adaea0c470e9c1ddc2f70\": rpc error: code = NotFound desc = could not find container \"0205168eb943737b001c1df62f66ef855d9c4f763c3adaea0c470e9c1ddc2f70\": container with ID starting with 0205168eb943737b001c1df62f66ef855d9c4f763c3adaea0c470e9c1ddc2f70 not found: ID does not exist" Mar 12 13:54:54 crc kubenswrapper[4778]: I0312 13:54:54.834369 4778 scope.go:117] "RemoveContainer" 
containerID="a9205572a7471ec7794f4549cb1ca055cf820789a0c4982c3e93ca87112c9e46" Mar 12 13:54:54 crc kubenswrapper[4778]: E0312 13:54:54.834863 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9205572a7471ec7794f4549cb1ca055cf820789a0c4982c3e93ca87112c9e46\": container with ID starting with a9205572a7471ec7794f4549cb1ca055cf820789a0c4982c3e93ca87112c9e46 not found: ID does not exist" containerID="a9205572a7471ec7794f4549cb1ca055cf820789a0c4982c3e93ca87112c9e46" Mar 12 13:54:54 crc kubenswrapper[4778]: I0312 13:54:54.834916 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9205572a7471ec7794f4549cb1ca055cf820789a0c4982c3e93ca87112c9e46"} err="failed to get container status \"a9205572a7471ec7794f4549cb1ca055cf820789a0c4982c3e93ca87112c9e46\": rpc error: code = NotFound desc = could not find container \"a9205572a7471ec7794f4549cb1ca055cf820789a0c4982c3e93ca87112c9e46\": container with ID starting with a9205572a7471ec7794f4549cb1ca055cf820789a0c4982c3e93ca87112c9e46 not found: ID does not exist" Mar 12 13:54:54 crc kubenswrapper[4778]: I0312 13:54:54.834949 4778 scope.go:117] "RemoveContainer" containerID="78f31326cd9cdc9aaf40af7d920b0b345a573b27f1b4f379a64419fc772025c3" Mar 12 13:54:54 crc kubenswrapper[4778]: E0312 13:54:54.835466 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78f31326cd9cdc9aaf40af7d920b0b345a573b27f1b4f379a64419fc772025c3\": container with ID starting with 78f31326cd9cdc9aaf40af7d920b0b345a573b27f1b4f379a64419fc772025c3 not found: ID does not exist" containerID="78f31326cd9cdc9aaf40af7d920b0b345a573b27f1b4f379a64419fc772025c3" Mar 12 13:54:54 crc kubenswrapper[4778]: I0312 13:54:54.835505 4778 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"78f31326cd9cdc9aaf40af7d920b0b345a573b27f1b4f379a64419fc772025c3"} err="failed to get container status \"78f31326cd9cdc9aaf40af7d920b0b345a573b27f1b4f379a64419fc772025c3\": rpc error: code = NotFound desc = could not find container \"78f31326cd9cdc9aaf40af7d920b0b345a573b27f1b4f379a64419fc772025c3\": container with ID starting with 78f31326cd9cdc9aaf40af7d920b0b345a573b27f1b4f379a64419fc772025c3 not found: ID does not exist" Mar 12 13:54:54 crc kubenswrapper[4778]: I0312 13:54:54.894592 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2175642e-200d-49e4-b07c-e594a50dec28-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2175642e-200d-49e4-b07c-e594a50dec28" (UID: "2175642e-200d-49e4-b07c-e594a50dec28"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:54:54 crc kubenswrapper[4778]: I0312 13:54:54.923819 4778 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2175642e-200d-49e4-b07c-e594a50dec28-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 13:54:54 crc kubenswrapper[4778]: I0312 13:54:54.923858 4778 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2175642e-200d-49e4-b07c-e594a50dec28-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 13:54:54 crc kubenswrapper[4778]: I0312 13:54:54.923871 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxdhb\" (UniqueName: \"kubernetes.io/projected/2175642e-200d-49e4-b07c-e594a50dec28-kube-api-access-zxdhb\") on node \"crc\" DevicePath \"\"" Mar 12 13:54:55 crc kubenswrapper[4778]: I0312 13:54:55.052976 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dr7tv"] Mar 12 13:54:55 crc kubenswrapper[4778]: I0312 13:54:55.061288 4778 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openshift-marketplace/certified-operators-dr7tv"] Mar 12 13:54:56 crc kubenswrapper[4778]: I0312 13:54:56.271699 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2175642e-200d-49e4-b07c-e594a50dec28" path="/var/lib/kubelet/pods/2175642e-200d-49e4-b07c-e594a50dec28/volumes" Mar 12 13:54:57 crc kubenswrapper[4778]: I0312 13:54:57.302801 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gq2rs" Mar 12 13:54:57 crc kubenswrapper[4778]: I0312 13:54:57.303138 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gq2rs" Mar 12 13:54:57 crc kubenswrapper[4778]: I0312 13:54:57.357548 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gq2rs" Mar 12 13:54:57 crc kubenswrapper[4778]: I0312 13:54:57.815452 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gq2rs" Mar 12 13:54:58 crc kubenswrapper[4778]: I0312 13:54:58.751490 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gq2rs"] Mar 12 13:54:59 crc kubenswrapper[4778]: I0312 13:54:59.767562 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gq2rs" podUID="1408ca57-ea2c-414f-93af-8c6b930c1fe4" containerName="registry-server" containerID="cri-o://11a3098d454804d45f104a9f35c5a7438c4227caaaef5bf8fde9f94039e4ae00" gracePeriod=2 Mar 12 13:55:00 crc kubenswrapper[4778]: I0312 13:55:00.244945 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gq2rs" Mar 12 13:55:00 crc kubenswrapper[4778]: I0312 13:55:00.338918 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xt2rb\" (UniqueName: \"kubernetes.io/projected/1408ca57-ea2c-414f-93af-8c6b930c1fe4-kube-api-access-xt2rb\") pod \"1408ca57-ea2c-414f-93af-8c6b930c1fe4\" (UID: \"1408ca57-ea2c-414f-93af-8c6b930c1fe4\") " Mar 12 13:55:00 crc kubenswrapper[4778]: I0312 13:55:00.339242 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1408ca57-ea2c-414f-93af-8c6b930c1fe4-utilities\") pod \"1408ca57-ea2c-414f-93af-8c6b930c1fe4\" (UID: \"1408ca57-ea2c-414f-93af-8c6b930c1fe4\") " Mar 12 13:55:00 crc kubenswrapper[4778]: I0312 13:55:00.339311 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1408ca57-ea2c-414f-93af-8c6b930c1fe4-catalog-content\") pod \"1408ca57-ea2c-414f-93af-8c6b930c1fe4\" (UID: \"1408ca57-ea2c-414f-93af-8c6b930c1fe4\") " Mar 12 13:55:00 crc kubenswrapper[4778]: I0312 13:55:00.340308 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1408ca57-ea2c-414f-93af-8c6b930c1fe4-utilities" (OuterVolumeSpecName: "utilities") pod "1408ca57-ea2c-414f-93af-8c6b930c1fe4" (UID: "1408ca57-ea2c-414f-93af-8c6b930c1fe4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:55:00 crc kubenswrapper[4778]: I0312 13:55:00.344958 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1408ca57-ea2c-414f-93af-8c6b930c1fe4-kube-api-access-xt2rb" (OuterVolumeSpecName: "kube-api-access-xt2rb") pod "1408ca57-ea2c-414f-93af-8c6b930c1fe4" (UID: "1408ca57-ea2c-414f-93af-8c6b930c1fe4"). InnerVolumeSpecName "kube-api-access-xt2rb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:55:00 crc kubenswrapper[4778]: I0312 13:55:00.442008 4778 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1408ca57-ea2c-414f-93af-8c6b930c1fe4-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 13:55:00 crc kubenswrapper[4778]: I0312 13:55:00.442051 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xt2rb\" (UniqueName: \"kubernetes.io/projected/1408ca57-ea2c-414f-93af-8c6b930c1fe4-kube-api-access-xt2rb\") on node \"crc\" DevicePath \"\"" Mar 12 13:55:00 crc kubenswrapper[4778]: I0312 13:55:00.466279 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1408ca57-ea2c-414f-93af-8c6b930c1fe4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1408ca57-ea2c-414f-93af-8c6b930c1fe4" (UID: "1408ca57-ea2c-414f-93af-8c6b930c1fe4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:55:00 crc kubenswrapper[4778]: I0312 13:55:00.545259 4778 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1408ca57-ea2c-414f-93af-8c6b930c1fe4-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 13:55:00 crc kubenswrapper[4778]: I0312 13:55:00.777515 4778 generic.go:334] "Generic (PLEG): container finished" podID="1408ca57-ea2c-414f-93af-8c6b930c1fe4" containerID="11a3098d454804d45f104a9f35c5a7438c4227caaaef5bf8fde9f94039e4ae00" exitCode=0 Mar 12 13:55:00 crc kubenswrapper[4778]: I0312 13:55:00.777557 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gq2rs" event={"ID":"1408ca57-ea2c-414f-93af-8c6b930c1fe4","Type":"ContainerDied","Data":"11a3098d454804d45f104a9f35c5a7438c4227caaaef5bf8fde9f94039e4ae00"} Mar 12 13:55:00 crc kubenswrapper[4778]: I0312 13:55:00.777582 4778 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/community-operators-gq2rs" Mar 12 13:55:00 crc kubenswrapper[4778]: I0312 13:55:00.777607 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gq2rs" event={"ID":"1408ca57-ea2c-414f-93af-8c6b930c1fe4","Type":"ContainerDied","Data":"e3dfe06f339548a2b7f51a541535fbe107a48486dc81c829d0a16d0d9b1044de"} Mar 12 13:55:00 crc kubenswrapper[4778]: I0312 13:55:00.777630 4778 scope.go:117] "RemoveContainer" containerID="11a3098d454804d45f104a9f35c5a7438c4227caaaef5bf8fde9f94039e4ae00" Mar 12 13:55:00 crc kubenswrapper[4778]: I0312 13:55:00.821144 4778 scope.go:117] "RemoveContainer" containerID="da490793edbc377cb06911d757533645c04fa6ea49dc72e732862ebbe646049c" Mar 12 13:55:00 crc kubenswrapper[4778]: I0312 13:55:00.823749 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gq2rs"] Mar 12 13:55:00 crc kubenswrapper[4778]: I0312 13:55:00.832047 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gq2rs"] Mar 12 13:55:00 crc kubenswrapper[4778]: I0312 13:55:00.845906 4778 scope.go:117] "RemoveContainer" containerID="bebe7cef5bff75eca18b4a61abf22de1f6ce00043a7457a97f1dd61e0e7a224a" Mar 12 13:55:00 crc kubenswrapper[4778]: I0312 13:55:00.885714 4778 scope.go:117] "RemoveContainer" containerID="11a3098d454804d45f104a9f35c5a7438c4227caaaef5bf8fde9f94039e4ae00" Mar 12 13:55:00 crc kubenswrapper[4778]: E0312 13:55:00.886159 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11a3098d454804d45f104a9f35c5a7438c4227caaaef5bf8fde9f94039e4ae00\": container with ID starting with 11a3098d454804d45f104a9f35c5a7438c4227caaaef5bf8fde9f94039e4ae00 not found: ID does not exist" containerID="11a3098d454804d45f104a9f35c5a7438c4227caaaef5bf8fde9f94039e4ae00" Mar 12 13:55:00 crc kubenswrapper[4778]: I0312 13:55:00.886313 
4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11a3098d454804d45f104a9f35c5a7438c4227caaaef5bf8fde9f94039e4ae00"} err="failed to get container status \"11a3098d454804d45f104a9f35c5a7438c4227caaaef5bf8fde9f94039e4ae00\": rpc error: code = NotFound desc = could not find container \"11a3098d454804d45f104a9f35c5a7438c4227caaaef5bf8fde9f94039e4ae00\": container with ID starting with 11a3098d454804d45f104a9f35c5a7438c4227caaaef5bf8fde9f94039e4ae00 not found: ID does not exist" Mar 12 13:55:00 crc kubenswrapper[4778]: I0312 13:55:00.886340 4778 scope.go:117] "RemoveContainer" containerID="da490793edbc377cb06911d757533645c04fa6ea49dc72e732862ebbe646049c" Mar 12 13:55:00 crc kubenswrapper[4778]: E0312 13:55:00.886660 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da490793edbc377cb06911d757533645c04fa6ea49dc72e732862ebbe646049c\": container with ID starting with da490793edbc377cb06911d757533645c04fa6ea49dc72e732862ebbe646049c not found: ID does not exist" containerID="da490793edbc377cb06911d757533645c04fa6ea49dc72e732862ebbe646049c" Mar 12 13:55:00 crc kubenswrapper[4778]: I0312 13:55:00.886711 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da490793edbc377cb06911d757533645c04fa6ea49dc72e732862ebbe646049c"} err="failed to get container status \"da490793edbc377cb06911d757533645c04fa6ea49dc72e732862ebbe646049c\": rpc error: code = NotFound desc = could not find container \"da490793edbc377cb06911d757533645c04fa6ea49dc72e732862ebbe646049c\": container with ID starting with da490793edbc377cb06911d757533645c04fa6ea49dc72e732862ebbe646049c not found: ID does not exist" Mar 12 13:55:00 crc kubenswrapper[4778]: I0312 13:55:00.886737 4778 scope.go:117] "RemoveContainer" containerID="bebe7cef5bff75eca18b4a61abf22de1f6ce00043a7457a97f1dd61e0e7a224a" Mar 12 13:55:00 crc kubenswrapper[4778]: E0312 
13:55:00.887006 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bebe7cef5bff75eca18b4a61abf22de1f6ce00043a7457a97f1dd61e0e7a224a\": container with ID starting with bebe7cef5bff75eca18b4a61abf22de1f6ce00043a7457a97f1dd61e0e7a224a not found: ID does not exist" containerID="bebe7cef5bff75eca18b4a61abf22de1f6ce00043a7457a97f1dd61e0e7a224a" Mar 12 13:55:00 crc kubenswrapper[4778]: I0312 13:55:00.887032 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bebe7cef5bff75eca18b4a61abf22de1f6ce00043a7457a97f1dd61e0e7a224a"} err="failed to get container status \"bebe7cef5bff75eca18b4a61abf22de1f6ce00043a7457a97f1dd61e0e7a224a\": rpc error: code = NotFound desc = could not find container \"bebe7cef5bff75eca18b4a61abf22de1f6ce00043a7457a97f1dd61e0e7a224a\": container with ID starting with bebe7cef5bff75eca18b4a61abf22de1f6ce00043a7457a97f1dd61e0e7a224a not found: ID does not exist" Mar 12 13:55:02 crc kubenswrapper[4778]: I0312 13:55:02.265232 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1408ca57-ea2c-414f-93af-8c6b930c1fe4" path="/var/lib/kubelet/pods/1408ca57-ea2c-414f-93af-8c6b930c1fe4/volumes" Mar 12 13:55:28 crc kubenswrapper[4778]: I0312 13:55:28.557946 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 13:55:28 crc kubenswrapper[4778]: I0312 13:55:28.558469 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Mar 12 13:55:58 crc kubenswrapper[4778]: I0312 13:55:58.557727 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 13:55:58 crc kubenswrapper[4778]: I0312 13:55:58.558373 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 13:56:00 crc kubenswrapper[4778]: I0312 13:56:00.147858 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555396-lhqkd"] Mar 12 13:56:00 crc kubenswrapper[4778]: E0312 13:56:00.148563 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2175642e-200d-49e4-b07c-e594a50dec28" containerName="extract-utilities" Mar 12 13:56:00 crc kubenswrapper[4778]: I0312 13:56:00.148575 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="2175642e-200d-49e4-b07c-e594a50dec28" containerName="extract-utilities" Mar 12 13:56:00 crc kubenswrapper[4778]: E0312 13:56:00.148587 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2175642e-200d-49e4-b07c-e594a50dec28" containerName="extract-content" Mar 12 13:56:00 crc kubenswrapper[4778]: I0312 13:56:00.148592 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="2175642e-200d-49e4-b07c-e594a50dec28" containerName="extract-content" Mar 12 13:56:00 crc kubenswrapper[4778]: E0312 13:56:00.148602 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1408ca57-ea2c-414f-93af-8c6b930c1fe4" containerName="registry-server" Mar 12 13:56:00 crc kubenswrapper[4778]: I0312 13:56:00.148609 4778 
state_mem.go:107] "Deleted CPUSet assignment" podUID="1408ca57-ea2c-414f-93af-8c6b930c1fe4" containerName="registry-server" Mar 12 13:56:00 crc kubenswrapper[4778]: E0312 13:56:00.148619 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1408ca57-ea2c-414f-93af-8c6b930c1fe4" containerName="extract-utilities" Mar 12 13:56:00 crc kubenswrapper[4778]: I0312 13:56:00.148625 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="1408ca57-ea2c-414f-93af-8c6b930c1fe4" containerName="extract-utilities" Mar 12 13:56:00 crc kubenswrapper[4778]: E0312 13:56:00.148633 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2175642e-200d-49e4-b07c-e594a50dec28" containerName="registry-server" Mar 12 13:56:00 crc kubenswrapper[4778]: I0312 13:56:00.148638 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="2175642e-200d-49e4-b07c-e594a50dec28" containerName="registry-server" Mar 12 13:56:00 crc kubenswrapper[4778]: E0312 13:56:00.148646 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1408ca57-ea2c-414f-93af-8c6b930c1fe4" containerName="extract-content" Mar 12 13:56:00 crc kubenswrapper[4778]: I0312 13:56:00.148652 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="1408ca57-ea2c-414f-93af-8c6b930c1fe4" containerName="extract-content" Mar 12 13:56:00 crc kubenswrapper[4778]: I0312 13:56:00.148826 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="1408ca57-ea2c-414f-93af-8c6b930c1fe4" containerName="registry-server" Mar 12 13:56:00 crc kubenswrapper[4778]: I0312 13:56:00.148837 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="2175642e-200d-49e4-b07c-e594a50dec28" containerName="registry-server" Mar 12 13:56:00 crc kubenswrapper[4778]: I0312 13:56:00.149485 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555396-lhqkd" Mar 12 13:56:00 crc kubenswrapper[4778]: I0312 13:56:00.151732 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 13:56:00 crc kubenswrapper[4778]: I0312 13:56:00.151869 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 13:56:00 crc kubenswrapper[4778]: I0312 13:56:00.152087 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 13:56:00 crc kubenswrapper[4778]: I0312 13:56:00.166216 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555396-lhqkd"] Mar 12 13:56:00 crc kubenswrapper[4778]: I0312 13:56:00.228356 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59ctl\" (UniqueName: \"kubernetes.io/projected/90b32527-d7b2-4938-a8c2-882067947e78-kube-api-access-59ctl\") pod \"auto-csr-approver-29555396-lhqkd\" (UID: \"90b32527-d7b2-4938-a8c2-882067947e78\") " pod="openshift-infra/auto-csr-approver-29555396-lhqkd" Mar 12 13:56:00 crc kubenswrapper[4778]: I0312 13:56:00.329875 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59ctl\" (UniqueName: \"kubernetes.io/projected/90b32527-d7b2-4938-a8c2-882067947e78-kube-api-access-59ctl\") pod \"auto-csr-approver-29555396-lhqkd\" (UID: \"90b32527-d7b2-4938-a8c2-882067947e78\") " pod="openshift-infra/auto-csr-approver-29555396-lhqkd" Mar 12 13:56:00 crc kubenswrapper[4778]: I0312 13:56:00.349504 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59ctl\" (UniqueName: \"kubernetes.io/projected/90b32527-d7b2-4938-a8c2-882067947e78-kube-api-access-59ctl\") pod \"auto-csr-approver-29555396-lhqkd\" (UID: \"90b32527-d7b2-4938-a8c2-882067947e78\") " 
pod="openshift-infra/auto-csr-approver-29555396-lhqkd" Mar 12 13:56:00 crc kubenswrapper[4778]: I0312 13:56:00.472171 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555396-lhqkd" Mar 12 13:56:00 crc kubenswrapper[4778]: I0312 13:56:00.927452 4778 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 13:56:00 crc kubenswrapper[4778]: I0312 13:56:00.927484 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555396-lhqkd"] Mar 12 13:56:01 crc kubenswrapper[4778]: I0312 13:56:01.345819 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555396-lhqkd" event={"ID":"90b32527-d7b2-4938-a8c2-882067947e78","Type":"ContainerStarted","Data":"4fba4a78ca2fd2f35531ebdaf442cdd6b4b4c347b21e38401f16971eae6edcc0"} Mar 12 13:56:04 crc kubenswrapper[4778]: I0312 13:56:04.379237 4778 generic.go:334] "Generic (PLEG): container finished" podID="90b32527-d7b2-4938-a8c2-882067947e78" containerID="f6e775ed356b4c920e47d4cd6b52c164df8562cf9b83a71ba23edcf8ae60ceb9" exitCode=0 Mar 12 13:56:04 crc kubenswrapper[4778]: I0312 13:56:04.379395 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555396-lhqkd" event={"ID":"90b32527-d7b2-4938-a8c2-882067947e78","Type":"ContainerDied","Data":"f6e775ed356b4c920e47d4cd6b52c164df8562cf9b83a71ba23edcf8ae60ceb9"} Mar 12 13:56:05 crc kubenswrapper[4778]: I0312 13:56:05.748528 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555396-lhqkd" Mar 12 13:56:05 crc kubenswrapper[4778]: I0312 13:56:05.938853 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59ctl\" (UniqueName: \"kubernetes.io/projected/90b32527-d7b2-4938-a8c2-882067947e78-kube-api-access-59ctl\") pod \"90b32527-d7b2-4938-a8c2-882067947e78\" (UID: \"90b32527-d7b2-4938-a8c2-882067947e78\") " Mar 12 13:56:05 crc kubenswrapper[4778]: I0312 13:56:05.948619 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90b32527-d7b2-4938-a8c2-882067947e78-kube-api-access-59ctl" (OuterVolumeSpecName: "kube-api-access-59ctl") pod "90b32527-d7b2-4938-a8c2-882067947e78" (UID: "90b32527-d7b2-4938-a8c2-882067947e78"). InnerVolumeSpecName "kube-api-access-59ctl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:56:06 crc kubenswrapper[4778]: I0312 13:56:06.041776 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59ctl\" (UniqueName: \"kubernetes.io/projected/90b32527-d7b2-4938-a8c2-882067947e78-kube-api-access-59ctl\") on node \"crc\" DevicePath \"\"" Mar 12 13:56:06 crc kubenswrapper[4778]: I0312 13:56:06.397860 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555396-lhqkd" event={"ID":"90b32527-d7b2-4938-a8c2-882067947e78","Type":"ContainerDied","Data":"4fba4a78ca2fd2f35531ebdaf442cdd6b4b4c347b21e38401f16971eae6edcc0"} Mar 12 13:56:06 crc kubenswrapper[4778]: I0312 13:56:06.397905 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555396-lhqkd" Mar 12 13:56:06 crc kubenswrapper[4778]: I0312 13:56:06.397911 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4fba4a78ca2fd2f35531ebdaf442cdd6b4b4c347b21e38401f16971eae6edcc0" Mar 12 13:56:06 crc kubenswrapper[4778]: I0312 13:56:06.822362 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555390-dml9r"] Mar 12 13:56:06 crc kubenswrapper[4778]: I0312 13:56:06.830619 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555390-dml9r"] Mar 12 13:56:08 crc kubenswrapper[4778]: I0312 13:56:08.265115 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b911b49-7b0f-48ef-9626-cd43d308d596" path="/var/lib/kubelet/pods/8b911b49-7b0f-48ef-9626-cd43d308d596/volumes" Mar 12 13:56:19 crc kubenswrapper[4778]: I0312 13:56:19.960630 4778 scope.go:117] "RemoveContainer" containerID="63aaec5f1f507e8a81d7498ca66c0663cdb9bde37e98025da0e464b4ce6c885e" Mar 12 13:56:28 crc kubenswrapper[4778]: I0312 13:56:28.558250 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 13:56:28 crc kubenswrapper[4778]: I0312 13:56:28.558596 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 13:56:28 crc kubenswrapper[4778]: I0312 13:56:28.558649 4778 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-2qx88" Mar 12 13:56:28 crc kubenswrapper[4778]: I0312 13:56:28.559445 4778 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e20e6fa2d381e3ff917a0f6074e27521c909a7932045eacfc15c005ed843cb93"} pod="openshift-machine-config-operator/machine-config-daemon-2qx88" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 13:56:28 crc kubenswrapper[4778]: I0312 13:56:28.559499 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" containerID="cri-o://e20e6fa2d381e3ff917a0f6074e27521c909a7932045eacfc15c005ed843cb93" gracePeriod=600 Mar 12 13:56:28 crc kubenswrapper[4778]: I0312 13:56:28.696158 4778 generic.go:334] "Generic (PLEG): container finished" podID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerID="e20e6fa2d381e3ff917a0f6074e27521c909a7932045eacfc15c005ed843cb93" exitCode=0 Mar 12 13:56:28 crc kubenswrapper[4778]: I0312 13:56:28.696223 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" event={"ID":"24438fc6-dab0-4a9e-8b97-2532da76d9cd","Type":"ContainerDied","Data":"e20e6fa2d381e3ff917a0f6074e27521c909a7932045eacfc15c005ed843cb93"} Mar 12 13:56:28 crc kubenswrapper[4778]: I0312 13:56:28.696281 4778 scope.go:117] "RemoveContainer" containerID="5d7d3c0b73016a8d7ee117c8146ea559fc88bdaa58f9d10b5498b859a6d9fa8f" Mar 12 13:56:29 crc kubenswrapper[4778]: I0312 13:56:29.705664 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" 
event={"ID":"24438fc6-dab0-4a9e-8b97-2532da76d9cd","Type":"ContainerStarted","Data":"264800b09f45ccd4290c89a1d8ecad1ba09b58524e636d065df86104736d56c0"} Mar 12 13:56:45 crc kubenswrapper[4778]: I0312 13:56:45.643155 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k8np5"] Mar 12 13:56:45 crc kubenswrapper[4778]: E0312 13:56:45.645619 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90b32527-d7b2-4938-a8c2-882067947e78" containerName="oc" Mar 12 13:56:45 crc kubenswrapper[4778]: I0312 13:56:45.645728 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="90b32527-d7b2-4938-a8c2-882067947e78" containerName="oc" Mar 12 13:56:45 crc kubenswrapper[4778]: I0312 13:56:45.646070 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="90b32527-d7b2-4938-a8c2-882067947e78" containerName="oc" Mar 12 13:56:45 crc kubenswrapper[4778]: I0312 13:56:45.647903 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k8np5" Mar 12 13:56:45 crc kubenswrapper[4778]: I0312 13:56:45.808762 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2dxz\" (UniqueName: \"kubernetes.io/projected/1f9f45ee-d6ff-4369-b71a-1af75cc31ca1-kube-api-access-s2dxz\") pod \"redhat-operators-k8np5\" (UID: \"1f9f45ee-d6ff-4369-b71a-1af75cc31ca1\") " pod="openshift-marketplace/redhat-operators-k8np5" Mar 12 13:56:45 crc kubenswrapper[4778]: I0312 13:56:45.809086 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f9f45ee-d6ff-4369-b71a-1af75cc31ca1-catalog-content\") pod \"redhat-operators-k8np5\" (UID: \"1f9f45ee-d6ff-4369-b71a-1af75cc31ca1\") " pod="openshift-marketplace/redhat-operators-k8np5" Mar 12 13:56:45 crc kubenswrapper[4778]: I0312 13:56:45.809145 4778 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f9f45ee-d6ff-4369-b71a-1af75cc31ca1-utilities\") pod \"redhat-operators-k8np5\" (UID: \"1f9f45ee-d6ff-4369-b71a-1af75cc31ca1\") " pod="openshift-marketplace/redhat-operators-k8np5" Mar 12 13:56:45 crc kubenswrapper[4778]: I0312 13:56:45.818066 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k8np5"] Mar 12 13:56:45 crc kubenswrapper[4778]: I0312 13:56:45.912022 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f9f45ee-d6ff-4369-b71a-1af75cc31ca1-catalog-content\") pod \"redhat-operators-k8np5\" (UID: \"1f9f45ee-d6ff-4369-b71a-1af75cc31ca1\") " pod="openshift-marketplace/redhat-operators-k8np5" Mar 12 13:56:45 crc kubenswrapper[4778]: I0312 13:56:45.912268 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f9f45ee-d6ff-4369-b71a-1af75cc31ca1-utilities\") pod \"redhat-operators-k8np5\" (UID: \"1f9f45ee-d6ff-4369-b71a-1af75cc31ca1\") " pod="openshift-marketplace/redhat-operators-k8np5" Mar 12 13:56:45 crc kubenswrapper[4778]: I0312 13:56:45.912402 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dxz\" (UniqueName: \"kubernetes.io/projected/1f9f45ee-d6ff-4369-b71a-1af75cc31ca1-kube-api-access-s2dxz\") pod \"redhat-operators-k8np5\" (UID: \"1f9f45ee-d6ff-4369-b71a-1af75cc31ca1\") " pod="openshift-marketplace/redhat-operators-k8np5" Mar 12 13:56:45 crc kubenswrapper[4778]: I0312 13:56:45.912939 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f9f45ee-d6ff-4369-b71a-1af75cc31ca1-catalog-content\") pod \"redhat-operators-k8np5\" (UID: \"1f9f45ee-d6ff-4369-b71a-1af75cc31ca1\") " 
pod="openshift-marketplace/redhat-operators-k8np5" Mar 12 13:56:45 crc kubenswrapper[4778]: I0312 13:56:45.912978 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f9f45ee-d6ff-4369-b71a-1af75cc31ca1-utilities\") pod \"redhat-operators-k8np5\" (UID: \"1f9f45ee-d6ff-4369-b71a-1af75cc31ca1\") " pod="openshift-marketplace/redhat-operators-k8np5" Mar 12 13:56:45 crc kubenswrapper[4778]: I0312 13:56:45.939613 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dxz\" (UniqueName: \"kubernetes.io/projected/1f9f45ee-d6ff-4369-b71a-1af75cc31ca1-kube-api-access-s2dxz\") pod \"redhat-operators-k8np5\" (UID: \"1f9f45ee-d6ff-4369-b71a-1af75cc31ca1\") " pod="openshift-marketplace/redhat-operators-k8np5" Mar 12 13:56:45 crc kubenswrapper[4778]: I0312 13:56:45.972696 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k8np5" Mar 12 13:56:46 crc kubenswrapper[4778]: I0312 13:56:46.433020 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k8np5"] Mar 12 13:56:46 crc kubenswrapper[4778]: I0312 13:56:46.853942 4778 generic.go:334] "Generic (PLEG): container finished" podID="1f9f45ee-d6ff-4369-b71a-1af75cc31ca1" containerID="0d24b81e7152e22db15d85c0639ccdf7f6a4a5d3388d523cdaa16266ec57d80c" exitCode=0 Mar 12 13:56:46 crc kubenswrapper[4778]: I0312 13:56:46.854140 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8np5" event={"ID":"1f9f45ee-d6ff-4369-b71a-1af75cc31ca1","Type":"ContainerDied","Data":"0d24b81e7152e22db15d85c0639ccdf7f6a4a5d3388d523cdaa16266ec57d80c"} Mar 12 13:56:46 crc kubenswrapper[4778]: I0312 13:56:46.854306 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8np5" 
event={"ID":"1f9f45ee-d6ff-4369-b71a-1af75cc31ca1","Type":"ContainerStarted","Data":"9070894b7842f0aed54c5c06b02dc0798ae95d0cfa82fdac9ef69e082f746a53"} Mar 12 13:56:50 crc kubenswrapper[4778]: I0312 13:56:50.143153 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8np5" event={"ID":"1f9f45ee-d6ff-4369-b71a-1af75cc31ca1","Type":"ContainerStarted","Data":"4439d8ae883e35b8c6e7d2722cd2f00a49b0c79d4630243975e17bc91e0d2703"} Mar 12 13:56:51 crc kubenswrapper[4778]: I0312 13:56:51.155053 4778 generic.go:334] "Generic (PLEG): container finished" podID="1f9f45ee-d6ff-4369-b71a-1af75cc31ca1" containerID="4439d8ae883e35b8c6e7d2722cd2f00a49b0c79d4630243975e17bc91e0d2703" exitCode=0 Mar 12 13:56:51 crc kubenswrapper[4778]: I0312 13:56:51.155097 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8np5" event={"ID":"1f9f45ee-d6ff-4369-b71a-1af75cc31ca1","Type":"ContainerDied","Data":"4439d8ae883e35b8c6e7d2722cd2f00a49b0c79d4630243975e17bc91e0d2703"} Mar 12 13:56:53 crc kubenswrapper[4778]: I0312 13:56:53.178581 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8np5" event={"ID":"1f9f45ee-d6ff-4369-b71a-1af75cc31ca1","Type":"ContainerStarted","Data":"fe9eab63bd54027dc9747d671158f830246a8cbbcc67a324e1785af946b97a59"} Mar 12 13:56:55 crc kubenswrapper[4778]: I0312 13:56:55.974099 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k8np5" Mar 12 13:56:55 crc kubenswrapper[4778]: I0312 13:56:55.974640 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k8np5" Mar 12 13:56:57 crc kubenswrapper[4778]: I0312 13:56:57.022102 4778 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k8np5" podUID="1f9f45ee-d6ff-4369-b71a-1af75cc31ca1" containerName="registry-server" 
probeResult="failure" output=< Mar 12 13:56:57 crc kubenswrapper[4778]: timeout: failed to connect service ":50051" within 1s Mar 12 13:56:57 crc kubenswrapper[4778]: > Mar 12 13:57:06 crc kubenswrapper[4778]: I0312 13:57:06.014904 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k8np5" Mar 12 13:57:06 crc kubenswrapper[4778]: I0312 13:57:06.044051 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k8np5" podStartSLOduration=15.940376585 podStartE2EDuration="21.044031712s" podCreationTimestamp="2026-03-12 13:56:45 +0000 UTC" firstStartedPulling="2026-03-12 13:56:46.855710856 +0000 UTC m=+2825.304406252" lastFinishedPulling="2026-03-12 13:56:51.959365983 +0000 UTC m=+2830.408061379" observedRunningTime="2026-03-12 13:56:53.200219468 +0000 UTC m=+2831.648914864" watchObservedRunningTime="2026-03-12 13:57:06.044031712 +0000 UTC m=+2844.492727108" Mar 12 13:57:06 crc kubenswrapper[4778]: I0312 13:57:06.070115 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k8np5" Mar 12 13:57:06 crc kubenswrapper[4778]: I0312 13:57:06.249055 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k8np5"] Mar 12 13:57:07 crc kubenswrapper[4778]: I0312 13:57:07.294771 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-k8np5" podUID="1f9f45ee-d6ff-4369-b71a-1af75cc31ca1" containerName="registry-server" containerID="cri-o://fe9eab63bd54027dc9747d671158f830246a8cbbcc67a324e1785af946b97a59" gracePeriod=2 Mar 12 13:57:07 crc kubenswrapper[4778]: I0312 13:57:07.729483 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k8np5" Mar 12 13:57:07 crc kubenswrapper[4778]: I0312 13:57:07.780145 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f9f45ee-d6ff-4369-b71a-1af75cc31ca1-utilities\") pod \"1f9f45ee-d6ff-4369-b71a-1af75cc31ca1\" (UID: \"1f9f45ee-d6ff-4369-b71a-1af75cc31ca1\") " Mar 12 13:57:07 crc kubenswrapper[4778]: I0312 13:57:07.780396 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2dxz\" (UniqueName: \"kubernetes.io/projected/1f9f45ee-d6ff-4369-b71a-1af75cc31ca1-kube-api-access-s2dxz\") pod \"1f9f45ee-d6ff-4369-b71a-1af75cc31ca1\" (UID: \"1f9f45ee-d6ff-4369-b71a-1af75cc31ca1\") " Mar 12 13:57:07 crc kubenswrapper[4778]: I0312 13:57:07.780455 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f9f45ee-d6ff-4369-b71a-1af75cc31ca1-catalog-content\") pod \"1f9f45ee-d6ff-4369-b71a-1af75cc31ca1\" (UID: \"1f9f45ee-d6ff-4369-b71a-1af75cc31ca1\") " Mar 12 13:57:07 crc kubenswrapper[4778]: I0312 13:57:07.782240 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f9f45ee-d6ff-4369-b71a-1af75cc31ca1-utilities" (OuterVolumeSpecName: "utilities") pod "1f9f45ee-d6ff-4369-b71a-1af75cc31ca1" (UID: "1f9f45ee-d6ff-4369-b71a-1af75cc31ca1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:57:07 crc kubenswrapper[4778]: I0312 13:57:07.787850 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f9f45ee-d6ff-4369-b71a-1af75cc31ca1-kube-api-access-s2dxz" (OuterVolumeSpecName: "kube-api-access-s2dxz") pod "1f9f45ee-d6ff-4369-b71a-1af75cc31ca1" (UID: "1f9f45ee-d6ff-4369-b71a-1af75cc31ca1"). InnerVolumeSpecName "kube-api-access-s2dxz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:57:07 crc kubenswrapper[4778]: I0312 13:57:07.882704 4778 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f9f45ee-d6ff-4369-b71a-1af75cc31ca1-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 13:57:07 crc kubenswrapper[4778]: I0312 13:57:07.882735 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2dxz\" (UniqueName: \"kubernetes.io/projected/1f9f45ee-d6ff-4369-b71a-1af75cc31ca1-kube-api-access-s2dxz\") on node \"crc\" DevicePath \"\"" Mar 12 13:57:07 crc kubenswrapper[4778]: I0312 13:57:07.923102 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f9f45ee-d6ff-4369-b71a-1af75cc31ca1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f9f45ee-d6ff-4369-b71a-1af75cc31ca1" (UID: "1f9f45ee-d6ff-4369-b71a-1af75cc31ca1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:57:07 crc kubenswrapper[4778]: I0312 13:57:07.985570 4778 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f9f45ee-d6ff-4369-b71a-1af75cc31ca1-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 13:57:08 crc kubenswrapper[4778]: I0312 13:57:08.304135 4778 generic.go:334] "Generic (PLEG): container finished" podID="1f9f45ee-d6ff-4369-b71a-1af75cc31ca1" containerID="fe9eab63bd54027dc9747d671158f830246a8cbbcc67a324e1785af946b97a59" exitCode=0 Mar 12 13:57:08 crc kubenswrapper[4778]: I0312 13:57:08.304189 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k8np5" Mar 12 13:57:08 crc kubenswrapper[4778]: I0312 13:57:08.304209 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8np5" event={"ID":"1f9f45ee-d6ff-4369-b71a-1af75cc31ca1","Type":"ContainerDied","Data":"fe9eab63bd54027dc9747d671158f830246a8cbbcc67a324e1785af946b97a59"} Mar 12 13:57:08 crc kubenswrapper[4778]: I0312 13:57:08.304606 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8np5" event={"ID":"1f9f45ee-d6ff-4369-b71a-1af75cc31ca1","Type":"ContainerDied","Data":"9070894b7842f0aed54c5c06b02dc0798ae95d0cfa82fdac9ef69e082f746a53"} Mar 12 13:57:08 crc kubenswrapper[4778]: I0312 13:57:08.304625 4778 scope.go:117] "RemoveContainer" containerID="fe9eab63bd54027dc9747d671158f830246a8cbbcc67a324e1785af946b97a59" Mar 12 13:57:08 crc kubenswrapper[4778]: I0312 13:57:08.328132 4778 scope.go:117] "RemoveContainer" containerID="4439d8ae883e35b8c6e7d2722cd2f00a49b0c79d4630243975e17bc91e0d2703" Mar 12 13:57:08 crc kubenswrapper[4778]: I0312 13:57:08.335319 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k8np5"] Mar 12 13:57:08 crc kubenswrapper[4778]: I0312 13:57:08.345040 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-k8np5"] Mar 12 13:57:08 crc kubenswrapper[4778]: I0312 13:57:08.355706 4778 scope.go:117] "RemoveContainer" containerID="0d24b81e7152e22db15d85c0639ccdf7f6a4a5d3388d523cdaa16266ec57d80c" Mar 12 13:57:08 crc kubenswrapper[4778]: I0312 13:57:08.388089 4778 scope.go:117] "RemoveContainer" containerID="fe9eab63bd54027dc9747d671158f830246a8cbbcc67a324e1785af946b97a59" Mar 12 13:57:08 crc kubenswrapper[4778]: E0312 13:57:08.388571 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"fe9eab63bd54027dc9747d671158f830246a8cbbcc67a324e1785af946b97a59\": container with ID starting with fe9eab63bd54027dc9747d671158f830246a8cbbcc67a324e1785af946b97a59 not found: ID does not exist" containerID="fe9eab63bd54027dc9747d671158f830246a8cbbcc67a324e1785af946b97a59" Mar 12 13:57:08 crc kubenswrapper[4778]: I0312 13:57:08.388620 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe9eab63bd54027dc9747d671158f830246a8cbbcc67a324e1785af946b97a59"} err="failed to get container status \"fe9eab63bd54027dc9747d671158f830246a8cbbcc67a324e1785af946b97a59\": rpc error: code = NotFound desc = could not find container \"fe9eab63bd54027dc9747d671158f830246a8cbbcc67a324e1785af946b97a59\": container with ID starting with fe9eab63bd54027dc9747d671158f830246a8cbbcc67a324e1785af946b97a59 not found: ID does not exist" Mar 12 13:57:08 crc kubenswrapper[4778]: I0312 13:57:08.388648 4778 scope.go:117] "RemoveContainer" containerID="4439d8ae883e35b8c6e7d2722cd2f00a49b0c79d4630243975e17bc91e0d2703" Mar 12 13:57:08 crc kubenswrapper[4778]: E0312 13:57:08.389070 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4439d8ae883e35b8c6e7d2722cd2f00a49b0c79d4630243975e17bc91e0d2703\": container with ID starting with 4439d8ae883e35b8c6e7d2722cd2f00a49b0c79d4630243975e17bc91e0d2703 not found: ID does not exist" containerID="4439d8ae883e35b8c6e7d2722cd2f00a49b0c79d4630243975e17bc91e0d2703" Mar 12 13:57:08 crc kubenswrapper[4778]: I0312 13:57:08.389214 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4439d8ae883e35b8c6e7d2722cd2f00a49b0c79d4630243975e17bc91e0d2703"} err="failed to get container status \"4439d8ae883e35b8c6e7d2722cd2f00a49b0c79d4630243975e17bc91e0d2703\": rpc error: code = NotFound desc = could not find container \"4439d8ae883e35b8c6e7d2722cd2f00a49b0c79d4630243975e17bc91e0d2703\": container with ID 
starting with 4439d8ae883e35b8c6e7d2722cd2f00a49b0c79d4630243975e17bc91e0d2703 not found: ID does not exist" Mar 12 13:57:08 crc kubenswrapper[4778]: I0312 13:57:08.389312 4778 scope.go:117] "RemoveContainer" containerID="0d24b81e7152e22db15d85c0639ccdf7f6a4a5d3388d523cdaa16266ec57d80c" Mar 12 13:57:08 crc kubenswrapper[4778]: E0312 13:57:08.389697 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d24b81e7152e22db15d85c0639ccdf7f6a4a5d3388d523cdaa16266ec57d80c\": container with ID starting with 0d24b81e7152e22db15d85c0639ccdf7f6a4a5d3388d523cdaa16266ec57d80c not found: ID does not exist" containerID="0d24b81e7152e22db15d85c0639ccdf7f6a4a5d3388d523cdaa16266ec57d80c" Mar 12 13:57:08 crc kubenswrapper[4778]: I0312 13:57:08.389743 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d24b81e7152e22db15d85c0639ccdf7f6a4a5d3388d523cdaa16266ec57d80c"} err="failed to get container status \"0d24b81e7152e22db15d85c0639ccdf7f6a4a5d3388d523cdaa16266ec57d80c\": rpc error: code = NotFound desc = could not find container \"0d24b81e7152e22db15d85c0639ccdf7f6a4a5d3388d523cdaa16266ec57d80c\": container with ID starting with 0d24b81e7152e22db15d85c0639ccdf7f6a4a5d3388d523cdaa16266ec57d80c not found: ID does not exist" Mar 12 13:57:10 crc kubenswrapper[4778]: I0312 13:57:10.264228 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f9f45ee-d6ff-4369-b71a-1af75cc31ca1" path="/var/lib/kubelet/pods/1f9f45ee-d6ff-4369-b71a-1af75cc31ca1/volumes" Mar 12 13:57:23 crc kubenswrapper[4778]: I0312 13:57:23.797045 4778 generic.go:334] "Generic (PLEG): container finished" podID="6ed77f87-e6b2-4c7a-8b0e-003106200dc8" containerID="e2a35e751ce79cb5226fd46ca73472f5cd7c47201c7bf749d5ffd3dae25fcc72" exitCode=0 Mar 12 13:57:23 crc kubenswrapper[4778]: I0312 13:57:23.797122 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5tw6s" event={"ID":"6ed77f87-e6b2-4c7a-8b0e-003106200dc8","Type":"ContainerDied","Data":"e2a35e751ce79cb5226fd46ca73472f5cd7c47201c7bf749d5ffd3dae25fcc72"} Mar 12 13:57:25 crc kubenswrapper[4778]: I0312 13:57:25.282610 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5tw6s" Mar 12 13:57:25 crc kubenswrapper[4778]: I0312 13:57:25.457988 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-nova-cell1-compute-config-1\") pod \"6ed77f87-e6b2-4c7a-8b0e-003106200dc8\" (UID: \"6ed77f87-e6b2-4c7a-8b0e-003106200dc8\") " Mar 12 13:57:25 crc kubenswrapper[4778]: I0312 13:57:25.458085 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-nova-combined-ca-bundle\") pod \"6ed77f87-e6b2-4c7a-8b0e-003106200dc8\" (UID: \"6ed77f87-e6b2-4c7a-8b0e-003106200dc8\") " Mar 12 13:57:25 crc kubenswrapper[4778]: I0312 13:57:25.458116 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fp6ls\" (UniqueName: \"kubernetes.io/projected/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-kube-api-access-fp6ls\") pod \"6ed77f87-e6b2-4c7a-8b0e-003106200dc8\" (UID: \"6ed77f87-e6b2-4c7a-8b0e-003106200dc8\") " Mar 12 13:57:25 crc kubenswrapper[4778]: I0312 13:57:25.458142 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-inventory\") pod \"6ed77f87-e6b2-4c7a-8b0e-003106200dc8\" (UID: \"6ed77f87-e6b2-4c7a-8b0e-003106200dc8\") " Mar 12 13:57:25 crc kubenswrapper[4778]: I0312 13:57:25.458267 4778 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-nova-cell1-compute-config-3\") pod \"6ed77f87-e6b2-4c7a-8b0e-003106200dc8\" (UID: \"6ed77f87-e6b2-4c7a-8b0e-003106200dc8\") " Mar 12 13:57:25 crc kubenswrapper[4778]: I0312 13:57:25.458305 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-nova-cell1-compute-config-0\") pod \"6ed77f87-e6b2-4c7a-8b0e-003106200dc8\" (UID: \"6ed77f87-e6b2-4c7a-8b0e-003106200dc8\") " Mar 12 13:57:25 crc kubenswrapper[4778]: I0312 13:57:25.458359 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-nova-migration-ssh-key-0\") pod \"6ed77f87-e6b2-4c7a-8b0e-003106200dc8\" (UID: \"6ed77f87-e6b2-4c7a-8b0e-003106200dc8\") " Mar 12 13:57:25 crc kubenswrapper[4778]: I0312 13:57:25.458384 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-nova-cell1-compute-config-2\") pod \"6ed77f87-e6b2-4c7a-8b0e-003106200dc8\" (UID: \"6ed77f87-e6b2-4c7a-8b0e-003106200dc8\") " Mar 12 13:57:25 crc kubenswrapper[4778]: I0312 13:57:25.458434 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-ssh-key-openstack-edpm-ipam\") pod \"6ed77f87-e6b2-4c7a-8b0e-003106200dc8\" (UID: \"6ed77f87-e6b2-4c7a-8b0e-003106200dc8\") " Mar 12 13:57:25 crc kubenswrapper[4778]: I0312 13:57:25.458464 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: 
\"kubernetes.io/secret/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-nova-migration-ssh-key-1\") pod \"6ed77f87-e6b2-4c7a-8b0e-003106200dc8\" (UID: \"6ed77f87-e6b2-4c7a-8b0e-003106200dc8\") " Mar 12 13:57:25 crc kubenswrapper[4778]: I0312 13:57:25.458537 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-nova-extra-config-0\") pod \"6ed77f87-e6b2-4c7a-8b0e-003106200dc8\" (UID: \"6ed77f87-e6b2-4c7a-8b0e-003106200dc8\") " Mar 12 13:57:25 crc kubenswrapper[4778]: I0312 13:57:25.478593 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "6ed77f87-e6b2-4c7a-8b0e-003106200dc8" (UID: "6ed77f87-e6b2-4c7a-8b0e-003106200dc8"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:57:25 crc kubenswrapper[4778]: I0312 13:57:25.478889 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-kube-api-access-fp6ls" (OuterVolumeSpecName: "kube-api-access-fp6ls") pod "6ed77f87-e6b2-4c7a-8b0e-003106200dc8" (UID: "6ed77f87-e6b2-4c7a-8b0e-003106200dc8"). InnerVolumeSpecName "kube-api-access-fp6ls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:57:25 crc kubenswrapper[4778]: I0312 13:57:25.491064 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "6ed77f87-e6b2-4c7a-8b0e-003106200dc8" (UID: "6ed77f87-e6b2-4c7a-8b0e-003106200dc8"). InnerVolumeSpecName "nova-cell1-compute-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:57:25 crc kubenswrapper[4778]: I0312 13:57:25.491152 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "6ed77f87-e6b2-4c7a-8b0e-003106200dc8" (UID: "6ed77f87-e6b2-4c7a-8b0e-003106200dc8"). InnerVolumeSpecName "nova-cell1-compute-config-3". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:57:25 crc kubenswrapper[4778]: I0312 13:57:25.491406 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "6ed77f87-e6b2-4c7a-8b0e-003106200dc8" (UID: "6ed77f87-e6b2-4c7a-8b0e-003106200dc8"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 13:57:25 crc kubenswrapper[4778]: I0312 13:57:25.492313 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "6ed77f87-e6b2-4c7a-8b0e-003106200dc8" (UID: "6ed77f87-e6b2-4c7a-8b0e-003106200dc8"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:57:25 crc kubenswrapper[4778]: I0312 13:57:25.493383 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "6ed77f87-e6b2-4c7a-8b0e-003106200dc8" (UID: "6ed77f87-e6b2-4c7a-8b0e-003106200dc8"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:57:25 crc kubenswrapper[4778]: I0312 13:57:25.499958 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6ed77f87-e6b2-4c7a-8b0e-003106200dc8" (UID: "6ed77f87-e6b2-4c7a-8b0e-003106200dc8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:57:25 crc kubenswrapper[4778]: I0312 13:57:25.502519 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "6ed77f87-e6b2-4c7a-8b0e-003106200dc8" (UID: "6ed77f87-e6b2-4c7a-8b0e-003106200dc8"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:57:25 crc kubenswrapper[4778]: I0312 13:57:25.506538 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-inventory" (OuterVolumeSpecName: "inventory") pod "6ed77f87-e6b2-4c7a-8b0e-003106200dc8" (UID: "6ed77f87-e6b2-4c7a-8b0e-003106200dc8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:57:25 crc kubenswrapper[4778]: I0312 13:57:25.510696 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "6ed77f87-e6b2-4c7a-8b0e-003106200dc8" (UID: "6ed77f87-e6b2-4c7a-8b0e-003106200dc8"). InnerVolumeSpecName "nova-cell1-compute-config-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 13:57:25 crc kubenswrapper[4778]: I0312 13:57:25.560325 4778 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Mar 12 13:57:25 crc kubenswrapper[4778]: I0312 13:57:25.560548 4778 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Mar 12 13:57:25 crc kubenswrapper[4778]: I0312 13:57:25.560700 4778 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Mar 12 13:57:25 crc kubenswrapper[4778]: I0312 13:57:25.560766 4778 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\"" Mar 12 13:57:25 crc kubenswrapper[4778]: I0312 13:57:25.560824 4778 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 13:57:25 crc kubenswrapper[4778]: I0312 13:57:25.560884 4778 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Mar 12 13:57:25 crc kubenswrapper[4778]: I0312 13:57:25.560943 4778 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: 
\"kubernetes.io/configmap/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Mar 12 13:57:25 crc kubenswrapper[4778]: I0312 13:57:25.560999 4778 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Mar 12 13:57:25 crc kubenswrapper[4778]: I0312 13:57:25.561058 4778 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 13:57:25 crc kubenswrapper[4778]: I0312 13:57:25.561110 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fp6ls\" (UniqueName: \"kubernetes.io/projected/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-kube-api-access-fp6ls\") on node \"crc\" DevicePath \"\"" Mar 12 13:57:25 crc kubenswrapper[4778]: I0312 13:57:25.561169 4778 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ed77f87-e6b2-4c7a-8b0e-003106200dc8-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 13:57:25 crc kubenswrapper[4778]: I0312 13:57:25.822108 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5tw6s" event={"ID":"6ed77f87-e6b2-4c7a-8b0e-003106200dc8","Type":"ContainerDied","Data":"1617b96c98df28869b5a069f5b74bb8126ce4a98898565a1251dadc01020d162"} Mar 12 13:57:25 crc kubenswrapper[4778]: I0312 13:57:25.822460 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1617b96c98df28869b5a069f5b74bb8126ce4a98898565a1251dadc01020d162" Mar 12 13:57:25 crc kubenswrapper[4778]: I0312 13:57:25.822173 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-5tw6s" Mar 12 13:57:25 crc kubenswrapper[4778]: I0312 13:57:25.985129 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrk5s"] Mar 12 13:57:25 crc kubenswrapper[4778]: E0312 13:57:25.985587 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f9f45ee-d6ff-4369-b71a-1af75cc31ca1" containerName="registry-server" Mar 12 13:57:25 crc kubenswrapper[4778]: I0312 13:57:25.985610 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f9f45ee-d6ff-4369-b71a-1af75cc31ca1" containerName="registry-server" Mar 12 13:57:25 crc kubenswrapper[4778]: E0312 13:57:25.985629 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ed77f87-e6b2-4c7a-8b0e-003106200dc8" containerName="nova-edpm-deployment-openstack-edpm-ipam" Mar 12 13:57:25 crc kubenswrapper[4778]: I0312 13:57:25.985641 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ed77f87-e6b2-4c7a-8b0e-003106200dc8" containerName="nova-edpm-deployment-openstack-edpm-ipam" Mar 12 13:57:25 crc kubenswrapper[4778]: E0312 13:57:25.985660 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f9f45ee-d6ff-4369-b71a-1af75cc31ca1" containerName="extract-utilities" Mar 12 13:57:25 crc kubenswrapper[4778]: I0312 13:57:25.985669 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f9f45ee-d6ff-4369-b71a-1af75cc31ca1" containerName="extract-utilities" Mar 12 13:57:25 crc kubenswrapper[4778]: E0312 13:57:25.985694 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f9f45ee-d6ff-4369-b71a-1af75cc31ca1" containerName="extract-content" Mar 12 13:57:25 crc kubenswrapper[4778]: I0312 13:57:25.985701 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f9f45ee-d6ff-4369-b71a-1af75cc31ca1" containerName="extract-content" Mar 12 13:57:25 crc kubenswrapper[4778]: I0312 13:57:25.985928 4778 
memory_manager.go:354] "RemoveStaleState removing state" podUID="1f9f45ee-d6ff-4369-b71a-1af75cc31ca1" containerName="registry-server" Mar 12 13:57:25 crc kubenswrapper[4778]: I0312 13:57:25.985954 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ed77f87-e6b2-4c7a-8b0e-003106200dc8" containerName="nova-edpm-deployment-openstack-edpm-ipam" Mar 12 13:57:25 crc kubenswrapper[4778]: I0312 13:57:25.986755 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrk5s" Mar 12 13:57:25 crc kubenswrapper[4778]: I0312 13:57:25.991779 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 12 13:57:25 crc kubenswrapper[4778]: I0312 13:57:25.991805 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Mar 12 13:57:25 crc kubenswrapper[4778]: I0312 13:57:25.992365 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 12 13:57:25 crc kubenswrapper[4778]: I0312 13:57:25.992563 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 12 13:57:25 crc kubenswrapper[4778]: I0312 13:57:25.993997 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qn2vx" Mar 12 13:57:25 crc kubenswrapper[4778]: I0312 13:57:25.997629 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrk5s"] Mar 12 13:57:26 crc kubenswrapper[4778]: I0312 13:57:26.171812 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qrk5s\" (UID: 
\"2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrk5s" Mar 12 13:57:26 crc kubenswrapper[4778]: I0312 13:57:26.171897 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szp5b\" (UniqueName: \"kubernetes.io/projected/2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1-kube-api-access-szp5b\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qrk5s\" (UID: \"2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrk5s" Mar 12 13:57:26 crc kubenswrapper[4778]: I0312 13:57:26.171976 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qrk5s\" (UID: \"2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrk5s" Mar 12 13:57:26 crc kubenswrapper[4778]: I0312 13:57:26.172028 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qrk5s\" (UID: \"2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrk5s" Mar 12 13:57:26 crc kubenswrapper[4778]: I0312 13:57:26.172066 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qrk5s\" (UID: \"2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrk5s" Mar 12 13:57:26 crc kubenswrapper[4778]: I0312 13:57:26.172136 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qrk5s\" (UID: \"2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrk5s" Mar 12 13:57:26 crc kubenswrapper[4778]: I0312 13:57:26.172168 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qrk5s\" (UID: \"2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrk5s" Mar 12 13:57:26 crc kubenswrapper[4778]: I0312 13:57:26.275355 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qrk5s\" (UID: \"2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrk5s" Mar 12 13:57:26 crc kubenswrapper[4778]: I0312 13:57:26.275489 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qrk5s\" (UID: \"2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrk5s" Mar 12 13:57:26 crc 
kubenswrapper[4778]: I0312 13:57:26.275740 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qrk5s\" (UID: \"2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrk5s" Mar 12 13:57:26 crc kubenswrapper[4778]: I0312 13:57:26.276542 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qrk5s\" (UID: \"2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrk5s" Mar 12 13:57:26 crc kubenswrapper[4778]: I0312 13:57:26.276588 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qrk5s\" (UID: \"2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrk5s" Mar 12 13:57:26 crc kubenswrapper[4778]: I0312 13:57:26.276776 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qrk5s\" (UID: \"2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrk5s" Mar 12 13:57:26 crc kubenswrapper[4778]: I0312 13:57:26.276829 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szp5b\" (UniqueName: 
\"kubernetes.io/projected/2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1-kube-api-access-szp5b\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qrk5s\" (UID: \"2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrk5s" Mar 12 13:57:26 crc kubenswrapper[4778]: I0312 13:57:26.281396 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qrk5s\" (UID: \"2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrk5s" Mar 12 13:57:26 crc kubenswrapper[4778]: I0312 13:57:26.281397 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qrk5s\" (UID: \"2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrk5s" Mar 12 13:57:26 crc kubenswrapper[4778]: I0312 13:57:26.281790 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qrk5s\" (UID: \"2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrk5s" Mar 12 13:57:26 crc kubenswrapper[4778]: I0312 13:57:26.281898 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qrk5s\" (UID: \"2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrk5s" Mar 12 13:57:26 crc kubenswrapper[4778]: I0312 13:57:26.282732 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qrk5s\" (UID: \"2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrk5s" Mar 12 13:57:26 crc kubenswrapper[4778]: I0312 13:57:26.282860 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qrk5s\" (UID: \"2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrk5s" Mar 12 13:57:26 crc kubenswrapper[4778]: I0312 13:57:26.298334 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szp5b\" (UniqueName: \"kubernetes.io/projected/2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1-kube-api-access-szp5b\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qrk5s\" (UID: \"2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrk5s" Mar 12 13:57:26 crc kubenswrapper[4778]: I0312 13:57:26.323835 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrk5s" Mar 12 13:57:26 crc kubenswrapper[4778]: I0312 13:57:26.847446 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrk5s"] Mar 12 13:57:26 crc kubenswrapper[4778]: W0312 13:57:26.852800 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2bfaafaf_36fb_4f1a_99ed_abb8b7bb4ae1.slice/crio-18c5ac6148c2a101f7ddccfc3e584f782b4abb6f5ed1d881a005c0fa17c5788f WatchSource:0}: Error finding container 18c5ac6148c2a101f7ddccfc3e584f782b4abb6f5ed1d881a005c0fa17c5788f: Status 404 returned error can't find the container with id 18c5ac6148c2a101f7ddccfc3e584f782b4abb6f5ed1d881a005c0fa17c5788f Mar 12 13:57:27 crc kubenswrapper[4778]: I0312 13:57:27.841161 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrk5s" event={"ID":"2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1","Type":"ContainerStarted","Data":"a819986e64064460805e2e891a9205102c157b3069f167134a0bc6192d083ab6"} Mar 12 13:57:27 crc kubenswrapper[4778]: I0312 13:57:27.841216 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrk5s" event={"ID":"2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1","Type":"ContainerStarted","Data":"18c5ac6148c2a101f7ddccfc3e584f782b4abb6f5ed1d881a005c0fa17c5788f"} Mar 12 13:57:27 crc kubenswrapper[4778]: I0312 13:57:27.871231 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrk5s" podStartSLOduration=2.433829751 podStartE2EDuration="2.871211205s" podCreationTimestamp="2026-03-12 13:57:25 +0000 UTC" firstStartedPulling="2026-03-12 13:57:26.860853045 +0000 UTC m=+2865.309548441" lastFinishedPulling="2026-03-12 13:57:27.298234509 +0000 UTC m=+2865.746929895" 
observedRunningTime="2026-03-12 13:57:27.869590648 +0000 UTC m=+2866.318286054" watchObservedRunningTime="2026-03-12 13:57:27.871211205 +0000 UTC m=+2866.319906601" Mar 12 13:58:00 crc kubenswrapper[4778]: I0312 13:58:00.146788 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555398-hhchd"] Mar 12 13:58:00 crc kubenswrapper[4778]: I0312 13:58:00.148716 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555398-hhchd" Mar 12 13:58:00 crc kubenswrapper[4778]: I0312 13:58:00.152935 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 13:58:00 crc kubenswrapper[4778]: I0312 13:58:00.153404 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 13:58:00 crc kubenswrapper[4778]: I0312 13:58:00.153697 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 13:58:00 crc kubenswrapper[4778]: I0312 13:58:00.160927 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555398-hhchd"] Mar 12 13:58:00 crc kubenswrapper[4778]: I0312 13:58:00.223120 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22x86\" (UniqueName: \"kubernetes.io/projected/6a479324-f9a1-4095-a0b1-7c22fc72eb61-kube-api-access-22x86\") pod \"auto-csr-approver-29555398-hhchd\" (UID: \"6a479324-f9a1-4095-a0b1-7c22fc72eb61\") " pod="openshift-infra/auto-csr-approver-29555398-hhchd" Mar 12 13:58:00 crc kubenswrapper[4778]: I0312 13:58:00.325657 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22x86\" (UniqueName: \"kubernetes.io/projected/6a479324-f9a1-4095-a0b1-7c22fc72eb61-kube-api-access-22x86\") pod \"auto-csr-approver-29555398-hhchd\" (UID: 
\"6a479324-f9a1-4095-a0b1-7c22fc72eb61\") " pod="openshift-infra/auto-csr-approver-29555398-hhchd" Mar 12 13:58:00 crc kubenswrapper[4778]: I0312 13:58:00.343817 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22x86\" (UniqueName: \"kubernetes.io/projected/6a479324-f9a1-4095-a0b1-7c22fc72eb61-kube-api-access-22x86\") pod \"auto-csr-approver-29555398-hhchd\" (UID: \"6a479324-f9a1-4095-a0b1-7c22fc72eb61\") " pod="openshift-infra/auto-csr-approver-29555398-hhchd" Mar 12 13:58:00 crc kubenswrapper[4778]: I0312 13:58:00.469961 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555398-hhchd" Mar 12 13:58:00 crc kubenswrapper[4778]: I0312 13:58:00.967688 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555398-hhchd"] Mar 12 13:58:00 crc kubenswrapper[4778]: W0312 13:58:00.968723 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a479324_f9a1_4095_a0b1_7c22fc72eb61.slice/crio-f6c428e58cccf2f5ed6f1ed21daee2d015ece5d13f0cd9940356894ca7670473 WatchSource:0}: Error finding container f6c428e58cccf2f5ed6f1ed21daee2d015ece5d13f0cd9940356894ca7670473: Status 404 returned error can't find the container with id f6c428e58cccf2f5ed6f1ed21daee2d015ece5d13f0cd9940356894ca7670473 Mar 12 13:58:01 crc kubenswrapper[4778]: I0312 13:58:01.123279 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555398-hhchd" event={"ID":"6a479324-f9a1-4095-a0b1-7c22fc72eb61","Type":"ContainerStarted","Data":"f6c428e58cccf2f5ed6f1ed21daee2d015ece5d13f0cd9940356894ca7670473"} Mar 12 13:58:03 crc kubenswrapper[4778]: I0312 13:58:03.157887 4778 generic.go:334] "Generic (PLEG): container finished" podID="6a479324-f9a1-4095-a0b1-7c22fc72eb61" containerID="7f07e770195234611f35ac5fc4d8c046a4e07dd2e554f881a1c216e51689e210" exitCode=0 
Mar 12 13:58:03 crc kubenswrapper[4778]: I0312 13:58:03.158013 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555398-hhchd" event={"ID":"6a479324-f9a1-4095-a0b1-7c22fc72eb61","Type":"ContainerDied","Data":"7f07e770195234611f35ac5fc4d8c046a4e07dd2e554f881a1c216e51689e210"} Mar 12 13:58:04 crc kubenswrapper[4778]: I0312 13:58:04.638640 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555398-hhchd" Mar 12 13:58:04 crc kubenswrapper[4778]: I0312 13:58:04.726020 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22x86\" (UniqueName: \"kubernetes.io/projected/6a479324-f9a1-4095-a0b1-7c22fc72eb61-kube-api-access-22x86\") pod \"6a479324-f9a1-4095-a0b1-7c22fc72eb61\" (UID: \"6a479324-f9a1-4095-a0b1-7c22fc72eb61\") " Mar 12 13:58:04 crc kubenswrapper[4778]: I0312 13:58:04.732140 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a479324-f9a1-4095-a0b1-7c22fc72eb61-kube-api-access-22x86" (OuterVolumeSpecName: "kube-api-access-22x86") pod "6a479324-f9a1-4095-a0b1-7c22fc72eb61" (UID: "6a479324-f9a1-4095-a0b1-7c22fc72eb61"). InnerVolumeSpecName "kube-api-access-22x86". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:58:04 crc kubenswrapper[4778]: I0312 13:58:04.828427 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22x86\" (UniqueName: \"kubernetes.io/projected/6a479324-f9a1-4095-a0b1-7c22fc72eb61-kube-api-access-22x86\") on node \"crc\" DevicePath \"\"" Mar 12 13:58:05 crc kubenswrapper[4778]: I0312 13:58:05.180342 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555398-hhchd" event={"ID":"6a479324-f9a1-4095-a0b1-7c22fc72eb61","Type":"ContainerDied","Data":"f6c428e58cccf2f5ed6f1ed21daee2d015ece5d13f0cd9940356894ca7670473"} Mar 12 13:58:05 crc kubenswrapper[4778]: I0312 13:58:05.180401 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6c428e58cccf2f5ed6f1ed21daee2d015ece5d13f0cd9940356894ca7670473" Mar 12 13:58:05 crc kubenswrapper[4778]: I0312 13:58:05.180438 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555398-hhchd" Mar 12 13:58:05 crc kubenswrapper[4778]: I0312 13:58:05.711756 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555392-wg78w"] Mar 12 13:58:05 crc kubenswrapper[4778]: I0312 13:58:05.723627 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555392-wg78w"] Mar 12 13:58:06 crc kubenswrapper[4778]: I0312 13:58:06.266832 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a0b5070-03d8-45fe-8148-c39a9b560fbb" path="/var/lib/kubelet/pods/0a0b5070-03d8-45fe-8148-c39a9b560fbb/volumes" Mar 12 13:58:13 crc kubenswrapper[4778]: I0312 13:58:13.155741 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4gq97"] Mar 12 13:58:13 crc kubenswrapper[4778]: E0312 13:58:13.156912 4778 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="6a479324-f9a1-4095-a0b1-7c22fc72eb61" containerName="oc" Mar 12 13:58:13 crc kubenswrapper[4778]: I0312 13:58:13.156930 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a479324-f9a1-4095-a0b1-7c22fc72eb61" containerName="oc" Mar 12 13:58:13 crc kubenswrapper[4778]: I0312 13:58:13.157282 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a479324-f9a1-4095-a0b1-7c22fc72eb61" containerName="oc" Mar 12 13:58:13 crc kubenswrapper[4778]: I0312 13:58:13.159338 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4gq97" Mar 12 13:58:13 crc kubenswrapper[4778]: I0312 13:58:13.167949 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4gq97"] Mar 12 13:58:13 crc kubenswrapper[4778]: I0312 13:58:13.286368 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa7fbd44-1786-4285-a062-f10d0971645b-catalog-content\") pod \"redhat-marketplace-4gq97\" (UID: \"aa7fbd44-1786-4285-a062-f10d0971645b\") " pod="openshift-marketplace/redhat-marketplace-4gq97" Mar 12 13:58:13 crc kubenswrapper[4778]: I0312 13:58:13.286419 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa7fbd44-1786-4285-a062-f10d0971645b-utilities\") pod \"redhat-marketplace-4gq97\" (UID: \"aa7fbd44-1786-4285-a062-f10d0971645b\") " pod="openshift-marketplace/redhat-marketplace-4gq97" Mar 12 13:58:13 crc kubenswrapper[4778]: I0312 13:58:13.286443 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc9p5\" (UniqueName: \"kubernetes.io/projected/aa7fbd44-1786-4285-a062-f10d0971645b-kube-api-access-hc9p5\") pod \"redhat-marketplace-4gq97\" (UID: \"aa7fbd44-1786-4285-a062-f10d0971645b\") " 
pod="openshift-marketplace/redhat-marketplace-4gq97" Mar 12 13:58:13 crc kubenswrapper[4778]: I0312 13:58:13.388756 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa7fbd44-1786-4285-a062-f10d0971645b-catalog-content\") pod \"redhat-marketplace-4gq97\" (UID: \"aa7fbd44-1786-4285-a062-f10d0971645b\") " pod="openshift-marketplace/redhat-marketplace-4gq97" Mar 12 13:58:13 crc kubenswrapper[4778]: I0312 13:58:13.388816 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa7fbd44-1786-4285-a062-f10d0971645b-utilities\") pod \"redhat-marketplace-4gq97\" (UID: \"aa7fbd44-1786-4285-a062-f10d0971645b\") " pod="openshift-marketplace/redhat-marketplace-4gq97" Mar 12 13:58:13 crc kubenswrapper[4778]: I0312 13:58:13.388842 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hc9p5\" (UniqueName: \"kubernetes.io/projected/aa7fbd44-1786-4285-a062-f10d0971645b-kube-api-access-hc9p5\") pod \"redhat-marketplace-4gq97\" (UID: \"aa7fbd44-1786-4285-a062-f10d0971645b\") " pod="openshift-marketplace/redhat-marketplace-4gq97" Mar 12 13:58:13 crc kubenswrapper[4778]: I0312 13:58:13.389660 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa7fbd44-1786-4285-a062-f10d0971645b-catalog-content\") pod \"redhat-marketplace-4gq97\" (UID: \"aa7fbd44-1786-4285-a062-f10d0971645b\") " pod="openshift-marketplace/redhat-marketplace-4gq97" Mar 12 13:58:13 crc kubenswrapper[4778]: I0312 13:58:13.389721 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa7fbd44-1786-4285-a062-f10d0971645b-utilities\") pod \"redhat-marketplace-4gq97\" (UID: \"aa7fbd44-1786-4285-a062-f10d0971645b\") " pod="openshift-marketplace/redhat-marketplace-4gq97" 
Mar 12 13:58:13 crc kubenswrapper[4778]: I0312 13:58:13.412624 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hc9p5\" (UniqueName: \"kubernetes.io/projected/aa7fbd44-1786-4285-a062-f10d0971645b-kube-api-access-hc9p5\") pod \"redhat-marketplace-4gq97\" (UID: \"aa7fbd44-1786-4285-a062-f10d0971645b\") " pod="openshift-marketplace/redhat-marketplace-4gq97" Mar 12 13:58:13 crc kubenswrapper[4778]: I0312 13:58:13.480514 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4gq97" Mar 12 13:58:13 crc kubenswrapper[4778]: I0312 13:58:13.954693 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4gq97"] Mar 12 13:58:14 crc kubenswrapper[4778]: I0312 13:58:14.266600 4778 generic.go:334] "Generic (PLEG): container finished" podID="aa7fbd44-1786-4285-a062-f10d0971645b" containerID="c586eba3448e91b65e6147aa9ecb68c6b2bc24f9a1b32cc0aebf52545c4c6408" exitCode=0 Mar 12 13:58:14 crc kubenswrapper[4778]: I0312 13:58:14.266642 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4gq97" event={"ID":"aa7fbd44-1786-4285-a062-f10d0971645b","Type":"ContainerDied","Data":"c586eba3448e91b65e6147aa9ecb68c6b2bc24f9a1b32cc0aebf52545c4c6408"} Mar 12 13:58:14 crc kubenswrapper[4778]: I0312 13:58:14.266666 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4gq97" event={"ID":"aa7fbd44-1786-4285-a062-f10d0971645b","Type":"ContainerStarted","Data":"e99743205e6d113d16d820625aa3ddcd4fd4ebfa7d55f88ae708417e22c46325"} Mar 12 13:58:15 crc kubenswrapper[4778]: I0312 13:58:15.284745 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4gq97" event={"ID":"aa7fbd44-1786-4285-a062-f10d0971645b","Type":"ContainerStarted","Data":"f3fddcc47109eae320c7476971617745b962d7a8925cfec715a7e6160df6a823"} Mar 12 
13:58:16 crc kubenswrapper[4778]: I0312 13:58:16.297775 4778 generic.go:334] "Generic (PLEG): container finished" podID="aa7fbd44-1786-4285-a062-f10d0971645b" containerID="f3fddcc47109eae320c7476971617745b962d7a8925cfec715a7e6160df6a823" exitCode=0 Mar 12 13:58:16 crc kubenswrapper[4778]: I0312 13:58:16.297864 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4gq97" event={"ID":"aa7fbd44-1786-4285-a062-f10d0971645b","Type":"ContainerDied","Data":"f3fddcc47109eae320c7476971617745b962d7a8925cfec715a7e6160df6a823"} Mar 12 13:58:17 crc kubenswrapper[4778]: I0312 13:58:17.309817 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4gq97" event={"ID":"aa7fbd44-1786-4285-a062-f10d0971645b","Type":"ContainerStarted","Data":"d51ebac645c914d827233a7e05640db3535b9bea5531b876c39672135af80a56"} Mar 12 13:58:17 crc kubenswrapper[4778]: I0312 13:58:17.332395 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4gq97" podStartSLOduration=1.8536014889999999 podStartE2EDuration="4.332377105s" podCreationTimestamp="2026-03-12 13:58:13 +0000 UTC" firstStartedPulling="2026-03-12 13:58:14.268622413 +0000 UTC m=+2912.717317809" lastFinishedPulling="2026-03-12 13:58:16.747398029 +0000 UTC m=+2915.196093425" observedRunningTime="2026-03-12 13:58:17.330895923 +0000 UTC m=+2915.779591319" watchObservedRunningTime="2026-03-12 13:58:17.332377105 +0000 UTC m=+2915.781072511" Mar 12 13:58:20 crc kubenswrapper[4778]: I0312 13:58:20.086914 4778 scope.go:117] "RemoveContainer" containerID="1f9b06fe647c9c9d52674fc3e58e1c9d5c930036da2b4f235a350fc83217496f" Mar 12 13:58:23 crc kubenswrapper[4778]: I0312 13:58:23.481138 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4gq97" Mar 12 13:58:23 crc kubenswrapper[4778]: I0312 13:58:23.482437 4778 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4gq97" Mar 12 13:58:23 crc kubenswrapper[4778]: I0312 13:58:23.528450 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4gq97" Mar 12 13:58:24 crc kubenswrapper[4778]: I0312 13:58:24.424127 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4gq97" Mar 12 13:58:24 crc kubenswrapper[4778]: I0312 13:58:24.475838 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4gq97"] Mar 12 13:58:26 crc kubenswrapper[4778]: I0312 13:58:26.388876 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4gq97" podUID="aa7fbd44-1786-4285-a062-f10d0971645b" containerName="registry-server" containerID="cri-o://d51ebac645c914d827233a7e05640db3535b9bea5531b876c39672135af80a56" gracePeriod=2 Mar 12 13:58:26 crc kubenswrapper[4778]: I0312 13:58:26.881924 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4gq97" Mar 12 13:58:26 crc kubenswrapper[4778]: I0312 13:58:26.924523 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hc9p5\" (UniqueName: \"kubernetes.io/projected/aa7fbd44-1786-4285-a062-f10d0971645b-kube-api-access-hc9p5\") pod \"aa7fbd44-1786-4285-a062-f10d0971645b\" (UID: \"aa7fbd44-1786-4285-a062-f10d0971645b\") " Mar 12 13:58:26 crc kubenswrapper[4778]: I0312 13:58:26.924635 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa7fbd44-1786-4285-a062-f10d0971645b-catalog-content\") pod \"aa7fbd44-1786-4285-a062-f10d0971645b\" (UID: \"aa7fbd44-1786-4285-a062-f10d0971645b\") " Mar 12 13:58:26 crc kubenswrapper[4778]: I0312 13:58:26.924688 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa7fbd44-1786-4285-a062-f10d0971645b-utilities\") pod \"aa7fbd44-1786-4285-a062-f10d0971645b\" (UID: \"aa7fbd44-1786-4285-a062-f10d0971645b\") " Mar 12 13:58:26 crc kubenswrapper[4778]: I0312 13:58:26.925730 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa7fbd44-1786-4285-a062-f10d0971645b-utilities" (OuterVolumeSpecName: "utilities") pod "aa7fbd44-1786-4285-a062-f10d0971645b" (UID: "aa7fbd44-1786-4285-a062-f10d0971645b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:58:26 crc kubenswrapper[4778]: I0312 13:58:26.935042 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa7fbd44-1786-4285-a062-f10d0971645b-kube-api-access-hc9p5" (OuterVolumeSpecName: "kube-api-access-hc9p5") pod "aa7fbd44-1786-4285-a062-f10d0971645b" (UID: "aa7fbd44-1786-4285-a062-f10d0971645b"). InnerVolumeSpecName "kube-api-access-hc9p5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 13:58:26 crc kubenswrapper[4778]: I0312 13:58:26.956711 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa7fbd44-1786-4285-a062-f10d0971645b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aa7fbd44-1786-4285-a062-f10d0971645b" (UID: "aa7fbd44-1786-4285-a062-f10d0971645b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 13:58:27 crc kubenswrapper[4778]: I0312 13:58:27.027371 4778 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa7fbd44-1786-4285-a062-f10d0971645b-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 13:58:27 crc kubenswrapper[4778]: I0312 13:58:27.027416 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hc9p5\" (UniqueName: \"kubernetes.io/projected/aa7fbd44-1786-4285-a062-f10d0971645b-kube-api-access-hc9p5\") on node \"crc\" DevicePath \"\"" Mar 12 13:58:27 crc kubenswrapper[4778]: I0312 13:58:27.027434 4778 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa7fbd44-1786-4285-a062-f10d0971645b-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 13:58:27 crc kubenswrapper[4778]: I0312 13:58:27.398985 4778 generic.go:334] "Generic (PLEG): container finished" podID="aa7fbd44-1786-4285-a062-f10d0971645b" containerID="d51ebac645c914d827233a7e05640db3535b9bea5531b876c39672135af80a56" exitCode=0 Mar 12 13:58:27 crc kubenswrapper[4778]: I0312 13:58:27.399026 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4gq97" event={"ID":"aa7fbd44-1786-4285-a062-f10d0971645b","Type":"ContainerDied","Data":"d51ebac645c914d827233a7e05640db3535b9bea5531b876c39672135af80a56"} Mar 12 13:58:27 crc kubenswrapper[4778]: I0312 13:58:27.399030 4778 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4gq97" Mar 12 13:58:27 crc kubenswrapper[4778]: I0312 13:58:27.399058 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4gq97" event={"ID":"aa7fbd44-1786-4285-a062-f10d0971645b","Type":"ContainerDied","Data":"e99743205e6d113d16d820625aa3ddcd4fd4ebfa7d55f88ae708417e22c46325"} Mar 12 13:58:27 crc kubenswrapper[4778]: I0312 13:58:27.399074 4778 scope.go:117] "RemoveContainer" containerID="d51ebac645c914d827233a7e05640db3535b9bea5531b876c39672135af80a56" Mar 12 13:58:27 crc kubenswrapper[4778]: I0312 13:58:27.421038 4778 scope.go:117] "RemoveContainer" containerID="f3fddcc47109eae320c7476971617745b962d7a8925cfec715a7e6160df6a823" Mar 12 13:58:27 crc kubenswrapper[4778]: I0312 13:58:27.434588 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4gq97"] Mar 12 13:58:27 crc kubenswrapper[4778]: I0312 13:58:27.446131 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4gq97"] Mar 12 13:58:27 crc kubenswrapper[4778]: I0312 13:58:27.454935 4778 scope.go:117] "RemoveContainer" containerID="c586eba3448e91b65e6147aa9ecb68c6b2bc24f9a1b32cc0aebf52545c4c6408" Mar 12 13:58:27 crc kubenswrapper[4778]: I0312 13:58:27.489653 4778 scope.go:117] "RemoveContainer" containerID="d51ebac645c914d827233a7e05640db3535b9bea5531b876c39672135af80a56" Mar 12 13:58:27 crc kubenswrapper[4778]: E0312 13:58:27.490153 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d51ebac645c914d827233a7e05640db3535b9bea5531b876c39672135af80a56\": container with ID starting with d51ebac645c914d827233a7e05640db3535b9bea5531b876c39672135af80a56 not found: ID does not exist" containerID="d51ebac645c914d827233a7e05640db3535b9bea5531b876c39672135af80a56" Mar 12 13:58:27 crc kubenswrapper[4778]: I0312 13:58:27.490279 4778 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d51ebac645c914d827233a7e05640db3535b9bea5531b876c39672135af80a56"} err="failed to get container status \"d51ebac645c914d827233a7e05640db3535b9bea5531b876c39672135af80a56\": rpc error: code = NotFound desc = could not find container \"d51ebac645c914d827233a7e05640db3535b9bea5531b876c39672135af80a56\": container with ID starting with d51ebac645c914d827233a7e05640db3535b9bea5531b876c39672135af80a56 not found: ID does not exist" Mar 12 13:58:27 crc kubenswrapper[4778]: I0312 13:58:27.490304 4778 scope.go:117] "RemoveContainer" containerID="f3fddcc47109eae320c7476971617745b962d7a8925cfec715a7e6160df6a823" Mar 12 13:58:27 crc kubenswrapper[4778]: E0312 13:58:27.490577 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3fddcc47109eae320c7476971617745b962d7a8925cfec715a7e6160df6a823\": container with ID starting with f3fddcc47109eae320c7476971617745b962d7a8925cfec715a7e6160df6a823 not found: ID does not exist" containerID="f3fddcc47109eae320c7476971617745b962d7a8925cfec715a7e6160df6a823" Mar 12 13:58:27 crc kubenswrapper[4778]: I0312 13:58:27.490611 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3fddcc47109eae320c7476971617745b962d7a8925cfec715a7e6160df6a823"} err="failed to get container status \"f3fddcc47109eae320c7476971617745b962d7a8925cfec715a7e6160df6a823\": rpc error: code = NotFound desc = could not find container \"f3fddcc47109eae320c7476971617745b962d7a8925cfec715a7e6160df6a823\": container with ID starting with f3fddcc47109eae320c7476971617745b962d7a8925cfec715a7e6160df6a823 not found: ID does not exist" Mar 12 13:58:27 crc kubenswrapper[4778]: I0312 13:58:27.490630 4778 scope.go:117] "RemoveContainer" containerID="c586eba3448e91b65e6147aa9ecb68c6b2bc24f9a1b32cc0aebf52545c4c6408" Mar 12 13:58:27 crc kubenswrapper[4778]: E0312 
13:58:27.490996 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c586eba3448e91b65e6147aa9ecb68c6b2bc24f9a1b32cc0aebf52545c4c6408\": container with ID starting with c586eba3448e91b65e6147aa9ecb68c6b2bc24f9a1b32cc0aebf52545c4c6408 not found: ID does not exist" containerID="c586eba3448e91b65e6147aa9ecb68c6b2bc24f9a1b32cc0aebf52545c4c6408" Mar 12 13:58:27 crc kubenswrapper[4778]: I0312 13:58:27.491021 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c586eba3448e91b65e6147aa9ecb68c6b2bc24f9a1b32cc0aebf52545c4c6408"} err="failed to get container status \"c586eba3448e91b65e6147aa9ecb68c6b2bc24f9a1b32cc0aebf52545c4c6408\": rpc error: code = NotFound desc = could not find container \"c586eba3448e91b65e6147aa9ecb68c6b2bc24f9a1b32cc0aebf52545c4c6408\": container with ID starting with c586eba3448e91b65e6147aa9ecb68c6b2bc24f9a1b32cc0aebf52545c4c6408 not found: ID does not exist" Mar 12 13:58:28 crc kubenswrapper[4778]: I0312 13:58:28.263817 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa7fbd44-1786-4285-a062-f10d0971645b" path="/var/lib/kubelet/pods/aa7fbd44-1786-4285-a062-f10d0971645b/volumes" Mar 12 13:58:28 crc kubenswrapper[4778]: I0312 13:58:28.558007 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 13:58:28 crc kubenswrapper[4778]: I0312 13:58:28.558065 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Mar 12 13:58:58 crc kubenswrapper[4778]: I0312 13:58:58.558451 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 13:58:58 crc kubenswrapper[4778]: I0312 13:58:58.559084 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 13:59:28 crc kubenswrapper[4778]: I0312 13:59:28.557687 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 13:59:28 crc kubenswrapper[4778]: I0312 13:59:28.558237 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 13:59:28 crc kubenswrapper[4778]: I0312 13:59:28.558290 4778 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" Mar 12 13:59:28 crc kubenswrapper[4778]: I0312 13:59:28.559238 4778 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"264800b09f45ccd4290c89a1d8ecad1ba09b58524e636d065df86104736d56c0"} 
pod="openshift-machine-config-operator/machine-config-daemon-2qx88" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 13:59:28 crc kubenswrapper[4778]: I0312 13:59:28.559289 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" containerID="cri-o://264800b09f45ccd4290c89a1d8ecad1ba09b58524e636d065df86104736d56c0" gracePeriod=600 Mar 12 13:59:28 crc kubenswrapper[4778]: E0312 13:59:28.687979 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 13:59:29 crc kubenswrapper[4778]: I0312 13:59:29.276491 4778 generic.go:334] "Generic (PLEG): container finished" podID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerID="264800b09f45ccd4290c89a1d8ecad1ba09b58524e636d065df86104736d56c0" exitCode=0 Mar 12 13:59:29 crc kubenswrapper[4778]: I0312 13:59:29.276524 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" event={"ID":"24438fc6-dab0-4a9e-8b97-2532da76d9cd","Type":"ContainerDied","Data":"264800b09f45ccd4290c89a1d8ecad1ba09b58524e636d065df86104736d56c0"} Mar 12 13:59:29 crc kubenswrapper[4778]: I0312 13:59:29.276576 4778 scope.go:117] "RemoveContainer" containerID="e20e6fa2d381e3ff917a0f6074e27521c909a7932045eacfc15c005ed843cb93" Mar 12 13:59:29 crc kubenswrapper[4778]: I0312 13:59:29.277335 4778 scope.go:117] "RemoveContainer" containerID="264800b09f45ccd4290c89a1d8ecad1ba09b58524e636d065df86104736d56c0" Mar 
12 13:59:29 crc kubenswrapper[4778]: E0312 13:59:29.277646 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 13:59:43 crc kubenswrapper[4778]: I0312 13:59:43.308306 4778 scope.go:117] "RemoveContainer" containerID="264800b09f45ccd4290c89a1d8ecad1ba09b58524e636d065df86104736d56c0" Mar 12 13:59:43 crc kubenswrapper[4778]: E0312 13:59:43.309040 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 13:59:58 crc kubenswrapper[4778]: I0312 13:59:58.254483 4778 scope.go:117] "RemoveContainer" containerID="264800b09f45ccd4290c89a1d8ecad1ba09b58524e636d065df86104736d56c0" Mar 12 13:59:58 crc kubenswrapper[4778]: E0312 13:59:58.255515 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:00:00 crc kubenswrapper[4778]: I0312 14:00:00.156601 4778 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29555400-c5pzt"] Mar 12 14:00:00 crc kubenswrapper[4778]: E0312 14:00:00.158925 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa7fbd44-1786-4285-a062-f10d0971645b" containerName="extract-content" Mar 12 14:00:00 crc kubenswrapper[4778]: I0312 14:00:00.158958 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa7fbd44-1786-4285-a062-f10d0971645b" containerName="extract-content" Mar 12 14:00:00 crc kubenswrapper[4778]: E0312 14:00:00.158974 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa7fbd44-1786-4285-a062-f10d0971645b" containerName="extract-utilities" Mar 12 14:00:00 crc kubenswrapper[4778]: I0312 14:00:00.158983 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa7fbd44-1786-4285-a062-f10d0971645b" containerName="extract-utilities" Mar 12 14:00:00 crc kubenswrapper[4778]: E0312 14:00:00.159019 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa7fbd44-1786-4285-a062-f10d0971645b" containerName="registry-server" Mar 12 14:00:00 crc kubenswrapper[4778]: I0312 14:00:00.159030 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa7fbd44-1786-4285-a062-f10d0971645b" containerName="registry-server" Mar 12 14:00:00 crc kubenswrapper[4778]: I0312 14:00:00.159312 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa7fbd44-1786-4285-a062-f10d0971645b" containerName="registry-server" Mar 12 14:00:00 crc kubenswrapper[4778]: I0312 14:00:00.160276 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555400-c5pzt" Mar 12 14:00:00 crc kubenswrapper[4778]: I0312 14:00:00.280389 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 14:00:00 crc kubenswrapper[4778]: I0312 14:00:00.280726 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 14:00:00 crc kubenswrapper[4778]: I0312 14:00:00.293921 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 14:00:00 crc kubenswrapper[4778]: I0312 14:00:00.297842 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555400-c5pzt"] Mar 12 14:00:00 crc kubenswrapper[4778]: I0312 14:00:00.323882 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555400-lrxd8"] Mar 12 14:00:00 crc kubenswrapper[4778]: I0312 14:00:00.325435 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555400-lrxd8" Mar 12 14:00:00 crc kubenswrapper[4778]: I0312 14:00:00.327733 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 12 14:00:00 crc kubenswrapper[4778]: I0312 14:00:00.328233 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 12 14:00:00 crc kubenswrapper[4778]: I0312 14:00:00.340335 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt5fv\" (UniqueName: \"kubernetes.io/projected/2425d74f-ef53-43bc-8c8f-976333a9cc6a-kube-api-access-kt5fv\") pod \"auto-csr-approver-29555400-c5pzt\" (UID: \"2425d74f-ef53-43bc-8c8f-976333a9cc6a\") " pod="openshift-infra/auto-csr-approver-29555400-c5pzt" Mar 12 14:00:00 crc kubenswrapper[4778]: I0312 14:00:00.345518 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555400-lrxd8"] Mar 12 14:00:00 crc kubenswrapper[4778]: I0312 14:00:00.442955 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt5fv\" (UniqueName: \"kubernetes.io/projected/2425d74f-ef53-43bc-8c8f-976333a9cc6a-kube-api-access-kt5fv\") pod \"auto-csr-approver-29555400-c5pzt\" (UID: \"2425d74f-ef53-43bc-8c8f-976333a9cc6a\") " pod="openshift-infra/auto-csr-approver-29555400-c5pzt" Mar 12 14:00:00 crc kubenswrapper[4778]: I0312 14:00:00.443036 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jkzm\" (UniqueName: \"kubernetes.io/projected/5d85560c-89e4-4723-beb0-aeda87d0791a-kube-api-access-6jkzm\") pod \"collect-profiles-29555400-lrxd8\" (UID: \"5d85560c-89e4-4723-beb0-aeda87d0791a\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29555400-lrxd8" Mar 12 14:00:00 crc kubenswrapper[4778]: I0312 14:00:00.443093 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5d85560c-89e4-4723-beb0-aeda87d0791a-secret-volume\") pod \"collect-profiles-29555400-lrxd8\" (UID: \"5d85560c-89e4-4723-beb0-aeda87d0791a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555400-lrxd8" Mar 12 14:00:00 crc kubenswrapper[4778]: I0312 14:00:00.443142 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5d85560c-89e4-4723-beb0-aeda87d0791a-config-volume\") pod \"collect-profiles-29555400-lrxd8\" (UID: \"5d85560c-89e4-4723-beb0-aeda87d0791a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555400-lrxd8" Mar 12 14:00:00 crc kubenswrapper[4778]: I0312 14:00:00.465027 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt5fv\" (UniqueName: \"kubernetes.io/projected/2425d74f-ef53-43bc-8c8f-976333a9cc6a-kube-api-access-kt5fv\") pod \"auto-csr-approver-29555400-c5pzt\" (UID: \"2425d74f-ef53-43bc-8c8f-976333a9cc6a\") " pod="openshift-infra/auto-csr-approver-29555400-c5pzt" Mar 12 14:00:00 crc kubenswrapper[4778]: I0312 14:00:00.544534 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5d85560c-89e4-4723-beb0-aeda87d0791a-config-volume\") pod \"collect-profiles-29555400-lrxd8\" (UID: \"5d85560c-89e4-4723-beb0-aeda87d0791a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555400-lrxd8" Mar 12 14:00:00 crc kubenswrapper[4778]: I0312 14:00:00.545729 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jkzm\" (UniqueName: 
\"kubernetes.io/projected/5d85560c-89e4-4723-beb0-aeda87d0791a-kube-api-access-6jkzm\") pod \"collect-profiles-29555400-lrxd8\" (UID: \"5d85560c-89e4-4723-beb0-aeda87d0791a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555400-lrxd8" Mar 12 14:00:00 crc kubenswrapper[4778]: I0312 14:00:00.545842 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5d85560c-89e4-4723-beb0-aeda87d0791a-secret-volume\") pod \"collect-profiles-29555400-lrxd8\" (UID: \"5d85560c-89e4-4723-beb0-aeda87d0791a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555400-lrxd8" Mar 12 14:00:00 crc kubenswrapper[4778]: I0312 14:00:00.546369 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5d85560c-89e4-4723-beb0-aeda87d0791a-config-volume\") pod \"collect-profiles-29555400-lrxd8\" (UID: \"5d85560c-89e4-4723-beb0-aeda87d0791a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555400-lrxd8" Mar 12 14:00:00 crc kubenswrapper[4778]: I0312 14:00:00.559459 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5d85560c-89e4-4723-beb0-aeda87d0791a-secret-volume\") pod \"collect-profiles-29555400-lrxd8\" (UID: \"5d85560c-89e4-4723-beb0-aeda87d0791a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555400-lrxd8" Mar 12 14:00:00 crc kubenswrapper[4778]: I0312 14:00:00.563623 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jkzm\" (UniqueName: \"kubernetes.io/projected/5d85560c-89e4-4723-beb0-aeda87d0791a-kube-api-access-6jkzm\") pod \"collect-profiles-29555400-lrxd8\" (UID: \"5d85560c-89e4-4723-beb0-aeda87d0791a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555400-lrxd8" Mar 12 14:00:00 crc kubenswrapper[4778]: I0312 14:00:00.623168 4778 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555400-c5pzt" Mar 12 14:00:00 crc kubenswrapper[4778]: I0312 14:00:00.645743 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555400-lrxd8" Mar 12 14:00:01 crc kubenswrapper[4778]: I0312 14:00:01.090136 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555400-c5pzt"] Mar 12 14:00:01 crc kubenswrapper[4778]: I0312 14:00:01.147971 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555400-lrxd8"] Mar 12 14:00:01 crc kubenswrapper[4778]: W0312 14:00:01.152014 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d85560c_89e4_4723_beb0_aeda87d0791a.slice/crio-9df22d6360280afbb45c874c3f1c87b75dd48d3df4b442954817d575f3b7402f WatchSource:0}: Error finding container 9df22d6360280afbb45c874c3f1c87b75dd48d3df4b442954817d575f3b7402f: Status 404 returned error can't find the container with id 9df22d6360280afbb45c874c3f1c87b75dd48d3df4b442954817d575f3b7402f Mar 12 14:00:01 crc kubenswrapper[4778]: I0312 14:00:01.601637 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555400-c5pzt" event={"ID":"2425d74f-ef53-43bc-8c8f-976333a9cc6a","Type":"ContainerStarted","Data":"09af7cee21adeff6c51ad2ef43b20a2fcd251532a1429e097c4b6c46a2a78a68"} Mar 12 14:00:01 crc kubenswrapper[4778]: I0312 14:00:01.622602 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555400-lrxd8" event={"ID":"5d85560c-89e4-4723-beb0-aeda87d0791a","Type":"ContainerStarted","Data":"96aa4949ff208afd2c193ba8303ee15ee08731bdac5eecb0faaa4ff029a2c93a"} Mar 12 14:00:01 crc kubenswrapper[4778]: I0312 14:00:01.622672 4778 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555400-lrxd8" event={"ID":"5d85560c-89e4-4723-beb0-aeda87d0791a","Type":"ContainerStarted","Data":"9df22d6360280afbb45c874c3f1c87b75dd48d3df4b442954817d575f3b7402f"} Mar 12 14:00:01 crc kubenswrapper[4778]: I0312 14:00:01.649314 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29555400-lrxd8" podStartSLOduration=1.649294212 podStartE2EDuration="1.649294212s" podCreationTimestamp="2026-03-12 14:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:00:01.645968618 +0000 UTC m=+3020.094664014" watchObservedRunningTime="2026-03-12 14:00:01.649294212 +0000 UTC m=+3020.097989608" Mar 12 14:00:02 crc kubenswrapper[4778]: I0312 14:00:02.632676 4778 generic.go:334] "Generic (PLEG): container finished" podID="5d85560c-89e4-4723-beb0-aeda87d0791a" containerID="96aa4949ff208afd2c193ba8303ee15ee08731bdac5eecb0faaa4ff029a2c93a" exitCode=0 Mar 12 14:00:02 crc kubenswrapper[4778]: I0312 14:00:02.632741 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555400-lrxd8" event={"ID":"5d85560c-89e4-4723-beb0-aeda87d0791a","Type":"ContainerDied","Data":"96aa4949ff208afd2c193ba8303ee15ee08731bdac5eecb0faaa4ff029a2c93a"} Mar 12 14:00:04 crc kubenswrapper[4778]: I0312 14:00:04.040200 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555400-lrxd8" Mar 12 14:00:04 crc kubenswrapper[4778]: I0312 14:00:04.323196 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5d85560c-89e4-4723-beb0-aeda87d0791a-config-volume\") pod \"5d85560c-89e4-4723-beb0-aeda87d0791a\" (UID: \"5d85560c-89e4-4723-beb0-aeda87d0791a\") " Mar 12 14:00:04 crc kubenswrapper[4778]: I0312 14:00:04.323306 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5d85560c-89e4-4723-beb0-aeda87d0791a-secret-volume\") pod \"5d85560c-89e4-4723-beb0-aeda87d0791a\" (UID: \"5d85560c-89e4-4723-beb0-aeda87d0791a\") " Mar 12 14:00:04 crc kubenswrapper[4778]: I0312 14:00:04.323386 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jkzm\" (UniqueName: \"kubernetes.io/projected/5d85560c-89e4-4723-beb0-aeda87d0791a-kube-api-access-6jkzm\") pod \"5d85560c-89e4-4723-beb0-aeda87d0791a\" (UID: \"5d85560c-89e4-4723-beb0-aeda87d0791a\") " Mar 12 14:00:04 crc kubenswrapper[4778]: I0312 14:00:04.335198 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d85560c-89e4-4723-beb0-aeda87d0791a-config-volume" (OuterVolumeSpecName: "config-volume") pod "5d85560c-89e4-4723-beb0-aeda87d0791a" (UID: "5d85560c-89e4-4723-beb0-aeda87d0791a"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:00:04 crc kubenswrapper[4778]: I0312 14:00:04.342139 4778 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5d85560c-89e4-4723-beb0-aeda87d0791a-config-volume\") on node \"crc\" DevicePath \"\"" Mar 12 14:00:04 crc kubenswrapper[4778]: I0312 14:00:04.378907 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d85560c-89e4-4723-beb0-aeda87d0791a-kube-api-access-6jkzm" (OuterVolumeSpecName: "kube-api-access-6jkzm") pod "5d85560c-89e4-4723-beb0-aeda87d0791a" (UID: "5d85560c-89e4-4723-beb0-aeda87d0791a"). InnerVolumeSpecName "kube-api-access-6jkzm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:00:04 crc kubenswrapper[4778]: I0312 14:00:04.393361 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d85560c-89e4-4723-beb0-aeda87d0791a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5d85560c-89e4-4723-beb0-aeda87d0791a" (UID: "5d85560c-89e4-4723-beb0-aeda87d0791a"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:00:04 crc kubenswrapper[4778]: I0312 14:00:04.451731 4778 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5d85560c-89e4-4723-beb0-aeda87d0791a-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 12 14:00:04 crc kubenswrapper[4778]: I0312 14:00:04.451953 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jkzm\" (UniqueName: \"kubernetes.io/projected/5d85560c-89e4-4723-beb0-aeda87d0791a-kube-api-access-6jkzm\") on node \"crc\" DevicePath \"\"" Mar 12 14:00:04 crc kubenswrapper[4778]: I0312 14:00:04.654860 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555400-lrxd8" event={"ID":"5d85560c-89e4-4723-beb0-aeda87d0791a","Type":"ContainerDied","Data":"9df22d6360280afbb45c874c3f1c87b75dd48d3df4b442954817d575f3b7402f"} Mar 12 14:00:04 crc kubenswrapper[4778]: I0312 14:00:04.654910 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9df22d6360280afbb45c874c3f1c87b75dd48d3df4b442954817d575f3b7402f" Mar 12 14:00:04 crc kubenswrapper[4778]: I0312 14:00:04.654982 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555400-lrxd8" Mar 12 14:00:04 crc kubenswrapper[4778]: I0312 14:00:04.710732 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555355-68226"] Mar 12 14:00:04 crc kubenswrapper[4778]: I0312 14:00:04.718713 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555355-68226"] Mar 12 14:00:05 crc kubenswrapper[4778]: I0312 14:00:05.668626 4778 generic.go:334] "Generic (PLEG): container finished" podID="2425d74f-ef53-43bc-8c8f-976333a9cc6a" containerID="363f3ad00ca01b087e83fcbce9630716537dd1aa2dde624be9a2f51cfec1e8a6" exitCode=0 Mar 12 14:00:05 crc kubenswrapper[4778]: I0312 14:00:05.668708 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555400-c5pzt" event={"ID":"2425d74f-ef53-43bc-8c8f-976333a9cc6a","Type":"ContainerDied","Data":"363f3ad00ca01b087e83fcbce9630716537dd1aa2dde624be9a2f51cfec1e8a6"} Mar 12 14:00:06 crc kubenswrapper[4778]: I0312 14:00:06.265311 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6197b3a9-f02f-4e5d-8196-b617fffa467d" path="/var/lib/kubelet/pods/6197b3a9-f02f-4e5d-8196-b617fffa467d/volumes" Mar 12 14:00:07 crc kubenswrapper[4778]: I0312 14:00:07.106573 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555400-c5pzt" Mar 12 14:00:07 crc kubenswrapper[4778]: I0312 14:00:07.268477 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kt5fv\" (UniqueName: \"kubernetes.io/projected/2425d74f-ef53-43bc-8c8f-976333a9cc6a-kube-api-access-kt5fv\") pod \"2425d74f-ef53-43bc-8c8f-976333a9cc6a\" (UID: \"2425d74f-ef53-43bc-8c8f-976333a9cc6a\") " Mar 12 14:00:07 crc kubenswrapper[4778]: I0312 14:00:07.278038 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2425d74f-ef53-43bc-8c8f-976333a9cc6a-kube-api-access-kt5fv" (OuterVolumeSpecName: "kube-api-access-kt5fv") pod "2425d74f-ef53-43bc-8c8f-976333a9cc6a" (UID: "2425d74f-ef53-43bc-8c8f-976333a9cc6a"). InnerVolumeSpecName "kube-api-access-kt5fv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:00:07 crc kubenswrapper[4778]: I0312 14:00:07.372035 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kt5fv\" (UniqueName: \"kubernetes.io/projected/2425d74f-ef53-43bc-8c8f-976333a9cc6a-kube-api-access-kt5fv\") on node \"crc\" DevicePath \"\"" Mar 12 14:00:07 crc kubenswrapper[4778]: I0312 14:00:07.693227 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555400-c5pzt" event={"ID":"2425d74f-ef53-43bc-8c8f-976333a9cc6a","Type":"ContainerDied","Data":"09af7cee21adeff6c51ad2ef43b20a2fcd251532a1429e097c4b6c46a2a78a68"} Mar 12 14:00:07 crc kubenswrapper[4778]: I0312 14:00:07.693453 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09af7cee21adeff6c51ad2ef43b20a2fcd251532a1429e097c4b6c46a2a78a68" Mar 12 14:00:07 crc kubenswrapper[4778]: I0312 14:00:07.693323 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555400-c5pzt" Mar 12 14:00:08 crc kubenswrapper[4778]: I0312 14:00:08.162099 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555394-7f7nf"] Mar 12 14:00:08 crc kubenswrapper[4778]: I0312 14:00:08.172716 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555394-7f7nf"] Mar 12 14:00:08 crc kubenswrapper[4778]: I0312 14:00:08.270392 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65fbb68a-57a2-40bf-9149-6cfe13fe147c" path="/var/lib/kubelet/pods/65fbb68a-57a2-40bf-9149-6cfe13fe147c/volumes" Mar 12 14:00:11 crc kubenswrapper[4778]: I0312 14:00:11.326143 4778 scope.go:117] "RemoveContainer" containerID="264800b09f45ccd4290c89a1d8ecad1ba09b58524e636d065df86104736d56c0" Mar 12 14:00:11 crc kubenswrapper[4778]: E0312 14:00:11.326828 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:00:20 crc kubenswrapper[4778]: I0312 14:00:20.187934 4778 scope.go:117] "RemoveContainer" containerID="3954f4afdb430b04a44fc16681134a45669f465399452c67b26950fbb78cb40a" Mar 12 14:00:20 crc kubenswrapper[4778]: I0312 14:00:20.213278 4778 scope.go:117] "RemoveContainer" containerID="b93a8a130b5f9b7d0852157c6942677a4b8f445ae1cc7062b429977ab9491779" Mar 12 14:00:22 crc kubenswrapper[4778]: I0312 14:00:22.261819 4778 scope.go:117] "RemoveContainer" containerID="264800b09f45ccd4290c89a1d8ecad1ba09b58524e636d065df86104736d56c0" Mar 12 14:00:22 crc kubenswrapper[4778]: E0312 14:00:22.262497 4778 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:00:34 crc kubenswrapper[4778]: I0312 14:00:34.001767 4778 generic.go:334] "Generic (PLEG): container finished" podID="2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1" containerID="a819986e64064460805e2e891a9205102c157b3069f167134a0bc6192d083ab6" exitCode=0 Mar 12 14:00:34 crc kubenswrapper[4778]: I0312 14:00:34.001834 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrk5s" event={"ID":"2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1","Type":"ContainerDied","Data":"a819986e64064460805e2e891a9205102c157b3069f167134a0bc6192d083ab6"} Mar 12 14:00:34 crc kubenswrapper[4778]: I0312 14:00:34.254070 4778 scope.go:117] "RemoveContainer" containerID="264800b09f45ccd4290c89a1d8ecad1ba09b58524e636d065df86104736d56c0" Mar 12 14:00:34 crc kubenswrapper[4778]: E0312 14:00:34.254337 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:00:35 crc kubenswrapper[4778]: I0312 14:00:35.400689 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrk5s" Mar 12 14:00:35 crc kubenswrapper[4778]: I0312 14:00:35.468372 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1-ceilometer-compute-config-data-0\") pod \"2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1\" (UID: \"2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1\") " Mar 12 14:00:35 crc kubenswrapper[4778]: I0312 14:00:35.468482 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1-ceilometer-compute-config-data-1\") pod \"2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1\" (UID: \"2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1\") " Mar 12 14:00:35 crc kubenswrapper[4778]: I0312 14:00:35.468529 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1-ceilometer-compute-config-data-2\") pod \"2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1\" (UID: \"2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1\") " Mar 12 14:00:35 crc kubenswrapper[4778]: I0312 14:00:35.468550 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1-inventory\") pod \"2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1\" (UID: \"2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1\") " Mar 12 14:00:35 crc kubenswrapper[4778]: I0312 14:00:35.468586 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1-ssh-key-openstack-edpm-ipam\") pod \"2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1\" (UID: \"2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1\") " Mar 12 14:00:35 crc 
kubenswrapper[4778]: I0312 14:00:35.468633 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1-telemetry-combined-ca-bundle\") pod \"2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1\" (UID: \"2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1\") " Mar 12 14:00:35 crc kubenswrapper[4778]: I0312 14:00:35.468724 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szp5b\" (UniqueName: \"kubernetes.io/projected/2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1-kube-api-access-szp5b\") pod \"2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1\" (UID: \"2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1\") " Mar 12 14:00:35 crc kubenswrapper[4778]: I0312 14:00:35.474492 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1" (UID: "2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:00:35 crc kubenswrapper[4778]: I0312 14:00:35.474923 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1-kube-api-access-szp5b" (OuterVolumeSpecName: "kube-api-access-szp5b") pod "2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1" (UID: "2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1"). InnerVolumeSpecName "kube-api-access-szp5b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:00:35 crc kubenswrapper[4778]: I0312 14:00:35.496265 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1" (UID: "2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:00:35 crc kubenswrapper[4778]: I0312 14:00:35.498301 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1" (UID: "2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:00:35 crc kubenswrapper[4778]: I0312 14:00:35.500091 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1" (UID: "2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:00:35 crc kubenswrapper[4778]: I0312 14:00:35.500510 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1" (UID: "2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1"). InnerVolumeSpecName "ceilometer-compute-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:00:35 crc kubenswrapper[4778]: I0312 14:00:35.501870 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1-inventory" (OuterVolumeSpecName: "inventory") pod "2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1" (UID: "2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:00:35 crc kubenswrapper[4778]: I0312 14:00:35.570969 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szp5b\" (UniqueName: \"kubernetes.io/projected/2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1-kube-api-access-szp5b\") on node \"crc\" DevicePath \"\"" Mar 12 14:00:35 crc kubenswrapper[4778]: I0312 14:00:35.571302 4778 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Mar 12 14:00:35 crc kubenswrapper[4778]: I0312 14:00:35.571314 4778 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Mar 12 14:00:35 crc kubenswrapper[4778]: I0312 14:00:35.571323 4778 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Mar 12 14:00:35 crc kubenswrapper[4778]: I0312 14:00:35.571335 4778 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1-inventory\") on node \"crc\" DevicePath \"\"" Mar 12 14:00:35 crc kubenswrapper[4778]: I0312 14:00:35.571344 4778 
reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 12 14:00:35 crc kubenswrapper[4778]: I0312 14:00:35.571354 4778 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 14:00:36 crc kubenswrapper[4778]: I0312 14:00:36.018544 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrk5s" event={"ID":"2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1","Type":"ContainerDied","Data":"18c5ac6148c2a101f7ddccfc3e584f782b4abb6f5ed1d881a005c0fa17c5788f"} Mar 12 14:00:36 crc kubenswrapper[4778]: I0312 14:00:36.018584 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18c5ac6148c2a101f7ddccfc3e584f782b4abb6f5ed1d881a005c0fa17c5788f" Mar 12 14:00:36 crc kubenswrapper[4778]: I0312 14:00:36.018611 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrk5s" Mar 12 14:00:46 crc kubenswrapper[4778]: I0312 14:00:46.254252 4778 scope.go:117] "RemoveContainer" containerID="264800b09f45ccd4290c89a1d8ecad1ba09b58524e636d065df86104736d56c0" Mar 12 14:00:46 crc kubenswrapper[4778]: E0312 14:00:46.255390 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:00:58 crc kubenswrapper[4778]: I0312 14:00:58.253987 4778 scope.go:117] "RemoveContainer" containerID="264800b09f45ccd4290c89a1d8ecad1ba09b58524e636d065df86104736d56c0" Mar 12 14:00:58 crc kubenswrapper[4778]: E0312 14:00:58.254853 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:01:00 crc kubenswrapper[4778]: I0312 14:01:00.157620 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29555401-vjgkl"] Mar 12 14:01:00 crc kubenswrapper[4778]: E0312 14:01:00.158722 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Mar 12 14:01:00 crc kubenswrapper[4778]: I0312 14:01:00.158743 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1" 
containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Mar 12 14:01:00 crc kubenswrapper[4778]: E0312 14:01:00.158758 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d85560c-89e4-4723-beb0-aeda87d0791a" containerName="collect-profiles" Mar 12 14:01:00 crc kubenswrapper[4778]: I0312 14:01:00.158777 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d85560c-89e4-4723-beb0-aeda87d0791a" containerName="collect-profiles" Mar 12 14:01:00 crc kubenswrapper[4778]: E0312 14:01:00.158790 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2425d74f-ef53-43bc-8c8f-976333a9cc6a" containerName="oc" Mar 12 14:01:00 crc kubenswrapper[4778]: I0312 14:01:00.158797 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="2425d74f-ef53-43bc-8c8f-976333a9cc6a" containerName="oc" Mar 12 14:01:00 crc kubenswrapper[4778]: I0312 14:01:00.159041 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Mar 12 14:01:00 crc kubenswrapper[4778]: I0312 14:01:00.159056 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="2425d74f-ef53-43bc-8c8f-976333a9cc6a" containerName="oc" Mar 12 14:01:00 crc kubenswrapper[4778]: I0312 14:01:00.159080 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d85560c-89e4-4723-beb0-aeda87d0791a" containerName="collect-profiles" Mar 12 14:01:00 crc kubenswrapper[4778]: I0312 14:01:00.159892 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29555401-vjgkl" Mar 12 14:01:00 crc kubenswrapper[4778]: I0312 14:01:00.198501 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29555401-vjgkl"] Mar 12 14:01:00 crc kubenswrapper[4778]: I0312 14:01:00.225015 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4df6927-3452-4b36-b59a-a1fdcd4272a4-config-data\") pod \"keystone-cron-29555401-vjgkl\" (UID: \"e4df6927-3452-4b36-b59a-a1fdcd4272a4\") " pod="openstack/keystone-cron-29555401-vjgkl" Mar 12 14:01:00 crc kubenswrapper[4778]: I0312 14:01:00.225092 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb9xz\" (UniqueName: \"kubernetes.io/projected/e4df6927-3452-4b36-b59a-a1fdcd4272a4-kube-api-access-lb9xz\") pod \"keystone-cron-29555401-vjgkl\" (UID: \"e4df6927-3452-4b36-b59a-a1fdcd4272a4\") " pod="openstack/keystone-cron-29555401-vjgkl" Mar 12 14:01:00 crc kubenswrapper[4778]: I0312 14:01:00.225122 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e4df6927-3452-4b36-b59a-a1fdcd4272a4-fernet-keys\") pod \"keystone-cron-29555401-vjgkl\" (UID: \"e4df6927-3452-4b36-b59a-a1fdcd4272a4\") " pod="openstack/keystone-cron-29555401-vjgkl" Mar 12 14:01:00 crc kubenswrapper[4778]: I0312 14:01:00.225235 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4df6927-3452-4b36-b59a-a1fdcd4272a4-combined-ca-bundle\") pod \"keystone-cron-29555401-vjgkl\" (UID: \"e4df6927-3452-4b36-b59a-a1fdcd4272a4\") " pod="openstack/keystone-cron-29555401-vjgkl" Mar 12 14:01:00 crc kubenswrapper[4778]: I0312 14:01:00.328091 4778 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4df6927-3452-4b36-b59a-a1fdcd4272a4-combined-ca-bundle\") pod \"keystone-cron-29555401-vjgkl\" (UID: \"e4df6927-3452-4b36-b59a-a1fdcd4272a4\") " pod="openstack/keystone-cron-29555401-vjgkl" Mar 12 14:01:00 crc kubenswrapper[4778]: I0312 14:01:00.332336 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4df6927-3452-4b36-b59a-a1fdcd4272a4-config-data\") pod \"keystone-cron-29555401-vjgkl\" (UID: \"e4df6927-3452-4b36-b59a-a1fdcd4272a4\") " pod="openstack/keystone-cron-29555401-vjgkl" Mar 12 14:01:00 crc kubenswrapper[4778]: I0312 14:01:00.340037 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lb9xz\" (UniqueName: \"kubernetes.io/projected/e4df6927-3452-4b36-b59a-a1fdcd4272a4-kube-api-access-lb9xz\") pod \"keystone-cron-29555401-vjgkl\" (UID: \"e4df6927-3452-4b36-b59a-a1fdcd4272a4\") " pod="openstack/keystone-cron-29555401-vjgkl" Mar 12 14:01:00 crc kubenswrapper[4778]: I0312 14:01:00.340137 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e4df6927-3452-4b36-b59a-a1fdcd4272a4-fernet-keys\") pod \"keystone-cron-29555401-vjgkl\" (UID: \"e4df6927-3452-4b36-b59a-a1fdcd4272a4\") " pod="openstack/keystone-cron-29555401-vjgkl" Mar 12 14:01:00 crc kubenswrapper[4778]: I0312 14:01:00.342541 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4df6927-3452-4b36-b59a-a1fdcd4272a4-combined-ca-bundle\") pod \"keystone-cron-29555401-vjgkl\" (UID: \"e4df6927-3452-4b36-b59a-a1fdcd4272a4\") " pod="openstack/keystone-cron-29555401-vjgkl" Mar 12 14:01:00 crc kubenswrapper[4778]: I0312 14:01:00.343901 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e4df6927-3452-4b36-b59a-a1fdcd4272a4-config-data\") pod \"keystone-cron-29555401-vjgkl\" (UID: \"e4df6927-3452-4b36-b59a-a1fdcd4272a4\") " pod="openstack/keystone-cron-29555401-vjgkl" Mar 12 14:01:00 crc kubenswrapper[4778]: I0312 14:01:00.344467 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e4df6927-3452-4b36-b59a-a1fdcd4272a4-fernet-keys\") pod \"keystone-cron-29555401-vjgkl\" (UID: \"e4df6927-3452-4b36-b59a-a1fdcd4272a4\") " pod="openstack/keystone-cron-29555401-vjgkl" Mar 12 14:01:00 crc kubenswrapper[4778]: I0312 14:01:00.371213 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lb9xz\" (UniqueName: \"kubernetes.io/projected/e4df6927-3452-4b36-b59a-a1fdcd4272a4-kube-api-access-lb9xz\") pod \"keystone-cron-29555401-vjgkl\" (UID: \"e4df6927-3452-4b36-b59a-a1fdcd4272a4\") " pod="openstack/keystone-cron-29555401-vjgkl" Mar 12 14:01:00 crc kubenswrapper[4778]: I0312 14:01:00.513911 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29555401-vjgkl" Mar 12 14:01:00 crc kubenswrapper[4778]: I0312 14:01:00.975969 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29555401-vjgkl"] Mar 12 14:01:01 crc kubenswrapper[4778]: I0312 14:01:01.236272 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29555401-vjgkl" event={"ID":"e4df6927-3452-4b36-b59a-a1fdcd4272a4","Type":"ContainerStarted","Data":"9086b9928613830092038a1fc1873e5d9952f8b26aed48c1a8aeece2a4bbfb3a"} Mar 12 14:01:01 crc kubenswrapper[4778]: I0312 14:01:01.236318 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29555401-vjgkl" event={"ID":"e4df6927-3452-4b36-b59a-a1fdcd4272a4","Type":"ContainerStarted","Data":"bf78dfb35e03fa1908b33d1f483749337571e17bce7c98498613e80b293582e2"} Mar 12 14:01:03 crc kubenswrapper[4778]: I0312 14:01:03.255980 4778 generic.go:334] "Generic (PLEG): container finished" podID="e4df6927-3452-4b36-b59a-a1fdcd4272a4" containerID="9086b9928613830092038a1fc1873e5d9952f8b26aed48c1a8aeece2a4bbfb3a" exitCode=0 Mar 12 14:01:03 crc kubenswrapper[4778]: I0312 14:01:03.256042 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29555401-vjgkl" event={"ID":"e4df6927-3452-4b36-b59a-a1fdcd4272a4","Type":"ContainerDied","Data":"9086b9928613830092038a1fc1873e5d9952f8b26aed48c1a8aeece2a4bbfb3a"} Mar 12 14:01:04 crc kubenswrapper[4778]: I0312 14:01:04.604822 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29555401-vjgkl" Mar 12 14:01:04 crc kubenswrapper[4778]: I0312 14:01:04.627161 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4df6927-3452-4b36-b59a-a1fdcd4272a4-config-data\") pod \"e4df6927-3452-4b36-b59a-a1fdcd4272a4\" (UID: \"e4df6927-3452-4b36-b59a-a1fdcd4272a4\") " Mar 12 14:01:04 crc kubenswrapper[4778]: I0312 14:01:04.627281 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lb9xz\" (UniqueName: \"kubernetes.io/projected/e4df6927-3452-4b36-b59a-a1fdcd4272a4-kube-api-access-lb9xz\") pod \"e4df6927-3452-4b36-b59a-a1fdcd4272a4\" (UID: \"e4df6927-3452-4b36-b59a-a1fdcd4272a4\") " Mar 12 14:01:04 crc kubenswrapper[4778]: I0312 14:01:04.627363 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e4df6927-3452-4b36-b59a-a1fdcd4272a4-fernet-keys\") pod \"e4df6927-3452-4b36-b59a-a1fdcd4272a4\" (UID: \"e4df6927-3452-4b36-b59a-a1fdcd4272a4\") " Mar 12 14:01:04 crc kubenswrapper[4778]: I0312 14:01:04.627556 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4df6927-3452-4b36-b59a-a1fdcd4272a4-combined-ca-bundle\") pod \"e4df6927-3452-4b36-b59a-a1fdcd4272a4\" (UID: \"e4df6927-3452-4b36-b59a-a1fdcd4272a4\") " Mar 12 14:01:04 crc kubenswrapper[4778]: I0312 14:01:04.633394 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4df6927-3452-4b36-b59a-a1fdcd4272a4-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e4df6927-3452-4b36-b59a-a1fdcd4272a4" (UID: "e4df6927-3452-4b36-b59a-a1fdcd4272a4"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:01:04 crc kubenswrapper[4778]: I0312 14:01:04.634041 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4df6927-3452-4b36-b59a-a1fdcd4272a4-kube-api-access-lb9xz" (OuterVolumeSpecName: "kube-api-access-lb9xz") pod "e4df6927-3452-4b36-b59a-a1fdcd4272a4" (UID: "e4df6927-3452-4b36-b59a-a1fdcd4272a4"). InnerVolumeSpecName "kube-api-access-lb9xz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:01:04 crc kubenswrapper[4778]: I0312 14:01:04.663385 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4df6927-3452-4b36-b59a-a1fdcd4272a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e4df6927-3452-4b36-b59a-a1fdcd4272a4" (UID: "e4df6927-3452-4b36-b59a-a1fdcd4272a4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:01:04 crc kubenswrapper[4778]: I0312 14:01:04.688177 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4df6927-3452-4b36-b59a-a1fdcd4272a4-config-data" (OuterVolumeSpecName: "config-data") pod "e4df6927-3452-4b36-b59a-a1fdcd4272a4" (UID: "e4df6927-3452-4b36-b59a-a1fdcd4272a4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:01:04 crc kubenswrapper[4778]: I0312 14:01:04.731335 4778 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e4df6927-3452-4b36-b59a-a1fdcd4272a4-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 12 14:01:04 crc kubenswrapper[4778]: I0312 14:01:04.731373 4778 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4df6927-3452-4b36-b59a-a1fdcd4272a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 14:01:04 crc kubenswrapper[4778]: I0312 14:01:04.731386 4778 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4df6927-3452-4b36-b59a-a1fdcd4272a4-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 14:01:04 crc kubenswrapper[4778]: I0312 14:01:04.731397 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lb9xz\" (UniqueName: \"kubernetes.io/projected/e4df6927-3452-4b36-b59a-a1fdcd4272a4-kube-api-access-lb9xz\") on node \"crc\" DevicePath \"\"" Mar 12 14:01:05 crc kubenswrapper[4778]: I0312 14:01:05.275140 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29555401-vjgkl" event={"ID":"e4df6927-3452-4b36-b59a-a1fdcd4272a4","Type":"ContainerDied","Data":"bf78dfb35e03fa1908b33d1f483749337571e17bce7c98498613e80b293582e2"} Mar 12 14:01:05 crc kubenswrapper[4778]: I0312 14:01:05.275753 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf78dfb35e03fa1908b33d1f483749337571e17bce7c98498613e80b293582e2" Mar 12 14:01:05 crc kubenswrapper[4778]: I0312 14:01:05.275438 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29555401-vjgkl" Mar 12 14:01:13 crc kubenswrapper[4778]: I0312 14:01:13.253963 4778 scope.go:117] "RemoveContainer" containerID="264800b09f45ccd4290c89a1d8ecad1ba09b58524e636d065df86104736d56c0" Mar 12 14:01:13 crc kubenswrapper[4778]: E0312 14:01:13.254940 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:01:27 crc kubenswrapper[4778]: I0312 14:01:27.254681 4778 scope.go:117] "RemoveContainer" containerID="264800b09f45ccd4290c89a1d8ecad1ba09b58524e636d065df86104736d56c0" Mar 12 14:01:27 crc kubenswrapper[4778]: E0312 14:01:27.255488 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:01:33 crc kubenswrapper[4778]: I0312 14:01:33.920335 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Mar 12 14:01:33 crc kubenswrapper[4778]: E0312 14:01:33.921327 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4df6927-3452-4b36-b59a-a1fdcd4272a4" containerName="keystone-cron" Mar 12 14:01:33 crc kubenswrapper[4778]: I0312 14:01:33.921345 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4df6927-3452-4b36-b59a-a1fdcd4272a4" containerName="keystone-cron" Mar 12 14:01:33 crc 
kubenswrapper[4778]: I0312 14:01:33.921576 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4df6927-3452-4b36-b59a-a1fdcd4272a4" containerName="keystone-cron" Mar 12 14:01:33 crc kubenswrapper[4778]: I0312 14:01:33.922358 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Mar 12 14:01:33 crc kubenswrapper[4778]: I0312 14:01:33.926060 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-s8dkq" Mar 12 14:01:33 crc kubenswrapper[4778]: I0312 14:01:33.926117 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Mar 12 14:01:33 crc kubenswrapper[4778]: I0312 14:01:33.926218 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Mar 12 14:01:33 crc kubenswrapper[4778]: I0312 14:01:33.938756 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Mar 12 14:01:33 crc kubenswrapper[4778]: I0312 14:01:33.942167 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Mar 12 14:01:34 crc kubenswrapper[4778]: I0312 14:01:34.012075 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"74897d0a-ca7b-4589-bd4c-75910c2d491c\") " pod="openstack/tempest-tests-tempest" Mar 12 14:01:34 crc kubenswrapper[4778]: I0312 14:01:34.012211 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/74897d0a-ca7b-4589-bd4c-75910c2d491c-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"74897d0a-ca7b-4589-bd4c-75910c2d491c\") " 
pod="openstack/tempest-tests-tempest" Mar 12 14:01:34 crc kubenswrapper[4778]: I0312 14:01:34.012246 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/74897d0a-ca7b-4589-bd4c-75910c2d491c-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"74897d0a-ca7b-4589-bd4c-75910c2d491c\") " pod="openstack/tempest-tests-tempest" Mar 12 14:01:34 crc kubenswrapper[4778]: I0312 14:01:34.013205 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/74897d0a-ca7b-4589-bd4c-75910c2d491c-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"74897d0a-ca7b-4589-bd4c-75910c2d491c\") " pod="openstack/tempest-tests-tempest" Mar 12 14:01:34 crc kubenswrapper[4778]: I0312 14:01:34.013341 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/74897d0a-ca7b-4589-bd4c-75910c2d491c-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"74897d0a-ca7b-4589-bd4c-75910c2d491c\") " pod="openstack/tempest-tests-tempest" Mar 12 14:01:34 crc kubenswrapper[4778]: I0312 14:01:34.013378 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/74897d0a-ca7b-4589-bd4c-75910c2d491c-config-data\") pod \"tempest-tests-tempest\" (UID: \"74897d0a-ca7b-4589-bd4c-75910c2d491c\") " pod="openstack/tempest-tests-tempest" Mar 12 14:01:34 crc kubenswrapper[4778]: I0312 14:01:34.013432 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ssdw\" (UniqueName: \"kubernetes.io/projected/74897d0a-ca7b-4589-bd4c-75910c2d491c-kube-api-access-4ssdw\") pod \"tempest-tests-tempest\" (UID: \"74897d0a-ca7b-4589-bd4c-75910c2d491c\") " 
pod="openstack/tempest-tests-tempest" Mar 12 14:01:34 crc kubenswrapper[4778]: I0312 14:01:34.013485 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/74897d0a-ca7b-4589-bd4c-75910c2d491c-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"74897d0a-ca7b-4589-bd4c-75910c2d491c\") " pod="openstack/tempest-tests-tempest" Mar 12 14:01:34 crc kubenswrapper[4778]: I0312 14:01:34.013668 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/74897d0a-ca7b-4589-bd4c-75910c2d491c-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"74897d0a-ca7b-4589-bd4c-75910c2d491c\") " pod="openstack/tempest-tests-tempest" Mar 12 14:01:34 crc kubenswrapper[4778]: I0312 14:01:34.116012 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/74897d0a-ca7b-4589-bd4c-75910c2d491c-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"74897d0a-ca7b-4589-bd4c-75910c2d491c\") " pod="openstack/tempest-tests-tempest" Mar 12 14:01:34 crc kubenswrapper[4778]: I0312 14:01:34.116073 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"74897d0a-ca7b-4589-bd4c-75910c2d491c\") " pod="openstack/tempest-tests-tempest" Mar 12 14:01:34 crc kubenswrapper[4778]: I0312 14:01:34.116132 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/74897d0a-ca7b-4589-bd4c-75910c2d491c-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"74897d0a-ca7b-4589-bd4c-75910c2d491c\") " pod="openstack/tempest-tests-tempest" Mar 12 14:01:34 
crc kubenswrapper[4778]: I0312 14:01:34.116165 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/74897d0a-ca7b-4589-bd4c-75910c2d491c-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"74897d0a-ca7b-4589-bd4c-75910c2d491c\") " pod="openstack/tempest-tests-tempest" Mar 12 14:01:34 crc kubenswrapper[4778]: I0312 14:01:34.116280 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/74897d0a-ca7b-4589-bd4c-75910c2d491c-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"74897d0a-ca7b-4589-bd4c-75910c2d491c\") " pod="openstack/tempest-tests-tempest" Mar 12 14:01:34 crc kubenswrapper[4778]: I0312 14:01:34.116320 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/74897d0a-ca7b-4589-bd4c-75910c2d491c-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"74897d0a-ca7b-4589-bd4c-75910c2d491c\") " pod="openstack/tempest-tests-tempest" Mar 12 14:01:34 crc kubenswrapper[4778]: I0312 14:01:34.116343 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/74897d0a-ca7b-4589-bd4c-75910c2d491c-config-data\") pod \"tempest-tests-tempest\" (UID: \"74897d0a-ca7b-4589-bd4c-75910c2d491c\") " pod="openstack/tempest-tests-tempest" Mar 12 14:01:34 crc kubenswrapper[4778]: I0312 14:01:34.116383 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ssdw\" (UniqueName: \"kubernetes.io/projected/74897d0a-ca7b-4589-bd4c-75910c2d491c-kube-api-access-4ssdw\") pod \"tempest-tests-tempest\" (UID: \"74897d0a-ca7b-4589-bd4c-75910c2d491c\") " pod="openstack/tempest-tests-tempest" Mar 12 14:01:34 crc kubenswrapper[4778]: I0312 14:01:34.116416 4778 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/74897d0a-ca7b-4589-bd4c-75910c2d491c-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"74897d0a-ca7b-4589-bd4c-75910c2d491c\") " pod="openstack/tempest-tests-tempest" Mar 12 14:01:34 crc kubenswrapper[4778]: I0312 14:01:34.116900 4778 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"74897d0a-ca7b-4589-bd4c-75910c2d491c\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/tempest-tests-tempest" Mar 12 14:01:34 crc kubenswrapper[4778]: I0312 14:01:34.117094 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/74897d0a-ca7b-4589-bd4c-75910c2d491c-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"74897d0a-ca7b-4589-bd4c-75910c2d491c\") " pod="openstack/tempest-tests-tempest" Mar 12 14:01:34 crc kubenswrapper[4778]: I0312 14:01:34.117769 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/74897d0a-ca7b-4589-bd4c-75910c2d491c-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"74897d0a-ca7b-4589-bd4c-75910c2d491c\") " pod="openstack/tempest-tests-tempest" Mar 12 14:01:34 crc kubenswrapper[4778]: I0312 14:01:34.117942 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/74897d0a-ca7b-4589-bd4c-75910c2d491c-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"74897d0a-ca7b-4589-bd4c-75910c2d491c\") " pod="openstack/tempest-tests-tempest" Mar 12 14:01:34 crc kubenswrapper[4778]: I0312 14:01:34.118066 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/74897d0a-ca7b-4589-bd4c-75910c2d491c-config-data\") pod \"tempest-tests-tempest\" (UID: \"74897d0a-ca7b-4589-bd4c-75910c2d491c\") " pod="openstack/tempest-tests-tempest" Mar 12 14:01:34 crc kubenswrapper[4778]: I0312 14:01:34.136009 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/74897d0a-ca7b-4589-bd4c-75910c2d491c-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"74897d0a-ca7b-4589-bd4c-75910c2d491c\") " pod="openstack/tempest-tests-tempest" Mar 12 14:01:34 crc kubenswrapper[4778]: I0312 14:01:34.136353 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/74897d0a-ca7b-4589-bd4c-75910c2d491c-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"74897d0a-ca7b-4589-bd4c-75910c2d491c\") " pod="openstack/tempest-tests-tempest" Mar 12 14:01:34 crc kubenswrapper[4778]: I0312 14:01:34.143133 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/74897d0a-ca7b-4589-bd4c-75910c2d491c-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"74897d0a-ca7b-4589-bd4c-75910c2d491c\") " pod="openstack/tempest-tests-tempest" Mar 12 14:01:34 crc kubenswrapper[4778]: I0312 14:01:34.146712 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ssdw\" (UniqueName: \"kubernetes.io/projected/74897d0a-ca7b-4589-bd4c-75910c2d491c-kube-api-access-4ssdw\") pod \"tempest-tests-tempest\" (UID: \"74897d0a-ca7b-4589-bd4c-75910c2d491c\") " pod="openstack/tempest-tests-tempest" Mar 12 14:01:34 crc kubenswrapper[4778]: I0312 14:01:34.166800 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"74897d0a-ca7b-4589-bd4c-75910c2d491c\") " 
pod="openstack/tempest-tests-tempest" Mar 12 14:01:34 crc kubenswrapper[4778]: I0312 14:01:34.257641 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Mar 12 14:01:34 crc kubenswrapper[4778]: I0312 14:01:34.805741 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Mar 12 14:01:34 crc kubenswrapper[4778]: I0312 14:01:34.813399 4778 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 14:01:35 crc kubenswrapper[4778]: I0312 14:01:35.544887 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"74897d0a-ca7b-4589-bd4c-75910c2d491c","Type":"ContainerStarted","Data":"454ca901956127a4048551d166d33c00269e2d8a18f508b4b327654529c385c0"} Mar 12 14:01:42 crc kubenswrapper[4778]: I0312 14:01:42.271074 4778 scope.go:117] "RemoveContainer" containerID="264800b09f45ccd4290c89a1d8ecad1ba09b58524e636d065df86104736d56c0" Mar 12 14:01:42 crc kubenswrapper[4778]: E0312 14:01:42.272452 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:01:53 crc kubenswrapper[4778]: I0312 14:01:53.254141 4778 scope.go:117] "RemoveContainer" containerID="264800b09f45ccd4290c89a1d8ecad1ba09b58524e636d065df86104736d56c0" Mar 12 14:01:53 crc kubenswrapper[4778]: E0312 14:01:53.254688 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:02:00 crc kubenswrapper[4778]: I0312 14:02:00.159558 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555402-xtt9v"] Mar 12 14:02:00 crc kubenswrapper[4778]: I0312 14:02:00.161671 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555402-xtt9v" Mar 12 14:02:00 crc kubenswrapper[4778]: I0312 14:02:00.164336 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 14:02:00 crc kubenswrapper[4778]: I0312 14:02:00.165318 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 14:02:00 crc kubenswrapper[4778]: I0312 14:02:00.166475 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 14:02:00 crc kubenswrapper[4778]: I0312 14:02:00.171549 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555402-xtt9v"] Mar 12 14:02:00 crc kubenswrapper[4778]: I0312 14:02:00.298963 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99gxb\" (UniqueName: \"kubernetes.io/projected/be917952-7177-4ef5-9efa-7858d1a11ded-kube-api-access-99gxb\") pod \"auto-csr-approver-29555402-xtt9v\" (UID: \"be917952-7177-4ef5-9efa-7858d1a11ded\") " pod="openshift-infra/auto-csr-approver-29555402-xtt9v" Mar 12 14:02:00 crc kubenswrapper[4778]: I0312 14:02:00.401026 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99gxb\" (UniqueName: \"kubernetes.io/projected/be917952-7177-4ef5-9efa-7858d1a11ded-kube-api-access-99gxb\") 
pod \"auto-csr-approver-29555402-xtt9v\" (UID: \"be917952-7177-4ef5-9efa-7858d1a11ded\") " pod="openshift-infra/auto-csr-approver-29555402-xtt9v" Mar 12 14:02:00 crc kubenswrapper[4778]: I0312 14:02:00.428261 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99gxb\" (UniqueName: \"kubernetes.io/projected/be917952-7177-4ef5-9efa-7858d1a11ded-kube-api-access-99gxb\") pod \"auto-csr-approver-29555402-xtt9v\" (UID: \"be917952-7177-4ef5-9efa-7858d1a11ded\") " pod="openshift-infra/auto-csr-approver-29555402-xtt9v" Mar 12 14:02:00 crc kubenswrapper[4778]: I0312 14:02:00.492418 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555402-xtt9v" Mar 12 14:02:04 crc kubenswrapper[4778]: E0312 14:02:04.854373 4778 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Mar 12 14:02:04 crc kubenswrapper[4778]: E0312 14:02:04.854991 4778 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4ssdw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:n
il,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(74897d0a-ca7b-4589-bd4c-75910c2d491c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 12 14:02:04 crc kubenswrapper[4778]: E0312 14:02:04.858282 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="74897d0a-ca7b-4589-bd4c-75910c2d491c" Mar 12 14:02:05 crc kubenswrapper[4778]: I0312 14:02:05.208705 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555402-xtt9v"] Mar 12 14:02:05 crc kubenswrapper[4778]: I0312 14:02:05.834460 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555402-xtt9v" 
event={"ID":"be917952-7177-4ef5-9efa-7858d1a11ded","Type":"ContainerStarted","Data":"af14200afd084ab4265094e5fc43e707cf661ff1277fb7f0a1a629d498a2dca8"} Mar 12 14:02:05 crc kubenswrapper[4778]: E0312 14:02:05.836230 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="74897d0a-ca7b-4589-bd4c-75910c2d491c" Mar 12 14:02:06 crc kubenswrapper[4778]: I0312 14:02:06.254987 4778 scope.go:117] "RemoveContainer" containerID="264800b09f45ccd4290c89a1d8ecad1ba09b58524e636d065df86104736d56c0" Mar 12 14:02:06 crc kubenswrapper[4778]: E0312 14:02:06.255584 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:02:07 crc kubenswrapper[4778]: I0312 14:02:07.856840 4778 generic.go:334] "Generic (PLEG): container finished" podID="be917952-7177-4ef5-9efa-7858d1a11ded" containerID="b626545edbe9764de0b916e68f0836b92c6dbff05d2ae4f9ae924f063217aca7" exitCode=0 Mar 12 14:02:07 crc kubenswrapper[4778]: I0312 14:02:07.856982 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555402-xtt9v" event={"ID":"be917952-7177-4ef5-9efa-7858d1a11ded","Type":"ContainerDied","Data":"b626545edbe9764de0b916e68f0836b92c6dbff05d2ae4f9ae924f063217aca7"} Mar 12 14:02:09 crc kubenswrapper[4778]: I0312 14:02:09.189268 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555402-xtt9v" Mar 12 14:02:09 crc kubenswrapper[4778]: I0312 14:02:09.226595 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99gxb\" (UniqueName: \"kubernetes.io/projected/be917952-7177-4ef5-9efa-7858d1a11ded-kube-api-access-99gxb\") pod \"be917952-7177-4ef5-9efa-7858d1a11ded\" (UID: \"be917952-7177-4ef5-9efa-7858d1a11ded\") " Mar 12 14:02:09 crc kubenswrapper[4778]: I0312 14:02:09.232884 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be917952-7177-4ef5-9efa-7858d1a11ded-kube-api-access-99gxb" (OuterVolumeSpecName: "kube-api-access-99gxb") pod "be917952-7177-4ef5-9efa-7858d1a11ded" (UID: "be917952-7177-4ef5-9efa-7858d1a11ded"). InnerVolumeSpecName "kube-api-access-99gxb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:02:09 crc kubenswrapper[4778]: I0312 14:02:09.329201 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99gxb\" (UniqueName: \"kubernetes.io/projected/be917952-7177-4ef5-9efa-7858d1a11ded-kube-api-access-99gxb\") on node \"crc\" DevicePath \"\"" Mar 12 14:02:09 crc kubenswrapper[4778]: I0312 14:02:09.873968 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555402-xtt9v" event={"ID":"be917952-7177-4ef5-9efa-7858d1a11ded","Type":"ContainerDied","Data":"af14200afd084ab4265094e5fc43e707cf661ff1277fb7f0a1a629d498a2dca8"} Mar 12 14:02:09 crc kubenswrapper[4778]: I0312 14:02:09.874269 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af14200afd084ab4265094e5fc43e707cf661ff1277fb7f0a1a629d498a2dca8" Mar 12 14:02:09 crc kubenswrapper[4778]: I0312 14:02:09.874026 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555402-xtt9v" Mar 12 14:02:10 crc kubenswrapper[4778]: I0312 14:02:10.264715 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555396-lhqkd"] Mar 12 14:02:10 crc kubenswrapper[4778]: I0312 14:02:10.275575 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555396-lhqkd"] Mar 12 14:02:12 crc kubenswrapper[4778]: I0312 14:02:12.265811 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90b32527-d7b2-4938-a8c2-882067947e78" path="/var/lib/kubelet/pods/90b32527-d7b2-4938-a8c2-882067947e78/volumes" Mar 12 14:02:17 crc kubenswrapper[4778]: I0312 14:02:17.254514 4778 scope.go:117] "RemoveContainer" containerID="264800b09f45ccd4290c89a1d8ecad1ba09b58524e636d065df86104736d56c0" Mar 12 14:02:17 crc kubenswrapper[4778]: E0312 14:02:17.255785 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:02:18 crc kubenswrapper[4778]: I0312 14:02:18.976201 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Mar 12 14:02:20 crc kubenswrapper[4778]: I0312 14:02:20.350743 4778 scope.go:117] "RemoveContainer" containerID="f6e775ed356b4c920e47d4cd6b52c164df8562cf9b83a71ba23edcf8ae60ceb9" Mar 12 14:02:20 crc kubenswrapper[4778]: I0312 14:02:20.974486 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" 
event={"ID":"74897d0a-ca7b-4589-bd4c-75910c2d491c","Type":"ContainerStarted","Data":"04824fe8df9ecfce713c8136bfb0516b3d49f4264b49ad91474ebd09ae740d91"} Mar 12 14:02:21 crc kubenswrapper[4778]: I0312 14:02:21.005330 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.845323112 podStartE2EDuration="49.005308291s" podCreationTimestamp="2026-03-12 14:01:32 +0000 UTC" firstStartedPulling="2026-03-12 14:01:34.813214932 +0000 UTC m=+3113.261910328" lastFinishedPulling="2026-03-12 14:02:18.973200101 +0000 UTC m=+3157.421895507" observedRunningTime="2026-03-12 14:02:20.995552244 +0000 UTC m=+3159.444247640" watchObservedRunningTime="2026-03-12 14:02:21.005308291 +0000 UTC m=+3159.454003687" Mar 12 14:02:31 crc kubenswrapper[4778]: I0312 14:02:31.253745 4778 scope.go:117] "RemoveContainer" containerID="264800b09f45ccd4290c89a1d8ecad1ba09b58524e636d065df86104736d56c0" Mar 12 14:02:31 crc kubenswrapper[4778]: E0312 14:02:31.254625 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:02:46 crc kubenswrapper[4778]: I0312 14:02:46.254387 4778 scope.go:117] "RemoveContainer" containerID="264800b09f45ccd4290c89a1d8ecad1ba09b58524e636d065df86104736d56c0" Mar 12 14:02:46 crc kubenswrapper[4778]: E0312 14:02:46.255135 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:03:00 crc kubenswrapper[4778]: I0312 14:03:00.253836 4778 scope.go:117] "RemoveContainer" containerID="264800b09f45ccd4290c89a1d8ecad1ba09b58524e636d065df86104736d56c0" Mar 12 14:03:00 crc kubenswrapper[4778]: E0312 14:03:00.254684 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:03:13 crc kubenswrapper[4778]: I0312 14:03:13.253742 4778 scope.go:117] "RemoveContainer" containerID="264800b09f45ccd4290c89a1d8ecad1ba09b58524e636d065df86104736d56c0" Mar 12 14:03:13 crc kubenswrapper[4778]: E0312 14:03:13.254528 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:03:25 crc kubenswrapper[4778]: I0312 14:03:25.255483 4778 scope.go:117] "RemoveContainer" containerID="264800b09f45ccd4290c89a1d8ecad1ba09b58524e636d065df86104736d56c0" Mar 12 14:03:25 crc kubenswrapper[4778]: E0312 14:03:25.256428 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:03:40 crc kubenswrapper[4778]: I0312 14:03:40.254380 4778 scope.go:117] "RemoveContainer" containerID="264800b09f45ccd4290c89a1d8ecad1ba09b58524e636d065df86104736d56c0" Mar 12 14:03:40 crc kubenswrapper[4778]: E0312 14:03:40.255079 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:03:55 crc kubenswrapper[4778]: I0312 14:03:55.254213 4778 scope.go:117] "RemoveContainer" containerID="264800b09f45ccd4290c89a1d8ecad1ba09b58524e636d065df86104736d56c0" Mar 12 14:03:55 crc kubenswrapper[4778]: E0312 14:03:55.254882 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:04:00 crc kubenswrapper[4778]: I0312 14:04:00.176765 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555404-jwm56"] Mar 12 14:04:00 crc kubenswrapper[4778]: E0312 14:04:00.177933 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be917952-7177-4ef5-9efa-7858d1a11ded" containerName="oc" Mar 12 14:04:00 crc 
kubenswrapper[4778]: I0312 14:04:00.177950 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="be917952-7177-4ef5-9efa-7858d1a11ded" containerName="oc" Mar 12 14:04:00 crc kubenswrapper[4778]: I0312 14:04:00.178235 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="be917952-7177-4ef5-9efa-7858d1a11ded" containerName="oc" Mar 12 14:04:00 crc kubenswrapper[4778]: I0312 14:04:00.179095 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555404-jwm56" Mar 12 14:04:00 crc kubenswrapper[4778]: I0312 14:04:00.181320 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 14:04:00 crc kubenswrapper[4778]: I0312 14:04:00.181653 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 14:04:00 crc kubenswrapper[4778]: I0312 14:04:00.182164 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 14:04:00 crc kubenswrapper[4778]: I0312 14:04:00.194345 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555404-jwm56"] Mar 12 14:04:00 crc kubenswrapper[4778]: I0312 14:04:00.241386 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4zdm\" (UniqueName: \"kubernetes.io/projected/67604e51-359f-4c7f-b7df-a4f215a87085-kube-api-access-j4zdm\") pod \"auto-csr-approver-29555404-jwm56\" (UID: \"67604e51-359f-4c7f-b7df-a4f215a87085\") " pod="openshift-infra/auto-csr-approver-29555404-jwm56" Mar 12 14:04:00 crc kubenswrapper[4778]: I0312 14:04:00.343894 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4zdm\" (UniqueName: \"kubernetes.io/projected/67604e51-359f-4c7f-b7df-a4f215a87085-kube-api-access-j4zdm\") pod \"auto-csr-approver-29555404-jwm56\" 
(UID: \"67604e51-359f-4c7f-b7df-a4f215a87085\") " pod="openshift-infra/auto-csr-approver-29555404-jwm56" Mar 12 14:04:00 crc kubenswrapper[4778]: I0312 14:04:00.370474 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4zdm\" (UniqueName: \"kubernetes.io/projected/67604e51-359f-4c7f-b7df-a4f215a87085-kube-api-access-j4zdm\") pod \"auto-csr-approver-29555404-jwm56\" (UID: \"67604e51-359f-4c7f-b7df-a4f215a87085\") " pod="openshift-infra/auto-csr-approver-29555404-jwm56" Mar 12 14:04:00 crc kubenswrapper[4778]: I0312 14:04:00.522937 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555404-jwm56" Mar 12 14:04:01 crc kubenswrapper[4778]: I0312 14:04:01.004013 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555404-jwm56"] Mar 12 14:04:01 crc kubenswrapper[4778]: I0312 14:04:01.841563 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555404-jwm56" event={"ID":"67604e51-359f-4c7f-b7df-a4f215a87085","Type":"ContainerStarted","Data":"1b601cd7deaebbb8c485b6a4bc8cce371566ad54d4f0487a2fe0ff8eb085f00b"} Mar 12 14:04:02 crc kubenswrapper[4778]: I0312 14:04:02.851195 4778 generic.go:334] "Generic (PLEG): container finished" podID="67604e51-359f-4c7f-b7df-a4f215a87085" containerID="67c3ac2c335344f6b2ef2e71132a310b8eda046527619858c13389f0ce08da63" exitCode=0 Mar 12 14:04:02 crc kubenswrapper[4778]: I0312 14:04:02.851382 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555404-jwm56" event={"ID":"67604e51-359f-4c7f-b7df-a4f215a87085","Type":"ContainerDied","Data":"67c3ac2c335344f6b2ef2e71132a310b8eda046527619858c13389f0ce08da63"} Mar 12 14:04:04 crc kubenswrapper[4778]: I0312 14:04:04.417945 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555404-jwm56" Mar 12 14:04:04 crc kubenswrapper[4778]: I0312 14:04:04.531230 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4zdm\" (UniqueName: \"kubernetes.io/projected/67604e51-359f-4c7f-b7df-a4f215a87085-kube-api-access-j4zdm\") pod \"67604e51-359f-4c7f-b7df-a4f215a87085\" (UID: \"67604e51-359f-4c7f-b7df-a4f215a87085\") " Mar 12 14:04:04 crc kubenswrapper[4778]: I0312 14:04:04.536460 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67604e51-359f-4c7f-b7df-a4f215a87085-kube-api-access-j4zdm" (OuterVolumeSpecName: "kube-api-access-j4zdm") pod "67604e51-359f-4c7f-b7df-a4f215a87085" (UID: "67604e51-359f-4c7f-b7df-a4f215a87085"). InnerVolumeSpecName "kube-api-access-j4zdm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:04:04 crc kubenswrapper[4778]: I0312 14:04:04.633973 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4zdm\" (UniqueName: \"kubernetes.io/projected/67604e51-359f-4c7f-b7df-a4f215a87085-kube-api-access-j4zdm\") on node \"crc\" DevicePath \"\"" Mar 12 14:04:04 crc kubenswrapper[4778]: I0312 14:04:04.898274 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555404-jwm56" event={"ID":"67604e51-359f-4c7f-b7df-a4f215a87085","Type":"ContainerDied","Data":"1b601cd7deaebbb8c485b6a4bc8cce371566ad54d4f0487a2fe0ff8eb085f00b"} Mar 12 14:04:04 crc kubenswrapper[4778]: I0312 14:04:04.898322 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b601cd7deaebbb8c485b6a4bc8cce371566ad54d4f0487a2fe0ff8eb085f00b" Mar 12 14:04:04 crc kubenswrapper[4778]: I0312 14:04:04.898350 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555404-jwm56" Mar 12 14:04:05 crc kubenswrapper[4778]: I0312 14:04:05.484591 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555398-hhchd"] Mar 12 14:04:05 crc kubenswrapper[4778]: I0312 14:04:05.494064 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555398-hhchd"] Mar 12 14:04:06 crc kubenswrapper[4778]: I0312 14:04:06.265735 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a479324-f9a1-4095-a0b1-7c22fc72eb61" path="/var/lib/kubelet/pods/6a479324-f9a1-4095-a0b1-7c22fc72eb61/volumes" Mar 12 14:04:10 crc kubenswrapper[4778]: I0312 14:04:10.254231 4778 scope.go:117] "RemoveContainer" containerID="264800b09f45ccd4290c89a1d8ecad1ba09b58524e636d065df86104736d56c0" Mar 12 14:04:10 crc kubenswrapper[4778]: E0312 14:04:10.254894 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:04:20 crc kubenswrapper[4778]: I0312 14:04:20.469655 4778 scope.go:117] "RemoveContainer" containerID="7f07e770195234611f35ac5fc4d8c046a4e07dd2e554f881a1c216e51689e210" Mar 12 14:04:22 crc kubenswrapper[4778]: I0312 14:04:22.260421 4778 scope.go:117] "RemoveContainer" containerID="264800b09f45ccd4290c89a1d8ecad1ba09b58524e636d065df86104736d56c0" Mar 12 14:04:22 crc kubenswrapper[4778]: E0312 14:04:22.261899 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:04:34 crc kubenswrapper[4778]: I0312 14:04:34.254053 4778 scope.go:117] "RemoveContainer" containerID="264800b09f45ccd4290c89a1d8ecad1ba09b58524e636d065df86104736d56c0" Mar 12 14:04:35 crc kubenswrapper[4778]: I0312 14:04:35.183424 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" event={"ID":"24438fc6-dab0-4a9e-8b97-2532da76d9cd","Type":"ContainerStarted","Data":"84eb4f64f5e57ea7581e624359f9a06ffee621fbf6407e2f32f007351966b81b"} Mar 12 14:05:01 crc kubenswrapper[4778]: I0312 14:05:01.512919 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-452rn"] Mar 12 14:05:01 crc kubenswrapper[4778]: E0312 14:05:01.513885 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67604e51-359f-4c7f-b7df-a4f215a87085" containerName="oc" Mar 12 14:05:01 crc kubenswrapper[4778]: I0312 14:05:01.513899 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="67604e51-359f-4c7f-b7df-a4f215a87085" containerName="oc" Mar 12 14:05:01 crc kubenswrapper[4778]: I0312 14:05:01.514120 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="67604e51-359f-4c7f-b7df-a4f215a87085" containerName="oc" Mar 12 14:05:01 crc kubenswrapper[4778]: I0312 14:05:01.515655 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-452rn" Mar 12 14:05:01 crc kubenswrapper[4778]: I0312 14:05:01.525169 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-452rn"] Mar 12 14:05:01 crc kubenswrapper[4778]: I0312 14:05:01.557837 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cb5f22f-5751-44aa-9532-51d2f950ee49-catalog-content\") pod \"certified-operators-452rn\" (UID: \"3cb5f22f-5751-44aa-9532-51d2f950ee49\") " pod="openshift-marketplace/certified-operators-452rn" Mar 12 14:05:01 crc kubenswrapper[4778]: I0312 14:05:01.558168 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pl6b\" (UniqueName: \"kubernetes.io/projected/3cb5f22f-5751-44aa-9532-51d2f950ee49-kube-api-access-7pl6b\") pod \"certified-operators-452rn\" (UID: \"3cb5f22f-5751-44aa-9532-51d2f950ee49\") " pod="openshift-marketplace/certified-operators-452rn" Mar 12 14:05:01 crc kubenswrapper[4778]: I0312 14:05:01.558341 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cb5f22f-5751-44aa-9532-51d2f950ee49-utilities\") pod \"certified-operators-452rn\" (UID: \"3cb5f22f-5751-44aa-9532-51d2f950ee49\") " pod="openshift-marketplace/certified-operators-452rn" Mar 12 14:05:01 crc kubenswrapper[4778]: I0312 14:05:01.659986 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cb5f22f-5751-44aa-9532-51d2f950ee49-catalog-content\") pod \"certified-operators-452rn\" (UID: \"3cb5f22f-5751-44aa-9532-51d2f950ee49\") " pod="openshift-marketplace/certified-operators-452rn" Mar 12 14:05:01 crc kubenswrapper[4778]: I0312 14:05:01.660121 4778 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-7pl6b\" (UniqueName: \"kubernetes.io/projected/3cb5f22f-5751-44aa-9532-51d2f950ee49-kube-api-access-7pl6b\") pod \"certified-operators-452rn\" (UID: \"3cb5f22f-5751-44aa-9532-51d2f950ee49\") " pod="openshift-marketplace/certified-operators-452rn" Mar 12 14:05:01 crc kubenswrapper[4778]: I0312 14:05:01.660155 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cb5f22f-5751-44aa-9532-51d2f950ee49-utilities\") pod \"certified-operators-452rn\" (UID: \"3cb5f22f-5751-44aa-9532-51d2f950ee49\") " pod="openshift-marketplace/certified-operators-452rn" Mar 12 14:05:01 crc kubenswrapper[4778]: I0312 14:05:01.660607 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cb5f22f-5751-44aa-9532-51d2f950ee49-catalog-content\") pod \"certified-operators-452rn\" (UID: \"3cb5f22f-5751-44aa-9532-51d2f950ee49\") " pod="openshift-marketplace/certified-operators-452rn" Mar 12 14:05:01 crc kubenswrapper[4778]: I0312 14:05:01.660645 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cb5f22f-5751-44aa-9532-51d2f950ee49-utilities\") pod \"certified-operators-452rn\" (UID: \"3cb5f22f-5751-44aa-9532-51d2f950ee49\") " pod="openshift-marketplace/certified-operators-452rn" Mar 12 14:05:01 crc kubenswrapper[4778]: I0312 14:05:01.683018 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pl6b\" (UniqueName: \"kubernetes.io/projected/3cb5f22f-5751-44aa-9532-51d2f950ee49-kube-api-access-7pl6b\") pod \"certified-operators-452rn\" (UID: \"3cb5f22f-5751-44aa-9532-51d2f950ee49\") " pod="openshift-marketplace/certified-operators-452rn" Mar 12 14:05:01 crc kubenswrapper[4778]: I0312 14:05:01.835862 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-452rn" Mar 12 14:05:02 crc kubenswrapper[4778]: I0312 14:05:02.370155 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-452rn"] Mar 12 14:05:02 crc kubenswrapper[4778]: I0312 14:05:02.515164 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-79r2g"] Mar 12 14:05:02 crc kubenswrapper[4778]: I0312 14:05:02.517971 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-79r2g" Mar 12 14:05:02 crc kubenswrapper[4778]: I0312 14:05:02.534431 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-79r2g"] Mar 12 14:05:02 crc kubenswrapper[4778]: I0312 14:05:02.559899 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-452rn" event={"ID":"3cb5f22f-5751-44aa-9532-51d2f950ee49","Type":"ContainerStarted","Data":"4c152f196c68c9f25ad019e5ad6afcb88abbcc0e6d071be4e2c4f328dcfe1a46"} Mar 12 14:05:02 crc kubenswrapper[4778]: I0312 14:05:02.584585 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/799d5e51-87fe-402d-9189-d2c430d8225c-utilities\") pod \"community-operators-79r2g\" (UID: \"799d5e51-87fe-402d-9189-d2c430d8225c\") " pod="openshift-marketplace/community-operators-79r2g" Mar 12 14:05:02 crc kubenswrapper[4778]: I0312 14:05:02.587284 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6kfs\" (UniqueName: \"kubernetes.io/projected/799d5e51-87fe-402d-9189-d2c430d8225c-kube-api-access-n6kfs\") pod \"community-operators-79r2g\" (UID: \"799d5e51-87fe-402d-9189-d2c430d8225c\") " pod="openshift-marketplace/community-operators-79r2g" Mar 12 14:05:02 crc kubenswrapper[4778]: I0312 
14:05:02.587502 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/799d5e51-87fe-402d-9189-d2c430d8225c-catalog-content\") pod \"community-operators-79r2g\" (UID: \"799d5e51-87fe-402d-9189-d2c430d8225c\") " pod="openshift-marketplace/community-operators-79r2g" Mar 12 14:05:02 crc kubenswrapper[4778]: I0312 14:05:02.689065 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6kfs\" (UniqueName: \"kubernetes.io/projected/799d5e51-87fe-402d-9189-d2c430d8225c-kube-api-access-n6kfs\") pod \"community-operators-79r2g\" (UID: \"799d5e51-87fe-402d-9189-d2c430d8225c\") " pod="openshift-marketplace/community-operators-79r2g" Mar 12 14:05:02 crc kubenswrapper[4778]: I0312 14:05:02.689170 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/799d5e51-87fe-402d-9189-d2c430d8225c-catalog-content\") pod \"community-operators-79r2g\" (UID: \"799d5e51-87fe-402d-9189-d2c430d8225c\") " pod="openshift-marketplace/community-operators-79r2g" Mar 12 14:05:02 crc kubenswrapper[4778]: I0312 14:05:02.689303 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/799d5e51-87fe-402d-9189-d2c430d8225c-utilities\") pod \"community-operators-79r2g\" (UID: \"799d5e51-87fe-402d-9189-d2c430d8225c\") " pod="openshift-marketplace/community-operators-79r2g" Mar 12 14:05:02 crc kubenswrapper[4778]: I0312 14:05:02.689676 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/799d5e51-87fe-402d-9189-d2c430d8225c-catalog-content\") pod \"community-operators-79r2g\" (UID: \"799d5e51-87fe-402d-9189-d2c430d8225c\") " pod="openshift-marketplace/community-operators-79r2g" Mar 12 14:05:02 crc kubenswrapper[4778]: I0312 
14:05:02.689771 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/799d5e51-87fe-402d-9189-d2c430d8225c-utilities\") pod \"community-operators-79r2g\" (UID: \"799d5e51-87fe-402d-9189-d2c430d8225c\") " pod="openshift-marketplace/community-operators-79r2g" Mar 12 14:05:02 crc kubenswrapper[4778]: I0312 14:05:02.714321 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6kfs\" (UniqueName: \"kubernetes.io/projected/799d5e51-87fe-402d-9189-d2c430d8225c-kube-api-access-n6kfs\") pod \"community-operators-79r2g\" (UID: \"799d5e51-87fe-402d-9189-d2c430d8225c\") " pod="openshift-marketplace/community-operators-79r2g" Mar 12 14:05:02 crc kubenswrapper[4778]: I0312 14:05:02.878417 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-79r2g" Mar 12 14:05:03 crc kubenswrapper[4778]: I0312 14:05:03.404088 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-79r2g"] Mar 12 14:05:03 crc kubenswrapper[4778]: I0312 14:05:03.570623 4778 generic.go:334] "Generic (PLEG): container finished" podID="3cb5f22f-5751-44aa-9532-51d2f950ee49" containerID="fe41bd24a0ab71cebcec1476a92f57f89206d945a71642088676557cf061ae1b" exitCode=0 Mar 12 14:05:03 crc kubenswrapper[4778]: I0312 14:05:03.570807 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-452rn" event={"ID":"3cb5f22f-5751-44aa-9532-51d2f950ee49","Type":"ContainerDied","Data":"fe41bd24a0ab71cebcec1476a92f57f89206d945a71642088676557cf061ae1b"} Mar 12 14:05:03 crc kubenswrapper[4778]: I0312 14:05:03.572277 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-79r2g" event={"ID":"799d5e51-87fe-402d-9189-d2c430d8225c","Type":"ContainerStarted","Data":"24c962fb193a47f0c7153106b10da358aafd6cfe8775f1e11077c86cd86f8779"} 
Mar 12 14:05:04 crc kubenswrapper[4778]: I0312 14:05:04.583463 4778 generic.go:334] "Generic (PLEG): container finished" podID="799d5e51-87fe-402d-9189-d2c430d8225c" containerID="5311e6c99490119871346b089342ebf7ab5a0ba98c0dc3500c748eb884a5609f" exitCode=0 Mar 12 14:05:04 crc kubenswrapper[4778]: I0312 14:05:04.583552 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-79r2g" event={"ID":"799d5e51-87fe-402d-9189-d2c430d8225c","Type":"ContainerDied","Data":"5311e6c99490119871346b089342ebf7ab5a0ba98c0dc3500c748eb884a5609f"} Mar 12 14:05:05 crc kubenswrapper[4778]: I0312 14:05:05.598941 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-452rn" event={"ID":"3cb5f22f-5751-44aa-9532-51d2f950ee49","Type":"ContainerStarted","Data":"6089ff4895f75a6eaae315a037e64eff2551aa8e0e865f56674c8421758bee60"} Mar 12 14:05:06 crc kubenswrapper[4778]: I0312 14:05:06.609760 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-79r2g" event={"ID":"799d5e51-87fe-402d-9189-d2c430d8225c","Type":"ContainerStarted","Data":"df1ceb7dfdb20eb8574a75ba7a0b1ae7824eb1149a2e784e4f635dced47b8167"} Mar 12 14:05:08 crc kubenswrapper[4778]: I0312 14:05:08.680789 4778 generic.go:334] "Generic (PLEG): container finished" podID="799d5e51-87fe-402d-9189-d2c430d8225c" containerID="df1ceb7dfdb20eb8574a75ba7a0b1ae7824eb1149a2e784e4f635dced47b8167" exitCode=0 Mar 12 14:05:08 crc kubenswrapper[4778]: I0312 14:05:08.680887 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-79r2g" event={"ID":"799d5e51-87fe-402d-9189-d2c430d8225c","Type":"ContainerDied","Data":"df1ceb7dfdb20eb8574a75ba7a0b1ae7824eb1149a2e784e4f635dced47b8167"} Mar 12 14:05:08 crc kubenswrapper[4778]: I0312 14:05:08.683889 4778 generic.go:334] "Generic (PLEG): container finished" podID="3cb5f22f-5751-44aa-9532-51d2f950ee49" 
containerID="6089ff4895f75a6eaae315a037e64eff2551aa8e0e865f56674c8421758bee60" exitCode=0 Mar 12 14:05:08 crc kubenswrapper[4778]: I0312 14:05:08.683920 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-452rn" event={"ID":"3cb5f22f-5751-44aa-9532-51d2f950ee49","Type":"ContainerDied","Data":"6089ff4895f75a6eaae315a037e64eff2551aa8e0e865f56674c8421758bee60"} Mar 12 14:05:09 crc kubenswrapper[4778]: I0312 14:05:09.697007 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-79r2g" event={"ID":"799d5e51-87fe-402d-9189-d2c430d8225c","Type":"ContainerStarted","Data":"27772db6c5a7076fc1876873c5a4dffb045dd74854ff2e79c70cca97cebb207f"} Mar 12 14:05:09 crc kubenswrapper[4778]: I0312 14:05:09.700160 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-452rn" event={"ID":"3cb5f22f-5751-44aa-9532-51d2f950ee49","Type":"ContainerStarted","Data":"de577889462420ffee500a8838f5146588348b4a1a1dbbd50304bd68fabfdd7d"} Mar 12 14:05:09 crc kubenswrapper[4778]: I0312 14:05:09.719556 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-79r2g" podStartSLOduration=3.165486016 podStartE2EDuration="7.719535819s" podCreationTimestamp="2026-03-12 14:05:02 +0000 UTC" firstStartedPulling="2026-03-12 14:05:04.585838231 +0000 UTC m=+3323.034533627" lastFinishedPulling="2026-03-12 14:05:09.139888034 +0000 UTC m=+3327.588583430" observedRunningTime="2026-03-12 14:05:09.713839367 +0000 UTC m=+3328.162534763" watchObservedRunningTime="2026-03-12 14:05:09.719535819 +0000 UTC m=+3328.168231215" Mar 12 14:05:11 crc kubenswrapper[4778]: I0312 14:05:11.836908 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-452rn" Mar 12 14:05:11 crc kubenswrapper[4778]: I0312 14:05:11.837595 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-452rn" Mar 12 14:05:12 crc kubenswrapper[4778]: I0312 14:05:12.879252 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-79r2g" Mar 12 14:05:12 crc kubenswrapper[4778]: I0312 14:05:12.879606 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-79r2g" Mar 12 14:05:12 crc kubenswrapper[4778]: I0312 14:05:12.880981 4778 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-452rn" podUID="3cb5f22f-5751-44aa-9532-51d2f950ee49" containerName="registry-server" probeResult="failure" output=< Mar 12 14:05:12 crc kubenswrapper[4778]: timeout: failed to connect service ":50051" within 1s Mar 12 14:05:12 crc kubenswrapper[4778]: > Mar 12 14:05:12 crc kubenswrapper[4778]: I0312 14:05:12.925387 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-79r2g" Mar 12 14:05:12 crc kubenswrapper[4778]: I0312 14:05:12.950260 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-452rn" podStartSLOduration=6.272736369 podStartE2EDuration="11.950231393s" podCreationTimestamp="2026-03-12 14:05:01 +0000 UTC" firstStartedPulling="2026-03-12 14:05:03.573743273 +0000 UTC m=+3322.022438669" lastFinishedPulling="2026-03-12 14:05:09.251238297 +0000 UTC m=+3327.699933693" observedRunningTime="2026-03-12 14:05:09.734456443 +0000 UTC m=+3328.183151859" watchObservedRunningTime="2026-03-12 14:05:12.950231393 +0000 UTC m=+3331.398926789" Mar 12 14:05:21 crc kubenswrapper[4778]: I0312 14:05:21.914634 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-452rn" Mar 12 14:05:21 crc kubenswrapper[4778]: I0312 14:05:21.998812 4778 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-452rn" Mar 12 14:05:22 crc kubenswrapper[4778]: I0312 14:05:22.172087 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-452rn"] Mar 12 14:05:22 crc kubenswrapper[4778]: I0312 14:05:22.924492 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-79r2g" Mar 12 14:05:23 crc kubenswrapper[4778]: I0312 14:05:23.917352 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-452rn" podUID="3cb5f22f-5751-44aa-9532-51d2f950ee49" containerName="registry-server" containerID="cri-o://de577889462420ffee500a8838f5146588348b4a1a1dbbd50304bd68fabfdd7d" gracePeriod=2 Mar 12 14:05:24 crc kubenswrapper[4778]: I0312 14:05:24.569934 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-452rn" Mar 12 14:05:24 crc kubenswrapper[4778]: I0312 14:05:24.570375 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-79r2g"] Mar 12 14:05:24 crc kubenswrapper[4778]: I0312 14:05:24.570596 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-79r2g" podUID="799d5e51-87fe-402d-9189-d2c430d8225c" containerName="registry-server" containerID="cri-o://27772db6c5a7076fc1876873c5a4dffb045dd74854ff2e79c70cca97cebb207f" gracePeriod=2 Mar 12 14:05:24 crc kubenswrapper[4778]: I0312 14:05:24.669247 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cb5f22f-5751-44aa-9532-51d2f950ee49-catalog-content\") pod \"3cb5f22f-5751-44aa-9532-51d2f950ee49\" (UID: \"3cb5f22f-5751-44aa-9532-51d2f950ee49\") " Mar 12 14:05:24 crc kubenswrapper[4778]: I0312 14:05:24.669455 4778 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cb5f22f-5751-44aa-9532-51d2f950ee49-utilities\") pod \"3cb5f22f-5751-44aa-9532-51d2f950ee49\" (UID: \"3cb5f22f-5751-44aa-9532-51d2f950ee49\") " Mar 12 14:05:24 crc kubenswrapper[4778]: I0312 14:05:24.669510 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pl6b\" (UniqueName: \"kubernetes.io/projected/3cb5f22f-5751-44aa-9532-51d2f950ee49-kube-api-access-7pl6b\") pod \"3cb5f22f-5751-44aa-9532-51d2f950ee49\" (UID: \"3cb5f22f-5751-44aa-9532-51d2f950ee49\") " Mar 12 14:05:24 crc kubenswrapper[4778]: I0312 14:05:24.670505 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cb5f22f-5751-44aa-9532-51d2f950ee49-utilities" (OuterVolumeSpecName: "utilities") pod "3cb5f22f-5751-44aa-9532-51d2f950ee49" (UID: "3cb5f22f-5751-44aa-9532-51d2f950ee49"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:05:24 crc kubenswrapper[4778]: I0312 14:05:24.699281 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb5f22f-5751-44aa-9532-51d2f950ee49-kube-api-access-7pl6b" (OuterVolumeSpecName: "kube-api-access-7pl6b") pod "3cb5f22f-5751-44aa-9532-51d2f950ee49" (UID: "3cb5f22f-5751-44aa-9532-51d2f950ee49"). InnerVolumeSpecName "kube-api-access-7pl6b". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:05:24 crc kubenswrapper[4778]: I0312 14:05:24.743516 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cb5f22f-5751-44aa-9532-51d2f950ee49-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3cb5f22f-5751-44aa-9532-51d2f950ee49" (UID: "3cb5f22f-5751-44aa-9532-51d2f950ee49"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:05:24 crc kubenswrapper[4778]: I0312 14:05:24.771303 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pl6b\" (UniqueName: \"kubernetes.io/projected/3cb5f22f-5751-44aa-9532-51d2f950ee49-kube-api-access-7pl6b\") on node \"crc\" DevicePath \"\"" Mar 12 14:05:24 crc kubenswrapper[4778]: I0312 14:05:24.771333 4778 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cb5f22f-5751-44aa-9532-51d2f950ee49-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 14:05:24 crc kubenswrapper[4778]: I0312 14:05:24.771346 4778 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cb5f22f-5751-44aa-9532-51d2f950ee49-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 14:05:24 crc kubenswrapper[4778]: I0312 14:05:24.931263 4778 generic.go:334] "Generic (PLEG): container finished" podID="3cb5f22f-5751-44aa-9532-51d2f950ee49" containerID="de577889462420ffee500a8838f5146588348b4a1a1dbbd50304bd68fabfdd7d" exitCode=0 Mar 12 14:05:24 crc kubenswrapper[4778]: I0312 14:05:24.931364 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-452rn" Mar 12 14:05:24 crc kubenswrapper[4778]: I0312 14:05:24.931384 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-452rn" event={"ID":"3cb5f22f-5751-44aa-9532-51d2f950ee49","Type":"ContainerDied","Data":"de577889462420ffee500a8838f5146588348b4a1a1dbbd50304bd68fabfdd7d"} Mar 12 14:05:24 crc kubenswrapper[4778]: I0312 14:05:24.931458 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-452rn" event={"ID":"3cb5f22f-5751-44aa-9532-51d2f950ee49","Type":"ContainerDied","Data":"4c152f196c68c9f25ad019e5ad6afcb88abbcc0e6d071be4e2c4f328dcfe1a46"} Mar 12 14:05:24 crc kubenswrapper[4778]: I0312 14:05:24.931486 4778 scope.go:117] "RemoveContainer" containerID="de577889462420ffee500a8838f5146588348b4a1a1dbbd50304bd68fabfdd7d" Mar 12 14:05:24 crc kubenswrapper[4778]: I0312 14:05:24.934668 4778 generic.go:334] "Generic (PLEG): container finished" podID="799d5e51-87fe-402d-9189-d2c430d8225c" containerID="27772db6c5a7076fc1876873c5a4dffb045dd74854ff2e79c70cca97cebb207f" exitCode=0 Mar 12 14:05:24 crc kubenswrapper[4778]: I0312 14:05:24.934703 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-79r2g" event={"ID":"799d5e51-87fe-402d-9189-d2c430d8225c","Type":"ContainerDied","Data":"27772db6c5a7076fc1876873c5a4dffb045dd74854ff2e79c70cca97cebb207f"} Mar 12 14:05:24 crc kubenswrapper[4778]: I0312 14:05:24.963747 4778 scope.go:117] "RemoveContainer" containerID="6089ff4895f75a6eaae315a037e64eff2551aa8e0e865f56674c8421758bee60" Mar 12 14:05:25 crc kubenswrapper[4778]: I0312 14:05:25.011100 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-452rn"] Mar 12 14:05:25 crc kubenswrapper[4778]: I0312 14:05:25.020555 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/certified-operators-452rn"] Mar 12 14:05:25 crc kubenswrapper[4778]: I0312 14:05:25.047639 4778 scope.go:117] "RemoveContainer" containerID="fe41bd24a0ab71cebcec1476a92f57f89206d945a71642088676557cf061ae1b" Mar 12 14:05:25 crc kubenswrapper[4778]: I0312 14:05:25.095867 4778 scope.go:117] "RemoveContainer" containerID="de577889462420ffee500a8838f5146588348b4a1a1dbbd50304bd68fabfdd7d" Mar 12 14:05:25 crc kubenswrapper[4778]: E0312 14:05:25.096746 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de577889462420ffee500a8838f5146588348b4a1a1dbbd50304bd68fabfdd7d\": container with ID starting with de577889462420ffee500a8838f5146588348b4a1a1dbbd50304bd68fabfdd7d not found: ID does not exist" containerID="de577889462420ffee500a8838f5146588348b4a1a1dbbd50304bd68fabfdd7d" Mar 12 14:05:25 crc kubenswrapper[4778]: I0312 14:05:25.096815 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de577889462420ffee500a8838f5146588348b4a1a1dbbd50304bd68fabfdd7d"} err="failed to get container status \"de577889462420ffee500a8838f5146588348b4a1a1dbbd50304bd68fabfdd7d\": rpc error: code = NotFound desc = could not find container \"de577889462420ffee500a8838f5146588348b4a1a1dbbd50304bd68fabfdd7d\": container with ID starting with de577889462420ffee500a8838f5146588348b4a1a1dbbd50304bd68fabfdd7d not found: ID does not exist" Mar 12 14:05:25 crc kubenswrapper[4778]: I0312 14:05:25.096845 4778 scope.go:117] "RemoveContainer" containerID="6089ff4895f75a6eaae315a037e64eff2551aa8e0e865f56674c8421758bee60" Mar 12 14:05:25 crc kubenswrapper[4778]: E0312 14:05:25.097329 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6089ff4895f75a6eaae315a037e64eff2551aa8e0e865f56674c8421758bee60\": container with ID starting with 
6089ff4895f75a6eaae315a037e64eff2551aa8e0e865f56674c8421758bee60 not found: ID does not exist" containerID="6089ff4895f75a6eaae315a037e64eff2551aa8e0e865f56674c8421758bee60" Mar 12 14:05:25 crc kubenswrapper[4778]: I0312 14:05:25.097352 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6089ff4895f75a6eaae315a037e64eff2551aa8e0e865f56674c8421758bee60"} err="failed to get container status \"6089ff4895f75a6eaae315a037e64eff2551aa8e0e865f56674c8421758bee60\": rpc error: code = NotFound desc = could not find container \"6089ff4895f75a6eaae315a037e64eff2551aa8e0e865f56674c8421758bee60\": container with ID starting with 6089ff4895f75a6eaae315a037e64eff2551aa8e0e865f56674c8421758bee60 not found: ID does not exist" Mar 12 14:05:25 crc kubenswrapper[4778]: I0312 14:05:25.097370 4778 scope.go:117] "RemoveContainer" containerID="fe41bd24a0ab71cebcec1476a92f57f89206d945a71642088676557cf061ae1b" Mar 12 14:05:25 crc kubenswrapper[4778]: E0312 14:05:25.098308 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe41bd24a0ab71cebcec1476a92f57f89206d945a71642088676557cf061ae1b\": container with ID starting with fe41bd24a0ab71cebcec1476a92f57f89206d945a71642088676557cf061ae1b not found: ID does not exist" containerID="fe41bd24a0ab71cebcec1476a92f57f89206d945a71642088676557cf061ae1b" Mar 12 14:05:25 crc kubenswrapper[4778]: I0312 14:05:25.098334 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe41bd24a0ab71cebcec1476a92f57f89206d945a71642088676557cf061ae1b"} err="failed to get container status \"fe41bd24a0ab71cebcec1476a92f57f89206d945a71642088676557cf061ae1b\": rpc error: code = NotFound desc = could not find container \"fe41bd24a0ab71cebcec1476a92f57f89206d945a71642088676557cf061ae1b\": container with ID starting with fe41bd24a0ab71cebcec1476a92f57f89206d945a71642088676557cf061ae1b not found: ID does not 
exist" Mar 12 14:05:25 crc kubenswrapper[4778]: I0312 14:05:25.306434 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-79r2g" Mar 12 14:05:25 crc kubenswrapper[4778]: I0312 14:05:25.395638 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/799d5e51-87fe-402d-9189-d2c430d8225c-utilities\") pod \"799d5e51-87fe-402d-9189-d2c430d8225c\" (UID: \"799d5e51-87fe-402d-9189-d2c430d8225c\") " Mar 12 14:05:25 crc kubenswrapper[4778]: I0312 14:05:25.395855 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6kfs\" (UniqueName: \"kubernetes.io/projected/799d5e51-87fe-402d-9189-d2c430d8225c-kube-api-access-n6kfs\") pod \"799d5e51-87fe-402d-9189-d2c430d8225c\" (UID: \"799d5e51-87fe-402d-9189-d2c430d8225c\") " Mar 12 14:05:25 crc kubenswrapper[4778]: I0312 14:05:25.395995 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/799d5e51-87fe-402d-9189-d2c430d8225c-catalog-content\") pod \"799d5e51-87fe-402d-9189-d2c430d8225c\" (UID: \"799d5e51-87fe-402d-9189-d2c430d8225c\") " Mar 12 14:05:25 crc kubenswrapper[4778]: I0312 14:05:25.396650 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/799d5e51-87fe-402d-9189-d2c430d8225c-utilities" (OuterVolumeSpecName: "utilities") pod "799d5e51-87fe-402d-9189-d2c430d8225c" (UID: "799d5e51-87fe-402d-9189-d2c430d8225c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:05:25 crc kubenswrapper[4778]: I0312 14:05:25.396933 4778 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/799d5e51-87fe-402d-9189-d2c430d8225c-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 14:05:25 crc kubenswrapper[4778]: I0312 14:05:25.409639 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/799d5e51-87fe-402d-9189-d2c430d8225c-kube-api-access-n6kfs" (OuterVolumeSpecName: "kube-api-access-n6kfs") pod "799d5e51-87fe-402d-9189-d2c430d8225c" (UID: "799d5e51-87fe-402d-9189-d2c430d8225c"). InnerVolumeSpecName "kube-api-access-n6kfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:05:25 crc kubenswrapper[4778]: I0312 14:05:25.453880 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/799d5e51-87fe-402d-9189-d2c430d8225c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "799d5e51-87fe-402d-9189-d2c430d8225c" (UID: "799d5e51-87fe-402d-9189-d2c430d8225c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:05:25 crc kubenswrapper[4778]: I0312 14:05:25.499459 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6kfs\" (UniqueName: \"kubernetes.io/projected/799d5e51-87fe-402d-9189-d2c430d8225c-kube-api-access-n6kfs\") on node \"crc\" DevicePath \"\"" Mar 12 14:05:25 crc kubenswrapper[4778]: I0312 14:05:25.499508 4778 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/799d5e51-87fe-402d-9189-d2c430d8225c-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 14:05:25 crc kubenswrapper[4778]: I0312 14:05:25.950265 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-79r2g" event={"ID":"799d5e51-87fe-402d-9189-d2c430d8225c","Type":"ContainerDied","Data":"24c962fb193a47f0c7153106b10da358aafd6cfe8775f1e11077c86cd86f8779"} Mar 12 14:05:25 crc kubenswrapper[4778]: I0312 14:05:25.950328 4778 scope.go:117] "RemoveContainer" containerID="27772db6c5a7076fc1876873c5a4dffb045dd74854ff2e79c70cca97cebb207f" Mar 12 14:05:25 crc kubenswrapper[4778]: I0312 14:05:25.951341 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-79r2g" Mar 12 14:05:25 crc kubenswrapper[4778]: I0312 14:05:25.975295 4778 scope.go:117] "RemoveContainer" containerID="df1ceb7dfdb20eb8574a75ba7a0b1ae7824eb1149a2e784e4f635dced47b8167" Mar 12 14:05:25 crc kubenswrapper[4778]: I0312 14:05:25.998813 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-79r2g"] Mar 12 14:05:26 crc kubenswrapper[4778]: I0312 14:05:26.007736 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-79r2g"] Mar 12 14:05:26 crc kubenswrapper[4778]: I0312 14:05:26.007791 4778 scope.go:117] "RemoveContainer" containerID="5311e6c99490119871346b089342ebf7ab5a0ba98c0dc3500c748eb884a5609f" Mar 12 14:05:26 crc kubenswrapper[4778]: I0312 14:05:26.266154 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb5f22f-5751-44aa-9532-51d2f950ee49" path="/var/lib/kubelet/pods/3cb5f22f-5751-44aa-9532-51d2f950ee49/volumes" Mar 12 14:05:26 crc kubenswrapper[4778]: I0312 14:05:26.267299 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="799d5e51-87fe-402d-9189-d2c430d8225c" path="/var/lib/kubelet/pods/799d5e51-87fe-402d-9189-d2c430d8225c/volumes" Mar 12 14:06:00 crc kubenswrapper[4778]: I0312 14:06:00.159065 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555406-44v2c"] Mar 12 14:06:00 crc kubenswrapper[4778]: E0312 14:06:00.160089 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cb5f22f-5751-44aa-9532-51d2f950ee49" containerName="extract-utilities" Mar 12 14:06:00 crc kubenswrapper[4778]: I0312 14:06:00.160104 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cb5f22f-5751-44aa-9532-51d2f950ee49" containerName="extract-utilities" Mar 12 14:06:00 crc kubenswrapper[4778]: E0312 14:06:00.160118 4778 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="799d5e51-87fe-402d-9189-d2c430d8225c" containerName="registry-server" Mar 12 14:06:00 crc kubenswrapper[4778]: I0312 14:06:00.160125 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="799d5e51-87fe-402d-9189-d2c430d8225c" containerName="registry-server" Mar 12 14:06:00 crc kubenswrapper[4778]: E0312 14:06:00.160134 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cb5f22f-5751-44aa-9532-51d2f950ee49" containerName="registry-server" Mar 12 14:06:00 crc kubenswrapper[4778]: I0312 14:06:00.160142 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cb5f22f-5751-44aa-9532-51d2f950ee49" containerName="registry-server" Mar 12 14:06:00 crc kubenswrapper[4778]: E0312 14:06:00.160163 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cb5f22f-5751-44aa-9532-51d2f950ee49" containerName="extract-content" Mar 12 14:06:00 crc kubenswrapper[4778]: I0312 14:06:00.160169 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cb5f22f-5751-44aa-9532-51d2f950ee49" containerName="extract-content" Mar 12 14:06:00 crc kubenswrapper[4778]: E0312 14:06:00.160177 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="799d5e51-87fe-402d-9189-d2c430d8225c" containerName="extract-utilities" Mar 12 14:06:00 crc kubenswrapper[4778]: I0312 14:06:00.160199 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="799d5e51-87fe-402d-9189-d2c430d8225c" containerName="extract-utilities" Mar 12 14:06:00 crc kubenswrapper[4778]: E0312 14:06:00.160212 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="799d5e51-87fe-402d-9189-d2c430d8225c" containerName="extract-content" Mar 12 14:06:00 crc kubenswrapper[4778]: I0312 14:06:00.160217 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="799d5e51-87fe-402d-9189-d2c430d8225c" containerName="extract-content" Mar 12 14:06:00 crc kubenswrapper[4778]: I0312 14:06:00.160415 4778 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3cb5f22f-5751-44aa-9532-51d2f950ee49" containerName="registry-server" Mar 12 14:06:00 crc kubenswrapper[4778]: I0312 14:06:00.160438 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="799d5e51-87fe-402d-9189-d2c430d8225c" containerName="registry-server" Mar 12 14:06:00 crc kubenswrapper[4778]: I0312 14:06:00.161122 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555406-44v2c" Mar 12 14:06:00 crc kubenswrapper[4778]: I0312 14:06:00.165215 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 14:06:00 crc kubenswrapper[4778]: I0312 14:06:00.165530 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 14:06:00 crc kubenswrapper[4778]: I0312 14:06:00.167554 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 14:06:00 crc kubenswrapper[4778]: I0312 14:06:00.169730 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555406-44v2c"] Mar 12 14:06:00 crc kubenswrapper[4778]: I0312 14:06:00.255227 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dsrm\" (UniqueName: \"kubernetes.io/projected/2e7c143c-4173-450a-afa1-587a3927f2d4-kube-api-access-2dsrm\") pod \"auto-csr-approver-29555406-44v2c\" (UID: \"2e7c143c-4173-450a-afa1-587a3927f2d4\") " pod="openshift-infra/auto-csr-approver-29555406-44v2c" Mar 12 14:06:00 crc kubenswrapper[4778]: I0312 14:06:00.357715 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dsrm\" (UniqueName: \"kubernetes.io/projected/2e7c143c-4173-450a-afa1-587a3927f2d4-kube-api-access-2dsrm\") pod \"auto-csr-approver-29555406-44v2c\" (UID: \"2e7c143c-4173-450a-afa1-587a3927f2d4\") " 
pod="openshift-infra/auto-csr-approver-29555406-44v2c" Mar 12 14:06:00 crc kubenswrapper[4778]: I0312 14:06:00.387593 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dsrm\" (UniqueName: \"kubernetes.io/projected/2e7c143c-4173-450a-afa1-587a3927f2d4-kube-api-access-2dsrm\") pod \"auto-csr-approver-29555406-44v2c\" (UID: \"2e7c143c-4173-450a-afa1-587a3927f2d4\") " pod="openshift-infra/auto-csr-approver-29555406-44v2c" Mar 12 14:06:00 crc kubenswrapper[4778]: I0312 14:06:00.481653 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555406-44v2c" Mar 12 14:06:00 crc kubenswrapper[4778]: I0312 14:06:00.967645 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555406-44v2c"] Mar 12 14:06:01 crc kubenswrapper[4778]: I0312 14:06:01.262198 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555406-44v2c" event={"ID":"2e7c143c-4173-450a-afa1-587a3927f2d4","Type":"ContainerStarted","Data":"0369ea96b618e30848880d0734baef647a7b442bbb78ff57eb0d051a016f4603"} Mar 12 14:06:03 crc kubenswrapper[4778]: I0312 14:06:03.279811 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555406-44v2c" event={"ID":"2e7c143c-4173-450a-afa1-587a3927f2d4","Type":"ContainerStarted","Data":"2c9bf5717fd9b2c8602b788cdc193d4c283ef18c6a74310ea29b1e044df19e27"} Mar 12 14:06:03 crc kubenswrapper[4778]: I0312 14:06:03.296094 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555406-44v2c" podStartSLOduration=1.401232633 podStartE2EDuration="3.296079335s" podCreationTimestamp="2026-03-12 14:06:00 +0000 UTC" firstStartedPulling="2026-03-12 14:06:00.959871276 +0000 UTC m=+3379.408566672" lastFinishedPulling="2026-03-12 14:06:02.854717978 +0000 UTC m=+3381.303413374" observedRunningTime="2026-03-12 
14:06:03.292547554 +0000 UTC m=+3381.741242950" watchObservedRunningTime="2026-03-12 14:06:03.296079335 +0000 UTC m=+3381.744774731" Mar 12 14:06:04 crc kubenswrapper[4778]: I0312 14:06:04.306934 4778 generic.go:334] "Generic (PLEG): container finished" podID="2e7c143c-4173-450a-afa1-587a3927f2d4" containerID="2c9bf5717fd9b2c8602b788cdc193d4c283ef18c6a74310ea29b1e044df19e27" exitCode=0 Mar 12 14:06:04 crc kubenswrapper[4778]: I0312 14:06:04.307232 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555406-44v2c" event={"ID":"2e7c143c-4173-450a-afa1-587a3927f2d4","Type":"ContainerDied","Data":"2c9bf5717fd9b2c8602b788cdc193d4c283ef18c6a74310ea29b1e044df19e27"} Mar 12 14:06:05 crc kubenswrapper[4778]: I0312 14:06:05.850681 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555406-44v2c" Mar 12 14:06:05 crc kubenswrapper[4778]: I0312 14:06:05.982158 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dsrm\" (UniqueName: \"kubernetes.io/projected/2e7c143c-4173-450a-afa1-587a3927f2d4-kube-api-access-2dsrm\") pod \"2e7c143c-4173-450a-afa1-587a3927f2d4\" (UID: \"2e7c143c-4173-450a-afa1-587a3927f2d4\") " Mar 12 14:06:05 crc kubenswrapper[4778]: I0312 14:06:05.999395 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e7c143c-4173-450a-afa1-587a3927f2d4-kube-api-access-2dsrm" (OuterVolumeSpecName: "kube-api-access-2dsrm") pod "2e7c143c-4173-450a-afa1-587a3927f2d4" (UID: "2e7c143c-4173-450a-afa1-587a3927f2d4"). InnerVolumeSpecName "kube-api-access-2dsrm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:06:06 crc kubenswrapper[4778]: I0312 14:06:06.086492 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dsrm\" (UniqueName: \"kubernetes.io/projected/2e7c143c-4173-450a-afa1-587a3927f2d4-kube-api-access-2dsrm\") on node \"crc\" DevicePath \"\"" Mar 12 14:06:06 crc kubenswrapper[4778]: I0312 14:06:06.325389 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555406-44v2c" event={"ID":"2e7c143c-4173-450a-afa1-587a3927f2d4","Type":"ContainerDied","Data":"0369ea96b618e30848880d0734baef647a7b442bbb78ff57eb0d051a016f4603"} Mar 12 14:06:06 crc kubenswrapper[4778]: I0312 14:06:06.325433 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0369ea96b618e30848880d0734baef647a7b442bbb78ff57eb0d051a016f4603" Mar 12 14:06:06 crc kubenswrapper[4778]: I0312 14:06:06.325460 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555406-44v2c" Mar 12 14:06:06 crc kubenswrapper[4778]: I0312 14:06:06.369847 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555400-c5pzt"] Mar 12 14:06:06 crc kubenswrapper[4778]: I0312 14:06:06.378535 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555400-c5pzt"] Mar 12 14:06:08 crc kubenswrapper[4778]: I0312 14:06:08.265496 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2425d74f-ef53-43bc-8c8f-976333a9cc6a" path="/var/lib/kubelet/pods/2425d74f-ef53-43bc-8c8f-976333a9cc6a/volumes" Mar 12 14:06:20 crc kubenswrapper[4778]: I0312 14:06:20.570611 4778 scope.go:117] "RemoveContainer" containerID="363f3ad00ca01b087e83fcbce9630716537dd1aa2dde624be9a2f51cfec1e8a6" Mar 12 14:06:58 crc kubenswrapper[4778]: I0312 14:06:58.557459 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 14:06:58 crc kubenswrapper[4778]: I0312 14:06:58.558022 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 14:07:04 crc kubenswrapper[4778]: I0312 14:07:04.068231 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9fgw2"] Mar 12 14:07:04 crc kubenswrapper[4778]: E0312 14:07:04.069464 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e7c143c-4173-450a-afa1-587a3927f2d4" containerName="oc" Mar 12 14:07:04 crc kubenswrapper[4778]: I0312 14:07:04.069500 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e7c143c-4173-450a-afa1-587a3927f2d4" containerName="oc" Mar 12 14:07:04 crc kubenswrapper[4778]: I0312 14:07:04.069788 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e7c143c-4173-450a-afa1-587a3927f2d4" containerName="oc" Mar 12 14:07:04 crc kubenswrapper[4778]: I0312 14:07:04.073669 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9fgw2" Mar 12 14:07:04 crc kubenswrapper[4778]: I0312 14:07:04.108555 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9fgw2"] Mar 12 14:07:04 crc kubenswrapper[4778]: I0312 14:07:04.196636 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1169365c-cb69-43bd-9a4d-fcc7c2a467e1-utilities\") pod \"redhat-operators-9fgw2\" (UID: \"1169365c-cb69-43bd-9a4d-fcc7c2a467e1\") " pod="openshift-marketplace/redhat-operators-9fgw2" Mar 12 14:07:04 crc kubenswrapper[4778]: I0312 14:07:04.196718 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stkwf\" (UniqueName: \"kubernetes.io/projected/1169365c-cb69-43bd-9a4d-fcc7c2a467e1-kube-api-access-stkwf\") pod \"redhat-operators-9fgw2\" (UID: \"1169365c-cb69-43bd-9a4d-fcc7c2a467e1\") " pod="openshift-marketplace/redhat-operators-9fgw2" Mar 12 14:07:04 crc kubenswrapper[4778]: I0312 14:07:04.196788 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1169365c-cb69-43bd-9a4d-fcc7c2a467e1-catalog-content\") pod \"redhat-operators-9fgw2\" (UID: \"1169365c-cb69-43bd-9a4d-fcc7c2a467e1\") " pod="openshift-marketplace/redhat-operators-9fgw2" Mar 12 14:07:04 crc kubenswrapper[4778]: I0312 14:07:04.299098 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1169365c-cb69-43bd-9a4d-fcc7c2a467e1-utilities\") pod \"redhat-operators-9fgw2\" (UID: \"1169365c-cb69-43bd-9a4d-fcc7c2a467e1\") " pod="openshift-marketplace/redhat-operators-9fgw2" Mar 12 14:07:04 crc kubenswrapper[4778]: I0312 14:07:04.299324 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-stkwf\" (UniqueName: \"kubernetes.io/projected/1169365c-cb69-43bd-9a4d-fcc7c2a467e1-kube-api-access-stkwf\") pod \"redhat-operators-9fgw2\" (UID: \"1169365c-cb69-43bd-9a4d-fcc7c2a467e1\") " pod="openshift-marketplace/redhat-operators-9fgw2" Mar 12 14:07:04 crc kubenswrapper[4778]: I0312 14:07:04.299520 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1169365c-cb69-43bd-9a4d-fcc7c2a467e1-catalog-content\") pod \"redhat-operators-9fgw2\" (UID: \"1169365c-cb69-43bd-9a4d-fcc7c2a467e1\") " pod="openshift-marketplace/redhat-operators-9fgw2" Mar 12 14:07:04 crc kubenswrapper[4778]: I0312 14:07:04.300036 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1169365c-cb69-43bd-9a4d-fcc7c2a467e1-utilities\") pod \"redhat-operators-9fgw2\" (UID: \"1169365c-cb69-43bd-9a4d-fcc7c2a467e1\") " pod="openshift-marketplace/redhat-operators-9fgw2" Mar 12 14:07:04 crc kubenswrapper[4778]: I0312 14:07:04.300819 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1169365c-cb69-43bd-9a4d-fcc7c2a467e1-catalog-content\") pod \"redhat-operators-9fgw2\" (UID: \"1169365c-cb69-43bd-9a4d-fcc7c2a467e1\") " pod="openshift-marketplace/redhat-operators-9fgw2" Mar 12 14:07:04 crc kubenswrapper[4778]: I0312 14:07:04.322442 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stkwf\" (UniqueName: \"kubernetes.io/projected/1169365c-cb69-43bd-9a4d-fcc7c2a467e1-kube-api-access-stkwf\") pod \"redhat-operators-9fgw2\" (UID: \"1169365c-cb69-43bd-9a4d-fcc7c2a467e1\") " pod="openshift-marketplace/redhat-operators-9fgw2" Mar 12 14:07:04 crc kubenswrapper[4778]: I0312 14:07:04.404519 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9fgw2" Mar 12 14:07:04 crc kubenswrapper[4778]: I0312 14:07:04.892433 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9fgw2"] Mar 12 14:07:04 crc kubenswrapper[4778]: I0312 14:07:04.946025 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9fgw2" event={"ID":"1169365c-cb69-43bd-9a4d-fcc7c2a467e1","Type":"ContainerStarted","Data":"c323eb083241a5c46e1a35088e34c27b0835efe1120e3b7aa0048e10bce8bed7"} Mar 12 14:07:05 crc kubenswrapper[4778]: I0312 14:07:05.960624 4778 generic.go:334] "Generic (PLEG): container finished" podID="1169365c-cb69-43bd-9a4d-fcc7c2a467e1" containerID="c1a67074c23faef29e037065d8226fda81b2b6cc308527fda75db7c9667ac65d" exitCode=0 Mar 12 14:07:05 crc kubenswrapper[4778]: I0312 14:07:05.960966 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9fgw2" event={"ID":"1169365c-cb69-43bd-9a4d-fcc7c2a467e1","Type":"ContainerDied","Data":"c1a67074c23faef29e037065d8226fda81b2b6cc308527fda75db7c9667ac65d"} Mar 12 14:07:05 crc kubenswrapper[4778]: I0312 14:07:05.965041 4778 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 14:07:07 crc kubenswrapper[4778]: I0312 14:07:07.981997 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9fgw2" event={"ID":"1169365c-cb69-43bd-9a4d-fcc7c2a467e1","Type":"ContainerStarted","Data":"cb2c67312712b020bfef441f944b478e1c4b6ca687e4ed1e7e31fd2401b71683"} Mar 12 14:07:10 crc kubenswrapper[4778]: I0312 14:07:09.999720 4778 generic.go:334] "Generic (PLEG): container finished" podID="1169365c-cb69-43bd-9a4d-fcc7c2a467e1" containerID="cb2c67312712b020bfef441f944b478e1c4b6ca687e4ed1e7e31fd2401b71683" exitCode=0 Mar 12 14:07:10 crc kubenswrapper[4778]: I0312 14:07:09.999886 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-9fgw2" event={"ID":"1169365c-cb69-43bd-9a4d-fcc7c2a467e1","Type":"ContainerDied","Data":"cb2c67312712b020bfef441f944b478e1c4b6ca687e4ed1e7e31fd2401b71683"} Mar 12 14:07:11 crc kubenswrapper[4778]: I0312 14:07:11.012736 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9fgw2" event={"ID":"1169365c-cb69-43bd-9a4d-fcc7c2a467e1","Type":"ContainerStarted","Data":"13fc93c1394603c467ef3707a22e1e5c3b4a3d019842a6cf1758b714f1863da0"} Mar 12 14:07:11 crc kubenswrapper[4778]: I0312 14:07:11.036474 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9fgw2" podStartSLOduration=2.358519127 podStartE2EDuration="7.036444774s" podCreationTimestamp="2026-03-12 14:07:04 +0000 UTC" firstStartedPulling="2026-03-12 14:07:05.964068828 +0000 UTC m=+3444.412764224" lastFinishedPulling="2026-03-12 14:07:10.641994475 +0000 UTC m=+3449.090689871" observedRunningTime="2026-03-12 14:07:11.035098796 +0000 UTC m=+3449.483794202" watchObservedRunningTime="2026-03-12 14:07:11.036444774 +0000 UTC m=+3449.485140170" Mar 12 14:07:14 crc kubenswrapper[4778]: I0312 14:07:14.404723 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9fgw2" Mar 12 14:07:14 crc kubenswrapper[4778]: I0312 14:07:14.405892 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9fgw2" Mar 12 14:07:15 crc kubenswrapper[4778]: I0312 14:07:15.459565 4778 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9fgw2" podUID="1169365c-cb69-43bd-9a4d-fcc7c2a467e1" containerName="registry-server" probeResult="failure" output=< Mar 12 14:07:15 crc kubenswrapper[4778]: timeout: failed to connect service ":50051" within 1s Mar 12 14:07:15 crc kubenswrapper[4778]: > Mar 12 14:07:24 crc kubenswrapper[4778]: I0312 
14:07:24.456917 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9fgw2" Mar 12 14:07:24 crc kubenswrapper[4778]: I0312 14:07:24.508827 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9fgw2" Mar 12 14:07:24 crc kubenswrapper[4778]: I0312 14:07:24.702019 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9fgw2"] Mar 12 14:07:26 crc kubenswrapper[4778]: I0312 14:07:26.145907 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9fgw2" podUID="1169365c-cb69-43bd-9a4d-fcc7c2a467e1" containerName="registry-server" containerID="cri-o://13fc93c1394603c467ef3707a22e1e5c3b4a3d019842a6cf1758b714f1863da0" gracePeriod=2 Mar 12 14:07:26 crc kubenswrapper[4778]: I0312 14:07:26.806625 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9fgw2" Mar 12 14:07:26 crc kubenswrapper[4778]: I0312 14:07:26.997622 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stkwf\" (UniqueName: \"kubernetes.io/projected/1169365c-cb69-43bd-9a4d-fcc7c2a467e1-kube-api-access-stkwf\") pod \"1169365c-cb69-43bd-9a4d-fcc7c2a467e1\" (UID: \"1169365c-cb69-43bd-9a4d-fcc7c2a467e1\") " Mar 12 14:07:26 crc kubenswrapper[4778]: I0312 14:07:26.997780 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1169365c-cb69-43bd-9a4d-fcc7c2a467e1-utilities\") pod \"1169365c-cb69-43bd-9a4d-fcc7c2a467e1\" (UID: \"1169365c-cb69-43bd-9a4d-fcc7c2a467e1\") " Mar 12 14:07:26 crc kubenswrapper[4778]: I0312 14:07:26.997800 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1169365c-cb69-43bd-9a4d-fcc7c2a467e1-catalog-content\") pod \"1169365c-cb69-43bd-9a4d-fcc7c2a467e1\" (UID: \"1169365c-cb69-43bd-9a4d-fcc7c2a467e1\") " Mar 12 14:07:26 crc kubenswrapper[4778]: I0312 14:07:26.999718 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1169365c-cb69-43bd-9a4d-fcc7c2a467e1-utilities" (OuterVolumeSpecName: "utilities") pod "1169365c-cb69-43bd-9a4d-fcc7c2a467e1" (UID: "1169365c-cb69-43bd-9a4d-fcc7c2a467e1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:07:27 crc kubenswrapper[4778]: I0312 14:07:27.007604 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1169365c-cb69-43bd-9a4d-fcc7c2a467e1-kube-api-access-stkwf" (OuterVolumeSpecName: "kube-api-access-stkwf") pod "1169365c-cb69-43bd-9a4d-fcc7c2a467e1" (UID: "1169365c-cb69-43bd-9a4d-fcc7c2a467e1"). InnerVolumeSpecName "kube-api-access-stkwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:07:27 crc kubenswrapper[4778]: I0312 14:07:27.100324 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stkwf\" (UniqueName: \"kubernetes.io/projected/1169365c-cb69-43bd-9a4d-fcc7c2a467e1-kube-api-access-stkwf\") on node \"crc\" DevicePath \"\"" Mar 12 14:07:27 crc kubenswrapper[4778]: I0312 14:07:27.100357 4778 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1169365c-cb69-43bd-9a4d-fcc7c2a467e1-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 14:07:27 crc kubenswrapper[4778]: I0312 14:07:27.138013 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1169365c-cb69-43bd-9a4d-fcc7c2a467e1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1169365c-cb69-43bd-9a4d-fcc7c2a467e1" (UID: "1169365c-cb69-43bd-9a4d-fcc7c2a467e1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:07:27 crc kubenswrapper[4778]: I0312 14:07:27.157015 4778 generic.go:334] "Generic (PLEG): container finished" podID="1169365c-cb69-43bd-9a4d-fcc7c2a467e1" containerID="13fc93c1394603c467ef3707a22e1e5c3b4a3d019842a6cf1758b714f1863da0" exitCode=0 Mar 12 14:07:27 crc kubenswrapper[4778]: I0312 14:07:27.157058 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9fgw2" event={"ID":"1169365c-cb69-43bd-9a4d-fcc7c2a467e1","Type":"ContainerDied","Data":"13fc93c1394603c467ef3707a22e1e5c3b4a3d019842a6cf1758b714f1863da0"} Mar 12 14:07:27 crc kubenswrapper[4778]: I0312 14:07:27.157085 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9fgw2" event={"ID":"1169365c-cb69-43bd-9a4d-fcc7c2a467e1","Type":"ContainerDied","Data":"c323eb083241a5c46e1a35088e34c27b0835efe1120e3b7aa0048e10bce8bed7"} Mar 12 14:07:27 crc kubenswrapper[4778]: I0312 14:07:27.157106 4778 scope.go:117] "RemoveContainer" containerID="13fc93c1394603c467ef3707a22e1e5c3b4a3d019842a6cf1758b714f1863da0" Mar 12 14:07:27 crc kubenswrapper[4778]: I0312 14:07:27.157114 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9fgw2" Mar 12 14:07:27 crc kubenswrapper[4778]: I0312 14:07:27.186412 4778 scope.go:117] "RemoveContainer" containerID="cb2c67312712b020bfef441f944b478e1c4b6ca687e4ed1e7e31fd2401b71683" Mar 12 14:07:27 crc kubenswrapper[4778]: I0312 14:07:27.201548 4778 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1169365c-cb69-43bd-9a4d-fcc7c2a467e1-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 14:07:27 crc kubenswrapper[4778]: I0312 14:07:27.211957 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9fgw2"] Mar 12 14:07:27 crc kubenswrapper[4778]: I0312 14:07:27.217477 4778 scope.go:117] "RemoveContainer" containerID="c1a67074c23faef29e037065d8226fda81b2b6cc308527fda75db7c9667ac65d" Mar 12 14:07:27 crc kubenswrapper[4778]: I0312 14:07:27.222083 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9fgw2"] Mar 12 14:07:27 crc kubenswrapper[4778]: I0312 14:07:27.254951 4778 scope.go:117] "RemoveContainer" containerID="13fc93c1394603c467ef3707a22e1e5c3b4a3d019842a6cf1758b714f1863da0" Mar 12 14:07:27 crc kubenswrapper[4778]: E0312 14:07:27.255411 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13fc93c1394603c467ef3707a22e1e5c3b4a3d019842a6cf1758b714f1863da0\": container with ID starting with 13fc93c1394603c467ef3707a22e1e5c3b4a3d019842a6cf1758b714f1863da0 not found: ID does not exist" containerID="13fc93c1394603c467ef3707a22e1e5c3b4a3d019842a6cf1758b714f1863da0" Mar 12 14:07:27 crc kubenswrapper[4778]: I0312 14:07:27.255455 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13fc93c1394603c467ef3707a22e1e5c3b4a3d019842a6cf1758b714f1863da0"} err="failed to get container status 
\"13fc93c1394603c467ef3707a22e1e5c3b4a3d019842a6cf1758b714f1863da0\": rpc error: code = NotFound desc = could not find container \"13fc93c1394603c467ef3707a22e1e5c3b4a3d019842a6cf1758b714f1863da0\": container with ID starting with 13fc93c1394603c467ef3707a22e1e5c3b4a3d019842a6cf1758b714f1863da0 not found: ID does not exist" Mar 12 14:07:27 crc kubenswrapper[4778]: I0312 14:07:27.255477 4778 scope.go:117] "RemoveContainer" containerID="cb2c67312712b020bfef441f944b478e1c4b6ca687e4ed1e7e31fd2401b71683" Mar 12 14:07:27 crc kubenswrapper[4778]: E0312 14:07:27.256531 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb2c67312712b020bfef441f944b478e1c4b6ca687e4ed1e7e31fd2401b71683\": container with ID starting with cb2c67312712b020bfef441f944b478e1c4b6ca687e4ed1e7e31fd2401b71683 not found: ID does not exist" containerID="cb2c67312712b020bfef441f944b478e1c4b6ca687e4ed1e7e31fd2401b71683" Mar 12 14:07:27 crc kubenswrapper[4778]: I0312 14:07:27.256573 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb2c67312712b020bfef441f944b478e1c4b6ca687e4ed1e7e31fd2401b71683"} err="failed to get container status \"cb2c67312712b020bfef441f944b478e1c4b6ca687e4ed1e7e31fd2401b71683\": rpc error: code = NotFound desc = could not find container \"cb2c67312712b020bfef441f944b478e1c4b6ca687e4ed1e7e31fd2401b71683\": container with ID starting with cb2c67312712b020bfef441f944b478e1c4b6ca687e4ed1e7e31fd2401b71683 not found: ID does not exist" Mar 12 14:07:27 crc kubenswrapper[4778]: I0312 14:07:27.256604 4778 scope.go:117] "RemoveContainer" containerID="c1a67074c23faef29e037065d8226fda81b2b6cc308527fda75db7c9667ac65d" Mar 12 14:07:27 crc kubenswrapper[4778]: E0312 14:07:27.256907 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"c1a67074c23faef29e037065d8226fda81b2b6cc308527fda75db7c9667ac65d\": container with ID starting with c1a67074c23faef29e037065d8226fda81b2b6cc308527fda75db7c9667ac65d not found: ID does not exist" containerID="c1a67074c23faef29e037065d8226fda81b2b6cc308527fda75db7c9667ac65d" Mar 12 14:07:27 crc kubenswrapper[4778]: I0312 14:07:27.256934 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1a67074c23faef29e037065d8226fda81b2b6cc308527fda75db7c9667ac65d"} err="failed to get container status \"c1a67074c23faef29e037065d8226fda81b2b6cc308527fda75db7c9667ac65d\": rpc error: code = NotFound desc = could not find container \"c1a67074c23faef29e037065d8226fda81b2b6cc308527fda75db7c9667ac65d\": container with ID starting with c1a67074c23faef29e037065d8226fda81b2b6cc308527fda75db7c9667ac65d not found: ID does not exist" Mar 12 14:07:28 crc kubenswrapper[4778]: I0312 14:07:28.264974 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1169365c-cb69-43bd-9a4d-fcc7c2a467e1" path="/var/lib/kubelet/pods/1169365c-cb69-43bd-9a4d-fcc7c2a467e1/volumes" Mar 12 14:07:28 crc kubenswrapper[4778]: I0312 14:07:28.558167 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 14:07:28 crc kubenswrapper[4778]: I0312 14:07:28.558341 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 14:07:58 crc kubenswrapper[4778]: I0312 14:07:58.557613 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 14:07:58 crc kubenswrapper[4778]: I0312 14:07:58.558176 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 14:07:58 crc kubenswrapper[4778]: I0312 14:07:58.558265 4778 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" Mar 12 14:07:58 crc kubenswrapper[4778]: I0312 14:07:58.559118 4778 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"84eb4f64f5e57ea7581e624359f9a06ffee621fbf6407e2f32f007351966b81b"} pod="openshift-machine-config-operator/machine-config-daemon-2qx88" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 14:07:58 crc kubenswrapper[4778]: I0312 14:07:58.559181 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" containerID="cri-o://84eb4f64f5e57ea7581e624359f9a06ffee621fbf6407e2f32f007351966b81b" gracePeriod=600 Mar 12 14:07:59 crc kubenswrapper[4778]: I0312 14:07:59.444841 4778 generic.go:334] "Generic (PLEG): container finished" podID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerID="84eb4f64f5e57ea7581e624359f9a06ffee621fbf6407e2f32f007351966b81b" exitCode=0 Mar 12 14:07:59 crc kubenswrapper[4778]: I0312 14:07:59.445124 4778 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" event={"ID":"24438fc6-dab0-4a9e-8b97-2532da76d9cd","Type":"ContainerDied","Data":"84eb4f64f5e57ea7581e624359f9a06ffee621fbf6407e2f32f007351966b81b"} Mar 12 14:07:59 crc kubenswrapper[4778]: I0312 14:07:59.445156 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" event={"ID":"24438fc6-dab0-4a9e-8b97-2532da76d9cd","Type":"ContainerStarted","Data":"1f141018aeb4c8c1d3d00926888126781b45815cde38ed496c177b71b2ba7fd2"} Mar 12 14:07:59 crc kubenswrapper[4778]: I0312 14:07:59.445176 4778 scope.go:117] "RemoveContainer" containerID="264800b09f45ccd4290c89a1d8ecad1ba09b58524e636d065df86104736d56c0" Mar 12 14:08:00 crc kubenswrapper[4778]: I0312 14:08:00.152748 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555408-92kzm"] Mar 12 14:08:00 crc kubenswrapper[4778]: E0312 14:08:00.153837 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1169365c-cb69-43bd-9a4d-fcc7c2a467e1" containerName="registry-server" Mar 12 14:08:00 crc kubenswrapper[4778]: I0312 14:08:00.153859 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="1169365c-cb69-43bd-9a4d-fcc7c2a467e1" containerName="registry-server" Mar 12 14:08:00 crc kubenswrapper[4778]: E0312 14:08:00.153876 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1169365c-cb69-43bd-9a4d-fcc7c2a467e1" containerName="extract-content" Mar 12 14:08:00 crc kubenswrapper[4778]: I0312 14:08:00.153887 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="1169365c-cb69-43bd-9a4d-fcc7c2a467e1" containerName="extract-content" Mar 12 14:08:00 crc kubenswrapper[4778]: E0312 14:08:00.153920 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1169365c-cb69-43bd-9a4d-fcc7c2a467e1" containerName="extract-utilities" Mar 12 14:08:00 crc kubenswrapper[4778]: I0312 14:08:00.153933 4778 
state_mem.go:107] "Deleted CPUSet assignment" podUID="1169365c-cb69-43bd-9a4d-fcc7c2a467e1" containerName="extract-utilities" Mar 12 14:08:00 crc kubenswrapper[4778]: I0312 14:08:00.154610 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="1169365c-cb69-43bd-9a4d-fcc7c2a467e1" containerName="registry-server" Mar 12 14:08:00 crc kubenswrapper[4778]: I0312 14:08:00.155928 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555408-92kzm" Mar 12 14:08:00 crc kubenswrapper[4778]: I0312 14:08:00.158597 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 14:08:00 crc kubenswrapper[4778]: I0312 14:08:00.158598 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 14:08:00 crc kubenswrapper[4778]: I0312 14:08:00.164258 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555408-92kzm"] Mar 12 14:08:00 crc kubenswrapper[4778]: I0312 14:08:00.175109 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 14:08:00 crc kubenswrapper[4778]: I0312 14:08:00.339029 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb6lk\" (UniqueName: \"kubernetes.io/projected/f7d6c255-3117-4dbe-b3d6-23f7be9f1cf2-kube-api-access-vb6lk\") pod \"auto-csr-approver-29555408-92kzm\" (UID: \"f7d6c255-3117-4dbe-b3d6-23f7be9f1cf2\") " pod="openshift-infra/auto-csr-approver-29555408-92kzm" Mar 12 14:08:00 crc kubenswrapper[4778]: I0312 14:08:00.441537 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb6lk\" (UniqueName: \"kubernetes.io/projected/f7d6c255-3117-4dbe-b3d6-23f7be9f1cf2-kube-api-access-vb6lk\") pod \"auto-csr-approver-29555408-92kzm\" (UID: 
\"f7d6c255-3117-4dbe-b3d6-23f7be9f1cf2\") " pod="openshift-infra/auto-csr-approver-29555408-92kzm" Mar 12 14:08:00 crc kubenswrapper[4778]: I0312 14:08:00.463209 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vb6lk\" (UniqueName: \"kubernetes.io/projected/f7d6c255-3117-4dbe-b3d6-23f7be9f1cf2-kube-api-access-vb6lk\") pod \"auto-csr-approver-29555408-92kzm\" (UID: \"f7d6c255-3117-4dbe-b3d6-23f7be9f1cf2\") " pod="openshift-infra/auto-csr-approver-29555408-92kzm" Mar 12 14:08:00 crc kubenswrapper[4778]: I0312 14:08:00.488081 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555408-92kzm" Mar 12 14:08:00 crc kubenswrapper[4778]: I0312 14:08:00.987835 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555408-92kzm"] Mar 12 14:08:01 crc kubenswrapper[4778]: I0312 14:08:01.465638 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555408-92kzm" event={"ID":"f7d6c255-3117-4dbe-b3d6-23f7be9f1cf2","Type":"ContainerStarted","Data":"9319c323739957e4e4b497353bc14e4abe0ae389088d65b7d2ee4703ab69b7fe"} Mar 12 14:08:02 crc kubenswrapper[4778]: I0312 14:08:02.474593 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555408-92kzm" event={"ID":"f7d6c255-3117-4dbe-b3d6-23f7be9f1cf2","Type":"ContainerStarted","Data":"9557c198d563e8bc3c1bd0c3db7f0caaf03fda89ffb3294270c9a1e6bbdd5486"} Mar 12 14:08:03 crc kubenswrapper[4778]: I0312 14:08:03.484199 4778 generic.go:334] "Generic (PLEG): container finished" podID="f7d6c255-3117-4dbe-b3d6-23f7be9f1cf2" containerID="9557c198d563e8bc3c1bd0c3db7f0caaf03fda89ffb3294270c9a1e6bbdd5486" exitCode=0 Mar 12 14:08:03 crc kubenswrapper[4778]: I0312 14:08:03.484302 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555408-92kzm" 
event={"ID":"f7d6c255-3117-4dbe-b3d6-23f7be9f1cf2","Type":"ContainerDied","Data":"9557c198d563e8bc3c1bd0c3db7f0caaf03fda89ffb3294270c9a1e6bbdd5486"} Mar 12 14:08:05 crc kubenswrapper[4778]: I0312 14:08:05.163023 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555408-92kzm" Mar 12 14:08:05 crc kubenswrapper[4778]: I0312 14:08:05.237012 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vb6lk\" (UniqueName: \"kubernetes.io/projected/f7d6c255-3117-4dbe-b3d6-23f7be9f1cf2-kube-api-access-vb6lk\") pod \"f7d6c255-3117-4dbe-b3d6-23f7be9f1cf2\" (UID: \"f7d6c255-3117-4dbe-b3d6-23f7be9f1cf2\") " Mar 12 14:08:05 crc kubenswrapper[4778]: I0312 14:08:05.243886 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7d6c255-3117-4dbe-b3d6-23f7be9f1cf2-kube-api-access-vb6lk" (OuterVolumeSpecName: "kube-api-access-vb6lk") pod "f7d6c255-3117-4dbe-b3d6-23f7be9f1cf2" (UID: "f7d6c255-3117-4dbe-b3d6-23f7be9f1cf2"). InnerVolumeSpecName "kube-api-access-vb6lk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:08:05 crc kubenswrapper[4778]: I0312 14:08:05.339020 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vb6lk\" (UniqueName: \"kubernetes.io/projected/f7d6c255-3117-4dbe-b3d6-23f7be9f1cf2-kube-api-access-vb6lk\") on node \"crc\" DevicePath \"\"" Mar 12 14:08:05 crc kubenswrapper[4778]: I0312 14:08:05.354841 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555402-xtt9v"] Mar 12 14:08:05 crc kubenswrapper[4778]: I0312 14:08:05.367574 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555402-xtt9v"] Mar 12 14:08:05 crc kubenswrapper[4778]: I0312 14:08:05.511199 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555408-92kzm" event={"ID":"f7d6c255-3117-4dbe-b3d6-23f7be9f1cf2","Type":"ContainerDied","Data":"9319c323739957e4e4b497353bc14e4abe0ae389088d65b7d2ee4703ab69b7fe"} Mar 12 14:08:05 crc kubenswrapper[4778]: I0312 14:08:05.511243 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9319c323739957e4e4b497353bc14e4abe0ae389088d65b7d2ee4703ab69b7fe" Mar 12 14:08:05 crc kubenswrapper[4778]: I0312 14:08:05.511252 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555408-92kzm" Mar 12 14:08:06 crc kubenswrapper[4778]: I0312 14:08:06.268218 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be917952-7177-4ef5-9efa-7858d1a11ded" path="/var/lib/kubelet/pods/be917952-7177-4ef5-9efa-7858d1a11ded/volumes" Mar 12 14:08:20 crc kubenswrapper[4778]: I0312 14:08:20.700114 4778 scope.go:117] "RemoveContainer" containerID="b626545edbe9764de0b916e68f0836b92c6dbff05d2ae4f9ae924f063217aca7" Mar 12 14:08:58 crc kubenswrapper[4778]: I0312 14:08:58.549775 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hlz27"] Mar 12 14:08:58 crc kubenswrapper[4778]: E0312 14:08:58.550897 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7d6c255-3117-4dbe-b3d6-23f7be9f1cf2" containerName="oc" Mar 12 14:08:58 crc kubenswrapper[4778]: I0312 14:08:58.550915 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7d6c255-3117-4dbe-b3d6-23f7be9f1cf2" containerName="oc" Mar 12 14:08:58 crc kubenswrapper[4778]: I0312 14:08:58.551158 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7d6c255-3117-4dbe-b3d6-23f7be9f1cf2" containerName="oc" Mar 12 14:08:58 crc kubenswrapper[4778]: I0312 14:08:58.553103 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hlz27" Mar 12 14:08:58 crc kubenswrapper[4778]: I0312 14:08:58.587809 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hlz27"] Mar 12 14:08:58 crc kubenswrapper[4778]: I0312 14:08:58.702250 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6874a57b-b8de-4009-972e-c2c6d0635745-catalog-content\") pod \"redhat-marketplace-hlz27\" (UID: \"6874a57b-b8de-4009-972e-c2c6d0635745\") " pod="openshift-marketplace/redhat-marketplace-hlz27" Mar 12 14:08:58 crc kubenswrapper[4778]: I0312 14:08:58.702414 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skwjz\" (UniqueName: \"kubernetes.io/projected/6874a57b-b8de-4009-972e-c2c6d0635745-kube-api-access-skwjz\") pod \"redhat-marketplace-hlz27\" (UID: \"6874a57b-b8de-4009-972e-c2c6d0635745\") " pod="openshift-marketplace/redhat-marketplace-hlz27" Mar 12 14:08:58 crc kubenswrapper[4778]: I0312 14:08:58.702445 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6874a57b-b8de-4009-972e-c2c6d0635745-utilities\") pod \"redhat-marketplace-hlz27\" (UID: \"6874a57b-b8de-4009-972e-c2c6d0635745\") " pod="openshift-marketplace/redhat-marketplace-hlz27" Mar 12 14:08:58 crc kubenswrapper[4778]: I0312 14:08:58.804842 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skwjz\" (UniqueName: \"kubernetes.io/projected/6874a57b-b8de-4009-972e-c2c6d0635745-kube-api-access-skwjz\") pod \"redhat-marketplace-hlz27\" (UID: \"6874a57b-b8de-4009-972e-c2c6d0635745\") " pod="openshift-marketplace/redhat-marketplace-hlz27" Mar 12 14:08:58 crc kubenswrapper[4778]: I0312 14:08:58.804900 4778 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6874a57b-b8de-4009-972e-c2c6d0635745-utilities\") pod \"redhat-marketplace-hlz27\" (UID: \"6874a57b-b8de-4009-972e-c2c6d0635745\") " pod="openshift-marketplace/redhat-marketplace-hlz27" Mar 12 14:08:58 crc kubenswrapper[4778]: I0312 14:08:58.804985 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6874a57b-b8de-4009-972e-c2c6d0635745-catalog-content\") pod \"redhat-marketplace-hlz27\" (UID: \"6874a57b-b8de-4009-972e-c2c6d0635745\") " pod="openshift-marketplace/redhat-marketplace-hlz27" Mar 12 14:08:58 crc kubenswrapper[4778]: I0312 14:08:58.805519 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6874a57b-b8de-4009-972e-c2c6d0635745-catalog-content\") pod \"redhat-marketplace-hlz27\" (UID: \"6874a57b-b8de-4009-972e-c2c6d0635745\") " pod="openshift-marketplace/redhat-marketplace-hlz27" Mar 12 14:08:58 crc kubenswrapper[4778]: I0312 14:08:58.805592 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6874a57b-b8de-4009-972e-c2c6d0635745-utilities\") pod \"redhat-marketplace-hlz27\" (UID: \"6874a57b-b8de-4009-972e-c2c6d0635745\") " pod="openshift-marketplace/redhat-marketplace-hlz27" Mar 12 14:08:58 crc kubenswrapper[4778]: I0312 14:08:58.828453 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skwjz\" (UniqueName: \"kubernetes.io/projected/6874a57b-b8de-4009-972e-c2c6d0635745-kube-api-access-skwjz\") pod \"redhat-marketplace-hlz27\" (UID: \"6874a57b-b8de-4009-972e-c2c6d0635745\") " pod="openshift-marketplace/redhat-marketplace-hlz27" Mar 12 14:08:58 crc kubenswrapper[4778]: I0312 14:08:58.899635 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hlz27" Mar 12 14:08:59 crc kubenswrapper[4778]: I0312 14:08:59.438272 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hlz27"] Mar 12 14:09:00 crc kubenswrapper[4778]: I0312 14:09:00.017284 4778 generic.go:334] "Generic (PLEG): container finished" podID="6874a57b-b8de-4009-972e-c2c6d0635745" containerID="98d0a915d740c9246f67fafb55f554a2953e84bea7d0fe5a5f0b921473930f4f" exitCode=0 Mar 12 14:09:00 crc kubenswrapper[4778]: I0312 14:09:00.017349 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hlz27" event={"ID":"6874a57b-b8de-4009-972e-c2c6d0635745","Type":"ContainerDied","Data":"98d0a915d740c9246f67fafb55f554a2953e84bea7d0fe5a5f0b921473930f4f"} Mar 12 14:09:00 crc kubenswrapper[4778]: I0312 14:09:00.017567 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hlz27" event={"ID":"6874a57b-b8de-4009-972e-c2c6d0635745","Type":"ContainerStarted","Data":"3d3144763f7f3b537cab383b60e4a1735726bba4d562907ab1bda5ecb2039f40"} Mar 12 14:09:02 crc kubenswrapper[4778]: I0312 14:09:02.038173 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hlz27" event={"ID":"6874a57b-b8de-4009-972e-c2c6d0635745","Type":"ContainerStarted","Data":"a32e5ae0574f6d2c567ab56645cb914f613126a04b379308b34a6d869d8e358f"} Mar 12 14:09:03 crc kubenswrapper[4778]: I0312 14:09:03.048544 4778 generic.go:334] "Generic (PLEG): container finished" podID="6874a57b-b8de-4009-972e-c2c6d0635745" containerID="a32e5ae0574f6d2c567ab56645cb914f613126a04b379308b34a6d869d8e358f" exitCode=0 Mar 12 14:09:03 crc kubenswrapper[4778]: I0312 14:09:03.048593 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hlz27" 
event={"ID":"6874a57b-b8de-4009-972e-c2c6d0635745","Type":"ContainerDied","Data":"a32e5ae0574f6d2c567ab56645cb914f613126a04b379308b34a6d869d8e358f"} Mar 12 14:09:04 crc kubenswrapper[4778]: I0312 14:09:04.063584 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hlz27" event={"ID":"6874a57b-b8de-4009-972e-c2c6d0635745","Type":"ContainerStarted","Data":"2db4cd718dbcfe21333c333b517ee99189e4e51f13b211cdd3a2c56aba5558c0"} Mar 12 14:09:04 crc kubenswrapper[4778]: I0312 14:09:04.091006 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hlz27" podStartSLOduration=2.620586221 podStartE2EDuration="6.090987177s" podCreationTimestamp="2026-03-12 14:08:58 +0000 UTC" firstStartedPulling="2026-03-12 14:09:00.019442895 +0000 UTC m=+3558.468138291" lastFinishedPulling="2026-03-12 14:09:03.489843851 +0000 UTC m=+3561.938539247" observedRunningTime="2026-03-12 14:09:04.08474848 +0000 UTC m=+3562.533443876" watchObservedRunningTime="2026-03-12 14:09:04.090987177 +0000 UTC m=+3562.539682583" Mar 12 14:09:08 crc kubenswrapper[4778]: I0312 14:09:08.899875 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hlz27" Mar 12 14:09:08 crc kubenswrapper[4778]: I0312 14:09:08.900340 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hlz27" Mar 12 14:09:08 crc kubenswrapper[4778]: I0312 14:09:08.941701 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hlz27" Mar 12 14:09:09 crc kubenswrapper[4778]: I0312 14:09:09.144233 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hlz27" Mar 12 14:09:09 crc kubenswrapper[4778]: I0312 14:09:09.197949 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-hlz27"] Mar 12 14:09:11 crc kubenswrapper[4778]: I0312 14:09:11.123559 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hlz27" podUID="6874a57b-b8de-4009-972e-c2c6d0635745" containerName="registry-server" containerID="cri-o://2db4cd718dbcfe21333c333b517ee99189e4e51f13b211cdd3a2c56aba5558c0" gracePeriod=2 Mar 12 14:09:11 crc kubenswrapper[4778]: I0312 14:09:11.868099 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hlz27" Mar 12 14:09:11 crc kubenswrapper[4778]: I0312 14:09:11.954588 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skwjz\" (UniqueName: \"kubernetes.io/projected/6874a57b-b8de-4009-972e-c2c6d0635745-kube-api-access-skwjz\") pod \"6874a57b-b8de-4009-972e-c2c6d0635745\" (UID: \"6874a57b-b8de-4009-972e-c2c6d0635745\") " Mar 12 14:09:11 crc kubenswrapper[4778]: I0312 14:09:11.954868 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6874a57b-b8de-4009-972e-c2c6d0635745-catalog-content\") pod \"6874a57b-b8de-4009-972e-c2c6d0635745\" (UID: \"6874a57b-b8de-4009-972e-c2c6d0635745\") " Mar 12 14:09:11 crc kubenswrapper[4778]: I0312 14:09:11.955006 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6874a57b-b8de-4009-972e-c2c6d0635745-utilities\") pod \"6874a57b-b8de-4009-972e-c2c6d0635745\" (UID: \"6874a57b-b8de-4009-972e-c2c6d0635745\") " Mar 12 14:09:11 crc kubenswrapper[4778]: I0312 14:09:11.956305 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6874a57b-b8de-4009-972e-c2c6d0635745-utilities" (OuterVolumeSpecName: "utilities") pod "6874a57b-b8de-4009-972e-c2c6d0635745" (UID: 
"6874a57b-b8de-4009-972e-c2c6d0635745"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:09:11 crc kubenswrapper[4778]: I0312 14:09:11.966694 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6874a57b-b8de-4009-972e-c2c6d0635745-kube-api-access-skwjz" (OuterVolumeSpecName: "kube-api-access-skwjz") pod "6874a57b-b8de-4009-972e-c2c6d0635745" (UID: "6874a57b-b8de-4009-972e-c2c6d0635745"). InnerVolumeSpecName "kube-api-access-skwjz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:09:11 crc kubenswrapper[4778]: I0312 14:09:11.999251 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6874a57b-b8de-4009-972e-c2c6d0635745-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6874a57b-b8de-4009-972e-c2c6d0635745" (UID: "6874a57b-b8de-4009-972e-c2c6d0635745"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:09:12 crc kubenswrapper[4778]: I0312 14:09:12.057953 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skwjz\" (UniqueName: \"kubernetes.io/projected/6874a57b-b8de-4009-972e-c2c6d0635745-kube-api-access-skwjz\") on node \"crc\" DevicePath \"\"" Mar 12 14:09:12 crc kubenswrapper[4778]: I0312 14:09:12.058016 4778 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6874a57b-b8de-4009-972e-c2c6d0635745-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 14:09:12 crc kubenswrapper[4778]: I0312 14:09:12.058027 4778 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6874a57b-b8de-4009-972e-c2c6d0635745-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 14:09:12 crc kubenswrapper[4778]: I0312 14:09:12.133451 4778 generic.go:334] "Generic (PLEG): container finished" 
podID="6874a57b-b8de-4009-972e-c2c6d0635745" containerID="2db4cd718dbcfe21333c333b517ee99189e4e51f13b211cdd3a2c56aba5558c0" exitCode=0 Mar 12 14:09:12 crc kubenswrapper[4778]: I0312 14:09:12.134272 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hlz27" event={"ID":"6874a57b-b8de-4009-972e-c2c6d0635745","Type":"ContainerDied","Data":"2db4cd718dbcfe21333c333b517ee99189e4e51f13b211cdd3a2c56aba5558c0"} Mar 12 14:09:12 crc kubenswrapper[4778]: I0312 14:09:12.134398 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hlz27" event={"ID":"6874a57b-b8de-4009-972e-c2c6d0635745","Type":"ContainerDied","Data":"3d3144763f7f3b537cab383b60e4a1735726bba4d562907ab1bda5ecb2039f40"} Mar 12 14:09:12 crc kubenswrapper[4778]: I0312 14:09:12.134476 4778 scope.go:117] "RemoveContainer" containerID="2db4cd718dbcfe21333c333b517ee99189e4e51f13b211cdd3a2c56aba5558c0" Mar 12 14:09:12 crc kubenswrapper[4778]: I0312 14:09:12.134682 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hlz27" Mar 12 14:09:12 crc kubenswrapper[4778]: I0312 14:09:12.169228 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hlz27"] Mar 12 14:09:12 crc kubenswrapper[4778]: I0312 14:09:12.173057 4778 scope.go:117] "RemoveContainer" containerID="a32e5ae0574f6d2c567ab56645cb914f613126a04b379308b34a6d869d8e358f" Mar 12 14:09:12 crc kubenswrapper[4778]: I0312 14:09:12.180830 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hlz27"] Mar 12 14:09:12 crc kubenswrapper[4778]: I0312 14:09:12.193107 4778 scope.go:117] "RemoveContainer" containerID="98d0a915d740c9246f67fafb55f554a2953e84bea7d0fe5a5f0b921473930f4f" Mar 12 14:09:12 crc kubenswrapper[4778]: I0312 14:09:12.237094 4778 scope.go:117] "RemoveContainer" containerID="2db4cd718dbcfe21333c333b517ee99189e4e51f13b211cdd3a2c56aba5558c0" Mar 12 14:09:12 crc kubenswrapper[4778]: E0312 14:09:12.237733 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2db4cd718dbcfe21333c333b517ee99189e4e51f13b211cdd3a2c56aba5558c0\": container with ID starting with 2db4cd718dbcfe21333c333b517ee99189e4e51f13b211cdd3a2c56aba5558c0 not found: ID does not exist" containerID="2db4cd718dbcfe21333c333b517ee99189e4e51f13b211cdd3a2c56aba5558c0" Mar 12 14:09:12 crc kubenswrapper[4778]: I0312 14:09:12.237790 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2db4cd718dbcfe21333c333b517ee99189e4e51f13b211cdd3a2c56aba5558c0"} err="failed to get container status \"2db4cd718dbcfe21333c333b517ee99189e4e51f13b211cdd3a2c56aba5558c0\": rpc error: code = NotFound desc = could not find container \"2db4cd718dbcfe21333c333b517ee99189e4e51f13b211cdd3a2c56aba5558c0\": container with ID starting with 2db4cd718dbcfe21333c333b517ee99189e4e51f13b211cdd3a2c56aba5558c0 not found: 
ID does not exist" Mar 12 14:09:12 crc kubenswrapper[4778]: I0312 14:09:12.237846 4778 scope.go:117] "RemoveContainer" containerID="a32e5ae0574f6d2c567ab56645cb914f613126a04b379308b34a6d869d8e358f" Mar 12 14:09:12 crc kubenswrapper[4778]: E0312 14:09:12.238133 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a32e5ae0574f6d2c567ab56645cb914f613126a04b379308b34a6d869d8e358f\": container with ID starting with a32e5ae0574f6d2c567ab56645cb914f613126a04b379308b34a6d869d8e358f not found: ID does not exist" containerID="a32e5ae0574f6d2c567ab56645cb914f613126a04b379308b34a6d869d8e358f" Mar 12 14:09:12 crc kubenswrapper[4778]: I0312 14:09:12.238156 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a32e5ae0574f6d2c567ab56645cb914f613126a04b379308b34a6d869d8e358f"} err="failed to get container status \"a32e5ae0574f6d2c567ab56645cb914f613126a04b379308b34a6d869d8e358f\": rpc error: code = NotFound desc = could not find container \"a32e5ae0574f6d2c567ab56645cb914f613126a04b379308b34a6d869d8e358f\": container with ID starting with a32e5ae0574f6d2c567ab56645cb914f613126a04b379308b34a6d869d8e358f not found: ID does not exist" Mar 12 14:09:12 crc kubenswrapper[4778]: I0312 14:09:12.238169 4778 scope.go:117] "RemoveContainer" containerID="98d0a915d740c9246f67fafb55f554a2953e84bea7d0fe5a5f0b921473930f4f" Mar 12 14:09:12 crc kubenswrapper[4778]: E0312 14:09:12.238376 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98d0a915d740c9246f67fafb55f554a2953e84bea7d0fe5a5f0b921473930f4f\": container with ID starting with 98d0a915d740c9246f67fafb55f554a2953e84bea7d0fe5a5f0b921473930f4f not found: ID does not exist" containerID="98d0a915d740c9246f67fafb55f554a2953e84bea7d0fe5a5f0b921473930f4f" Mar 12 14:09:12 crc kubenswrapper[4778]: I0312 14:09:12.238396 4778 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98d0a915d740c9246f67fafb55f554a2953e84bea7d0fe5a5f0b921473930f4f"} err="failed to get container status \"98d0a915d740c9246f67fafb55f554a2953e84bea7d0fe5a5f0b921473930f4f\": rpc error: code = NotFound desc = could not find container \"98d0a915d740c9246f67fafb55f554a2953e84bea7d0fe5a5f0b921473930f4f\": container with ID starting with 98d0a915d740c9246f67fafb55f554a2953e84bea7d0fe5a5f0b921473930f4f not found: ID does not exist" Mar 12 14:09:12 crc kubenswrapper[4778]: I0312 14:09:12.272467 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6874a57b-b8de-4009-972e-c2c6d0635745" path="/var/lib/kubelet/pods/6874a57b-b8de-4009-972e-c2c6d0635745/volumes" Mar 12 14:09:58 crc kubenswrapper[4778]: I0312 14:09:58.558870 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 14:09:58 crc kubenswrapper[4778]: I0312 14:09:58.560448 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 14:10:00 crc kubenswrapper[4778]: I0312 14:10:00.155311 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555410-ptqps"] Mar 12 14:10:00 crc kubenswrapper[4778]: E0312 14:10:00.156384 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6874a57b-b8de-4009-972e-c2c6d0635745" containerName="extract-content" Mar 12 14:10:00 crc kubenswrapper[4778]: I0312 14:10:00.156405 4778 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6874a57b-b8de-4009-972e-c2c6d0635745" containerName="extract-content" Mar 12 14:10:00 crc kubenswrapper[4778]: E0312 14:10:00.156439 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6874a57b-b8de-4009-972e-c2c6d0635745" containerName="registry-server" Mar 12 14:10:00 crc kubenswrapper[4778]: I0312 14:10:00.156447 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="6874a57b-b8de-4009-972e-c2c6d0635745" containerName="registry-server" Mar 12 14:10:00 crc kubenswrapper[4778]: E0312 14:10:00.156461 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6874a57b-b8de-4009-972e-c2c6d0635745" containerName="extract-utilities" Mar 12 14:10:00 crc kubenswrapper[4778]: I0312 14:10:00.156469 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="6874a57b-b8de-4009-972e-c2c6d0635745" containerName="extract-utilities" Mar 12 14:10:00 crc kubenswrapper[4778]: I0312 14:10:00.156735 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="6874a57b-b8de-4009-972e-c2c6d0635745" containerName="registry-server" Mar 12 14:10:00 crc kubenswrapper[4778]: I0312 14:10:00.157639 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555410-ptqps" Mar 12 14:10:00 crc kubenswrapper[4778]: I0312 14:10:00.160983 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 14:10:00 crc kubenswrapper[4778]: I0312 14:10:00.162958 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 14:10:00 crc kubenswrapper[4778]: I0312 14:10:00.166039 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 14:10:00 crc kubenswrapper[4778]: I0312 14:10:00.168998 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555410-ptqps"] Mar 12 14:10:00 crc kubenswrapper[4778]: I0312 14:10:00.309762 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4mtx\" (UniqueName: \"kubernetes.io/projected/3df6b9f3-72ae-4beb-b65c-c078aaf998ad-kube-api-access-s4mtx\") pod \"auto-csr-approver-29555410-ptqps\" (UID: \"3df6b9f3-72ae-4beb-b65c-c078aaf998ad\") " pod="openshift-infra/auto-csr-approver-29555410-ptqps" Mar 12 14:10:00 crc kubenswrapper[4778]: I0312 14:10:00.411936 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4mtx\" (UniqueName: \"kubernetes.io/projected/3df6b9f3-72ae-4beb-b65c-c078aaf998ad-kube-api-access-s4mtx\") pod \"auto-csr-approver-29555410-ptqps\" (UID: \"3df6b9f3-72ae-4beb-b65c-c078aaf998ad\") " pod="openshift-infra/auto-csr-approver-29555410-ptqps" Mar 12 14:10:00 crc kubenswrapper[4778]: I0312 14:10:00.437780 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4mtx\" (UniqueName: \"kubernetes.io/projected/3df6b9f3-72ae-4beb-b65c-c078aaf998ad-kube-api-access-s4mtx\") pod \"auto-csr-approver-29555410-ptqps\" (UID: \"3df6b9f3-72ae-4beb-b65c-c078aaf998ad\") " 
pod="openshift-infra/auto-csr-approver-29555410-ptqps" Mar 12 14:10:00 crc kubenswrapper[4778]: I0312 14:10:00.489525 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555410-ptqps" Mar 12 14:10:01 crc kubenswrapper[4778]: I0312 14:10:01.062996 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555410-ptqps"] Mar 12 14:10:01 crc kubenswrapper[4778]: I0312 14:10:01.187252 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555410-ptqps" event={"ID":"3df6b9f3-72ae-4beb-b65c-c078aaf998ad","Type":"ContainerStarted","Data":"30d6d0cde4fda952998df4697f22326d19f59d317ce023eebafb8133e5601af9"} Mar 12 14:10:03 crc kubenswrapper[4778]: I0312 14:10:03.206581 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555410-ptqps" event={"ID":"3df6b9f3-72ae-4beb-b65c-c078aaf998ad","Type":"ContainerStarted","Data":"fb40818927dd505564e4826e8d1f4316a9f1923eeaad7f19cc587698b0ad8339"} Mar 12 14:10:03 crc kubenswrapper[4778]: I0312 14:10:03.226052 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555410-ptqps" podStartSLOduration=1.574153421 podStartE2EDuration="3.226033959s" podCreationTimestamp="2026-03-12 14:10:00 +0000 UTC" firstStartedPulling="2026-03-12 14:10:01.065236263 +0000 UTC m=+3619.513931659" lastFinishedPulling="2026-03-12 14:10:02.717116801 +0000 UTC m=+3621.165812197" observedRunningTime="2026-03-12 14:10:03.219750561 +0000 UTC m=+3621.668445957" watchObservedRunningTime="2026-03-12 14:10:03.226033959 +0000 UTC m=+3621.674729355" Mar 12 14:10:04 crc kubenswrapper[4778]: I0312 14:10:04.217557 4778 generic.go:334] "Generic (PLEG): container finished" podID="3df6b9f3-72ae-4beb-b65c-c078aaf998ad" containerID="fb40818927dd505564e4826e8d1f4316a9f1923eeaad7f19cc587698b0ad8339" exitCode=0 Mar 12 14:10:04 crc 
kubenswrapper[4778]: I0312 14:10:04.217603 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555410-ptqps" event={"ID":"3df6b9f3-72ae-4beb-b65c-c078aaf998ad","Type":"ContainerDied","Data":"fb40818927dd505564e4826e8d1f4316a9f1923eeaad7f19cc587698b0ad8339"} Mar 12 14:10:05 crc kubenswrapper[4778]: I0312 14:10:05.837710 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555410-ptqps" Mar 12 14:10:05 crc kubenswrapper[4778]: I0312 14:10:05.940397 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4mtx\" (UniqueName: \"kubernetes.io/projected/3df6b9f3-72ae-4beb-b65c-c078aaf998ad-kube-api-access-s4mtx\") pod \"3df6b9f3-72ae-4beb-b65c-c078aaf998ad\" (UID: \"3df6b9f3-72ae-4beb-b65c-c078aaf998ad\") " Mar 12 14:10:05 crc kubenswrapper[4778]: I0312 14:10:05.948023 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3df6b9f3-72ae-4beb-b65c-c078aaf998ad-kube-api-access-s4mtx" (OuterVolumeSpecName: "kube-api-access-s4mtx") pod "3df6b9f3-72ae-4beb-b65c-c078aaf998ad" (UID: "3df6b9f3-72ae-4beb-b65c-c078aaf998ad"). InnerVolumeSpecName "kube-api-access-s4mtx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:10:06 crc kubenswrapper[4778]: I0312 14:10:06.043467 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4mtx\" (UniqueName: \"kubernetes.io/projected/3df6b9f3-72ae-4beb-b65c-c078aaf998ad-kube-api-access-s4mtx\") on node \"crc\" DevicePath \"\"" Mar 12 14:10:06 crc kubenswrapper[4778]: I0312 14:10:06.237088 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555410-ptqps" event={"ID":"3df6b9f3-72ae-4beb-b65c-c078aaf998ad","Type":"ContainerDied","Data":"30d6d0cde4fda952998df4697f22326d19f59d317ce023eebafb8133e5601af9"} Mar 12 14:10:06 crc kubenswrapper[4778]: I0312 14:10:06.237140 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30d6d0cde4fda952998df4697f22326d19f59d317ce023eebafb8133e5601af9" Mar 12 14:10:06 crc kubenswrapper[4778]: I0312 14:10:06.237227 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555410-ptqps" Mar 12 14:10:06 crc kubenswrapper[4778]: I0312 14:10:06.368744 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555404-jwm56"] Mar 12 14:10:06 crc kubenswrapper[4778]: I0312 14:10:06.378511 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555404-jwm56"] Mar 12 14:10:08 crc kubenswrapper[4778]: I0312 14:10:08.267710 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67604e51-359f-4c7f-b7df-a4f215a87085" path="/var/lib/kubelet/pods/67604e51-359f-4c7f-b7df-a4f215a87085/volumes" Mar 12 14:10:20 crc kubenswrapper[4778]: I0312 14:10:20.823284 4778 scope.go:117] "RemoveContainer" containerID="67c3ac2c335344f6b2ef2e71132a310b8eda046527619858c13389f0ce08da63" Mar 12 14:10:28 crc kubenswrapper[4778]: I0312 14:10:28.558355 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 14:10:28 crc kubenswrapper[4778]: I0312 14:10:28.559779 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 14:10:58 crc kubenswrapper[4778]: I0312 14:10:58.557907 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 14:10:58 crc kubenswrapper[4778]: I0312 14:10:58.559552 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 14:10:58 crc kubenswrapper[4778]: I0312 14:10:58.559680 4778 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" Mar 12 14:10:58 crc kubenswrapper[4778]: I0312 14:10:58.560552 4778 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1f141018aeb4c8c1d3d00926888126781b45815cde38ed496c177b71b2ba7fd2"} pod="openshift-machine-config-operator/machine-config-daemon-2qx88" containerMessage="Container machine-config-daemon failed liveness probe, will be 
restarted" Mar 12 14:10:58 crc kubenswrapper[4778]: I0312 14:10:58.560701 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" containerID="cri-o://1f141018aeb4c8c1d3d00926888126781b45815cde38ed496c177b71b2ba7fd2" gracePeriod=600 Mar 12 14:10:58 crc kubenswrapper[4778]: I0312 14:10:58.689512 4778 generic.go:334] "Generic (PLEG): container finished" podID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerID="1f141018aeb4c8c1d3d00926888126781b45815cde38ed496c177b71b2ba7fd2" exitCode=0 Mar 12 14:10:58 crc kubenswrapper[4778]: I0312 14:10:58.689601 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" event={"ID":"24438fc6-dab0-4a9e-8b97-2532da76d9cd","Type":"ContainerDied","Data":"1f141018aeb4c8c1d3d00926888126781b45815cde38ed496c177b71b2ba7fd2"} Mar 12 14:10:58 crc kubenswrapper[4778]: I0312 14:10:58.689915 4778 scope.go:117] "RemoveContainer" containerID="84eb4f64f5e57ea7581e624359f9a06ffee621fbf6407e2f32f007351966b81b" Mar 12 14:10:58 crc kubenswrapper[4778]: E0312 14:10:58.701214 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:10:59 crc kubenswrapper[4778]: I0312 14:10:59.703618 4778 scope.go:117] "RemoveContainer" containerID="1f141018aeb4c8c1d3d00926888126781b45815cde38ed496c177b71b2ba7fd2" Mar 12 14:10:59 crc kubenswrapper[4778]: E0312 14:10:59.704410 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:11:12 crc kubenswrapper[4778]: I0312 14:11:12.261564 4778 scope.go:117] "RemoveContainer" containerID="1f141018aeb4c8c1d3d00926888126781b45815cde38ed496c177b71b2ba7fd2" Mar 12 14:11:12 crc kubenswrapper[4778]: E0312 14:11:12.262442 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:11:27 crc kubenswrapper[4778]: I0312 14:11:27.253824 4778 scope.go:117] "RemoveContainer" containerID="1f141018aeb4c8c1d3d00926888126781b45815cde38ed496c177b71b2ba7fd2" Mar 12 14:11:27 crc kubenswrapper[4778]: E0312 14:11:27.254684 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:11:42 crc kubenswrapper[4778]: I0312 14:11:42.259656 4778 scope.go:117] "RemoveContainer" containerID="1f141018aeb4c8c1d3d00926888126781b45815cde38ed496c177b71b2ba7fd2" Mar 12 14:11:42 crc kubenswrapper[4778]: E0312 14:11:42.261649 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:11:54 crc kubenswrapper[4778]: I0312 14:11:54.254654 4778 scope.go:117] "RemoveContainer" containerID="1f141018aeb4c8c1d3d00926888126781b45815cde38ed496c177b71b2ba7fd2" Mar 12 14:11:54 crc kubenswrapper[4778]: E0312 14:11:54.255257 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:12:00 crc kubenswrapper[4778]: I0312 14:12:00.151501 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555412-nvskv"] Mar 12 14:12:00 crc kubenswrapper[4778]: E0312 14:12:00.152403 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3df6b9f3-72ae-4beb-b65c-c078aaf998ad" containerName="oc" Mar 12 14:12:00 crc kubenswrapper[4778]: I0312 14:12:00.152560 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="3df6b9f3-72ae-4beb-b65c-c078aaf998ad" containerName="oc" Mar 12 14:12:00 crc kubenswrapper[4778]: I0312 14:12:00.152851 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="3df6b9f3-72ae-4beb-b65c-c078aaf998ad" containerName="oc" Mar 12 14:12:00 crc kubenswrapper[4778]: I0312 14:12:00.153543 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555412-nvskv" Mar 12 14:12:00 crc kubenswrapper[4778]: I0312 14:12:00.161235 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555412-nvskv"] Mar 12 14:12:00 crc kubenswrapper[4778]: I0312 14:12:00.197891 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 14:12:00 crc kubenswrapper[4778]: I0312 14:12:00.197966 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 14:12:00 crc kubenswrapper[4778]: I0312 14:12:00.198268 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 14:12:00 crc kubenswrapper[4778]: I0312 14:12:00.251645 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqgsb\" (UniqueName: \"kubernetes.io/projected/f2a4f01e-04c1-43b0-8858-2d2334a828e5-kube-api-access-fqgsb\") pod \"auto-csr-approver-29555412-nvskv\" (UID: \"f2a4f01e-04c1-43b0-8858-2d2334a828e5\") " pod="openshift-infra/auto-csr-approver-29555412-nvskv" Mar 12 14:12:00 crc kubenswrapper[4778]: I0312 14:12:00.354305 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqgsb\" (UniqueName: \"kubernetes.io/projected/f2a4f01e-04c1-43b0-8858-2d2334a828e5-kube-api-access-fqgsb\") pod \"auto-csr-approver-29555412-nvskv\" (UID: \"f2a4f01e-04c1-43b0-8858-2d2334a828e5\") " pod="openshift-infra/auto-csr-approver-29555412-nvskv" Mar 12 14:12:00 crc kubenswrapper[4778]: I0312 14:12:00.384745 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqgsb\" (UniqueName: \"kubernetes.io/projected/f2a4f01e-04c1-43b0-8858-2d2334a828e5-kube-api-access-fqgsb\") pod \"auto-csr-approver-29555412-nvskv\" (UID: \"f2a4f01e-04c1-43b0-8858-2d2334a828e5\") " 
pod="openshift-infra/auto-csr-approver-29555412-nvskv" Mar 12 14:12:00 crc kubenswrapper[4778]: I0312 14:12:00.529376 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555412-nvskv" Mar 12 14:12:01 crc kubenswrapper[4778]: I0312 14:12:01.041115 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555412-nvskv"] Mar 12 14:12:01 crc kubenswrapper[4778]: I0312 14:12:01.946142 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555412-nvskv" event={"ID":"f2a4f01e-04c1-43b0-8858-2d2334a828e5","Type":"ContainerStarted","Data":"f1b77ae6764356f91826bfb42ac02b4ab68a819c21ed01bf516b19b118d59157"} Mar 12 14:12:02 crc kubenswrapper[4778]: I0312 14:12:02.955171 4778 generic.go:334] "Generic (PLEG): container finished" podID="f2a4f01e-04c1-43b0-8858-2d2334a828e5" containerID="a0fce8e55d131bb482515dc65d16783265a86aa260db0e75ebb8541d77da26bd" exitCode=0 Mar 12 14:12:02 crc kubenswrapper[4778]: I0312 14:12:02.955340 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555412-nvskv" event={"ID":"f2a4f01e-04c1-43b0-8858-2d2334a828e5","Type":"ContainerDied","Data":"a0fce8e55d131bb482515dc65d16783265a86aa260db0e75ebb8541d77da26bd"} Mar 12 14:12:04 crc kubenswrapper[4778]: I0312 14:12:04.482545 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555412-nvskv" Mar 12 14:12:04 crc kubenswrapper[4778]: I0312 14:12:04.636656 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqgsb\" (UniqueName: \"kubernetes.io/projected/f2a4f01e-04c1-43b0-8858-2d2334a828e5-kube-api-access-fqgsb\") pod \"f2a4f01e-04c1-43b0-8858-2d2334a828e5\" (UID: \"f2a4f01e-04c1-43b0-8858-2d2334a828e5\") " Mar 12 14:12:04 crc kubenswrapper[4778]: I0312 14:12:04.643974 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2a4f01e-04c1-43b0-8858-2d2334a828e5-kube-api-access-fqgsb" (OuterVolumeSpecName: "kube-api-access-fqgsb") pod "f2a4f01e-04c1-43b0-8858-2d2334a828e5" (UID: "f2a4f01e-04c1-43b0-8858-2d2334a828e5"). InnerVolumeSpecName "kube-api-access-fqgsb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:12:04 crc kubenswrapper[4778]: I0312 14:12:04.739708 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqgsb\" (UniqueName: \"kubernetes.io/projected/f2a4f01e-04c1-43b0-8858-2d2334a828e5-kube-api-access-fqgsb\") on node \"crc\" DevicePath \"\"" Mar 12 14:12:04 crc kubenswrapper[4778]: I0312 14:12:04.974305 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555412-nvskv" event={"ID":"f2a4f01e-04c1-43b0-8858-2d2334a828e5","Type":"ContainerDied","Data":"f1b77ae6764356f91826bfb42ac02b4ab68a819c21ed01bf516b19b118d59157"} Mar 12 14:12:04 crc kubenswrapper[4778]: I0312 14:12:04.974345 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1b77ae6764356f91826bfb42ac02b4ab68a819c21ed01bf516b19b118d59157" Mar 12 14:12:04 crc kubenswrapper[4778]: I0312 14:12:04.974353 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555412-nvskv" Mar 12 14:12:05 crc kubenswrapper[4778]: I0312 14:12:05.555050 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555406-44v2c"] Mar 12 14:12:05 crc kubenswrapper[4778]: I0312 14:12:05.567167 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555406-44v2c"] Mar 12 14:12:06 crc kubenswrapper[4778]: I0312 14:12:06.263873 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e7c143c-4173-450a-afa1-587a3927f2d4" path="/var/lib/kubelet/pods/2e7c143c-4173-450a-afa1-587a3927f2d4/volumes" Mar 12 14:12:07 crc kubenswrapper[4778]: I0312 14:12:07.254158 4778 scope.go:117] "RemoveContainer" containerID="1f141018aeb4c8c1d3d00926888126781b45815cde38ed496c177b71b2ba7fd2" Mar 12 14:12:07 crc kubenswrapper[4778]: E0312 14:12:07.255027 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:12:20 crc kubenswrapper[4778]: I0312 14:12:20.924355 4778 scope.go:117] "RemoveContainer" containerID="2c9bf5717fd9b2c8602b788cdc193d4c283ef18c6a74310ea29b1e044df19e27" Mar 12 14:12:22 crc kubenswrapper[4778]: I0312 14:12:22.260262 4778 scope.go:117] "RemoveContainer" containerID="1f141018aeb4c8c1d3d00926888126781b45815cde38ed496c177b71b2ba7fd2" Mar 12 14:12:22 crc kubenswrapper[4778]: E0312 14:12:22.260904 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:12:36 crc kubenswrapper[4778]: I0312 14:12:36.254047 4778 scope.go:117] "RemoveContainer" containerID="1f141018aeb4c8c1d3d00926888126781b45815cde38ed496c177b71b2ba7fd2" Mar 12 14:12:36 crc kubenswrapper[4778]: E0312 14:12:36.254834 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:12:48 crc kubenswrapper[4778]: I0312 14:12:48.255728 4778 scope.go:117] "RemoveContainer" containerID="1f141018aeb4c8c1d3d00926888126781b45815cde38ed496c177b71b2ba7fd2" Mar 12 14:12:48 crc kubenswrapper[4778]: E0312 14:12:48.256550 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:13:01 crc kubenswrapper[4778]: I0312 14:13:01.253650 4778 scope.go:117] "RemoveContainer" containerID="1f141018aeb4c8c1d3d00926888126781b45815cde38ed496c177b71b2ba7fd2" Mar 12 14:13:01 crc kubenswrapper[4778]: E0312 14:13:01.254709 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:13:15 crc kubenswrapper[4778]: I0312 14:13:15.254700 4778 scope.go:117] "RemoveContainer" containerID="1f141018aeb4c8c1d3d00926888126781b45815cde38ed496c177b71b2ba7fd2" Mar 12 14:13:15 crc kubenswrapper[4778]: E0312 14:13:15.255465 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:13:29 crc kubenswrapper[4778]: I0312 14:13:29.321964 4778 scope.go:117] "RemoveContainer" containerID="1f141018aeb4c8c1d3d00926888126781b45815cde38ed496c177b71b2ba7fd2" Mar 12 14:13:29 crc kubenswrapper[4778]: E0312 14:13:29.322855 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:13:40 crc kubenswrapper[4778]: I0312 14:13:40.254382 4778 scope.go:117] "RemoveContainer" containerID="1f141018aeb4c8c1d3d00926888126781b45815cde38ed496c177b71b2ba7fd2" Mar 12 14:13:40 crc kubenswrapper[4778]: E0312 14:13:40.255113 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:13:55 crc kubenswrapper[4778]: I0312 14:13:55.254679 4778 scope.go:117] "RemoveContainer" containerID="1f141018aeb4c8c1d3d00926888126781b45815cde38ed496c177b71b2ba7fd2" Mar 12 14:13:55 crc kubenswrapper[4778]: E0312 14:13:55.255558 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:14:00 crc kubenswrapper[4778]: I0312 14:14:00.142210 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555414-sk27k"] Mar 12 14:14:00 crc kubenswrapper[4778]: E0312 14:14:00.144339 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2a4f01e-04c1-43b0-8858-2d2334a828e5" containerName="oc" Mar 12 14:14:00 crc kubenswrapper[4778]: I0312 14:14:00.144453 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2a4f01e-04c1-43b0-8858-2d2334a828e5" containerName="oc" Mar 12 14:14:00 crc kubenswrapper[4778]: I0312 14:14:00.144772 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2a4f01e-04c1-43b0-8858-2d2334a828e5" containerName="oc" Mar 12 14:14:00 crc kubenswrapper[4778]: I0312 14:14:00.145576 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555414-sk27k" Mar 12 14:14:00 crc kubenswrapper[4778]: I0312 14:14:00.147736 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 14:14:00 crc kubenswrapper[4778]: I0312 14:14:00.147736 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 14:14:00 crc kubenswrapper[4778]: I0312 14:14:00.148727 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 14:14:00 crc kubenswrapper[4778]: I0312 14:14:00.166849 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555414-sk27k"] Mar 12 14:14:00 crc kubenswrapper[4778]: I0312 14:14:00.199151 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfcb9\" (UniqueName: \"kubernetes.io/projected/066c6c88-5aea-4678-88a0-ec5c556ee008-kube-api-access-rfcb9\") pod \"auto-csr-approver-29555414-sk27k\" (UID: \"066c6c88-5aea-4678-88a0-ec5c556ee008\") " pod="openshift-infra/auto-csr-approver-29555414-sk27k" Mar 12 14:14:00 crc kubenswrapper[4778]: I0312 14:14:00.302609 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfcb9\" (UniqueName: \"kubernetes.io/projected/066c6c88-5aea-4678-88a0-ec5c556ee008-kube-api-access-rfcb9\") pod \"auto-csr-approver-29555414-sk27k\" (UID: \"066c6c88-5aea-4678-88a0-ec5c556ee008\") " pod="openshift-infra/auto-csr-approver-29555414-sk27k" Mar 12 14:14:00 crc kubenswrapper[4778]: I0312 14:14:00.325174 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfcb9\" (UniqueName: \"kubernetes.io/projected/066c6c88-5aea-4678-88a0-ec5c556ee008-kube-api-access-rfcb9\") pod \"auto-csr-approver-29555414-sk27k\" (UID: \"066c6c88-5aea-4678-88a0-ec5c556ee008\") " 
pod="openshift-infra/auto-csr-approver-29555414-sk27k" Mar 12 14:14:00 crc kubenswrapper[4778]: I0312 14:14:00.499301 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555414-sk27k" Mar 12 14:14:00 crc kubenswrapper[4778]: I0312 14:14:00.994414 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555414-sk27k"] Mar 12 14:14:01 crc kubenswrapper[4778]: I0312 14:14:01.009578 4778 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 14:14:01 crc kubenswrapper[4778]: I0312 14:14:01.977846 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555414-sk27k" event={"ID":"066c6c88-5aea-4678-88a0-ec5c556ee008","Type":"ContainerStarted","Data":"25dbd3dcdc293d45446a6218fa50871970a46f1bd6b9c3d4c1547536ec32ea1c"} Mar 12 14:14:02 crc kubenswrapper[4778]: I0312 14:14:02.998535 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555414-sk27k" event={"ID":"066c6c88-5aea-4678-88a0-ec5c556ee008","Type":"ContainerStarted","Data":"3925769f3c54add574a18597a06eea490ae5d1cab077561f5ae8b471c0db5519"} Mar 12 14:14:04 crc kubenswrapper[4778]: I0312 14:14:04.014786 4778 generic.go:334] "Generic (PLEG): container finished" podID="066c6c88-5aea-4678-88a0-ec5c556ee008" containerID="3925769f3c54add574a18597a06eea490ae5d1cab077561f5ae8b471c0db5519" exitCode=0 Mar 12 14:14:04 crc kubenswrapper[4778]: I0312 14:14:04.014993 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555414-sk27k" event={"ID":"066c6c88-5aea-4678-88a0-ec5c556ee008","Type":"ContainerDied","Data":"3925769f3c54add574a18597a06eea490ae5d1cab077561f5ae8b471c0db5519"} Mar 12 14:14:05 crc kubenswrapper[4778]: I0312 14:14:05.574264 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555414-sk27k" Mar 12 14:14:05 crc kubenswrapper[4778]: I0312 14:14:05.707548 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfcb9\" (UniqueName: \"kubernetes.io/projected/066c6c88-5aea-4678-88a0-ec5c556ee008-kube-api-access-rfcb9\") pod \"066c6c88-5aea-4678-88a0-ec5c556ee008\" (UID: \"066c6c88-5aea-4678-88a0-ec5c556ee008\") " Mar 12 14:14:05 crc kubenswrapper[4778]: I0312 14:14:05.717808 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/066c6c88-5aea-4678-88a0-ec5c556ee008-kube-api-access-rfcb9" (OuterVolumeSpecName: "kube-api-access-rfcb9") pod "066c6c88-5aea-4678-88a0-ec5c556ee008" (UID: "066c6c88-5aea-4678-88a0-ec5c556ee008"). InnerVolumeSpecName "kube-api-access-rfcb9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:14:05 crc kubenswrapper[4778]: I0312 14:14:05.810478 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfcb9\" (UniqueName: \"kubernetes.io/projected/066c6c88-5aea-4678-88a0-ec5c556ee008-kube-api-access-rfcb9\") on node \"crc\" DevicePath \"\"" Mar 12 14:14:06 crc kubenswrapper[4778]: I0312 14:14:06.033431 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555414-sk27k" event={"ID":"066c6c88-5aea-4678-88a0-ec5c556ee008","Type":"ContainerDied","Data":"25dbd3dcdc293d45446a6218fa50871970a46f1bd6b9c3d4c1547536ec32ea1c"} Mar 12 14:14:06 crc kubenswrapper[4778]: I0312 14:14:06.033711 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25dbd3dcdc293d45446a6218fa50871970a46f1bd6b9c3d4c1547536ec32ea1c" Mar 12 14:14:06 crc kubenswrapper[4778]: I0312 14:14:06.033485 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555414-sk27k" Mar 12 14:14:06 crc kubenswrapper[4778]: I0312 14:14:06.650093 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555408-92kzm"] Mar 12 14:14:06 crc kubenswrapper[4778]: I0312 14:14:06.661524 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555408-92kzm"] Mar 12 14:14:07 crc kubenswrapper[4778]: I0312 14:14:07.253679 4778 scope.go:117] "RemoveContainer" containerID="1f141018aeb4c8c1d3d00926888126781b45815cde38ed496c177b71b2ba7fd2" Mar 12 14:14:07 crc kubenswrapper[4778]: E0312 14:14:07.254014 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:14:08 crc kubenswrapper[4778]: I0312 14:14:08.268359 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7d6c255-3117-4dbe-b3d6-23f7be9f1cf2" path="/var/lib/kubelet/pods/f7d6c255-3117-4dbe-b3d6-23f7be9f1cf2/volumes" Mar 12 14:14:21 crc kubenswrapper[4778]: I0312 14:14:21.018527 4778 scope.go:117] "RemoveContainer" containerID="9557c198d563e8bc3c1bd0c3db7f0caaf03fda89ffb3294270c9a1e6bbdd5486" Mar 12 14:14:21 crc kubenswrapper[4778]: I0312 14:14:21.253865 4778 scope.go:117] "RemoveContainer" containerID="1f141018aeb4c8c1d3d00926888126781b45815cde38ed496c177b71b2ba7fd2" Mar 12 14:14:21 crc kubenswrapper[4778]: E0312 14:14:21.254395 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:14:36 crc kubenswrapper[4778]: I0312 14:14:36.254143 4778 scope.go:117] "RemoveContainer" containerID="1f141018aeb4c8c1d3d00926888126781b45815cde38ed496c177b71b2ba7fd2" Mar 12 14:14:36 crc kubenswrapper[4778]: E0312 14:14:36.255083 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:14:51 crc kubenswrapper[4778]: I0312 14:14:51.253439 4778 scope.go:117] "RemoveContainer" containerID="1f141018aeb4c8c1d3d00926888126781b45815cde38ed496c177b71b2ba7fd2" Mar 12 14:14:51 crc kubenswrapper[4778]: E0312 14:14:51.253855 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:15:00 crc kubenswrapper[4778]: I0312 14:15:00.153156 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555415-jjk6r"] Mar 12 14:15:00 crc kubenswrapper[4778]: E0312 14:15:00.154327 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="066c6c88-5aea-4678-88a0-ec5c556ee008" containerName="oc" Mar 12 14:15:00 crc kubenswrapper[4778]: I0312 
14:15:00.154343 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="066c6c88-5aea-4678-88a0-ec5c556ee008" containerName="oc" Mar 12 14:15:00 crc kubenswrapper[4778]: I0312 14:15:00.154622 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="066c6c88-5aea-4678-88a0-ec5c556ee008" containerName="oc" Mar 12 14:15:00 crc kubenswrapper[4778]: I0312 14:15:00.155478 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555415-jjk6r" Mar 12 14:15:00 crc kubenswrapper[4778]: I0312 14:15:00.158615 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 12 14:15:00 crc kubenswrapper[4778]: I0312 14:15:00.160655 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 12 14:15:00 crc kubenswrapper[4778]: I0312 14:15:00.161682 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555415-jjk6r"] Mar 12 14:15:00 crc kubenswrapper[4778]: I0312 14:15:00.324676 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c6027ea-d1ed-4df0-bbe7-6904d2722fbc-config-volume\") pod \"collect-profiles-29555415-jjk6r\" (UID: \"9c6027ea-d1ed-4df0-bbe7-6904d2722fbc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555415-jjk6r" Mar 12 14:15:00 crc kubenswrapper[4778]: I0312 14:15:00.325105 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9c6027ea-d1ed-4df0-bbe7-6904d2722fbc-secret-volume\") pod \"collect-profiles-29555415-jjk6r\" (UID: \"9c6027ea-d1ed-4df0-bbe7-6904d2722fbc\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29555415-jjk6r" Mar 12 14:15:00 crc kubenswrapper[4778]: I0312 14:15:00.325153 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d7cx\" (UniqueName: \"kubernetes.io/projected/9c6027ea-d1ed-4df0-bbe7-6904d2722fbc-kube-api-access-8d7cx\") pod \"collect-profiles-29555415-jjk6r\" (UID: \"9c6027ea-d1ed-4df0-bbe7-6904d2722fbc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555415-jjk6r" Mar 12 14:15:00 crc kubenswrapper[4778]: I0312 14:15:00.427091 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9c6027ea-d1ed-4df0-bbe7-6904d2722fbc-secret-volume\") pod \"collect-profiles-29555415-jjk6r\" (UID: \"9c6027ea-d1ed-4df0-bbe7-6904d2722fbc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555415-jjk6r" Mar 12 14:15:00 crc kubenswrapper[4778]: I0312 14:15:00.427176 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8d7cx\" (UniqueName: \"kubernetes.io/projected/9c6027ea-d1ed-4df0-bbe7-6904d2722fbc-kube-api-access-8d7cx\") pod \"collect-profiles-29555415-jjk6r\" (UID: \"9c6027ea-d1ed-4df0-bbe7-6904d2722fbc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555415-jjk6r" Mar 12 14:15:00 crc kubenswrapper[4778]: I0312 14:15:00.427265 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c6027ea-d1ed-4df0-bbe7-6904d2722fbc-config-volume\") pod \"collect-profiles-29555415-jjk6r\" (UID: \"9c6027ea-d1ed-4df0-bbe7-6904d2722fbc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555415-jjk6r" Mar 12 14:15:00 crc kubenswrapper[4778]: I0312 14:15:00.428362 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/9c6027ea-d1ed-4df0-bbe7-6904d2722fbc-config-volume\") pod \"collect-profiles-29555415-jjk6r\" (UID: \"9c6027ea-d1ed-4df0-bbe7-6904d2722fbc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555415-jjk6r" Mar 12 14:15:00 crc kubenswrapper[4778]: I0312 14:15:00.433130 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9c6027ea-d1ed-4df0-bbe7-6904d2722fbc-secret-volume\") pod \"collect-profiles-29555415-jjk6r\" (UID: \"9c6027ea-d1ed-4df0-bbe7-6904d2722fbc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555415-jjk6r" Mar 12 14:15:00 crc kubenswrapper[4778]: I0312 14:15:00.447068 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8d7cx\" (UniqueName: \"kubernetes.io/projected/9c6027ea-d1ed-4df0-bbe7-6904d2722fbc-kube-api-access-8d7cx\") pod \"collect-profiles-29555415-jjk6r\" (UID: \"9c6027ea-d1ed-4df0-bbe7-6904d2722fbc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555415-jjk6r" Mar 12 14:15:00 crc kubenswrapper[4778]: I0312 14:15:00.484317 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555415-jjk6r" Mar 12 14:15:00 crc kubenswrapper[4778]: I0312 14:15:00.980805 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555415-jjk6r"] Mar 12 14:15:01 crc kubenswrapper[4778]: I0312 14:15:01.540988 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555415-jjk6r" event={"ID":"9c6027ea-d1ed-4df0-bbe7-6904d2722fbc","Type":"ContainerStarted","Data":"e47d44b34f9f52eb0c1249aedb361a64e96dcc50294b7036054124a9fc860b25"} Mar 12 14:15:01 crc kubenswrapper[4778]: I0312 14:15:01.541235 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555415-jjk6r" event={"ID":"9c6027ea-d1ed-4df0-bbe7-6904d2722fbc","Type":"ContainerStarted","Data":"e2b3b34963b216b0a4e58389ce9a9953591924a46c7f59a6c519e692e6e3e738"} Mar 12 14:15:01 crc kubenswrapper[4778]: I0312 14:15:01.565569 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29555415-jjk6r" podStartSLOduration=1.565544987 podStartE2EDuration="1.565544987s" podCreationTimestamp="2026-03-12 14:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 14:15:01.563697905 +0000 UTC m=+3920.012393291" watchObservedRunningTime="2026-03-12 14:15:01.565544987 +0000 UTC m=+3920.014240393" Mar 12 14:15:02 crc kubenswrapper[4778]: I0312 14:15:02.549671 4778 generic.go:334] "Generic (PLEG): container finished" podID="9c6027ea-d1ed-4df0-bbe7-6904d2722fbc" containerID="e47d44b34f9f52eb0c1249aedb361a64e96dcc50294b7036054124a9fc860b25" exitCode=0 Mar 12 14:15:02 crc kubenswrapper[4778]: I0312 14:15:02.549714 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29555415-jjk6r" event={"ID":"9c6027ea-d1ed-4df0-bbe7-6904d2722fbc","Type":"ContainerDied","Data":"e47d44b34f9f52eb0c1249aedb361a64e96dcc50294b7036054124a9fc860b25"} Mar 12 14:15:04 crc kubenswrapper[4778]: I0312 14:15:04.125859 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555415-jjk6r" Mar 12 14:15:04 crc kubenswrapper[4778]: I0312 14:15:04.203243 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c6027ea-d1ed-4df0-bbe7-6904d2722fbc-config-volume\") pod \"9c6027ea-d1ed-4df0-bbe7-6904d2722fbc\" (UID: \"9c6027ea-d1ed-4df0-bbe7-6904d2722fbc\") " Mar 12 14:15:04 crc kubenswrapper[4778]: I0312 14:15:04.203315 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8d7cx\" (UniqueName: \"kubernetes.io/projected/9c6027ea-d1ed-4df0-bbe7-6904d2722fbc-kube-api-access-8d7cx\") pod \"9c6027ea-d1ed-4df0-bbe7-6904d2722fbc\" (UID: \"9c6027ea-d1ed-4df0-bbe7-6904d2722fbc\") " Mar 12 14:15:04 crc kubenswrapper[4778]: I0312 14:15:04.203672 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c6027ea-d1ed-4df0-bbe7-6904d2722fbc-config-volume" (OuterVolumeSpecName: "config-volume") pod "9c6027ea-d1ed-4df0-bbe7-6904d2722fbc" (UID: "9c6027ea-d1ed-4df0-bbe7-6904d2722fbc"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:15:04 crc kubenswrapper[4778]: I0312 14:15:04.204285 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9c6027ea-d1ed-4df0-bbe7-6904d2722fbc-secret-volume\") pod \"9c6027ea-d1ed-4df0-bbe7-6904d2722fbc\" (UID: \"9c6027ea-d1ed-4df0-bbe7-6904d2722fbc\") " Mar 12 14:15:04 crc kubenswrapper[4778]: I0312 14:15:04.205047 4778 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c6027ea-d1ed-4df0-bbe7-6904d2722fbc-config-volume\") on node \"crc\" DevicePath \"\"" Mar 12 14:15:04 crc kubenswrapper[4778]: I0312 14:15:04.208317 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c6027ea-d1ed-4df0-bbe7-6904d2722fbc-kube-api-access-8d7cx" (OuterVolumeSpecName: "kube-api-access-8d7cx") pod "9c6027ea-d1ed-4df0-bbe7-6904d2722fbc" (UID: "9c6027ea-d1ed-4df0-bbe7-6904d2722fbc"). InnerVolumeSpecName "kube-api-access-8d7cx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:15:04 crc kubenswrapper[4778]: I0312 14:15:04.208664 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c6027ea-d1ed-4df0-bbe7-6904d2722fbc-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9c6027ea-d1ed-4df0-bbe7-6904d2722fbc" (UID: "9c6027ea-d1ed-4df0-bbe7-6904d2722fbc"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:15:04 crc kubenswrapper[4778]: I0312 14:15:04.259655 4778 scope.go:117] "RemoveContainer" containerID="1f141018aeb4c8c1d3d00926888126781b45815cde38ed496c177b71b2ba7fd2" Mar 12 14:15:04 crc kubenswrapper[4778]: E0312 14:15:04.259865 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:15:04 crc kubenswrapper[4778]: I0312 14:15:04.306906 4778 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9c6027ea-d1ed-4df0-bbe7-6904d2722fbc-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 12 14:15:04 crc kubenswrapper[4778]: I0312 14:15:04.306949 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8d7cx\" (UniqueName: \"kubernetes.io/projected/9c6027ea-d1ed-4df0-bbe7-6904d2722fbc-kube-api-access-8d7cx\") on node \"crc\" DevicePath \"\"" Mar 12 14:15:04 crc kubenswrapper[4778]: I0312 14:15:04.575457 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555415-jjk6r" event={"ID":"9c6027ea-d1ed-4df0-bbe7-6904d2722fbc","Type":"ContainerDied","Data":"e2b3b34963b216b0a4e58389ce9a9953591924a46c7f59a6c519e692e6e3e738"} Mar 12 14:15:04 crc kubenswrapper[4778]: I0312 14:15:04.576087 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2b3b34963b216b0a4e58389ce9a9953591924a46c7f59a6c519e692e6e3e738" Mar 12 14:15:04 crc kubenswrapper[4778]: I0312 14:15:04.576261 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555415-jjk6r" Mar 12 14:15:04 crc kubenswrapper[4778]: I0312 14:15:04.642296 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555370-zcp5f"] Mar 12 14:15:04 crc kubenswrapper[4778]: I0312 14:15:04.663456 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555370-zcp5f"] Mar 12 14:15:06 crc kubenswrapper[4778]: I0312 14:15:06.266819 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bf03685-d980-41f0-bbc5-84b9ae0ce1df" path="/var/lib/kubelet/pods/8bf03685-d980-41f0-bbc5-84b9ae0ce1df/volumes" Mar 12 14:15:15 crc kubenswrapper[4778]: I0312 14:15:15.254359 4778 scope.go:117] "RemoveContainer" containerID="1f141018aeb4c8c1d3d00926888126781b45815cde38ed496c177b71b2ba7fd2" Mar 12 14:15:15 crc kubenswrapper[4778]: E0312 14:15:15.255240 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:15:21 crc kubenswrapper[4778]: I0312 14:15:21.090068 4778 scope.go:117] "RemoveContainer" containerID="fa067a709ad1af5d5b9327929891ffc04839dd2d8aba3cc70c48dbfeabd353b9" Mar 12 14:15:28 crc kubenswrapper[4778]: I0312 14:15:28.258535 4778 scope.go:117] "RemoveContainer" containerID="1f141018aeb4c8c1d3d00926888126781b45815cde38ed496c177b71b2ba7fd2" Mar 12 14:15:28 crc kubenswrapper[4778]: E0312 14:15:28.259436 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:15:40 crc kubenswrapper[4778]: I0312 14:15:40.144413 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-x7hrm"] Mar 12 14:15:40 crc kubenswrapper[4778]: E0312 14:15:40.145502 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c6027ea-d1ed-4df0-bbe7-6904d2722fbc" containerName="collect-profiles" Mar 12 14:15:40 crc kubenswrapper[4778]: I0312 14:15:40.145517 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c6027ea-d1ed-4df0-bbe7-6904d2722fbc" containerName="collect-profiles" Mar 12 14:15:40 crc kubenswrapper[4778]: I0312 14:15:40.145766 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c6027ea-d1ed-4df0-bbe7-6904d2722fbc" containerName="collect-profiles" Mar 12 14:15:40 crc kubenswrapper[4778]: I0312 14:15:40.147461 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x7hrm" Mar 12 14:15:40 crc kubenswrapper[4778]: I0312 14:15:40.158407 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x7hrm"] Mar 12 14:15:40 crc kubenswrapper[4778]: I0312 14:15:40.265637 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b94cm\" (UniqueName: \"kubernetes.io/projected/cc5d1f36-7c1d-4e41-8b22-e332bc157137-kube-api-access-b94cm\") pod \"certified-operators-x7hrm\" (UID: \"cc5d1f36-7c1d-4e41-8b22-e332bc157137\") " pod="openshift-marketplace/certified-operators-x7hrm" Mar 12 14:15:40 crc kubenswrapper[4778]: I0312 14:15:40.265696 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc5d1f36-7c1d-4e41-8b22-e332bc157137-utilities\") pod \"certified-operators-x7hrm\" (UID: \"cc5d1f36-7c1d-4e41-8b22-e332bc157137\") " pod="openshift-marketplace/certified-operators-x7hrm" Mar 12 14:15:40 crc kubenswrapper[4778]: I0312 14:15:40.265857 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc5d1f36-7c1d-4e41-8b22-e332bc157137-catalog-content\") pod \"certified-operators-x7hrm\" (UID: \"cc5d1f36-7c1d-4e41-8b22-e332bc157137\") " pod="openshift-marketplace/certified-operators-x7hrm" Mar 12 14:15:40 crc kubenswrapper[4778]: I0312 14:15:40.367930 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc5d1f36-7c1d-4e41-8b22-e332bc157137-catalog-content\") pod \"certified-operators-x7hrm\" (UID: \"cc5d1f36-7c1d-4e41-8b22-e332bc157137\") " pod="openshift-marketplace/certified-operators-x7hrm" Mar 12 14:15:40 crc kubenswrapper[4778]: I0312 14:15:40.368054 4778 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-b94cm\" (UniqueName: \"kubernetes.io/projected/cc5d1f36-7c1d-4e41-8b22-e332bc157137-kube-api-access-b94cm\") pod \"certified-operators-x7hrm\" (UID: \"cc5d1f36-7c1d-4e41-8b22-e332bc157137\") " pod="openshift-marketplace/certified-operators-x7hrm" Mar 12 14:15:40 crc kubenswrapper[4778]: I0312 14:15:40.368086 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc5d1f36-7c1d-4e41-8b22-e332bc157137-utilities\") pod \"certified-operators-x7hrm\" (UID: \"cc5d1f36-7c1d-4e41-8b22-e332bc157137\") " pod="openshift-marketplace/certified-operators-x7hrm" Mar 12 14:15:40 crc kubenswrapper[4778]: I0312 14:15:40.368563 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc5d1f36-7c1d-4e41-8b22-e332bc157137-utilities\") pod \"certified-operators-x7hrm\" (UID: \"cc5d1f36-7c1d-4e41-8b22-e332bc157137\") " pod="openshift-marketplace/certified-operators-x7hrm" Mar 12 14:15:40 crc kubenswrapper[4778]: I0312 14:15:40.369125 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc5d1f36-7c1d-4e41-8b22-e332bc157137-catalog-content\") pod \"certified-operators-x7hrm\" (UID: \"cc5d1f36-7c1d-4e41-8b22-e332bc157137\") " pod="openshift-marketplace/certified-operators-x7hrm" Mar 12 14:15:40 crc kubenswrapper[4778]: I0312 14:15:40.387743 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b94cm\" (UniqueName: \"kubernetes.io/projected/cc5d1f36-7c1d-4e41-8b22-e332bc157137-kube-api-access-b94cm\") pod \"certified-operators-x7hrm\" (UID: \"cc5d1f36-7c1d-4e41-8b22-e332bc157137\") " pod="openshift-marketplace/certified-operators-x7hrm" Mar 12 14:15:40 crc kubenswrapper[4778]: I0312 14:15:40.470194 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x7hrm" Mar 12 14:15:41 crc kubenswrapper[4778]: I0312 14:15:41.004038 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x7hrm"] Mar 12 14:15:41 crc kubenswrapper[4778]: I0312 14:15:41.944734 4778 generic.go:334] "Generic (PLEG): container finished" podID="cc5d1f36-7c1d-4e41-8b22-e332bc157137" containerID="64f499cf2cc148115dc0a26243c38d7001d6394769d13ddee3ac1cad8976c318" exitCode=0 Mar 12 14:15:41 crc kubenswrapper[4778]: I0312 14:15:41.945061 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x7hrm" event={"ID":"cc5d1f36-7c1d-4e41-8b22-e332bc157137","Type":"ContainerDied","Data":"64f499cf2cc148115dc0a26243c38d7001d6394769d13ddee3ac1cad8976c318"} Mar 12 14:15:41 crc kubenswrapper[4778]: I0312 14:15:41.945095 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x7hrm" event={"ID":"cc5d1f36-7c1d-4e41-8b22-e332bc157137","Type":"ContainerStarted","Data":"26c7df7cd30e2c7b7c46b23a72c15208ba43c942293bb8165b5463087a90fd16"} Mar 12 14:15:43 crc kubenswrapper[4778]: I0312 14:15:43.254510 4778 scope.go:117] "RemoveContainer" containerID="1f141018aeb4c8c1d3d00926888126781b45815cde38ed496c177b71b2ba7fd2" Mar 12 14:15:43 crc kubenswrapper[4778]: E0312 14:15:43.255331 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:15:43 crc kubenswrapper[4778]: I0312 14:15:43.963127 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x7hrm" 
event={"ID":"cc5d1f36-7c1d-4e41-8b22-e332bc157137","Type":"ContainerStarted","Data":"1c4ad35649138a441fab3a7d2a2c15667596a5cea9441d67b18d6bf6786027d9"} Mar 12 14:15:45 crc kubenswrapper[4778]: I0312 14:15:45.987942 4778 generic.go:334] "Generic (PLEG): container finished" podID="cc5d1f36-7c1d-4e41-8b22-e332bc157137" containerID="1c4ad35649138a441fab3a7d2a2c15667596a5cea9441d67b18d6bf6786027d9" exitCode=0 Mar 12 14:15:45 crc kubenswrapper[4778]: I0312 14:15:45.988029 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x7hrm" event={"ID":"cc5d1f36-7c1d-4e41-8b22-e332bc157137","Type":"ContainerDied","Data":"1c4ad35649138a441fab3a7d2a2c15667596a5cea9441d67b18d6bf6786027d9"} Mar 12 14:15:47 crc kubenswrapper[4778]: I0312 14:15:47.003014 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x7hrm" event={"ID":"cc5d1f36-7c1d-4e41-8b22-e332bc157137","Type":"ContainerStarted","Data":"6ccc0779d6618322241261d3f4156d21061888679b769f8c1765e9abe88eab3e"} Mar 12 14:15:50 crc kubenswrapper[4778]: I0312 14:15:50.471343 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-x7hrm" Mar 12 14:15:50 crc kubenswrapper[4778]: I0312 14:15:50.471872 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-x7hrm" Mar 12 14:15:50 crc kubenswrapper[4778]: I0312 14:15:50.526223 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-x7hrm" Mar 12 14:15:50 crc kubenswrapper[4778]: I0312 14:15:50.550265 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-x7hrm" podStartSLOduration=5.831476572 podStartE2EDuration="10.550250597s" podCreationTimestamp="2026-03-12 14:15:40 +0000 UTC" firstStartedPulling="2026-03-12 14:15:41.947305323 +0000 UTC 
m=+3960.396000709" lastFinishedPulling="2026-03-12 14:15:46.666079328 +0000 UTC m=+3965.114774734" observedRunningTime="2026-03-12 14:15:47.032154091 +0000 UTC m=+3965.480849487" watchObservedRunningTime="2026-03-12 14:15:50.550250597 +0000 UTC m=+3968.998945993" Mar 12 14:15:50 crc kubenswrapper[4778]: I0312 14:15:50.773359 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nhcwl"] Mar 12 14:15:50 crc kubenswrapper[4778]: I0312 14:15:50.775637 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nhcwl" Mar 12 14:15:50 crc kubenswrapper[4778]: I0312 14:15:50.794308 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nhcwl"] Mar 12 14:15:50 crc kubenswrapper[4778]: I0312 14:15:50.869544 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7971c2f5-a365-405f-9acf-0ef296dcedcf-catalog-content\") pod \"community-operators-nhcwl\" (UID: \"7971c2f5-a365-405f-9acf-0ef296dcedcf\") " pod="openshift-marketplace/community-operators-nhcwl" Mar 12 14:15:50 crc kubenswrapper[4778]: I0312 14:15:50.869717 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrvq2\" (UniqueName: \"kubernetes.io/projected/7971c2f5-a365-405f-9acf-0ef296dcedcf-kube-api-access-hrvq2\") pod \"community-operators-nhcwl\" (UID: \"7971c2f5-a365-405f-9acf-0ef296dcedcf\") " pod="openshift-marketplace/community-operators-nhcwl" Mar 12 14:15:50 crc kubenswrapper[4778]: I0312 14:15:50.869781 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7971c2f5-a365-405f-9acf-0ef296dcedcf-utilities\") pod \"community-operators-nhcwl\" (UID: \"7971c2f5-a365-405f-9acf-0ef296dcedcf\") " 
pod="openshift-marketplace/community-operators-nhcwl" Mar 12 14:15:50 crc kubenswrapper[4778]: I0312 14:15:50.971456 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrvq2\" (UniqueName: \"kubernetes.io/projected/7971c2f5-a365-405f-9acf-0ef296dcedcf-kube-api-access-hrvq2\") pod \"community-operators-nhcwl\" (UID: \"7971c2f5-a365-405f-9acf-0ef296dcedcf\") " pod="openshift-marketplace/community-operators-nhcwl" Mar 12 14:15:50 crc kubenswrapper[4778]: I0312 14:15:50.971546 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7971c2f5-a365-405f-9acf-0ef296dcedcf-utilities\") pod \"community-operators-nhcwl\" (UID: \"7971c2f5-a365-405f-9acf-0ef296dcedcf\") " pod="openshift-marketplace/community-operators-nhcwl" Mar 12 14:15:50 crc kubenswrapper[4778]: I0312 14:15:50.971683 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7971c2f5-a365-405f-9acf-0ef296dcedcf-catalog-content\") pod \"community-operators-nhcwl\" (UID: \"7971c2f5-a365-405f-9acf-0ef296dcedcf\") " pod="openshift-marketplace/community-operators-nhcwl" Mar 12 14:15:50 crc kubenswrapper[4778]: I0312 14:15:50.972142 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7971c2f5-a365-405f-9acf-0ef296dcedcf-utilities\") pod \"community-operators-nhcwl\" (UID: \"7971c2f5-a365-405f-9acf-0ef296dcedcf\") " pod="openshift-marketplace/community-operators-nhcwl" Mar 12 14:15:50 crc kubenswrapper[4778]: I0312 14:15:50.972423 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7971c2f5-a365-405f-9acf-0ef296dcedcf-catalog-content\") pod \"community-operators-nhcwl\" (UID: \"7971c2f5-a365-405f-9acf-0ef296dcedcf\") " 
pod="openshift-marketplace/community-operators-nhcwl" Mar 12 14:15:50 crc kubenswrapper[4778]: I0312 14:15:50.998300 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrvq2\" (UniqueName: \"kubernetes.io/projected/7971c2f5-a365-405f-9acf-0ef296dcedcf-kube-api-access-hrvq2\") pod \"community-operators-nhcwl\" (UID: \"7971c2f5-a365-405f-9acf-0ef296dcedcf\") " pod="openshift-marketplace/community-operators-nhcwl" Mar 12 14:15:51 crc kubenswrapper[4778]: I0312 14:15:51.091440 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-x7hrm" Mar 12 14:15:51 crc kubenswrapper[4778]: I0312 14:15:51.094693 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nhcwl" Mar 12 14:15:51 crc kubenswrapper[4778]: I0312 14:15:51.656129 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nhcwl"] Mar 12 14:15:52 crc kubenswrapper[4778]: I0312 14:15:52.052115 4778 generic.go:334] "Generic (PLEG): container finished" podID="7971c2f5-a365-405f-9acf-0ef296dcedcf" containerID="4736be3491cb2ad9c4a9ad132c1d7d595ed12aef66987357666acd2528e82494" exitCode=0 Mar 12 14:15:52 crc kubenswrapper[4778]: I0312 14:15:52.052285 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nhcwl" event={"ID":"7971c2f5-a365-405f-9acf-0ef296dcedcf","Type":"ContainerDied","Data":"4736be3491cb2ad9c4a9ad132c1d7d595ed12aef66987357666acd2528e82494"} Mar 12 14:15:52 crc kubenswrapper[4778]: I0312 14:15:52.052530 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nhcwl" event={"ID":"7971c2f5-a365-405f-9acf-0ef296dcedcf","Type":"ContainerStarted","Data":"332bc25c0b47d707315b376f0305bf18f3b8ef2e6fc97b50c01fdda78dbf3fe1"} Mar 12 14:15:53 crc kubenswrapper[4778]: I0312 14:15:53.360826 4778 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x7hrm"] Mar 12 14:15:53 crc kubenswrapper[4778]: I0312 14:15:53.361270 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-x7hrm" podUID="cc5d1f36-7c1d-4e41-8b22-e332bc157137" containerName="registry-server" containerID="cri-o://6ccc0779d6618322241261d3f4156d21061888679b769f8c1765e9abe88eab3e" gracePeriod=2 Mar 12 14:15:54 crc kubenswrapper[4778]: I0312 14:15:54.075140 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nhcwl" event={"ID":"7971c2f5-a365-405f-9acf-0ef296dcedcf","Type":"ContainerStarted","Data":"96aade208371a8c6296565d119296613b2c7f130c7ab3677910649b35169fdf1"} Mar 12 14:15:54 crc kubenswrapper[4778]: I0312 14:15:54.078436 4778 generic.go:334] "Generic (PLEG): container finished" podID="cc5d1f36-7c1d-4e41-8b22-e332bc157137" containerID="6ccc0779d6618322241261d3f4156d21061888679b769f8c1765e9abe88eab3e" exitCode=0 Mar 12 14:15:54 crc kubenswrapper[4778]: I0312 14:15:54.078607 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x7hrm" event={"ID":"cc5d1f36-7c1d-4e41-8b22-e332bc157137","Type":"ContainerDied","Data":"6ccc0779d6618322241261d3f4156d21061888679b769f8c1765e9abe88eab3e"} Mar 12 14:15:54 crc kubenswrapper[4778]: I0312 14:15:54.253507 4778 scope.go:117] "RemoveContainer" containerID="1f141018aeb4c8c1d3d00926888126781b45815cde38ed496c177b71b2ba7fd2" Mar 12 14:15:54 crc kubenswrapper[4778]: E0312 14:15:54.253751 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" 
podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:15:54 crc kubenswrapper[4778]: I0312 14:15:54.277312 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x7hrm" Mar 12 14:15:54 crc kubenswrapper[4778]: I0312 14:15:54.445969 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b94cm\" (UniqueName: \"kubernetes.io/projected/cc5d1f36-7c1d-4e41-8b22-e332bc157137-kube-api-access-b94cm\") pod \"cc5d1f36-7c1d-4e41-8b22-e332bc157137\" (UID: \"cc5d1f36-7c1d-4e41-8b22-e332bc157137\") " Mar 12 14:15:54 crc kubenswrapper[4778]: I0312 14:15:54.446041 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc5d1f36-7c1d-4e41-8b22-e332bc157137-utilities\") pod \"cc5d1f36-7c1d-4e41-8b22-e332bc157137\" (UID: \"cc5d1f36-7c1d-4e41-8b22-e332bc157137\") " Mar 12 14:15:54 crc kubenswrapper[4778]: I0312 14:15:54.446095 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc5d1f36-7c1d-4e41-8b22-e332bc157137-catalog-content\") pod \"cc5d1f36-7c1d-4e41-8b22-e332bc157137\" (UID: \"cc5d1f36-7c1d-4e41-8b22-e332bc157137\") " Mar 12 14:15:54 crc kubenswrapper[4778]: I0312 14:15:54.447667 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc5d1f36-7c1d-4e41-8b22-e332bc157137-utilities" (OuterVolumeSpecName: "utilities") pod "cc5d1f36-7c1d-4e41-8b22-e332bc157137" (UID: "cc5d1f36-7c1d-4e41-8b22-e332bc157137"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:15:54 crc kubenswrapper[4778]: I0312 14:15:54.455668 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc5d1f36-7c1d-4e41-8b22-e332bc157137-kube-api-access-b94cm" (OuterVolumeSpecName: "kube-api-access-b94cm") pod "cc5d1f36-7c1d-4e41-8b22-e332bc157137" (UID: "cc5d1f36-7c1d-4e41-8b22-e332bc157137"). InnerVolumeSpecName "kube-api-access-b94cm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:15:54 crc kubenswrapper[4778]: I0312 14:15:54.519482 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc5d1f36-7c1d-4e41-8b22-e332bc157137-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc5d1f36-7c1d-4e41-8b22-e332bc157137" (UID: "cc5d1f36-7c1d-4e41-8b22-e332bc157137"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:15:54 crc kubenswrapper[4778]: I0312 14:15:54.548111 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b94cm\" (UniqueName: \"kubernetes.io/projected/cc5d1f36-7c1d-4e41-8b22-e332bc157137-kube-api-access-b94cm\") on node \"crc\" DevicePath \"\"" Mar 12 14:15:54 crc kubenswrapper[4778]: I0312 14:15:54.548147 4778 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc5d1f36-7c1d-4e41-8b22-e332bc157137-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 14:15:54 crc kubenswrapper[4778]: I0312 14:15:54.548157 4778 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc5d1f36-7c1d-4e41-8b22-e332bc157137-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 14:15:55 crc kubenswrapper[4778]: I0312 14:15:55.090888 4778 generic.go:334] "Generic (PLEG): container finished" podID="7971c2f5-a365-405f-9acf-0ef296dcedcf" 
containerID="96aade208371a8c6296565d119296613b2c7f130c7ab3677910649b35169fdf1" exitCode=0 Mar 12 14:15:55 crc kubenswrapper[4778]: I0312 14:15:55.091360 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nhcwl" event={"ID":"7971c2f5-a365-405f-9acf-0ef296dcedcf","Type":"ContainerDied","Data":"96aade208371a8c6296565d119296613b2c7f130c7ab3677910649b35169fdf1"} Mar 12 14:15:55 crc kubenswrapper[4778]: I0312 14:15:55.094204 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x7hrm" event={"ID":"cc5d1f36-7c1d-4e41-8b22-e332bc157137","Type":"ContainerDied","Data":"26c7df7cd30e2c7b7c46b23a72c15208ba43c942293bb8165b5463087a90fd16"} Mar 12 14:15:55 crc kubenswrapper[4778]: I0312 14:15:55.094251 4778 scope.go:117] "RemoveContainer" containerID="6ccc0779d6618322241261d3f4156d21061888679b769f8c1765e9abe88eab3e" Mar 12 14:15:55 crc kubenswrapper[4778]: I0312 14:15:55.094266 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x7hrm" Mar 12 14:15:55 crc kubenswrapper[4778]: I0312 14:15:55.119856 4778 scope.go:117] "RemoveContainer" containerID="1c4ad35649138a441fab3a7d2a2c15667596a5cea9441d67b18d6bf6786027d9" Mar 12 14:15:55 crc kubenswrapper[4778]: I0312 14:15:55.154931 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x7hrm"] Mar 12 14:15:55 crc kubenswrapper[4778]: I0312 14:15:55.175373 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-x7hrm"] Mar 12 14:15:55 crc kubenswrapper[4778]: I0312 14:15:55.175880 4778 scope.go:117] "RemoveContainer" containerID="64f499cf2cc148115dc0a26243c38d7001d6394769d13ddee3ac1cad8976c318" Mar 12 14:15:56 crc kubenswrapper[4778]: I0312 14:15:56.107940 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nhcwl" event={"ID":"7971c2f5-a365-405f-9acf-0ef296dcedcf","Type":"ContainerStarted","Data":"33f839b589bdd36727509b18dd4d904b894fa2e2c1aae95639ee07888bcf2c93"} Mar 12 14:15:56 crc kubenswrapper[4778]: I0312 14:15:56.129585 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nhcwl" podStartSLOduration=2.666888212 podStartE2EDuration="6.129565784s" podCreationTimestamp="2026-03-12 14:15:50 +0000 UTC" firstStartedPulling="2026-03-12 14:15:52.054738887 +0000 UTC m=+3970.503434283" lastFinishedPulling="2026-03-12 14:15:55.517416469 +0000 UTC m=+3973.966111855" observedRunningTime="2026-03-12 14:15:56.126173358 +0000 UTC m=+3974.574868764" watchObservedRunningTime="2026-03-12 14:15:56.129565784 +0000 UTC m=+3974.578261200" Mar 12 14:15:56 crc kubenswrapper[4778]: I0312 14:15:56.265396 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc5d1f36-7c1d-4e41-8b22-e332bc157137" path="/var/lib/kubelet/pods/cc5d1f36-7c1d-4e41-8b22-e332bc157137/volumes" Mar 12 
14:16:00 crc kubenswrapper[4778]: I0312 14:16:00.161498 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555416-qx4gr"] Mar 12 14:16:00 crc kubenswrapper[4778]: E0312 14:16:00.162659 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc5d1f36-7c1d-4e41-8b22-e332bc157137" containerName="extract-utilities" Mar 12 14:16:00 crc kubenswrapper[4778]: I0312 14:16:00.162679 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc5d1f36-7c1d-4e41-8b22-e332bc157137" containerName="extract-utilities" Mar 12 14:16:00 crc kubenswrapper[4778]: E0312 14:16:00.162712 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc5d1f36-7c1d-4e41-8b22-e332bc157137" containerName="registry-server" Mar 12 14:16:00 crc kubenswrapper[4778]: I0312 14:16:00.162722 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc5d1f36-7c1d-4e41-8b22-e332bc157137" containerName="registry-server" Mar 12 14:16:00 crc kubenswrapper[4778]: E0312 14:16:00.162739 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc5d1f36-7c1d-4e41-8b22-e332bc157137" containerName="extract-content" Mar 12 14:16:00 crc kubenswrapper[4778]: I0312 14:16:00.162748 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc5d1f36-7c1d-4e41-8b22-e332bc157137" containerName="extract-content" Mar 12 14:16:00 crc kubenswrapper[4778]: I0312 14:16:00.162994 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc5d1f36-7c1d-4e41-8b22-e332bc157137" containerName="registry-server" Mar 12 14:16:00 crc kubenswrapper[4778]: I0312 14:16:00.163848 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555416-qx4gr" Mar 12 14:16:00 crc kubenswrapper[4778]: I0312 14:16:00.166544 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 14:16:00 crc kubenswrapper[4778]: I0312 14:16:00.167251 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 14:16:00 crc kubenswrapper[4778]: I0312 14:16:00.167263 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 14:16:00 crc kubenswrapper[4778]: I0312 14:16:00.172734 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555416-qx4gr"] Mar 12 14:16:00 crc kubenswrapper[4778]: I0312 14:16:00.261463 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg594\" (UniqueName: \"kubernetes.io/projected/43680ad6-62d2-4e00-a38b-e87d712af9a7-kube-api-access-wg594\") pod \"auto-csr-approver-29555416-qx4gr\" (UID: \"43680ad6-62d2-4e00-a38b-e87d712af9a7\") " pod="openshift-infra/auto-csr-approver-29555416-qx4gr" Mar 12 14:16:00 crc kubenswrapper[4778]: I0312 14:16:00.362726 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wg594\" (UniqueName: \"kubernetes.io/projected/43680ad6-62d2-4e00-a38b-e87d712af9a7-kube-api-access-wg594\") pod \"auto-csr-approver-29555416-qx4gr\" (UID: \"43680ad6-62d2-4e00-a38b-e87d712af9a7\") " pod="openshift-infra/auto-csr-approver-29555416-qx4gr" Mar 12 14:16:00 crc kubenswrapper[4778]: I0312 14:16:00.385425 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wg594\" (UniqueName: \"kubernetes.io/projected/43680ad6-62d2-4e00-a38b-e87d712af9a7-kube-api-access-wg594\") pod \"auto-csr-approver-29555416-qx4gr\" (UID: \"43680ad6-62d2-4e00-a38b-e87d712af9a7\") " 
pod="openshift-infra/auto-csr-approver-29555416-qx4gr" Mar 12 14:16:00 crc kubenswrapper[4778]: I0312 14:16:00.500873 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555416-qx4gr" Mar 12 14:16:01 crc kubenswrapper[4778]: I0312 14:16:01.002080 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555416-qx4gr"] Mar 12 14:16:01 crc kubenswrapper[4778]: I0312 14:16:01.095155 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nhcwl" Mar 12 14:16:01 crc kubenswrapper[4778]: I0312 14:16:01.095237 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nhcwl" Mar 12 14:16:01 crc kubenswrapper[4778]: I0312 14:16:01.146113 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nhcwl" Mar 12 14:16:01 crc kubenswrapper[4778]: I0312 14:16:01.172360 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555416-qx4gr" event={"ID":"43680ad6-62d2-4e00-a38b-e87d712af9a7","Type":"ContainerStarted","Data":"6e1d6cb507d001f6a763ab8ac3b873837b5430cdf2bccfa5deae43ad5b72ed5d"} Mar 12 14:16:01 crc kubenswrapper[4778]: I0312 14:16:01.223445 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nhcwl" Mar 12 14:16:01 crc kubenswrapper[4778]: I0312 14:16:01.379921 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nhcwl"] Mar 12 14:16:03 crc kubenswrapper[4778]: I0312 14:16:03.191251 4778 generic.go:334] "Generic (PLEG): container finished" podID="43680ad6-62d2-4e00-a38b-e87d712af9a7" containerID="95522e18d302b349263bfd01c0f317ec6a698231318f520fcd1ed51d7aa504cc" exitCode=0 Mar 12 14:16:03 crc kubenswrapper[4778]: I0312 
14:16:03.191669 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nhcwl" podUID="7971c2f5-a365-405f-9acf-0ef296dcedcf" containerName="registry-server" containerID="cri-o://33f839b589bdd36727509b18dd4d904b894fa2e2c1aae95639ee07888bcf2c93" gracePeriod=2 Mar 12 14:16:03 crc kubenswrapper[4778]: I0312 14:16:03.191995 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555416-qx4gr" event={"ID":"43680ad6-62d2-4e00-a38b-e87d712af9a7","Type":"ContainerDied","Data":"95522e18d302b349263bfd01c0f317ec6a698231318f520fcd1ed51d7aa504cc"} Mar 12 14:16:03 crc kubenswrapper[4778]: I0312 14:16:03.833882 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nhcwl" Mar 12 14:16:03 crc kubenswrapper[4778]: I0312 14:16:03.944385 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7971c2f5-a365-405f-9acf-0ef296dcedcf-utilities\") pod \"7971c2f5-a365-405f-9acf-0ef296dcedcf\" (UID: \"7971c2f5-a365-405f-9acf-0ef296dcedcf\") " Mar 12 14:16:03 crc kubenswrapper[4778]: I0312 14:16:03.944578 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrvq2\" (UniqueName: \"kubernetes.io/projected/7971c2f5-a365-405f-9acf-0ef296dcedcf-kube-api-access-hrvq2\") pod \"7971c2f5-a365-405f-9acf-0ef296dcedcf\" (UID: \"7971c2f5-a365-405f-9acf-0ef296dcedcf\") " Mar 12 14:16:03 crc kubenswrapper[4778]: I0312 14:16:03.944688 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7971c2f5-a365-405f-9acf-0ef296dcedcf-catalog-content\") pod \"7971c2f5-a365-405f-9acf-0ef296dcedcf\" (UID: \"7971c2f5-a365-405f-9acf-0ef296dcedcf\") " Mar 12 14:16:03 crc kubenswrapper[4778]: I0312 14:16:03.945269 4778 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7971c2f5-a365-405f-9acf-0ef296dcedcf-utilities" (OuterVolumeSpecName: "utilities") pod "7971c2f5-a365-405f-9acf-0ef296dcedcf" (UID: "7971c2f5-a365-405f-9acf-0ef296dcedcf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:16:03 crc kubenswrapper[4778]: I0312 14:16:03.952704 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7971c2f5-a365-405f-9acf-0ef296dcedcf-kube-api-access-hrvq2" (OuterVolumeSpecName: "kube-api-access-hrvq2") pod "7971c2f5-a365-405f-9acf-0ef296dcedcf" (UID: "7971c2f5-a365-405f-9acf-0ef296dcedcf"). InnerVolumeSpecName "kube-api-access-hrvq2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:16:04 crc kubenswrapper[4778]: I0312 14:16:04.012564 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7971c2f5-a365-405f-9acf-0ef296dcedcf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7971c2f5-a365-405f-9acf-0ef296dcedcf" (UID: "7971c2f5-a365-405f-9acf-0ef296dcedcf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:16:04 crc kubenswrapper[4778]: I0312 14:16:04.046730 4778 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7971c2f5-a365-405f-9acf-0ef296dcedcf-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 14:16:04 crc kubenswrapper[4778]: I0312 14:16:04.046772 4778 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7971c2f5-a365-405f-9acf-0ef296dcedcf-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 14:16:04 crc kubenswrapper[4778]: I0312 14:16:04.046783 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrvq2\" (UniqueName: \"kubernetes.io/projected/7971c2f5-a365-405f-9acf-0ef296dcedcf-kube-api-access-hrvq2\") on node \"crc\" DevicePath \"\"" Mar 12 14:16:04 crc kubenswrapper[4778]: I0312 14:16:04.203441 4778 generic.go:334] "Generic (PLEG): container finished" podID="7971c2f5-a365-405f-9acf-0ef296dcedcf" containerID="33f839b589bdd36727509b18dd4d904b894fa2e2c1aae95639ee07888bcf2c93" exitCode=0 Mar 12 14:16:04 crc kubenswrapper[4778]: I0312 14:16:04.203518 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nhcwl" Mar 12 14:16:04 crc kubenswrapper[4778]: I0312 14:16:04.203518 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nhcwl" event={"ID":"7971c2f5-a365-405f-9acf-0ef296dcedcf","Type":"ContainerDied","Data":"33f839b589bdd36727509b18dd4d904b894fa2e2c1aae95639ee07888bcf2c93"} Mar 12 14:16:04 crc kubenswrapper[4778]: I0312 14:16:04.203920 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nhcwl" event={"ID":"7971c2f5-a365-405f-9acf-0ef296dcedcf","Type":"ContainerDied","Data":"332bc25c0b47d707315b376f0305bf18f3b8ef2e6fc97b50c01fdda78dbf3fe1"} Mar 12 14:16:04 crc kubenswrapper[4778]: I0312 14:16:04.203950 4778 scope.go:117] "RemoveContainer" containerID="33f839b589bdd36727509b18dd4d904b894fa2e2c1aae95639ee07888bcf2c93" Mar 12 14:16:04 crc kubenswrapper[4778]: I0312 14:16:04.278875 4778 scope.go:117] "RemoveContainer" containerID="96aade208371a8c6296565d119296613b2c7f130c7ab3677910649b35169fdf1" Mar 12 14:16:04 crc kubenswrapper[4778]: I0312 14:16:04.285821 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nhcwl"] Mar 12 14:16:04 crc kubenswrapper[4778]: I0312 14:16:04.285855 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nhcwl"] Mar 12 14:16:04 crc kubenswrapper[4778]: I0312 14:16:04.313468 4778 scope.go:117] "RemoveContainer" containerID="4736be3491cb2ad9c4a9ad132c1d7d595ed12aef66987357666acd2528e82494" Mar 12 14:16:04 crc kubenswrapper[4778]: I0312 14:16:04.355274 4778 scope.go:117] "RemoveContainer" containerID="33f839b589bdd36727509b18dd4d904b894fa2e2c1aae95639ee07888bcf2c93" Mar 12 14:16:04 crc kubenswrapper[4778]: E0312 14:16:04.360425 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"33f839b589bdd36727509b18dd4d904b894fa2e2c1aae95639ee07888bcf2c93\": container with ID starting with 33f839b589bdd36727509b18dd4d904b894fa2e2c1aae95639ee07888bcf2c93 not found: ID does not exist" containerID="33f839b589bdd36727509b18dd4d904b894fa2e2c1aae95639ee07888bcf2c93" Mar 12 14:16:04 crc kubenswrapper[4778]: I0312 14:16:04.360480 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33f839b589bdd36727509b18dd4d904b894fa2e2c1aae95639ee07888bcf2c93"} err="failed to get container status \"33f839b589bdd36727509b18dd4d904b894fa2e2c1aae95639ee07888bcf2c93\": rpc error: code = NotFound desc = could not find container \"33f839b589bdd36727509b18dd4d904b894fa2e2c1aae95639ee07888bcf2c93\": container with ID starting with 33f839b589bdd36727509b18dd4d904b894fa2e2c1aae95639ee07888bcf2c93 not found: ID does not exist" Mar 12 14:16:04 crc kubenswrapper[4778]: I0312 14:16:04.360510 4778 scope.go:117] "RemoveContainer" containerID="96aade208371a8c6296565d119296613b2c7f130c7ab3677910649b35169fdf1" Mar 12 14:16:04 crc kubenswrapper[4778]: E0312 14:16:04.361126 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96aade208371a8c6296565d119296613b2c7f130c7ab3677910649b35169fdf1\": container with ID starting with 96aade208371a8c6296565d119296613b2c7f130c7ab3677910649b35169fdf1 not found: ID does not exist" containerID="96aade208371a8c6296565d119296613b2c7f130c7ab3677910649b35169fdf1" Mar 12 14:16:04 crc kubenswrapper[4778]: I0312 14:16:04.361177 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96aade208371a8c6296565d119296613b2c7f130c7ab3677910649b35169fdf1"} err="failed to get container status \"96aade208371a8c6296565d119296613b2c7f130c7ab3677910649b35169fdf1\": rpc error: code = NotFound desc = could not find container \"96aade208371a8c6296565d119296613b2c7f130c7ab3677910649b35169fdf1\": container with ID 
starting with 96aade208371a8c6296565d119296613b2c7f130c7ab3677910649b35169fdf1 not found: ID does not exist" Mar 12 14:16:04 crc kubenswrapper[4778]: I0312 14:16:04.361227 4778 scope.go:117] "RemoveContainer" containerID="4736be3491cb2ad9c4a9ad132c1d7d595ed12aef66987357666acd2528e82494" Mar 12 14:16:04 crc kubenswrapper[4778]: E0312 14:16:04.361665 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4736be3491cb2ad9c4a9ad132c1d7d595ed12aef66987357666acd2528e82494\": container with ID starting with 4736be3491cb2ad9c4a9ad132c1d7d595ed12aef66987357666acd2528e82494 not found: ID does not exist" containerID="4736be3491cb2ad9c4a9ad132c1d7d595ed12aef66987357666acd2528e82494" Mar 12 14:16:04 crc kubenswrapper[4778]: I0312 14:16:04.361694 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4736be3491cb2ad9c4a9ad132c1d7d595ed12aef66987357666acd2528e82494"} err="failed to get container status \"4736be3491cb2ad9c4a9ad132c1d7d595ed12aef66987357666acd2528e82494\": rpc error: code = NotFound desc = could not find container \"4736be3491cb2ad9c4a9ad132c1d7d595ed12aef66987357666acd2528e82494\": container with ID starting with 4736be3491cb2ad9c4a9ad132c1d7d595ed12aef66987357666acd2528e82494 not found: ID does not exist" Mar 12 14:16:04 crc kubenswrapper[4778]: I0312 14:16:04.866368 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555416-qx4gr" Mar 12 14:16:05 crc kubenswrapper[4778]: I0312 14:16:05.068807 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wg594\" (UniqueName: \"kubernetes.io/projected/43680ad6-62d2-4e00-a38b-e87d712af9a7-kube-api-access-wg594\") pod \"43680ad6-62d2-4e00-a38b-e87d712af9a7\" (UID: \"43680ad6-62d2-4e00-a38b-e87d712af9a7\") " Mar 12 14:16:05 crc kubenswrapper[4778]: I0312 14:16:05.073842 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43680ad6-62d2-4e00-a38b-e87d712af9a7-kube-api-access-wg594" (OuterVolumeSpecName: "kube-api-access-wg594") pod "43680ad6-62d2-4e00-a38b-e87d712af9a7" (UID: "43680ad6-62d2-4e00-a38b-e87d712af9a7"). InnerVolumeSpecName "kube-api-access-wg594". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:16:05 crc kubenswrapper[4778]: I0312 14:16:05.170983 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wg594\" (UniqueName: \"kubernetes.io/projected/43680ad6-62d2-4e00-a38b-e87d712af9a7-kube-api-access-wg594\") on node \"crc\" DevicePath \"\"" Mar 12 14:16:05 crc kubenswrapper[4778]: I0312 14:16:05.214361 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555416-qx4gr" event={"ID":"43680ad6-62d2-4e00-a38b-e87d712af9a7","Type":"ContainerDied","Data":"6e1d6cb507d001f6a763ab8ac3b873837b5430cdf2bccfa5deae43ad5b72ed5d"} Mar 12 14:16:05 crc kubenswrapper[4778]: I0312 14:16:05.214392 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555416-qx4gr" Mar 12 14:16:05 crc kubenswrapper[4778]: I0312 14:16:05.214414 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e1d6cb507d001f6a763ab8ac3b873837b5430cdf2bccfa5deae43ad5b72ed5d" Mar 12 14:16:05 crc kubenswrapper[4778]: I0312 14:16:05.959620 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555410-ptqps"] Mar 12 14:16:05 crc kubenswrapper[4778]: I0312 14:16:05.968902 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555410-ptqps"] Mar 12 14:16:06 crc kubenswrapper[4778]: I0312 14:16:06.265623 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3df6b9f3-72ae-4beb-b65c-c078aaf998ad" path="/var/lib/kubelet/pods/3df6b9f3-72ae-4beb-b65c-c078aaf998ad/volumes" Mar 12 14:16:06 crc kubenswrapper[4778]: I0312 14:16:06.266520 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7971c2f5-a365-405f-9acf-0ef296dcedcf" path="/var/lib/kubelet/pods/7971c2f5-a365-405f-9acf-0ef296dcedcf/volumes" Mar 12 14:16:08 crc kubenswrapper[4778]: I0312 14:16:08.259417 4778 scope.go:117] "RemoveContainer" containerID="1f141018aeb4c8c1d3d00926888126781b45815cde38ed496c177b71b2ba7fd2" Mar 12 14:16:09 crc kubenswrapper[4778]: I0312 14:16:09.253807 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" event={"ID":"24438fc6-dab0-4a9e-8b97-2532da76d9cd","Type":"ContainerStarted","Data":"a6494f4559bd62f54e1e656b9c39bd8218ddcffc6f2d4766fd788af23c632a2c"} Mar 12 14:16:21 crc kubenswrapper[4778]: I0312 14:16:21.146928 4778 scope.go:117] "RemoveContainer" containerID="fb40818927dd505564e4826e8d1f4316a9f1923eeaad7f19cc587698b0ad8339" Mar 12 14:18:00 crc kubenswrapper[4778]: I0312 14:18:00.151838 4778 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29555418-xsl6h"] Mar 12 14:18:00 crc kubenswrapper[4778]: E0312 14:18:00.152954 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7971c2f5-a365-405f-9acf-0ef296dcedcf" containerName="registry-server" Mar 12 14:18:00 crc kubenswrapper[4778]: I0312 14:18:00.152973 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="7971c2f5-a365-405f-9acf-0ef296dcedcf" containerName="registry-server" Mar 12 14:18:00 crc kubenswrapper[4778]: E0312 14:18:00.153012 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7971c2f5-a365-405f-9acf-0ef296dcedcf" containerName="extract-content" Mar 12 14:18:00 crc kubenswrapper[4778]: I0312 14:18:00.153020 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="7971c2f5-a365-405f-9acf-0ef296dcedcf" containerName="extract-content" Mar 12 14:18:00 crc kubenswrapper[4778]: E0312 14:18:00.153046 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7971c2f5-a365-405f-9acf-0ef296dcedcf" containerName="extract-utilities" Mar 12 14:18:00 crc kubenswrapper[4778]: I0312 14:18:00.153054 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="7971c2f5-a365-405f-9acf-0ef296dcedcf" containerName="extract-utilities" Mar 12 14:18:00 crc kubenswrapper[4778]: E0312 14:18:00.153067 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43680ad6-62d2-4e00-a38b-e87d712af9a7" containerName="oc" Mar 12 14:18:00 crc kubenswrapper[4778]: I0312 14:18:00.153074 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="43680ad6-62d2-4e00-a38b-e87d712af9a7" containerName="oc" Mar 12 14:18:00 crc kubenswrapper[4778]: I0312 14:18:00.153308 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="7971c2f5-a365-405f-9acf-0ef296dcedcf" containerName="registry-server" Mar 12 14:18:00 crc kubenswrapper[4778]: I0312 14:18:00.153332 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="43680ad6-62d2-4e00-a38b-e87d712af9a7" 
containerName="oc" Mar 12 14:18:00 crc kubenswrapper[4778]: I0312 14:18:00.154175 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555418-xsl6h" Mar 12 14:18:00 crc kubenswrapper[4778]: I0312 14:18:00.156758 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 14:18:00 crc kubenswrapper[4778]: I0312 14:18:00.157019 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 14:18:00 crc kubenswrapper[4778]: I0312 14:18:00.157224 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 14:18:00 crc kubenswrapper[4778]: I0312 14:18:00.173013 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555418-xsl6h"] Mar 12 14:18:00 crc kubenswrapper[4778]: I0312 14:18:00.265494 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd9r9\" (UniqueName: \"kubernetes.io/projected/f3fd5c4e-83c5-4ff0-9cb3-665ac00ec9f9-kube-api-access-xd9r9\") pod \"auto-csr-approver-29555418-xsl6h\" (UID: \"f3fd5c4e-83c5-4ff0-9cb3-665ac00ec9f9\") " pod="openshift-infra/auto-csr-approver-29555418-xsl6h" Mar 12 14:18:00 crc kubenswrapper[4778]: I0312 14:18:00.367814 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xd9r9\" (UniqueName: \"kubernetes.io/projected/f3fd5c4e-83c5-4ff0-9cb3-665ac00ec9f9-kube-api-access-xd9r9\") pod \"auto-csr-approver-29555418-xsl6h\" (UID: \"f3fd5c4e-83c5-4ff0-9cb3-665ac00ec9f9\") " pod="openshift-infra/auto-csr-approver-29555418-xsl6h" Mar 12 14:18:00 crc kubenswrapper[4778]: I0312 14:18:00.394114 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xd9r9\" (UniqueName: 
\"kubernetes.io/projected/f3fd5c4e-83c5-4ff0-9cb3-665ac00ec9f9-kube-api-access-xd9r9\") pod \"auto-csr-approver-29555418-xsl6h\" (UID: \"f3fd5c4e-83c5-4ff0-9cb3-665ac00ec9f9\") " pod="openshift-infra/auto-csr-approver-29555418-xsl6h" Mar 12 14:18:00 crc kubenswrapper[4778]: I0312 14:18:00.496409 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555418-xsl6h" Mar 12 14:18:00 crc kubenswrapper[4778]: I0312 14:18:00.977156 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555418-xsl6h"] Mar 12 14:18:01 crc kubenswrapper[4778]: I0312 14:18:01.378198 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555418-xsl6h" event={"ID":"f3fd5c4e-83c5-4ff0-9cb3-665ac00ec9f9","Type":"ContainerStarted","Data":"689afc1dac01728ffadd390c7a2587aba6197cf23819976c42d270443708afc0"} Mar 12 14:18:03 crc kubenswrapper[4778]: I0312 14:18:03.397278 4778 generic.go:334] "Generic (PLEG): container finished" podID="f3fd5c4e-83c5-4ff0-9cb3-665ac00ec9f9" containerID="b15f85572ed50fa6f5f1417355d5cdd391ae57e91aab8027f8febe9070bb5ec6" exitCode=0 Mar 12 14:18:03 crc kubenswrapper[4778]: I0312 14:18:03.397394 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555418-xsl6h" event={"ID":"f3fd5c4e-83c5-4ff0-9cb3-665ac00ec9f9","Type":"ContainerDied","Data":"b15f85572ed50fa6f5f1417355d5cdd391ae57e91aab8027f8febe9070bb5ec6"} Mar 12 14:18:05 crc kubenswrapper[4778]: I0312 14:18:05.077928 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555418-xsl6h" Mar 12 14:18:05 crc kubenswrapper[4778]: I0312 14:18:05.165874 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xd9r9\" (UniqueName: \"kubernetes.io/projected/f3fd5c4e-83c5-4ff0-9cb3-665ac00ec9f9-kube-api-access-xd9r9\") pod \"f3fd5c4e-83c5-4ff0-9cb3-665ac00ec9f9\" (UID: \"f3fd5c4e-83c5-4ff0-9cb3-665ac00ec9f9\") " Mar 12 14:18:05 crc kubenswrapper[4778]: I0312 14:18:05.172796 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3fd5c4e-83c5-4ff0-9cb3-665ac00ec9f9-kube-api-access-xd9r9" (OuterVolumeSpecName: "kube-api-access-xd9r9") pod "f3fd5c4e-83c5-4ff0-9cb3-665ac00ec9f9" (UID: "f3fd5c4e-83c5-4ff0-9cb3-665ac00ec9f9"). InnerVolumeSpecName "kube-api-access-xd9r9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:18:05 crc kubenswrapper[4778]: I0312 14:18:05.268944 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xd9r9\" (UniqueName: \"kubernetes.io/projected/f3fd5c4e-83c5-4ff0-9cb3-665ac00ec9f9-kube-api-access-xd9r9\") on node \"crc\" DevicePath \"\"" Mar 12 14:18:05 crc kubenswrapper[4778]: I0312 14:18:05.417378 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555418-xsl6h" event={"ID":"f3fd5c4e-83c5-4ff0-9cb3-665ac00ec9f9","Type":"ContainerDied","Data":"689afc1dac01728ffadd390c7a2587aba6197cf23819976c42d270443708afc0"} Mar 12 14:18:05 crc kubenswrapper[4778]: I0312 14:18:05.417426 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="689afc1dac01728ffadd390c7a2587aba6197cf23819976c42d270443708afc0" Mar 12 14:18:05 crc kubenswrapper[4778]: I0312 14:18:05.417465 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555418-xsl6h" Mar 12 14:18:06 crc kubenswrapper[4778]: I0312 14:18:06.150460 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555412-nvskv"] Mar 12 14:18:06 crc kubenswrapper[4778]: I0312 14:18:06.160256 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555412-nvskv"] Mar 12 14:18:06 crc kubenswrapper[4778]: I0312 14:18:06.266442 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2a4f01e-04c1-43b0-8858-2d2334a828e5" path="/var/lib/kubelet/pods/f2a4f01e-04c1-43b0-8858-2d2334a828e5/volumes" Mar 12 14:18:21 crc kubenswrapper[4778]: I0312 14:18:21.273129 4778 scope.go:117] "RemoveContainer" containerID="a0fce8e55d131bb482515dc65d16783265a86aa260db0e75ebb8541d77da26bd" Mar 12 14:18:28 crc kubenswrapper[4778]: I0312 14:18:28.557678 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 14:18:28 crc kubenswrapper[4778]: I0312 14:18:28.558304 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 14:18:58 crc kubenswrapper[4778]: I0312 14:18:58.279673 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-c6zkg"] Mar 12 14:18:58 crc kubenswrapper[4778]: E0312 14:18:58.280871 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3fd5c4e-83c5-4ff0-9cb3-665ac00ec9f9" containerName="oc" Mar 12 14:18:58 crc 
kubenswrapper[4778]: I0312 14:18:58.280888 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3fd5c4e-83c5-4ff0-9cb3-665ac00ec9f9" containerName="oc" Mar 12 14:18:58 crc kubenswrapper[4778]: I0312 14:18:58.281136 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3fd5c4e-83c5-4ff0-9cb3-665ac00ec9f9" containerName="oc" Mar 12 14:18:58 crc kubenswrapper[4778]: I0312 14:18:58.282788 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c6zkg" Mar 12 14:18:58 crc kubenswrapper[4778]: I0312 14:18:58.310446 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c6zkg"] Mar 12 14:18:58 crc kubenswrapper[4778]: I0312 14:18:58.369175 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfbf7e11-e585-48c4-a038-1a642b34bf20-catalog-content\") pod \"redhat-marketplace-c6zkg\" (UID: \"dfbf7e11-e585-48c4-a038-1a642b34bf20\") " pod="openshift-marketplace/redhat-marketplace-c6zkg" Mar 12 14:18:58 crc kubenswrapper[4778]: I0312 14:18:58.369317 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfbf7e11-e585-48c4-a038-1a642b34bf20-utilities\") pod \"redhat-marketplace-c6zkg\" (UID: \"dfbf7e11-e585-48c4-a038-1a642b34bf20\") " pod="openshift-marketplace/redhat-marketplace-c6zkg" Mar 12 14:18:58 crc kubenswrapper[4778]: I0312 14:18:58.369435 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w46fk\" (UniqueName: \"kubernetes.io/projected/dfbf7e11-e585-48c4-a038-1a642b34bf20-kube-api-access-w46fk\") pod \"redhat-marketplace-c6zkg\" (UID: \"dfbf7e11-e585-48c4-a038-1a642b34bf20\") " pod="openshift-marketplace/redhat-marketplace-c6zkg" Mar 12 14:18:58 crc kubenswrapper[4778]: 
I0312 14:18:58.470965 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w46fk\" (UniqueName: \"kubernetes.io/projected/dfbf7e11-e585-48c4-a038-1a642b34bf20-kube-api-access-w46fk\") pod \"redhat-marketplace-c6zkg\" (UID: \"dfbf7e11-e585-48c4-a038-1a642b34bf20\") " pod="openshift-marketplace/redhat-marketplace-c6zkg"
Mar 12 14:18:58 crc kubenswrapper[4778]: I0312 14:18:58.471038 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfbf7e11-e585-48c4-a038-1a642b34bf20-catalog-content\") pod \"redhat-marketplace-c6zkg\" (UID: \"dfbf7e11-e585-48c4-a038-1a642b34bf20\") " pod="openshift-marketplace/redhat-marketplace-c6zkg"
Mar 12 14:18:58 crc kubenswrapper[4778]: I0312 14:18:58.471118 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfbf7e11-e585-48c4-a038-1a642b34bf20-utilities\") pod \"redhat-marketplace-c6zkg\" (UID: \"dfbf7e11-e585-48c4-a038-1a642b34bf20\") " pod="openshift-marketplace/redhat-marketplace-c6zkg"
Mar 12 14:18:58 crc kubenswrapper[4778]: I0312 14:18:58.471644 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfbf7e11-e585-48c4-a038-1a642b34bf20-catalog-content\") pod \"redhat-marketplace-c6zkg\" (UID: \"dfbf7e11-e585-48c4-a038-1a642b34bf20\") " pod="openshift-marketplace/redhat-marketplace-c6zkg"
Mar 12 14:18:58 crc kubenswrapper[4778]: I0312 14:18:58.471700 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfbf7e11-e585-48c4-a038-1a642b34bf20-utilities\") pod \"redhat-marketplace-c6zkg\" (UID: \"dfbf7e11-e585-48c4-a038-1a642b34bf20\") " pod="openshift-marketplace/redhat-marketplace-c6zkg"
Mar 12 14:18:58 crc kubenswrapper[4778]: I0312 14:18:58.496553 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w46fk\" (UniqueName: \"kubernetes.io/projected/dfbf7e11-e585-48c4-a038-1a642b34bf20-kube-api-access-w46fk\") pod \"redhat-marketplace-c6zkg\" (UID: \"dfbf7e11-e585-48c4-a038-1a642b34bf20\") " pod="openshift-marketplace/redhat-marketplace-c6zkg"
Mar 12 14:18:58 crc kubenswrapper[4778]: I0312 14:18:58.557492 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 12 14:18:58 crc kubenswrapper[4778]: I0312 14:18:58.557810 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 12 14:18:58 crc kubenswrapper[4778]: I0312 14:18:58.606811 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c6zkg"
Mar 12 14:18:59 crc kubenswrapper[4778]: I0312 14:18:59.073059 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c6zkg"]
Mar 12 14:18:59 crc kubenswrapper[4778]: I0312 14:18:59.881648 4778 generic.go:334] "Generic (PLEG): container finished" podID="dfbf7e11-e585-48c4-a038-1a642b34bf20" containerID="e6a04774d5c089a6caac52b6fd1cd02fef3c72791f4c34122389109951297e85" exitCode=0
Mar 12 14:18:59 crc kubenswrapper[4778]: I0312 14:18:59.881821 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c6zkg" event={"ID":"dfbf7e11-e585-48c4-a038-1a642b34bf20","Type":"ContainerDied","Data":"e6a04774d5c089a6caac52b6fd1cd02fef3c72791f4c34122389109951297e85"}
Mar 12 14:18:59 crc kubenswrapper[4778]: I0312 14:18:59.881931 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c6zkg" event={"ID":"dfbf7e11-e585-48c4-a038-1a642b34bf20","Type":"ContainerStarted","Data":"5f30e337fb736bed955aa279f581a9d18d0fa917fb0681de958f108990939dea"}
Mar 12 14:19:01 crc kubenswrapper[4778]: E0312 14:19:01.547175 4778 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddfbf7e11_e585_48c4_a038_1a642b34bf20.slice/crio-conmon-b857541cbcebd77863643bfc0e6b81bac07a2dd8aa0275412e4a79c4af5595d1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddfbf7e11_e585_48c4_a038_1a642b34bf20.slice/crio-b857541cbcebd77863643bfc0e6b81bac07a2dd8aa0275412e4a79c4af5595d1.scope\": RecentStats: unable to find data in memory cache]"
Mar 12 14:19:01 crc kubenswrapper[4778]: I0312 14:19:01.902317 4778 generic.go:334] "Generic (PLEG): container finished" podID="dfbf7e11-e585-48c4-a038-1a642b34bf20" containerID="b857541cbcebd77863643bfc0e6b81bac07a2dd8aa0275412e4a79c4af5595d1" exitCode=0
Mar 12 14:19:01 crc kubenswrapper[4778]: I0312 14:19:01.902392 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c6zkg" event={"ID":"dfbf7e11-e585-48c4-a038-1a642b34bf20","Type":"ContainerDied","Data":"b857541cbcebd77863643bfc0e6b81bac07a2dd8aa0275412e4a79c4af5595d1"}
Mar 12 14:19:01 crc kubenswrapper[4778]: I0312 14:19:01.904831 4778 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 12 14:19:02 crc kubenswrapper[4778]: I0312 14:19:02.914813 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c6zkg" event={"ID":"dfbf7e11-e585-48c4-a038-1a642b34bf20","Type":"ContainerStarted","Data":"27a67aa78ed15bca7e6416fb7234b4c96dc7d3121f30ecdc6273108b57e425c9"}
Mar 12 14:19:08 crc kubenswrapper[4778]: I0312 14:19:08.607735 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-c6zkg"
Mar 12 14:19:08 crc kubenswrapper[4778]: I0312 14:19:08.608030 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-c6zkg"
Mar 12 14:19:08 crc kubenswrapper[4778]: I0312 14:19:08.762576 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-c6zkg"
Mar 12 14:19:08 crc kubenswrapper[4778]: I0312 14:19:08.788368 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-c6zkg" podStartSLOduration=8.282948396 podStartE2EDuration="10.788351047s" podCreationTimestamp="2026-03-12 14:18:58 +0000 UTC" firstStartedPulling="2026-03-12 14:18:59.884095161 +0000 UTC m=+4158.332790557" lastFinishedPulling="2026-03-12 14:19:02.389497812 +0000 UTC m=+4160.838193208" observedRunningTime="2026-03-12 14:19:02.946936592 +0000 UTC m=+4161.395631988" watchObservedRunningTime="2026-03-12 14:19:08.788351047 +0000 UTC m=+4167.237046443"
Mar 12 14:19:09 crc kubenswrapper[4778]: I0312 14:19:09.020794 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-c6zkg"
Mar 12 14:19:09 crc kubenswrapper[4778]: I0312 14:19:09.085549 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c6zkg"]
Mar 12 14:19:10 crc kubenswrapper[4778]: I0312 14:19:10.987137 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-c6zkg" podUID="dfbf7e11-e585-48c4-a038-1a642b34bf20" containerName="registry-server" containerID="cri-o://27a67aa78ed15bca7e6416fb7234b4c96dc7d3121f30ecdc6273108b57e425c9" gracePeriod=2
Mar 12 14:19:11 crc kubenswrapper[4778]: I0312 14:19:11.634136 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c6zkg"
Mar 12 14:19:11 crc kubenswrapper[4778]: I0312 14:19:11.763789 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w46fk\" (UniqueName: \"kubernetes.io/projected/dfbf7e11-e585-48c4-a038-1a642b34bf20-kube-api-access-w46fk\") pod \"dfbf7e11-e585-48c4-a038-1a642b34bf20\" (UID: \"dfbf7e11-e585-48c4-a038-1a642b34bf20\") "
Mar 12 14:19:11 crc kubenswrapper[4778]: I0312 14:19:11.764292 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfbf7e11-e585-48c4-a038-1a642b34bf20-catalog-content\") pod \"dfbf7e11-e585-48c4-a038-1a642b34bf20\" (UID: \"dfbf7e11-e585-48c4-a038-1a642b34bf20\") "
Mar 12 14:19:11 crc kubenswrapper[4778]: I0312 14:19:11.764417 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfbf7e11-e585-48c4-a038-1a642b34bf20-utilities\") pod \"dfbf7e11-e585-48c4-a038-1a642b34bf20\" (UID: \"dfbf7e11-e585-48c4-a038-1a642b34bf20\") "
Mar 12 14:19:11 crc kubenswrapper[4778]: I0312 14:19:11.765934 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfbf7e11-e585-48c4-a038-1a642b34bf20-utilities" (OuterVolumeSpecName: "utilities") pod "dfbf7e11-e585-48c4-a038-1a642b34bf20" (UID: "dfbf7e11-e585-48c4-a038-1a642b34bf20"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 14:19:11 crc kubenswrapper[4778]: I0312 14:19:11.777273 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfbf7e11-e585-48c4-a038-1a642b34bf20-kube-api-access-w46fk" (OuterVolumeSpecName: "kube-api-access-w46fk") pod "dfbf7e11-e585-48c4-a038-1a642b34bf20" (UID: "dfbf7e11-e585-48c4-a038-1a642b34bf20"). InnerVolumeSpecName "kube-api-access-w46fk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:19:11 crc kubenswrapper[4778]: I0312 14:19:11.867547 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w46fk\" (UniqueName: \"kubernetes.io/projected/dfbf7e11-e585-48c4-a038-1a642b34bf20-kube-api-access-w46fk\") on node \"crc\" DevicePath \"\""
Mar 12 14:19:11 crc kubenswrapper[4778]: I0312 14:19:11.867818 4778 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfbf7e11-e585-48c4-a038-1a642b34bf20-utilities\") on node \"crc\" DevicePath \"\""
Mar 12 14:19:11 crc kubenswrapper[4778]: I0312 14:19:11.899166 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfbf7e11-e585-48c4-a038-1a642b34bf20-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dfbf7e11-e585-48c4-a038-1a642b34bf20" (UID: "dfbf7e11-e585-48c4-a038-1a642b34bf20"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 14:19:11 crc kubenswrapper[4778]: I0312 14:19:11.969492 4778 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfbf7e11-e585-48c4-a038-1a642b34bf20-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 12 14:19:12 crc kubenswrapper[4778]: I0312 14:19:12.000907 4778 generic.go:334] "Generic (PLEG): container finished" podID="dfbf7e11-e585-48c4-a038-1a642b34bf20" containerID="27a67aa78ed15bca7e6416fb7234b4c96dc7d3121f30ecdc6273108b57e425c9" exitCode=0
Mar 12 14:19:12 crc kubenswrapper[4778]: I0312 14:19:12.000985 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c6zkg"
Mar 12 14:19:12 crc kubenswrapper[4778]: I0312 14:19:12.001012 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c6zkg" event={"ID":"dfbf7e11-e585-48c4-a038-1a642b34bf20","Type":"ContainerDied","Data":"27a67aa78ed15bca7e6416fb7234b4c96dc7d3121f30ecdc6273108b57e425c9"}
Mar 12 14:19:12 crc kubenswrapper[4778]: I0312 14:19:12.002205 4778 scope.go:117] "RemoveContainer" containerID="27a67aa78ed15bca7e6416fb7234b4c96dc7d3121f30ecdc6273108b57e425c9"
Mar 12 14:19:12 crc kubenswrapper[4778]: I0312 14:19:12.002133 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c6zkg" event={"ID":"dfbf7e11-e585-48c4-a038-1a642b34bf20","Type":"ContainerDied","Data":"5f30e337fb736bed955aa279f581a9d18d0fa917fb0681de958f108990939dea"}
Mar 12 14:19:12 crc kubenswrapper[4778]: I0312 14:19:12.024150 4778 scope.go:117] "RemoveContainer" containerID="b857541cbcebd77863643bfc0e6b81bac07a2dd8aa0275412e4a79c4af5595d1"
Mar 12 14:19:12 crc kubenswrapper[4778]: I0312 14:19:12.046739 4778 scope.go:117] "RemoveContainer" containerID="e6a04774d5c089a6caac52b6fd1cd02fef3c72791f4c34122389109951297e85"
Mar 12 14:19:12 crc kubenswrapper[4778]: I0312 14:19:12.052995 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c6zkg"]
Mar 12 14:19:12 crc kubenswrapper[4778]: I0312 14:19:12.062876 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-c6zkg"]
Mar 12 14:19:12 crc kubenswrapper[4778]: I0312 14:19:12.098036 4778 scope.go:117] "RemoveContainer" containerID="27a67aa78ed15bca7e6416fb7234b4c96dc7d3121f30ecdc6273108b57e425c9"
Mar 12 14:19:12 crc kubenswrapper[4778]: E0312 14:19:12.098626 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27a67aa78ed15bca7e6416fb7234b4c96dc7d3121f30ecdc6273108b57e425c9\": container with ID starting with 27a67aa78ed15bca7e6416fb7234b4c96dc7d3121f30ecdc6273108b57e425c9 not found: ID does not exist" containerID="27a67aa78ed15bca7e6416fb7234b4c96dc7d3121f30ecdc6273108b57e425c9"
Mar 12 14:19:12 crc kubenswrapper[4778]: I0312 14:19:12.098665 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27a67aa78ed15bca7e6416fb7234b4c96dc7d3121f30ecdc6273108b57e425c9"} err="failed to get container status \"27a67aa78ed15bca7e6416fb7234b4c96dc7d3121f30ecdc6273108b57e425c9\": rpc error: code = NotFound desc = could not find container \"27a67aa78ed15bca7e6416fb7234b4c96dc7d3121f30ecdc6273108b57e425c9\": container with ID starting with 27a67aa78ed15bca7e6416fb7234b4c96dc7d3121f30ecdc6273108b57e425c9 not found: ID does not exist"
Mar 12 14:19:12 crc kubenswrapper[4778]: I0312 14:19:12.098687 4778 scope.go:117] "RemoveContainer" containerID="b857541cbcebd77863643bfc0e6b81bac07a2dd8aa0275412e4a79c4af5595d1"
Mar 12 14:19:12 crc kubenswrapper[4778]: E0312 14:19:12.099655 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b857541cbcebd77863643bfc0e6b81bac07a2dd8aa0275412e4a79c4af5595d1\": container with ID starting with b857541cbcebd77863643bfc0e6b81bac07a2dd8aa0275412e4a79c4af5595d1 not found: ID does not exist" containerID="b857541cbcebd77863643bfc0e6b81bac07a2dd8aa0275412e4a79c4af5595d1"
Mar 12 14:19:12 crc kubenswrapper[4778]: I0312 14:19:12.099865 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b857541cbcebd77863643bfc0e6b81bac07a2dd8aa0275412e4a79c4af5595d1"} err="failed to get container status \"b857541cbcebd77863643bfc0e6b81bac07a2dd8aa0275412e4a79c4af5595d1\": rpc error: code = NotFound desc = could not find container \"b857541cbcebd77863643bfc0e6b81bac07a2dd8aa0275412e4a79c4af5595d1\": container with ID starting with b857541cbcebd77863643bfc0e6b81bac07a2dd8aa0275412e4a79c4af5595d1 not found: ID does not exist"
Mar 12 14:19:12 crc kubenswrapper[4778]: I0312 14:19:12.099974 4778 scope.go:117] "RemoveContainer" containerID="e6a04774d5c089a6caac52b6fd1cd02fef3c72791f4c34122389109951297e85"
Mar 12 14:19:12 crc kubenswrapper[4778]: E0312 14:19:12.100773 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6a04774d5c089a6caac52b6fd1cd02fef3c72791f4c34122389109951297e85\": container with ID starting with e6a04774d5c089a6caac52b6fd1cd02fef3c72791f4c34122389109951297e85 not found: ID does not exist" containerID="e6a04774d5c089a6caac52b6fd1cd02fef3c72791f4c34122389109951297e85"
Mar 12 14:19:12 crc kubenswrapper[4778]: I0312 14:19:12.100816 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6a04774d5c089a6caac52b6fd1cd02fef3c72791f4c34122389109951297e85"} err="failed to get container status \"e6a04774d5c089a6caac52b6fd1cd02fef3c72791f4c34122389109951297e85\": rpc error: code = NotFound desc = could not find container \"e6a04774d5c089a6caac52b6fd1cd02fef3c72791f4c34122389109951297e85\": container with ID starting with e6a04774d5c089a6caac52b6fd1cd02fef3c72791f4c34122389109951297e85 not found: ID does not exist"
Mar 12 14:19:12 crc kubenswrapper[4778]: I0312 14:19:12.264707 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfbf7e11-e585-48c4-a038-1a642b34bf20" path="/var/lib/kubelet/pods/dfbf7e11-e585-48c4-a038-1a642b34bf20/volumes"
Mar 12 14:19:28 crc kubenswrapper[4778]: I0312 14:19:28.557513 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 12 14:19:28 crc kubenswrapper[4778]: I0312 14:19:28.558116 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 12 14:19:28 crc kubenswrapper[4778]: I0312 14:19:28.558197 4778 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2qx88"
Mar 12 14:19:28 crc kubenswrapper[4778]: I0312 14:19:28.559018 4778 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a6494f4559bd62f54e1e656b9c39bd8218ddcffc6f2d4766fd788af23c632a2c"} pod="openshift-machine-config-operator/machine-config-daemon-2qx88" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 12 14:19:28 crc kubenswrapper[4778]: I0312 14:19:28.559078 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" containerID="cri-o://a6494f4559bd62f54e1e656b9c39bd8218ddcffc6f2d4766fd788af23c632a2c" gracePeriod=600
Mar 12 14:19:29 crc kubenswrapper[4778]: I0312 14:19:29.159649 4778 generic.go:334] "Generic (PLEG): container finished" podID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerID="a6494f4559bd62f54e1e656b9c39bd8218ddcffc6f2d4766fd788af23c632a2c" exitCode=0
Mar 12 14:19:29 crc kubenswrapper[4778]: I0312 14:19:29.159957 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" event={"ID":"24438fc6-dab0-4a9e-8b97-2532da76d9cd","Type":"ContainerDied","Data":"a6494f4559bd62f54e1e656b9c39bd8218ddcffc6f2d4766fd788af23c632a2c"}
Mar 12 14:19:29 crc kubenswrapper[4778]: I0312 14:19:29.159986 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" event={"ID":"24438fc6-dab0-4a9e-8b97-2532da76d9cd","Type":"ContainerStarted","Data":"994fa1e1cf0527d97bf647f1d2a50ed301bda64c2a862df7b100daec9859483a"}
Mar 12 14:19:29 crc kubenswrapper[4778]: I0312 14:19:29.160001 4778 scope.go:117] "RemoveContainer" containerID="1f141018aeb4c8c1d3d00926888126781b45815cde38ed496c177b71b2ba7fd2"
Mar 12 14:20:00 crc kubenswrapper[4778]: I0312 14:20:00.152003 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555420-vqx98"]
Mar 12 14:20:00 crc kubenswrapper[4778]: E0312 14:20:00.153216 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfbf7e11-e585-48c4-a038-1a642b34bf20" containerName="extract-utilities"
Mar 12 14:20:00 crc kubenswrapper[4778]: I0312 14:20:00.153235 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfbf7e11-e585-48c4-a038-1a642b34bf20" containerName="extract-utilities"
Mar 12 14:20:00 crc kubenswrapper[4778]: E0312 14:20:00.153260 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfbf7e11-e585-48c4-a038-1a642b34bf20" containerName="extract-content"
Mar 12 14:20:00 crc kubenswrapper[4778]: I0312 14:20:00.153268 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfbf7e11-e585-48c4-a038-1a642b34bf20" containerName="extract-content"
Mar 12 14:20:00 crc kubenswrapper[4778]: E0312 14:20:00.153279 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfbf7e11-e585-48c4-a038-1a642b34bf20" containerName="registry-server"
Mar 12 14:20:00 crc kubenswrapper[4778]: I0312 14:20:00.153287 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfbf7e11-e585-48c4-a038-1a642b34bf20" containerName="registry-server"
Mar 12 14:20:00 crc kubenswrapper[4778]: I0312 14:20:00.153548 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfbf7e11-e585-48c4-a038-1a642b34bf20" containerName="registry-server"
Mar 12 14:20:00 crc kubenswrapper[4778]: I0312 14:20:00.154434 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555420-vqx98"
Mar 12 14:20:00 crc kubenswrapper[4778]: I0312 14:20:00.160682 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 12 14:20:00 crc kubenswrapper[4778]: I0312 14:20:00.161261 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 12 14:20:00 crc kubenswrapper[4778]: I0312 14:20:00.161402 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl"
Mar 12 14:20:00 crc kubenswrapper[4778]: I0312 14:20:00.161451 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555420-vqx98"]
Mar 12 14:20:00 crc kubenswrapper[4778]: I0312 14:20:00.209999 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hd6z\" (UniqueName: \"kubernetes.io/projected/3d3b2fac-f000-4f5a-b253-e54ae85d507f-kube-api-access-6hd6z\") pod \"auto-csr-approver-29555420-vqx98\" (UID: \"3d3b2fac-f000-4f5a-b253-e54ae85d507f\") " pod="openshift-infra/auto-csr-approver-29555420-vqx98"
Mar 12 14:20:00 crc kubenswrapper[4778]: I0312 14:20:00.312008 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hd6z\" (UniqueName: \"kubernetes.io/projected/3d3b2fac-f000-4f5a-b253-e54ae85d507f-kube-api-access-6hd6z\") pod \"auto-csr-approver-29555420-vqx98\" (UID: \"3d3b2fac-f000-4f5a-b253-e54ae85d507f\") " pod="openshift-infra/auto-csr-approver-29555420-vqx98"
Mar 12 14:20:00 crc kubenswrapper[4778]: I0312 14:20:00.332012 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hd6z\" (UniqueName: \"kubernetes.io/projected/3d3b2fac-f000-4f5a-b253-e54ae85d507f-kube-api-access-6hd6z\") pod \"auto-csr-approver-29555420-vqx98\" (UID: \"3d3b2fac-f000-4f5a-b253-e54ae85d507f\") " pod="openshift-infra/auto-csr-approver-29555420-vqx98"
Mar 12 14:20:00 crc kubenswrapper[4778]: I0312 14:20:00.475535 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555420-vqx98"
Mar 12 14:20:00 crc kubenswrapper[4778]: I0312 14:20:00.906898 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555420-vqx98"]
Mar 12 14:20:01 crc kubenswrapper[4778]: I0312 14:20:01.460571 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555420-vqx98" event={"ID":"3d3b2fac-f000-4f5a-b253-e54ae85d507f","Type":"ContainerStarted","Data":"5ebb0322b4e511455b511c13308ff5a03d744e0670cff67ace653f8b35850e2c"}
Mar 12 14:20:03 crc kubenswrapper[4778]: I0312 14:20:03.480476 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555420-vqx98" event={"ID":"3d3b2fac-f000-4f5a-b253-e54ae85d507f","Type":"ContainerStarted","Data":"45d8e22a4c6b9a2b198c09597b6bc6f24b127ce1a5abca778cee677c28671528"}
Mar 12 14:20:03 crc kubenswrapper[4778]: I0312 14:20:03.504618 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555420-vqx98" podStartSLOduration=2.254114855 podStartE2EDuration="3.504594157s" podCreationTimestamp="2026-03-12 14:20:00 +0000 UTC" firstStartedPulling="2026-03-12 14:20:01.234960116 +0000 UTC m=+4219.683655522" lastFinishedPulling="2026-03-12 14:20:02.485439428 +0000 UTC m=+4220.934134824" observedRunningTime="2026-03-12 14:20:03.49695438 +0000 UTC m=+4221.945649776" watchObservedRunningTime="2026-03-12 14:20:03.504594157 +0000 UTC m=+4221.953289553"
Mar 12 14:20:04 crc kubenswrapper[4778]: I0312 14:20:04.492517 4778 generic.go:334] "Generic (PLEG): container finished" podID="3d3b2fac-f000-4f5a-b253-e54ae85d507f" containerID="45d8e22a4c6b9a2b198c09597b6bc6f24b127ce1a5abca778cee677c28671528" exitCode=0
Mar 12 14:20:04 crc kubenswrapper[4778]: I0312 14:20:04.492619 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555420-vqx98" event={"ID":"3d3b2fac-f000-4f5a-b253-e54ae85d507f","Type":"ContainerDied","Data":"45d8e22a4c6b9a2b198c09597b6bc6f24b127ce1a5abca778cee677c28671528"}
Mar 12 14:20:06 crc kubenswrapper[4778]: I0312 14:20:06.102927 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555420-vqx98"
Mar 12 14:20:06 crc kubenswrapper[4778]: I0312 14:20:06.133163 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hd6z\" (UniqueName: \"kubernetes.io/projected/3d3b2fac-f000-4f5a-b253-e54ae85d507f-kube-api-access-6hd6z\") pod \"3d3b2fac-f000-4f5a-b253-e54ae85d507f\" (UID: \"3d3b2fac-f000-4f5a-b253-e54ae85d507f\") "
Mar 12 14:20:06 crc kubenswrapper[4778]: I0312 14:20:06.148370 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d3b2fac-f000-4f5a-b253-e54ae85d507f-kube-api-access-6hd6z" (OuterVolumeSpecName: "kube-api-access-6hd6z") pod "3d3b2fac-f000-4f5a-b253-e54ae85d507f" (UID: "3d3b2fac-f000-4f5a-b253-e54ae85d507f"). InnerVolumeSpecName "kube-api-access-6hd6z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:20:06 crc kubenswrapper[4778]: I0312 14:20:06.235430 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6hd6z\" (UniqueName: \"kubernetes.io/projected/3d3b2fac-f000-4f5a-b253-e54ae85d507f-kube-api-access-6hd6z\") on node \"crc\" DevicePath \"\""
Mar 12 14:20:06 crc kubenswrapper[4778]: I0312 14:20:06.514291 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555420-vqx98" event={"ID":"3d3b2fac-f000-4f5a-b253-e54ae85d507f","Type":"ContainerDied","Data":"5ebb0322b4e511455b511c13308ff5a03d744e0670cff67ace653f8b35850e2c"}
Mar 12 14:20:06 crc kubenswrapper[4778]: I0312 14:20:06.514333 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ebb0322b4e511455b511c13308ff5a03d744e0670cff67ace653f8b35850e2c"
Mar 12 14:20:06 crc kubenswrapper[4778]: I0312 14:20:06.514410 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555420-vqx98"
Mar 12 14:20:06 crc kubenswrapper[4778]: I0312 14:20:06.585985 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555414-sk27k"]
Mar 12 14:20:06 crc kubenswrapper[4778]: I0312 14:20:06.595621 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555414-sk27k"]
Mar 12 14:20:08 crc kubenswrapper[4778]: I0312 14:20:08.265618 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="066c6c88-5aea-4678-88a0-ec5c556ee008" path="/var/lib/kubelet/pods/066c6c88-5aea-4678-88a0-ec5c556ee008/volumes"
Mar 12 14:20:21 crc kubenswrapper[4778]: I0312 14:20:21.377532 4778 scope.go:117] "RemoveContainer" containerID="3925769f3c54add574a18597a06eea490ae5d1cab077561f5ae8b471c0db5519"
Mar 12 14:21:15 crc kubenswrapper[4778]: I0312 14:21:15.786058 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wgfdw"]
Mar 12 14:21:15 crc kubenswrapper[4778]: E0312 14:21:15.786955 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d3b2fac-f000-4f5a-b253-e54ae85d507f" containerName="oc"
Mar 12 14:21:15 crc kubenswrapper[4778]: I0312 14:21:15.786968 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d3b2fac-f000-4f5a-b253-e54ae85d507f" containerName="oc"
Mar 12 14:21:15 crc kubenswrapper[4778]: I0312 14:21:15.787292 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d3b2fac-f000-4f5a-b253-e54ae85d507f" containerName="oc"
Mar 12 14:21:15 crc kubenswrapper[4778]: I0312 14:21:15.788672 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wgfdw"
Mar 12 14:21:15 crc kubenswrapper[4778]: I0312 14:21:15.800414 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wgfdw"]
Mar 12 14:21:15 crc kubenswrapper[4778]: I0312 14:21:15.869140 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bc2634e-eaa3-4b87-977d-000a3fe0ccbe-catalog-content\") pod \"redhat-operators-wgfdw\" (UID: \"0bc2634e-eaa3-4b87-977d-000a3fe0ccbe\") " pod="openshift-marketplace/redhat-operators-wgfdw"
Mar 12 14:21:15 crc kubenswrapper[4778]: I0312 14:21:15.869221 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlsbx\" (UniqueName: \"kubernetes.io/projected/0bc2634e-eaa3-4b87-977d-000a3fe0ccbe-kube-api-access-rlsbx\") pod \"redhat-operators-wgfdw\" (UID: \"0bc2634e-eaa3-4b87-977d-000a3fe0ccbe\") " pod="openshift-marketplace/redhat-operators-wgfdw"
Mar 12 14:21:15 crc kubenswrapper[4778]: I0312 14:21:15.869462 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bc2634e-eaa3-4b87-977d-000a3fe0ccbe-utilities\") pod \"redhat-operators-wgfdw\" (UID: \"0bc2634e-eaa3-4b87-977d-000a3fe0ccbe\") " pod="openshift-marketplace/redhat-operators-wgfdw"
Mar 12 14:21:15 crc kubenswrapper[4778]: I0312 14:21:15.971685 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bc2634e-eaa3-4b87-977d-000a3fe0ccbe-catalog-content\") pod \"redhat-operators-wgfdw\" (UID: \"0bc2634e-eaa3-4b87-977d-000a3fe0ccbe\") " pod="openshift-marketplace/redhat-operators-wgfdw"
Mar 12 14:21:15 crc kubenswrapper[4778]: I0312 14:21:15.971747 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlsbx\" (UniqueName: \"kubernetes.io/projected/0bc2634e-eaa3-4b87-977d-000a3fe0ccbe-kube-api-access-rlsbx\") pod \"redhat-operators-wgfdw\" (UID: \"0bc2634e-eaa3-4b87-977d-000a3fe0ccbe\") " pod="openshift-marketplace/redhat-operators-wgfdw"
Mar 12 14:21:15 crc kubenswrapper[4778]: I0312 14:21:15.971903 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bc2634e-eaa3-4b87-977d-000a3fe0ccbe-utilities\") pod \"redhat-operators-wgfdw\" (UID: \"0bc2634e-eaa3-4b87-977d-000a3fe0ccbe\") " pod="openshift-marketplace/redhat-operators-wgfdw"
Mar 12 14:21:15 crc kubenswrapper[4778]: I0312 14:21:15.972348 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bc2634e-eaa3-4b87-977d-000a3fe0ccbe-catalog-content\") pod \"redhat-operators-wgfdw\" (UID: \"0bc2634e-eaa3-4b87-977d-000a3fe0ccbe\") " pod="openshift-marketplace/redhat-operators-wgfdw"
Mar 12 14:21:15 crc kubenswrapper[4778]: I0312 14:21:15.972380 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bc2634e-eaa3-4b87-977d-000a3fe0ccbe-utilities\") pod \"redhat-operators-wgfdw\" (UID: \"0bc2634e-eaa3-4b87-977d-000a3fe0ccbe\") " pod="openshift-marketplace/redhat-operators-wgfdw"
Mar 12 14:21:15 crc kubenswrapper[4778]: I0312 14:21:15.999703 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlsbx\" (UniqueName: \"kubernetes.io/projected/0bc2634e-eaa3-4b87-977d-000a3fe0ccbe-kube-api-access-rlsbx\") pod \"redhat-operators-wgfdw\" (UID: \"0bc2634e-eaa3-4b87-977d-000a3fe0ccbe\") " pod="openshift-marketplace/redhat-operators-wgfdw"
Mar 12 14:21:16 crc kubenswrapper[4778]: I0312 14:21:16.109476 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wgfdw"
Mar 12 14:21:16 crc kubenswrapper[4778]: I0312 14:21:16.663339 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wgfdw"]
Mar 12 14:21:17 crc kubenswrapper[4778]: I0312 14:21:17.125925 4778 generic.go:334] "Generic (PLEG): container finished" podID="0bc2634e-eaa3-4b87-977d-000a3fe0ccbe" containerID="e16cdbdf5f70c6a7cdf18ed234bc9607178e963e177bd90f1f9a079c2ffdca93" exitCode=0
Mar 12 14:21:17 crc kubenswrapper[4778]: I0312 14:21:17.125983 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wgfdw" event={"ID":"0bc2634e-eaa3-4b87-977d-000a3fe0ccbe","Type":"ContainerDied","Data":"e16cdbdf5f70c6a7cdf18ed234bc9607178e963e177bd90f1f9a079c2ffdca93"}
Mar 12 14:21:17 crc kubenswrapper[4778]: I0312 14:21:17.126229 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wgfdw" event={"ID":"0bc2634e-eaa3-4b87-977d-000a3fe0ccbe","Type":"ContainerStarted","Data":"089a716699a43fe932443bd8a22b6bddce9e43f3949e83593a8c791039aa04a6"}
Mar 12 14:21:18 crc kubenswrapper[4778]: I0312 14:21:18.138575 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wgfdw" event={"ID":"0bc2634e-eaa3-4b87-977d-000a3fe0ccbe","Type":"ContainerStarted","Data":"58bb40fbf4d275731a341f0ca74d14e00964d2cc83f015da64db3332843f4f6d"}
Mar 12 14:21:23 crc kubenswrapper[4778]: I0312 14:21:23.183883 4778 generic.go:334] "Generic (PLEG): container finished" podID="0bc2634e-eaa3-4b87-977d-000a3fe0ccbe" containerID="58bb40fbf4d275731a341f0ca74d14e00964d2cc83f015da64db3332843f4f6d" exitCode=0
Mar 12 14:21:23 crc kubenswrapper[4778]: I0312 14:21:23.184084 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wgfdw" event={"ID":"0bc2634e-eaa3-4b87-977d-000a3fe0ccbe","Type":"ContainerDied","Data":"58bb40fbf4d275731a341f0ca74d14e00964d2cc83f015da64db3332843f4f6d"}
Mar 12 14:21:24 crc kubenswrapper[4778]: I0312 14:21:24.194438 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wgfdw" event={"ID":"0bc2634e-eaa3-4b87-977d-000a3fe0ccbe","Type":"ContainerStarted","Data":"db8b5f8281bbd18e975804d8003a6e06615d4d5f74acea8bc7b682044597e6e3"}
Mar 12 14:21:24 crc kubenswrapper[4778]: I0312 14:21:24.222558 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wgfdw" podStartSLOduration=2.516719762 podStartE2EDuration="9.222536099s" podCreationTimestamp="2026-03-12 14:21:15 +0000 UTC" firstStartedPulling="2026-03-12 14:21:17.127568465 +0000 UTC m=+4295.576263861" lastFinishedPulling="2026-03-12 14:21:23.833384802 +0000 UTC m=+4302.282080198" observedRunningTime="2026-03-12 14:21:24.215679284 +0000 UTC m=+4302.664374700" watchObservedRunningTime="2026-03-12 14:21:24.222536099 +0000 UTC m=+4302.671231495"
Mar 12 14:21:26 crc kubenswrapper[4778]: I0312 14:21:26.110343 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wgfdw"
Mar 12 14:21:26 crc kubenswrapper[4778]: I0312 14:21:26.110694 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wgfdw"
Mar 12 14:21:27 crc kubenswrapper[4778]: I0312 14:21:27.160804 4778 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wgfdw" podUID="0bc2634e-eaa3-4b87-977d-000a3fe0ccbe" containerName="registry-server" probeResult="failure" output=<
Mar 12 14:21:27 crc kubenswrapper[4778]: timeout: failed to connect service ":50051" within 1s
Mar 12 14:21:27 crc kubenswrapper[4778]: >
Mar 12 14:21:28 crc kubenswrapper[4778]: I0312 14:21:28.557373 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 12 14:21:28 crc kubenswrapper[4778]: I0312 14:21:28.557739 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 12 14:21:37 crc kubenswrapper[4778]: I0312 14:21:37.157265 4778 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wgfdw" podUID="0bc2634e-eaa3-4b87-977d-000a3fe0ccbe" containerName="registry-server" probeResult="failure" output=<
Mar 12 14:21:37 crc kubenswrapper[4778]: timeout: failed to connect service ":50051" within 1s
Mar 12 14:21:37 crc kubenswrapper[4778]: >
Mar 12 14:21:47 crc kubenswrapper[4778]: I0312 14:21:47.166474 4778 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wgfdw" podUID="0bc2634e-eaa3-4b87-977d-000a3fe0ccbe" containerName="registry-server" probeResult="failure" output=<
Mar 12 14:21:47 crc kubenswrapper[4778]: timeout: failed to connect service ":50051" within 1s
Mar 12 14:21:47 crc kubenswrapper[4778]: >
Mar 12 14:21:56 crc kubenswrapper[4778]: I0312 14:21:56.164256 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wgfdw"
Mar 12 14:21:56 crc kubenswrapper[4778]: I0312 14:21:56.229599 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wgfdw"
Mar 12 14:21:56 crc kubenswrapper[4778]: I0312 14:21:56.402016 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wgfdw"]
Mar 12 14:21:57 crc kubenswrapper[4778]: I0312 14:21:57.503273 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wgfdw" podUID="0bc2634e-eaa3-4b87-977d-000a3fe0ccbe" containerName="registry-server" containerID="cri-o://db8b5f8281bbd18e975804d8003a6e06615d4d5f74acea8bc7b682044597e6e3" gracePeriod=2
Mar 12 14:21:58 crc kubenswrapper[4778]: I0312 14:21:58.296688 4778 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-operators-wgfdw" Mar 12 14:21:58 crc kubenswrapper[4778]: I0312 14:21:58.343872 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bc2634e-eaa3-4b87-977d-000a3fe0ccbe-catalog-content\") pod \"0bc2634e-eaa3-4b87-977d-000a3fe0ccbe\" (UID: \"0bc2634e-eaa3-4b87-977d-000a3fe0ccbe\") " Mar 12 14:21:58 crc kubenswrapper[4778]: I0312 14:21:58.343950 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlsbx\" (UniqueName: \"kubernetes.io/projected/0bc2634e-eaa3-4b87-977d-000a3fe0ccbe-kube-api-access-rlsbx\") pod \"0bc2634e-eaa3-4b87-977d-000a3fe0ccbe\" (UID: \"0bc2634e-eaa3-4b87-977d-000a3fe0ccbe\") " Mar 12 14:21:58 crc kubenswrapper[4778]: I0312 14:21:58.343975 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bc2634e-eaa3-4b87-977d-000a3fe0ccbe-utilities\") pod \"0bc2634e-eaa3-4b87-977d-000a3fe0ccbe\" (UID: \"0bc2634e-eaa3-4b87-977d-000a3fe0ccbe\") " Mar 12 14:21:58 crc kubenswrapper[4778]: I0312 14:21:58.346511 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bc2634e-eaa3-4b87-977d-000a3fe0ccbe-utilities" (OuterVolumeSpecName: "utilities") pod "0bc2634e-eaa3-4b87-977d-000a3fe0ccbe" (UID: "0bc2634e-eaa3-4b87-977d-000a3fe0ccbe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:21:58 crc kubenswrapper[4778]: I0312 14:21:58.352424 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bc2634e-eaa3-4b87-977d-000a3fe0ccbe-kube-api-access-rlsbx" (OuterVolumeSpecName: "kube-api-access-rlsbx") pod "0bc2634e-eaa3-4b87-977d-000a3fe0ccbe" (UID: "0bc2634e-eaa3-4b87-977d-000a3fe0ccbe"). InnerVolumeSpecName "kube-api-access-rlsbx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:21:58 crc kubenswrapper[4778]: I0312 14:21:58.447275 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlsbx\" (UniqueName: \"kubernetes.io/projected/0bc2634e-eaa3-4b87-977d-000a3fe0ccbe-kube-api-access-rlsbx\") on node \"crc\" DevicePath \"\"" Mar 12 14:21:58 crc kubenswrapper[4778]: I0312 14:21:58.447318 4778 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bc2634e-eaa3-4b87-977d-000a3fe0ccbe-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 14:21:58 crc kubenswrapper[4778]: I0312 14:21:58.502072 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bc2634e-eaa3-4b87-977d-000a3fe0ccbe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0bc2634e-eaa3-4b87-977d-000a3fe0ccbe" (UID: "0bc2634e-eaa3-4b87-977d-000a3fe0ccbe"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:21:58 crc kubenswrapper[4778]: I0312 14:21:58.515715 4778 generic.go:334] "Generic (PLEG): container finished" podID="0bc2634e-eaa3-4b87-977d-000a3fe0ccbe" containerID="db8b5f8281bbd18e975804d8003a6e06615d4d5f74acea8bc7b682044597e6e3" exitCode=0 Mar 12 14:21:58 crc kubenswrapper[4778]: I0312 14:21:58.515769 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wgfdw" event={"ID":"0bc2634e-eaa3-4b87-977d-000a3fe0ccbe","Type":"ContainerDied","Data":"db8b5f8281bbd18e975804d8003a6e06615d4d5f74acea8bc7b682044597e6e3"} Mar 12 14:21:58 crc kubenswrapper[4778]: I0312 14:21:58.515801 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wgfdw" event={"ID":"0bc2634e-eaa3-4b87-977d-000a3fe0ccbe","Type":"ContainerDied","Data":"089a716699a43fe932443bd8a22b6bddce9e43f3949e83593a8c791039aa04a6"} Mar 12 14:21:58 crc kubenswrapper[4778]: I0312 14:21:58.515819 
4778 scope.go:117] "RemoveContainer" containerID="db8b5f8281bbd18e975804d8003a6e06615d4d5f74acea8bc7b682044597e6e3" Mar 12 14:21:58 crc kubenswrapper[4778]: I0312 14:21:58.515856 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wgfdw" Mar 12 14:21:58 crc kubenswrapper[4778]: I0312 14:21:58.537836 4778 scope.go:117] "RemoveContainer" containerID="58bb40fbf4d275731a341f0ca74d14e00964d2cc83f015da64db3332843f4f6d" Mar 12 14:21:58 crc kubenswrapper[4778]: I0312 14:21:58.550344 4778 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bc2634e-eaa3-4b87-977d-000a3fe0ccbe-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 14:21:58 crc kubenswrapper[4778]: I0312 14:21:58.551596 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wgfdw"] Mar 12 14:21:58 crc kubenswrapper[4778]: I0312 14:21:58.557880 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 14:21:58 crc kubenswrapper[4778]: I0312 14:21:58.557965 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 14:21:58 crc kubenswrapper[4778]: I0312 14:21:58.560809 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wgfdw"] Mar 12 14:21:58 crc kubenswrapper[4778]: I0312 14:21:58.576730 4778 scope.go:117] "RemoveContainer" 
containerID="e16cdbdf5f70c6a7cdf18ed234bc9607178e963e177bd90f1f9a079c2ffdca93" Mar 12 14:21:58 crc kubenswrapper[4778]: I0312 14:21:58.615398 4778 scope.go:117] "RemoveContainer" containerID="db8b5f8281bbd18e975804d8003a6e06615d4d5f74acea8bc7b682044597e6e3" Mar 12 14:21:58 crc kubenswrapper[4778]: E0312 14:21:58.616053 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db8b5f8281bbd18e975804d8003a6e06615d4d5f74acea8bc7b682044597e6e3\": container with ID starting with db8b5f8281bbd18e975804d8003a6e06615d4d5f74acea8bc7b682044597e6e3 not found: ID does not exist" containerID="db8b5f8281bbd18e975804d8003a6e06615d4d5f74acea8bc7b682044597e6e3" Mar 12 14:21:58 crc kubenswrapper[4778]: I0312 14:21:58.616087 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db8b5f8281bbd18e975804d8003a6e06615d4d5f74acea8bc7b682044597e6e3"} err="failed to get container status \"db8b5f8281bbd18e975804d8003a6e06615d4d5f74acea8bc7b682044597e6e3\": rpc error: code = NotFound desc = could not find container \"db8b5f8281bbd18e975804d8003a6e06615d4d5f74acea8bc7b682044597e6e3\": container with ID starting with db8b5f8281bbd18e975804d8003a6e06615d4d5f74acea8bc7b682044597e6e3 not found: ID does not exist" Mar 12 14:21:58 crc kubenswrapper[4778]: I0312 14:21:58.616108 4778 scope.go:117] "RemoveContainer" containerID="58bb40fbf4d275731a341f0ca74d14e00964d2cc83f015da64db3332843f4f6d" Mar 12 14:21:58 crc kubenswrapper[4778]: E0312 14:21:58.616615 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58bb40fbf4d275731a341f0ca74d14e00964d2cc83f015da64db3332843f4f6d\": container with ID starting with 58bb40fbf4d275731a341f0ca74d14e00964d2cc83f015da64db3332843f4f6d not found: ID does not exist" containerID="58bb40fbf4d275731a341f0ca74d14e00964d2cc83f015da64db3332843f4f6d" Mar 12 14:21:58 crc 
kubenswrapper[4778]: I0312 14:21:58.616635 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58bb40fbf4d275731a341f0ca74d14e00964d2cc83f015da64db3332843f4f6d"} err="failed to get container status \"58bb40fbf4d275731a341f0ca74d14e00964d2cc83f015da64db3332843f4f6d\": rpc error: code = NotFound desc = could not find container \"58bb40fbf4d275731a341f0ca74d14e00964d2cc83f015da64db3332843f4f6d\": container with ID starting with 58bb40fbf4d275731a341f0ca74d14e00964d2cc83f015da64db3332843f4f6d not found: ID does not exist" Mar 12 14:21:58 crc kubenswrapper[4778]: I0312 14:21:58.616646 4778 scope.go:117] "RemoveContainer" containerID="e16cdbdf5f70c6a7cdf18ed234bc9607178e963e177bd90f1f9a079c2ffdca93" Mar 12 14:21:58 crc kubenswrapper[4778]: E0312 14:21:58.617008 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e16cdbdf5f70c6a7cdf18ed234bc9607178e963e177bd90f1f9a079c2ffdca93\": container with ID starting with e16cdbdf5f70c6a7cdf18ed234bc9607178e963e177bd90f1f9a079c2ffdca93 not found: ID does not exist" containerID="e16cdbdf5f70c6a7cdf18ed234bc9607178e963e177bd90f1f9a079c2ffdca93" Mar 12 14:21:58 crc kubenswrapper[4778]: I0312 14:21:58.617043 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e16cdbdf5f70c6a7cdf18ed234bc9607178e963e177bd90f1f9a079c2ffdca93"} err="failed to get container status \"e16cdbdf5f70c6a7cdf18ed234bc9607178e963e177bd90f1f9a079c2ffdca93\": rpc error: code = NotFound desc = could not find container \"e16cdbdf5f70c6a7cdf18ed234bc9607178e963e177bd90f1f9a079c2ffdca93\": container with ID starting with e16cdbdf5f70c6a7cdf18ed234bc9607178e963e177bd90f1f9a079c2ffdca93 not found: ID does not exist" Mar 12 14:22:00 crc kubenswrapper[4778]: I0312 14:22:00.152727 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555422-kw6c8"] Mar 12 14:22:00 
crc kubenswrapper[4778]: E0312 14:22:00.153690 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bc2634e-eaa3-4b87-977d-000a3fe0ccbe" containerName="extract-utilities" Mar 12 14:22:00 crc kubenswrapper[4778]: I0312 14:22:00.153710 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bc2634e-eaa3-4b87-977d-000a3fe0ccbe" containerName="extract-utilities" Mar 12 14:22:00 crc kubenswrapper[4778]: E0312 14:22:00.153742 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bc2634e-eaa3-4b87-977d-000a3fe0ccbe" containerName="extract-content" Mar 12 14:22:00 crc kubenswrapper[4778]: I0312 14:22:00.153750 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bc2634e-eaa3-4b87-977d-000a3fe0ccbe" containerName="extract-content" Mar 12 14:22:00 crc kubenswrapper[4778]: E0312 14:22:00.153764 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bc2634e-eaa3-4b87-977d-000a3fe0ccbe" containerName="registry-server" Mar 12 14:22:00 crc kubenswrapper[4778]: I0312 14:22:00.153774 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bc2634e-eaa3-4b87-977d-000a3fe0ccbe" containerName="registry-server" Mar 12 14:22:00 crc kubenswrapper[4778]: I0312 14:22:00.153991 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bc2634e-eaa3-4b87-977d-000a3fe0ccbe" containerName="registry-server" Mar 12 14:22:00 crc kubenswrapper[4778]: I0312 14:22:00.154835 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555422-kw6c8" Mar 12 14:22:00 crc kubenswrapper[4778]: I0312 14:22:00.160349 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 14:22:00 crc kubenswrapper[4778]: I0312 14:22:00.160548 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 14:22:00 crc kubenswrapper[4778]: I0312 14:22:00.160767 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 14:22:00 crc kubenswrapper[4778]: I0312 14:22:00.164021 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555422-kw6c8"] Mar 12 14:22:00 crc kubenswrapper[4778]: I0312 14:22:00.266900 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bc2634e-eaa3-4b87-977d-000a3fe0ccbe" path="/var/lib/kubelet/pods/0bc2634e-eaa3-4b87-977d-000a3fe0ccbe/volumes" Mar 12 14:22:00 crc kubenswrapper[4778]: I0312 14:22:00.302112 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7tq9\" (UniqueName: \"kubernetes.io/projected/ecab0458-0ee6-4672-bd27-4c8aae8427bb-kube-api-access-j7tq9\") pod \"auto-csr-approver-29555422-kw6c8\" (UID: \"ecab0458-0ee6-4672-bd27-4c8aae8427bb\") " pod="openshift-infra/auto-csr-approver-29555422-kw6c8" Mar 12 14:22:00 crc kubenswrapper[4778]: I0312 14:22:00.404395 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7tq9\" (UniqueName: \"kubernetes.io/projected/ecab0458-0ee6-4672-bd27-4c8aae8427bb-kube-api-access-j7tq9\") pod \"auto-csr-approver-29555422-kw6c8\" (UID: \"ecab0458-0ee6-4672-bd27-4c8aae8427bb\") " pod="openshift-infra/auto-csr-approver-29555422-kw6c8" Mar 12 14:22:00 crc kubenswrapper[4778]: I0312 14:22:00.427384 4778 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-j7tq9\" (UniqueName: \"kubernetes.io/projected/ecab0458-0ee6-4672-bd27-4c8aae8427bb-kube-api-access-j7tq9\") pod \"auto-csr-approver-29555422-kw6c8\" (UID: \"ecab0458-0ee6-4672-bd27-4c8aae8427bb\") " pod="openshift-infra/auto-csr-approver-29555422-kw6c8" Mar 12 14:22:00 crc kubenswrapper[4778]: I0312 14:22:00.490229 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555422-kw6c8" Mar 12 14:22:00 crc kubenswrapper[4778]: I0312 14:22:00.963403 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555422-kw6c8"] Mar 12 14:22:01 crc kubenswrapper[4778]: I0312 14:22:01.548005 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555422-kw6c8" event={"ID":"ecab0458-0ee6-4672-bd27-4c8aae8427bb","Type":"ContainerStarted","Data":"eaa71bc1a9088796769c4f5fda53031c0ace72bdbec06914242d1c22e12b5576"} Mar 12 14:22:04 crc kubenswrapper[4778]: I0312 14:22:04.575234 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555422-kw6c8" event={"ID":"ecab0458-0ee6-4672-bd27-4c8aae8427bb","Type":"ContainerStarted","Data":"460cb8eb02f9333998d559fe47fe50a7beb133708302defa156052aac3033d0e"} Mar 12 14:22:05 crc kubenswrapper[4778]: I0312 14:22:05.586216 4778 generic.go:334] "Generic (PLEG): container finished" podID="ecab0458-0ee6-4672-bd27-4c8aae8427bb" containerID="460cb8eb02f9333998d559fe47fe50a7beb133708302defa156052aac3033d0e" exitCode=0 Mar 12 14:22:05 crc kubenswrapper[4778]: I0312 14:22:05.586335 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555422-kw6c8" event={"ID":"ecab0458-0ee6-4672-bd27-4c8aae8427bb","Type":"ContainerDied","Data":"460cb8eb02f9333998d559fe47fe50a7beb133708302defa156052aac3033d0e"} Mar 12 14:22:07 crc kubenswrapper[4778]: I0312 14:22:07.113709 4778 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555422-kw6c8" Mar 12 14:22:07 crc kubenswrapper[4778]: I0312 14:22:07.252514 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7tq9\" (UniqueName: \"kubernetes.io/projected/ecab0458-0ee6-4672-bd27-4c8aae8427bb-kube-api-access-j7tq9\") pod \"ecab0458-0ee6-4672-bd27-4c8aae8427bb\" (UID: \"ecab0458-0ee6-4672-bd27-4c8aae8427bb\") " Mar 12 14:22:07 crc kubenswrapper[4778]: I0312 14:22:07.259440 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecab0458-0ee6-4672-bd27-4c8aae8427bb-kube-api-access-j7tq9" (OuterVolumeSpecName: "kube-api-access-j7tq9") pod "ecab0458-0ee6-4672-bd27-4c8aae8427bb" (UID: "ecab0458-0ee6-4672-bd27-4c8aae8427bb"). InnerVolumeSpecName "kube-api-access-j7tq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:22:07 crc kubenswrapper[4778]: I0312 14:22:07.354886 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7tq9\" (UniqueName: \"kubernetes.io/projected/ecab0458-0ee6-4672-bd27-4c8aae8427bb-kube-api-access-j7tq9\") on node \"crc\" DevicePath \"\"" Mar 12 14:22:07 crc kubenswrapper[4778]: I0312 14:22:07.608505 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555422-kw6c8" event={"ID":"ecab0458-0ee6-4672-bd27-4c8aae8427bb","Type":"ContainerDied","Data":"eaa71bc1a9088796769c4f5fda53031c0ace72bdbec06914242d1c22e12b5576"} Mar 12 14:22:07 crc kubenswrapper[4778]: I0312 14:22:07.608557 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eaa71bc1a9088796769c4f5fda53031c0ace72bdbec06914242d1c22e12b5576" Mar 12 14:22:07 crc kubenswrapper[4778]: I0312 14:22:07.608593 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555422-kw6c8" Mar 12 14:22:07 crc kubenswrapper[4778]: I0312 14:22:07.658121 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555416-qx4gr"] Mar 12 14:22:07 crc kubenswrapper[4778]: I0312 14:22:07.674480 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555416-qx4gr"] Mar 12 14:22:08 crc kubenswrapper[4778]: I0312 14:22:08.263784 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43680ad6-62d2-4e00-a38b-e87d712af9a7" path="/var/lib/kubelet/pods/43680ad6-62d2-4e00-a38b-e87d712af9a7/volumes" Mar 12 14:22:21 crc kubenswrapper[4778]: I0312 14:22:21.613970 4778 scope.go:117] "RemoveContainer" containerID="95522e18d302b349263bfd01c0f317ec6a698231318f520fcd1ed51d7aa504cc" Mar 12 14:22:28 crc kubenswrapper[4778]: I0312 14:22:28.557622 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 14:22:28 crc kubenswrapper[4778]: I0312 14:22:28.558254 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 14:22:28 crc kubenswrapper[4778]: I0312 14:22:28.558312 4778 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" Mar 12 14:22:28 crc kubenswrapper[4778]: I0312 14:22:28.559101 4778 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"994fa1e1cf0527d97bf647f1d2a50ed301bda64c2a862df7b100daec9859483a"} pod="openshift-machine-config-operator/machine-config-daemon-2qx88" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 14:22:28 crc kubenswrapper[4778]: I0312 14:22:28.559152 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" containerID="cri-o://994fa1e1cf0527d97bf647f1d2a50ed301bda64c2a862df7b100daec9859483a" gracePeriod=600 Mar 12 14:22:28 crc kubenswrapper[4778]: E0312 14:22:28.679997 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:22:29 crc kubenswrapper[4778]: I0312 14:22:29.534032 4778 generic.go:334] "Generic (PLEG): container finished" podID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerID="994fa1e1cf0527d97bf647f1d2a50ed301bda64c2a862df7b100daec9859483a" exitCode=0 Mar 12 14:22:29 crc kubenswrapper[4778]: I0312 14:22:29.534080 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" event={"ID":"24438fc6-dab0-4a9e-8b97-2532da76d9cd","Type":"ContainerDied","Data":"994fa1e1cf0527d97bf647f1d2a50ed301bda64c2a862df7b100daec9859483a"} Mar 12 14:22:29 crc kubenswrapper[4778]: I0312 14:22:29.534118 4778 scope.go:117] "RemoveContainer" containerID="a6494f4559bd62f54e1e656b9c39bd8218ddcffc6f2d4766fd788af23c632a2c" Mar 12 14:22:29 crc kubenswrapper[4778]: I0312 14:22:29.534880 4778 
scope.go:117] "RemoveContainer" containerID="994fa1e1cf0527d97bf647f1d2a50ed301bda64c2a862df7b100daec9859483a" Mar 12 14:22:29 crc kubenswrapper[4778]: E0312 14:22:29.535269 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:22:40 crc kubenswrapper[4778]: I0312 14:22:40.253675 4778 scope.go:117] "RemoveContainer" containerID="994fa1e1cf0527d97bf647f1d2a50ed301bda64c2a862df7b100daec9859483a" Mar 12 14:22:40 crc kubenswrapper[4778]: E0312 14:22:40.254504 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:22:53 crc kubenswrapper[4778]: I0312 14:22:53.254428 4778 scope.go:117] "RemoveContainer" containerID="994fa1e1cf0527d97bf647f1d2a50ed301bda64c2a862df7b100daec9859483a" Mar 12 14:22:53 crc kubenswrapper[4778]: E0312 14:22:53.255362 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:23:04 crc kubenswrapper[4778]: I0312 
14:23:04.253517 4778 scope.go:117] "RemoveContainer" containerID="994fa1e1cf0527d97bf647f1d2a50ed301bda64c2a862df7b100daec9859483a" Mar 12 14:23:04 crc kubenswrapper[4778]: E0312 14:23:04.254315 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:23:15 crc kubenswrapper[4778]: I0312 14:23:15.253986 4778 scope.go:117] "RemoveContainer" containerID="994fa1e1cf0527d97bf647f1d2a50ed301bda64c2a862df7b100daec9859483a" Mar 12 14:23:15 crc kubenswrapper[4778]: E0312 14:23:15.254815 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:23:26 crc kubenswrapper[4778]: I0312 14:23:26.253511 4778 scope.go:117] "RemoveContainer" containerID="994fa1e1cf0527d97bf647f1d2a50ed301bda64c2a862df7b100daec9859483a" Mar 12 14:23:26 crc kubenswrapper[4778]: E0312 14:23:26.254214 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:23:37 crc 
kubenswrapper[4778]: I0312 14:23:37.253794 4778 scope.go:117] "RemoveContainer" containerID="994fa1e1cf0527d97bf647f1d2a50ed301bda64c2a862df7b100daec9859483a" Mar 12 14:23:37 crc kubenswrapper[4778]: E0312 14:23:37.254497 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:23:52 crc kubenswrapper[4778]: I0312 14:23:52.260741 4778 scope.go:117] "RemoveContainer" containerID="994fa1e1cf0527d97bf647f1d2a50ed301bda64c2a862df7b100daec9859483a" Mar 12 14:23:52 crc kubenswrapper[4778]: E0312 14:23:52.262245 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:24:00 crc kubenswrapper[4778]: I0312 14:24:00.153033 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555424-5hp4x"] Mar 12 14:24:00 crc kubenswrapper[4778]: E0312 14:24:00.154316 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecab0458-0ee6-4672-bd27-4c8aae8427bb" containerName="oc" Mar 12 14:24:00 crc kubenswrapper[4778]: I0312 14:24:00.154332 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecab0458-0ee6-4672-bd27-4c8aae8427bb" containerName="oc" Mar 12 14:24:00 crc kubenswrapper[4778]: I0312 14:24:00.154660 4778 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ecab0458-0ee6-4672-bd27-4c8aae8427bb" containerName="oc" Mar 12 14:24:00 crc kubenswrapper[4778]: I0312 14:24:00.155513 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555424-5hp4x" Mar 12 14:24:00 crc kubenswrapper[4778]: I0312 14:24:00.161418 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 14:24:00 crc kubenswrapper[4778]: I0312 14:24:00.161432 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 14:24:00 crc kubenswrapper[4778]: I0312 14:24:00.162617 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 14:24:00 crc kubenswrapper[4778]: I0312 14:24:00.167221 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555424-5hp4x"] Mar 12 14:24:00 crc kubenswrapper[4778]: I0312 14:24:00.261897 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msg5g\" (UniqueName: \"kubernetes.io/projected/78c94941-b604-4fc0-b7b3-0b6749bbf233-kube-api-access-msg5g\") pod \"auto-csr-approver-29555424-5hp4x\" (UID: \"78c94941-b604-4fc0-b7b3-0b6749bbf233\") " pod="openshift-infra/auto-csr-approver-29555424-5hp4x" Mar 12 14:24:00 crc kubenswrapper[4778]: I0312 14:24:00.363489 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msg5g\" (UniqueName: \"kubernetes.io/projected/78c94941-b604-4fc0-b7b3-0b6749bbf233-kube-api-access-msg5g\") pod \"auto-csr-approver-29555424-5hp4x\" (UID: \"78c94941-b604-4fc0-b7b3-0b6749bbf233\") " pod="openshift-infra/auto-csr-approver-29555424-5hp4x" Mar 12 14:24:00 crc kubenswrapper[4778]: I0312 14:24:00.385029 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msg5g\" (UniqueName: 
\"kubernetes.io/projected/78c94941-b604-4fc0-b7b3-0b6749bbf233-kube-api-access-msg5g\") pod \"auto-csr-approver-29555424-5hp4x\" (UID: \"78c94941-b604-4fc0-b7b3-0b6749bbf233\") " pod="openshift-infra/auto-csr-approver-29555424-5hp4x" Mar 12 14:24:00 crc kubenswrapper[4778]: I0312 14:24:00.479409 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555424-5hp4x" Mar 12 14:24:00 crc kubenswrapper[4778]: I0312 14:24:00.995260 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555424-5hp4x"] Mar 12 14:24:00 crc kubenswrapper[4778]: W0312 14:24:00.997586 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78c94941_b604_4fc0_b7b3_0b6749bbf233.slice/crio-6b42bc5d5bf8ed9b31884ad16ead36dae0cc5b8a58690359c02f1ddd78126a2f WatchSource:0}: Error finding container 6b42bc5d5bf8ed9b31884ad16ead36dae0cc5b8a58690359c02f1ddd78126a2f: Status 404 returned error can't find the container with id 6b42bc5d5bf8ed9b31884ad16ead36dae0cc5b8a58690359c02f1ddd78126a2f Mar 12 14:24:01 crc kubenswrapper[4778]: I0312 14:24:01.046058 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555424-5hp4x" event={"ID":"78c94941-b604-4fc0-b7b3-0b6749bbf233","Type":"ContainerStarted","Data":"6b42bc5d5bf8ed9b31884ad16ead36dae0cc5b8a58690359c02f1ddd78126a2f"} Mar 12 14:24:03 crc kubenswrapper[4778]: I0312 14:24:03.068092 4778 generic.go:334] "Generic (PLEG): container finished" podID="78c94941-b604-4fc0-b7b3-0b6749bbf233" containerID="a1c2cc27e654689e4f136031bb0129f78011ff7542f974149c494096b483a2a2" exitCode=0 Mar 12 14:24:03 crc kubenswrapper[4778]: I0312 14:24:03.068240 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555424-5hp4x" 
event={"ID":"78c94941-b604-4fc0-b7b3-0b6749bbf233","Type":"ContainerDied","Data":"a1c2cc27e654689e4f136031bb0129f78011ff7542f974149c494096b483a2a2"} Mar 12 14:24:04 crc kubenswrapper[4778]: I0312 14:24:04.527425 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555424-5hp4x" Mar 12 14:24:04 crc kubenswrapper[4778]: I0312 14:24:04.653687 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msg5g\" (UniqueName: \"kubernetes.io/projected/78c94941-b604-4fc0-b7b3-0b6749bbf233-kube-api-access-msg5g\") pod \"78c94941-b604-4fc0-b7b3-0b6749bbf233\" (UID: \"78c94941-b604-4fc0-b7b3-0b6749bbf233\") " Mar 12 14:24:04 crc kubenswrapper[4778]: I0312 14:24:04.658887 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78c94941-b604-4fc0-b7b3-0b6749bbf233-kube-api-access-msg5g" (OuterVolumeSpecName: "kube-api-access-msg5g") pod "78c94941-b604-4fc0-b7b3-0b6749bbf233" (UID: "78c94941-b604-4fc0-b7b3-0b6749bbf233"). InnerVolumeSpecName "kube-api-access-msg5g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:24:04 crc kubenswrapper[4778]: I0312 14:24:04.756290 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msg5g\" (UniqueName: \"kubernetes.io/projected/78c94941-b604-4fc0-b7b3-0b6749bbf233-kube-api-access-msg5g\") on node \"crc\" DevicePath \"\"" Mar 12 14:24:05 crc kubenswrapper[4778]: I0312 14:24:05.087364 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555424-5hp4x" event={"ID":"78c94941-b604-4fc0-b7b3-0b6749bbf233","Type":"ContainerDied","Data":"6b42bc5d5bf8ed9b31884ad16ead36dae0cc5b8a58690359c02f1ddd78126a2f"} Mar 12 14:24:05 crc kubenswrapper[4778]: I0312 14:24:05.087419 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b42bc5d5bf8ed9b31884ad16ead36dae0cc5b8a58690359c02f1ddd78126a2f" Mar 12 14:24:05 crc kubenswrapper[4778]: I0312 14:24:05.087429 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555424-5hp4x" Mar 12 14:24:05 crc kubenswrapper[4778]: I0312 14:24:05.254229 4778 scope.go:117] "RemoveContainer" containerID="994fa1e1cf0527d97bf647f1d2a50ed301bda64c2a862df7b100daec9859483a" Mar 12 14:24:05 crc kubenswrapper[4778]: E0312 14:24:05.254651 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:24:05 crc kubenswrapper[4778]: I0312 14:24:05.599666 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555418-xsl6h"] Mar 12 14:24:05 crc kubenswrapper[4778]: I0312 14:24:05.610430 4778 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555418-xsl6h"] Mar 12 14:24:06 crc kubenswrapper[4778]: I0312 14:24:06.265678 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3fd5c4e-83c5-4ff0-9cb3-665ac00ec9f9" path="/var/lib/kubelet/pods/f3fd5c4e-83c5-4ff0-9cb3-665ac00ec9f9/volumes" Mar 12 14:24:16 crc kubenswrapper[4778]: I0312 14:24:16.254538 4778 scope.go:117] "RemoveContainer" containerID="994fa1e1cf0527d97bf647f1d2a50ed301bda64c2a862df7b100daec9859483a" Mar 12 14:24:16 crc kubenswrapper[4778]: E0312 14:24:16.255237 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:24:21 crc kubenswrapper[4778]: I0312 14:24:21.715875 4778 scope.go:117] "RemoveContainer" containerID="b15f85572ed50fa6f5f1417355d5cdd391ae57e91aab8027f8febe9070bb5ec6" Mar 12 14:24:29 crc kubenswrapper[4778]: I0312 14:24:29.362437 4778 scope.go:117] "RemoveContainer" containerID="994fa1e1cf0527d97bf647f1d2a50ed301bda64c2a862df7b100daec9859483a" Mar 12 14:24:29 crc kubenswrapper[4778]: E0312 14:24:29.363330 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:24:44 crc kubenswrapper[4778]: I0312 14:24:44.254479 4778 scope.go:117] "RemoveContainer" 
containerID="994fa1e1cf0527d97bf647f1d2a50ed301bda64c2a862df7b100daec9859483a" Mar 12 14:24:44 crc kubenswrapper[4778]: E0312 14:24:44.256448 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:24:56 crc kubenswrapper[4778]: I0312 14:24:56.253605 4778 scope.go:117] "RemoveContainer" containerID="994fa1e1cf0527d97bf647f1d2a50ed301bda64c2a862df7b100daec9859483a" Mar 12 14:24:56 crc kubenswrapper[4778]: E0312 14:24:56.254415 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:25:11 crc kubenswrapper[4778]: I0312 14:25:11.253949 4778 scope.go:117] "RemoveContainer" containerID="994fa1e1cf0527d97bf647f1d2a50ed301bda64c2a862df7b100daec9859483a" Mar 12 14:25:11 crc kubenswrapper[4778]: E0312 14:25:11.254839 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:25:22 crc kubenswrapper[4778]: I0312 14:25:22.261595 4778 scope.go:117] 
"RemoveContainer" containerID="994fa1e1cf0527d97bf647f1d2a50ed301bda64c2a862df7b100daec9859483a" Mar 12 14:25:22 crc kubenswrapper[4778]: E0312 14:25:22.262434 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:25:36 crc kubenswrapper[4778]: I0312 14:25:36.255604 4778 scope.go:117] "RemoveContainer" containerID="994fa1e1cf0527d97bf647f1d2a50ed301bda64c2a862df7b100daec9859483a" Mar 12 14:25:36 crc kubenswrapper[4778]: E0312 14:25:36.257709 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:25:51 crc kubenswrapper[4778]: I0312 14:25:51.254072 4778 scope.go:117] "RemoveContainer" containerID="994fa1e1cf0527d97bf647f1d2a50ed301bda64c2a862df7b100daec9859483a" Mar 12 14:25:51 crc kubenswrapper[4778]: E0312 14:25:51.254862 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:25:55 crc kubenswrapper[4778]: I0312 14:25:55.143510 
4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2csrm"] Mar 12 14:25:55 crc kubenswrapper[4778]: E0312 14:25:55.144552 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78c94941-b604-4fc0-b7b3-0b6749bbf233" containerName="oc" Mar 12 14:25:55 crc kubenswrapper[4778]: I0312 14:25:55.144568 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="78c94941-b604-4fc0-b7b3-0b6749bbf233" containerName="oc" Mar 12 14:25:55 crc kubenswrapper[4778]: I0312 14:25:55.144752 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="78c94941-b604-4fc0-b7b3-0b6749bbf233" containerName="oc" Mar 12 14:25:55 crc kubenswrapper[4778]: I0312 14:25:55.146278 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2csrm" Mar 12 14:25:55 crc kubenswrapper[4778]: I0312 14:25:55.179408 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc8ds\" (UniqueName: \"kubernetes.io/projected/21759d07-307f-4331-94fd-5e4720ef2b7f-kube-api-access-xc8ds\") pod \"certified-operators-2csrm\" (UID: \"21759d07-307f-4331-94fd-5e4720ef2b7f\") " pod="openshift-marketplace/certified-operators-2csrm" Mar 12 14:25:55 crc kubenswrapper[4778]: I0312 14:25:55.179529 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21759d07-307f-4331-94fd-5e4720ef2b7f-utilities\") pod \"certified-operators-2csrm\" (UID: \"21759d07-307f-4331-94fd-5e4720ef2b7f\") " pod="openshift-marketplace/certified-operators-2csrm" Mar 12 14:25:55 crc kubenswrapper[4778]: I0312 14:25:55.179580 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21759d07-307f-4331-94fd-5e4720ef2b7f-catalog-content\") pod 
\"certified-operators-2csrm\" (UID: \"21759d07-307f-4331-94fd-5e4720ef2b7f\") " pod="openshift-marketplace/certified-operators-2csrm" Mar 12 14:25:55 crc kubenswrapper[4778]: I0312 14:25:55.193260 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2csrm"] Mar 12 14:25:55 crc kubenswrapper[4778]: I0312 14:25:55.281658 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21759d07-307f-4331-94fd-5e4720ef2b7f-utilities\") pod \"certified-operators-2csrm\" (UID: \"21759d07-307f-4331-94fd-5e4720ef2b7f\") " pod="openshift-marketplace/certified-operators-2csrm" Mar 12 14:25:55 crc kubenswrapper[4778]: I0312 14:25:55.282063 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21759d07-307f-4331-94fd-5e4720ef2b7f-catalog-content\") pod \"certified-operators-2csrm\" (UID: \"21759d07-307f-4331-94fd-5e4720ef2b7f\") " pod="openshift-marketplace/certified-operators-2csrm" Mar 12 14:25:55 crc kubenswrapper[4778]: I0312 14:25:55.282453 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xc8ds\" (UniqueName: \"kubernetes.io/projected/21759d07-307f-4331-94fd-5e4720ef2b7f-kube-api-access-xc8ds\") pod \"certified-operators-2csrm\" (UID: \"21759d07-307f-4331-94fd-5e4720ef2b7f\") " pod="openshift-marketplace/certified-operators-2csrm" Mar 12 14:25:55 crc kubenswrapper[4778]: I0312 14:25:55.283398 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21759d07-307f-4331-94fd-5e4720ef2b7f-utilities\") pod \"certified-operators-2csrm\" (UID: \"21759d07-307f-4331-94fd-5e4720ef2b7f\") " pod="openshift-marketplace/certified-operators-2csrm" Mar 12 14:25:55 crc kubenswrapper[4778]: I0312 14:25:55.283796 4778 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21759d07-307f-4331-94fd-5e4720ef2b7f-catalog-content\") pod \"certified-operators-2csrm\" (UID: \"21759d07-307f-4331-94fd-5e4720ef2b7f\") " pod="openshift-marketplace/certified-operators-2csrm" Mar 12 14:25:55 crc kubenswrapper[4778]: I0312 14:25:55.301143 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xc8ds\" (UniqueName: \"kubernetes.io/projected/21759d07-307f-4331-94fd-5e4720ef2b7f-kube-api-access-xc8ds\") pod \"certified-operators-2csrm\" (UID: \"21759d07-307f-4331-94fd-5e4720ef2b7f\") " pod="openshift-marketplace/certified-operators-2csrm" Mar 12 14:25:55 crc kubenswrapper[4778]: I0312 14:25:55.473045 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2csrm" Mar 12 14:25:55 crc kubenswrapper[4778]: I0312 14:25:55.748131 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-d5k5f"] Mar 12 14:25:55 crc kubenswrapper[4778]: I0312 14:25:55.750449 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-d5k5f" Mar 12 14:25:55 crc kubenswrapper[4778]: I0312 14:25:55.765569 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d5k5f"] Mar 12 14:25:55 crc kubenswrapper[4778]: I0312 14:25:55.793149 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afc8bbf3-0297-4e5a-ba30-a8fac38c3832-catalog-content\") pod \"community-operators-d5k5f\" (UID: \"afc8bbf3-0297-4e5a-ba30-a8fac38c3832\") " pod="openshift-marketplace/community-operators-d5k5f" Mar 12 14:25:55 crc kubenswrapper[4778]: I0312 14:25:55.793305 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afc8bbf3-0297-4e5a-ba30-a8fac38c3832-utilities\") pod \"community-operators-d5k5f\" (UID: \"afc8bbf3-0297-4e5a-ba30-a8fac38c3832\") " pod="openshift-marketplace/community-operators-d5k5f" Mar 12 14:25:55 crc kubenswrapper[4778]: I0312 14:25:55.793434 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf2x6\" (UniqueName: \"kubernetes.io/projected/afc8bbf3-0297-4e5a-ba30-a8fac38c3832-kube-api-access-kf2x6\") pod \"community-operators-d5k5f\" (UID: \"afc8bbf3-0297-4e5a-ba30-a8fac38c3832\") " pod="openshift-marketplace/community-operators-d5k5f" Mar 12 14:25:55 crc kubenswrapper[4778]: I0312 14:25:55.895597 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kf2x6\" (UniqueName: \"kubernetes.io/projected/afc8bbf3-0297-4e5a-ba30-a8fac38c3832-kube-api-access-kf2x6\") pod \"community-operators-d5k5f\" (UID: \"afc8bbf3-0297-4e5a-ba30-a8fac38c3832\") " pod="openshift-marketplace/community-operators-d5k5f" Mar 12 14:25:55 crc kubenswrapper[4778]: I0312 14:25:55.895660 4778 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afc8bbf3-0297-4e5a-ba30-a8fac38c3832-catalog-content\") pod \"community-operators-d5k5f\" (UID: \"afc8bbf3-0297-4e5a-ba30-a8fac38c3832\") " pod="openshift-marketplace/community-operators-d5k5f" Mar 12 14:25:55 crc kubenswrapper[4778]: I0312 14:25:55.895778 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afc8bbf3-0297-4e5a-ba30-a8fac38c3832-utilities\") pod \"community-operators-d5k5f\" (UID: \"afc8bbf3-0297-4e5a-ba30-a8fac38c3832\") " pod="openshift-marketplace/community-operators-d5k5f" Mar 12 14:25:55 crc kubenswrapper[4778]: I0312 14:25:55.896299 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afc8bbf3-0297-4e5a-ba30-a8fac38c3832-utilities\") pod \"community-operators-d5k5f\" (UID: \"afc8bbf3-0297-4e5a-ba30-a8fac38c3832\") " pod="openshift-marketplace/community-operators-d5k5f" Mar 12 14:25:55 crc kubenswrapper[4778]: I0312 14:25:55.897094 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afc8bbf3-0297-4e5a-ba30-a8fac38c3832-catalog-content\") pod \"community-operators-d5k5f\" (UID: \"afc8bbf3-0297-4e5a-ba30-a8fac38c3832\") " pod="openshift-marketplace/community-operators-d5k5f" Mar 12 14:25:55 crc kubenswrapper[4778]: I0312 14:25:55.937594 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kf2x6\" (UniqueName: \"kubernetes.io/projected/afc8bbf3-0297-4e5a-ba30-a8fac38c3832-kube-api-access-kf2x6\") pod \"community-operators-d5k5f\" (UID: \"afc8bbf3-0297-4e5a-ba30-a8fac38c3832\") " pod="openshift-marketplace/community-operators-d5k5f" Mar 12 14:25:55 crc kubenswrapper[4778]: I0312 14:25:55.982677 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/certified-operators-2csrm"] Mar 12 14:25:56 crc kubenswrapper[4778]: I0312 14:25:56.088765 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d5k5f" Mar 12 14:25:56 crc kubenswrapper[4778]: I0312 14:25:56.151734 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2csrm" event={"ID":"21759d07-307f-4331-94fd-5e4720ef2b7f","Type":"ContainerStarted","Data":"efd699a3abbb70926c35b02e780e646800bf810115efc6de8fa07ae4825b8be7"} Mar 12 14:25:56 crc kubenswrapper[4778]: I0312 14:25:56.682936 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d5k5f"] Mar 12 14:25:57 crc kubenswrapper[4778]: I0312 14:25:57.162494 4778 generic.go:334] "Generic (PLEG): container finished" podID="21759d07-307f-4331-94fd-5e4720ef2b7f" containerID="72109e62fc84d8faf8627ad866064d05143b1f8c544f417bf67c5efdac624380" exitCode=0 Mar 12 14:25:57 crc kubenswrapper[4778]: I0312 14:25:57.162689 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2csrm" event={"ID":"21759d07-307f-4331-94fd-5e4720ef2b7f","Type":"ContainerDied","Data":"72109e62fc84d8faf8627ad866064d05143b1f8c544f417bf67c5efdac624380"} Mar 12 14:25:57 crc kubenswrapper[4778]: I0312 14:25:57.164525 4778 generic.go:334] "Generic (PLEG): container finished" podID="afc8bbf3-0297-4e5a-ba30-a8fac38c3832" containerID="9049643c60cdd1df5b7a09b9be7298790ce9d88b86df75d5b1bc6953a508f058" exitCode=0 Mar 12 14:25:57 crc kubenswrapper[4778]: I0312 14:25:57.164585 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5k5f" event={"ID":"afc8bbf3-0297-4e5a-ba30-a8fac38c3832","Type":"ContainerDied","Data":"9049643c60cdd1df5b7a09b9be7298790ce9d88b86df75d5b1bc6953a508f058"} Mar 12 14:25:57 crc kubenswrapper[4778]: I0312 14:25:57.164612 4778 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/community-operators-d5k5f" event={"ID":"afc8bbf3-0297-4e5a-ba30-a8fac38c3832","Type":"ContainerStarted","Data":"32a087503db88c25852e194ce746844351fc2c42e470484a64156247f43bc6ee"} Mar 12 14:25:57 crc kubenswrapper[4778]: I0312 14:25:57.164818 4778 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 14:25:58 crc kubenswrapper[4778]: I0312 14:25:58.175909 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2csrm" event={"ID":"21759d07-307f-4331-94fd-5e4720ef2b7f","Type":"ContainerStarted","Data":"46e87b0f62347e625a15fc108a812fa5374c31ccff16610cc6ccaccbc880dc3b"} Mar 12 14:25:58 crc kubenswrapper[4778]: I0312 14:25:58.178730 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5k5f" event={"ID":"afc8bbf3-0297-4e5a-ba30-a8fac38c3832","Type":"ContainerStarted","Data":"089be0f6a7588cf95d5a557ff5fcbb10a29194745bfdb53617ac1aedce8267e2"} Mar 12 14:25:59 crc kubenswrapper[4778]: I0312 14:25:59.194977 4778 generic.go:334] "Generic (PLEG): container finished" podID="21759d07-307f-4331-94fd-5e4720ef2b7f" containerID="46e87b0f62347e625a15fc108a812fa5374c31ccff16610cc6ccaccbc880dc3b" exitCode=0 Mar 12 14:25:59 crc kubenswrapper[4778]: I0312 14:25:59.195036 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2csrm" event={"ID":"21759d07-307f-4331-94fd-5e4720ef2b7f","Type":"ContainerDied","Data":"46e87b0f62347e625a15fc108a812fa5374c31ccff16610cc6ccaccbc880dc3b"} Mar 12 14:26:00 crc kubenswrapper[4778]: I0312 14:26:00.143548 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555426-btwds"] Mar 12 14:26:00 crc kubenswrapper[4778]: I0312 14:26:00.145591 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555426-btwds" Mar 12 14:26:00 crc kubenswrapper[4778]: I0312 14:26:00.148717 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 14:26:00 crc kubenswrapper[4778]: I0312 14:26:00.150715 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 14:26:00 crc kubenswrapper[4778]: I0312 14:26:00.151435 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 14:26:00 crc kubenswrapper[4778]: I0312 14:26:00.166046 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555426-btwds"] Mar 12 14:26:00 crc kubenswrapper[4778]: I0312 14:26:00.224173 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2csrm" event={"ID":"21759d07-307f-4331-94fd-5e4720ef2b7f","Type":"ContainerStarted","Data":"e2fa1773eb830885c4e15af7463b9539f69e5b812f2d2bd39e3dbf38bfa0c19e"} Mar 12 14:26:00 crc kubenswrapper[4778]: I0312 14:26:00.228430 4778 generic.go:334] "Generic (PLEG): container finished" podID="afc8bbf3-0297-4e5a-ba30-a8fac38c3832" containerID="089be0f6a7588cf95d5a557ff5fcbb10a29194745bfdb53617ac1aedce8267e2" exitCode=0 Mar 12 14:26:00 crc kubenswrapper[4778]: I0312 14:26:00.228477 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5k5f" event={"ID":"afc8bbf3-0297-4e5a-ba30-a8fac38c3832","Type":"ContainerDied","Data":"089be0f6a7588cf95d5a557ff5fcbb10a29194745bfdb53617ac1aedce8267e2"} Mar 12 14:26:00 crc kubenswrapper[4778]: I0312 14:26:00.245126 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2csrm" podStartSLOduration=2.842717645 podStartE2EDuration="5.24510405s" podCreationTimestamp="2026-03-12 14:25:55 +0000 UTC" 
firstStartedPulling="2026-03-12 14:25:57.164618117 +0000 UTC m=+4575.613313513" lastFinishedPulling="2026-03-12 14:25:59.567004522 +0000 UTC m=+4578.015699918" observedRunningTime="2026-03-12 14:26:00.242430384 +0000 UTC m=+4578.691125800" watchObservedRunningTime="2026-03-12 14:26:00.24510405 +0000 UTC m=+4578.693799446" Mar 12 14:26:00 crc kubenswrapper[4778]: I0312 14:26:00.289372 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dchwt\" (UniqueName: \"kubernetes.io/projected/fa45db55-92a4-4a16-9455-ee110dc34fa6-kube-api-access-dchwt\") pod \"auto-csr-approver-29555426-btwds\" (UID: \"fa45db55-92a4-4a16-9455-ee110dc34fa6\") " pod="openshift-infra/auto-csr-approver-29555426-btwds" Mar 12 14:26:00 crc kubenswrapper[4778]: I0312 14:26:00.391173 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dchwt\" (UniqueName: \"kubernetes.io/projected/fa45db55-92a4-4a16-9455-ee110dc34fa6-kube-api-access-dchwt\") pod \"auto-csr-approver-29555426-btwds\" (UID: \"fa45db55-92a4-4a16-9455-ee110dc34fa6\") " pod="openshift-infra/auto-csr-approver-29555426-btwds" Mar 12 14:26:00 crc kubenswrapper[4778]: I0312 14:26:00.415592 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dchwt\" (UniqueName: \"kubernetes.io/projected/fa45db55-92a4-4a16-9455-ee110dc34fa6-kube-api-access-dchwt\") pod \"auto-csr-approver-29555426-btwds\" (UID: \"fa45db55-92a4-4a16-9455-ee110dc34fa6\") " pod="openshift-infra/auto-csr-approver-29555426-btwds" Mar 12 14:26:00 crc kubenswrapper[4778]: I0312 14:26:00.473176 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555426-btwds" Mar 12 14:26:00 crc kubenswrapper[4778]: I0312 14:26:00.952281 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555426-btwds"] Mar 12 14:26:00 crc kubenswrapper[4778]: W0312 14:26:00.955378 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa45db55_92a4_4a16_9455_ee110dc34fa6.slice/crio-1c6c5c601fa7845c122e44d0bd6f8a7b743d280b9cd09f719273c5acdf0b1363 WatchSource:0}: Error finding container 1c6c5c601fa7845c122e44d0bd6f8a7b743d280b9cd09f719273c5acdf0b1363: Status 404 returned error can't find the container with id 1c6c5c601fa7845c122e44d0bd6f8a7b743d280b9cd09f719273c5acdf0b1363 Mar 12 14:26:01 crc kubenswrapper[4778]: I0312 14:26:01.239331 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555426-btwds" event={"ID":"fa45db55-92a4-4a16-9455-ee110dc34fa6","Type":"ContainerStarted","Data":"1c6c5c601fa7845c122e44d0bd6f8a7b743d280b9cd09f719273c5acdf0b1363"} Mar 12 14:26:02 crc kubenswrapper[4778]: I0312 14:26:02.267895 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5k5f" event={"ID":"afc8bbf3-0297-4e5a-ba30-a8fac38c3832","Type":"ContainerStarted","Data":"7996f5c983dd5f46d694007de45d6212a50151bb7f287388b92acc9fa36b446c"} Mar 12 14:26:02 crc kubenswrapper[4778]: I0312 14:26:02.311478 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-d5k5f" podStartSLOduration=2.771586065 podStartE2EDuration="7.311462886s" podCreationTimestamp="2026-03-12 14:25:55 +0000 UTC" firstStartedPulling="2026-03-12 14:25:57.165661277 +0000 UTC m=+4575.614356673" lastFinishedPulling="2026-03-12 14:26:01.705538078 +0000 UTC m=+4580.154233494" observedRunningTime="2026-03-12 14:26:02.30349987 +0000 UTC m=+4580.752195266" 
watchObservedRunningTime="2026-03-12 14:26:02.311462886 +0000 UTC m=+4580.760158282" Mar 12 14:26:03 crc kubenswrapper[4778]: I0312 14:26:03.266355 4778 generic.go:334] "Generic (PLEG): container finished" podID="fa45db55-92a4-4a16-9455-ee110dc34fa6" containerID="188dd1cb886e6788ffd8398573fda57dc92b1fe481e6f2ffdc97a0e049e9348c" exitCode=0 Mar 12 14:26:03 crc kubenswrapper[4778]: I0312 14:26:03.266462 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555426-btwds" event={"ID":"fa45db55-92a4-4a16-9455-ee110dc34fa6","Type":"ContainerDied","Data":"188dd1cb886e6788ffd8398573fda57dc92b1fe481e6f2ffdc97a0e049e9348c"} Mar 12 14:26:04 crc kubenswrapper[4778]: I0312 14:26:04.834819 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555426-btwds" Mar 12 14:26:04 crc kubenswrapper[4778]: I0312 14:26:04.896005 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dchwt\" (UniqueName: \"kubernetes.io/projected/fa45db55-92a4-4a16-9455-ee110dc34fa6-kube-api-access-dchwt\") pod \"fa45db55-92a4-4a16-9455-ee110dc34fa6\" (UID: \"fa45db55-92a4-4a16-9455-ee110dc34fa6\") " Mar 12 14:26:04 crc kubenswrapper[4778]: I0312 14:26:04.901730 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa45db55-92a4-4a16-9455-ee110dc34fa6-kube-api-access-dchwt" (OuterVolumeSpecName: "kube-api-access-dchwt") pod "fa45db55-92a4-4a16-9455-ee110dc34fa6" (UID: "fa45db55-92a4-4a16-9455-ee110dc34fa6"). InnerVolumeSpecName "kube-api-access-dchwt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:26:04 crc kubenswrapper[4778]: I0312 14:26:04.998508 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dchwt\" (UniqueName: \"kubernetes.io/projected/fa45db55-92a4-4a16-9455-ee110dc34fa6-kube-api-access-dchwt\") on node \"crc\" DevicePath \"\"" Mar 12 14:26:05 crc kubenswrapper[4778]: I0312 14:26:05.284046 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555426-btwds" event={"ID":"fa45db55-92a4-4a16-9455-ee110dc34fa6","Type":"ContainerDied","Data":"1c6c5c601fa7845c122e44d0bd6f8a7b743d280b9cd09f719273c5acdf0b1363"} Mar 12 14:26:05 crc kubenswrapper[4778]: I0312 14:26:05.284086 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c6c5c601fa7845c122e44d0bd6f8a7b743d280b9cd09f719273c5acdf0b1363" Mar 12 14:26:05 crc kubenswrapper[4778]: I0312 14:26:05.284107 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555426-btwds" Mar 12 14:26:05 crc kubenswrapper[4778]: I0312 14:26:05.473788 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2csrm" Mar 12 14:26:05 crc kubenswrapper[4778]: I0312 14:26:05.473859 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2csrm" Mar 12 14:26:05 crc kubenswrapper[4778]: I0312 14:26:05.521585 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2csrm" Mar 12 14:26:05 crc kubenswrapper[4778]: I0312 14:26:05.916961 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555420-vqx98"] Mar 12 14:26:05 crc kubenswrapper[4778]: I0312 14:26:05.930270 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555420-vqx98"] Mar 
12 14:26:06 crc kubenswrapper[4778]: I0312 14:26:06.090923 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-d5k5f" Mar 12 14:26:06 crc kubenswrapper[4778]: I0312 14:26:06.090973 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-d5k5f" Mar 12 14:26:06 crc kubenswrapper[4778]: I0312 14:26:06.137924 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-d5k5f" Mar 12 14:26:06 crc kubenswrapper[4778]: I0312 14:26:06.254496 4778 scope.go:117] "RemoveContainer" containerID="994fa1e1cf0527d97bf647f1d2a50ed301bda64c2a862df7b100daec9859483a" Mar 12 14:26:06 crc kubenswrapper[4778]: E0312 14:26:06.254966 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:26:06 crc kubenswrapper[4778]: I0312 14:26:06.267712 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d3b2fac-f000-4f5a-b253-e54ae85d507f" path="/var/lib/kubelet/pods/3d3b2fac-f000-4f5a-b253-e54ae85d507f/volumes" Mar 12 14:26:06 crc kubenswrapper[4778]: I0312 14:26:06.335748 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2csrm" Mar 12 14:26:06 crc kubenswrapper[4778]: I0312 14:26:06.347799 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-d5k5f" Mar 12 14:26:06 crc kubenswrapper[4778]: I0312 14:26:06.925417 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-d5k5f"] Mar 12 14:26:08 crc kubenswrapper[4778]: I0312 14:26:08.306597 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-d5k5f" podUID="afc8bbf3-0297-4e5a-ba30-a8fac38c3832" containerName="registry-server" containerID="cri-o://7996f5c983dd5f46d694007de45d6212a50151bb7f287388b92acc9fa36b446c" gracePeriod=2 Mar 12 14:26:08 crc kubenswrapper[4778]: I0312 14:26:08.732517 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2csrm"] Mar 12 14:26:08 crc kubenswrapper[4778]: I0312 14:26:08.732892 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2csrm" podUID="21759d07-307f-4331-94fd-5e4720ef2b7f" containerName="registry-server" containerID="cri-o://e2fa1773eb830885c4e15af7463b9539f69e5b812f2d2bd39e3dbf38bfa0c19e" gracePeriod=2 Mar 12 14:26:09 crc kubenswrapper[4778]: I0312 14:26:09.831537 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2csrm" Mar 12 14:26:09 crc kubenswrapper[4778]: I0312 14:26:09.906144 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xc8ds\" (UniqueName: \"kubernetes.io/projected/21759d07-307f-4331-94fd-5e4720ef2b7f-kube-api-access-xc8ds\") pod \"21759d07-307f-4331-94fd-5e4720ef2b7f\" (UID: \"21759d07-307f-4331-94fd-5e4720ef2b7f\") " Mar 12 14:26:09 crc kubenswrapper[4778]: I0312 14:26:09.906286 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21759d07-307f-4331-94fd-5e4720ef2b7f-catalog-content\") pod \"21759d07-307f-4331-94fd-5e4720ef2b7f\" (UID: \"21759d07-307f-4331-94fd-5e4720ef2b7f\") " Mar 12 14:26:09 crc kubenswrapper[4778]: I0312 14:26:09.906323 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21759d07-307f-4331-94fd-5e4720ef2b7f-utilities\") pod \"21759d07-307f-4331-94fd-5e4720ef2b7f\" (UID: \"21759d07-307f-4331-94fd-5e4720ef2b7f\") " Mar 12 14:26:09 crc kubenswrapper[4778]: I0312 14:26:09.907593 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21759d07-307f-4331-94fd-5e4720ef2b7f-utilities" (OuterVolumeSpecName: "utilities") pod "21759d07-307f-4331-94fd-5e4720ef2b7f" (UID: "21759d07-307f-4331-94fd-5e4720ef2b7f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:26:09 crc kubenswrapper[4778]: I0312 14:26:09.912406 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21759d07-307f-4331-94fd-5e4720ef2b7f-kube-api-access-xc8ds" (OuterVolumeSpecName: "kube-api-access-xc8ds") pod "21759d07-307f-4331-94fd-5e4720ef2b7f" (UID: "21759d07-307f-4331-94fd-5e4720ef2b7f"). InnerVolumeSpecName "kube-api-access-xc8ds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:26:09 crc kubenswrapper[4778]: I0312 14:26:09.978628 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21759d07-307f-4331-94fd-5e4720ef2b7f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "21759d07-307f-4331-94fd-5e4720ef2b7f" (UID: "21759d07-307f-4331-94fd-5e4720ef2b7f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:26:10 crc kubenswrapper[4778]: I0312 14:26:10.008541 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xc8ds\" (UniqueName: \"kubernetes.io/projected/21759d07-307f-4331-94fd-5e4720ef2b7f-kube-api-access-xc8ds\") on node \"crc\" DevicePath \"\"" Mar 12 14:26:10 crc kubenswrapper[4778]: I0312 14:26:10.008579 4778 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21759d07-307f-4331-94fd-5e4720ef2b7f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 14:26:10 crc kubenswrapper[4778]: I0312 14:26:10.008591 4778 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21759d07-307f-4331-94fd-5e4720ef2b7f-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 14:26:10 crc kubenswrapper[4778]: I0312 14:26:10.324368 4778 generic.go:334] "Generic (PLEG): container finished" podID="21759d07-307f-4331-94fd-5e4720ef2b7f" containerID="e2fa1773eb830885c4e15af7463b9539f69e5b812f2d2bd39e3dbf38bfa0c19e" exitCode=0 Mar 12 14:26:10 crc kubenswrapper[4778]: I0312 14:26:10.324667 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2csrm" Mar 12 14:26:10 crc kubenswrapper[4778]: I0312 14:26:10.324568 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2csrm" event={"ID":"21759d07-307f-4331-94fd-5e4720ef2b7f","Type":"ContainerDied","Data":"e2fa1773eb830885c4e15af7463b9539f69e5b812f2d2bd39e3dbf38bfa0c19e"} Mar 12 14:26:10 crc kubenswrapper[4778]: I0312 14:26:10.324747 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2csrm" event={"ID":"21759d07-307f-4331-94fd-5e4720ef2b7f","Type":"ContainerDied","Data":"efd699a3abbb70926c35b02e780e646800bf810115efc6de8fa07ae4825b8be7"} Mar 12 14:26:10 crc kubenswrapper[4778]: I0312 14:26:10.324796 4778 scope.go:117] "RemoveContainer" containerID="e2fa1773eb830885c4e15af7463b9539f69e5b812f2d2bd39e3dbf38bfa0c19e" Mar 12 14:26:10 crc kubenswrapper[4778]: I0312 14:26:10.330060 4778 generic.go:334] "Generic (PLEG): container finished" podID="afc8bbf3-0297-4e5a-ba30-a8fac38c3832" containerID="7996f5c983dd5f46d694007de45d6212a50151bb7f287388b92acc9fa36b446c" exitCode=0 Mar 12 14:26:10 crc kubenswrapper[4778]: I0312 14:26:10.330112 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5k5f" event={"ID":"afc8bbf3-0297-4e5a-ba30-a8fac38c3832","Type":"ContainerDied","Data":"7996f5c983dd5f46d694007de45d6212a50151bb7f287388b92acc9fa36b446c"} Mar 12 14:26:10 crc kubenswrapper[4778]: I0312 14:26:10.543270 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-d5k5f" Mar 12 14:26:10 crc kubenswrapper[4778]: I0312 14:26:10.557335 4778 scope.go:117] "RemoveContainer" containerID="46e87b0f62347e625a15fc108a812fa5374c31ccff16610cc6ccaccbc880dc3b" Mar 12 14:26:10 crc kubenswrapper[4778]: I0312 14:26:10.563791 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2csrm"] Mar 12 14:26:10 crc kubenswrapper[4778]: I0312 14:26:10.571059 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2csrm"] Mar 12 14:26:10 crc kubenswrapper[4778]: I0312 14:26:10.591084 4778 scope.go:117] "RemoveContainer" containerID="72109e62fc84d8faf8627ad866064d05143b1f8c544f417bf67c5efdac624380" Mar 12 14:26:10 crc kubenswrapper[4778]: I0312 14:26:10.620410 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kf2x6\" (UniqueName: \"kubernetes.io/projected/afc8bbf3-0297-4e5a-ba30-a8fac38c3832-kube-api-access-kf2x6\") pod \"afc8bbf3-0297-4e5a-ba30-a8fac38c3832\" (UID: \"afc8bbf3-0297-4e5a-ba30-a8fac38c3832\") " Mar 12 14:26:10 crc kubenswrapper[4778]: I0312 14:26:10.620959 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afc8bbf3-0297-4e5a-ba30-a8fac38c3832-catalog-content\") pod \"afc8bbf3-0297-4e5a-ba30-a8fac38c3832\" (UID: \"afc8bbf3-0297-4e5a-ba30-a8fac38c3832\") " Mar 12 14:26:10 crc kubenswrapper[4778]: I0312 14:26:10.621030 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afc8bbf3-0297-4e5a-ba30-a8fac38c3832-utilities\") pod \"afc8bbf3-0297-4e5a-ba30-a8fac38c3832\" (UID: \"afc8bbf3-0297-4e5a-ba30-a8fac38c3832\") " Mar 12 14:26:10 crc kubenswrapper[4778]: I0312 14:26:10.621795 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/afc8bbf3-0297-4e5a-ba30-a8fac38c3832-utilities" (OuterVolumeSpecName: "utilities") pod "afc8bbf3-0297-4e5a-ba30-a8fac38c3832" (UID: "afc8bbf3-0297-4e5a-ba30-a8fac38c3832"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:26:10 crc kubenswrapper[4778]: I0312 14:26:10.626106 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afc8bbf3-0297-4e5a-ba30-a8fac38c3832-kube-api-access-kf2x6" (OuterVolumeSpecName: "kube-api-access-kf2x6") pod "afc8bbf3-0297-4e5a-ba30-a8fac38c3832" (UID: "afc8bbf3-0297-4e5a-ba30-a8fac38c3832"). InnerVolumeSpecName "kube-api-access-kf2x6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:26:10 crc kubenswrapper[4778]: I0312 14:26:10.644275 4778 scope.go:117] "RemoveContainer" containerID="e2fa1773eb830885c4e15af7463b9539f69e5b812f2d2bd39e3dbf38bfa0c19e" Mar 12 14:26:10 crc kubenswrapper[4778]: E0312 14:26:10.644708 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2fa1773eb830885c4e15af7463b9539f69e5b812f2d2bd39e3dbf38bfa0c19e\": container with ID starting with e2fa1773eb830885c4e15af7463b9539f69e5b812f2d2bd39e3dbf38bfa0c19e not found: ID does not exist" containerID="e2fa1773eb830885c4e15af7463b9539f69e5b812f2d2bd39e3dbf38bfa0c19e" Mar 12 14:26:10 crc kubenswrapper[4778]: I0312 14:26:10.644794 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2fa1773eb830885c4e15af7463b9539f69e5b812f2d2bd39e3dbf38bfa0c19e"} err="failed to get container status \"e2fa1773eb830885c4e15af7463b9539f69e5b812f2d2bd39e3dbf38bfa0c19e\": rpc error: code = NotFound desc = could not find container \"e2fa1773eb830885c4e15af7463b9539f69e5b812f2d2bd39e3dbf38bfa0c19e\": container with ID starting with e2fa1773eb830885c4e15af7463b9539f69e5b812f2d2bd39e3dbf38bfa0c19e not found: ID does not exist" Mar 12 
14:26:10 crc kubenswrapper[4778]: I0312 14:26:10.644814 4778 scope.go:117] "RemoveContainer" containerID="46e87b0f62347e625a15fc108a812fa5374c31ccff16610cc6ccaccbc880dc3b" Mar 12 14:26:10 crc kubenswrapper[4778]: E0312 14:26:10.645279 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46e87b0f62347e625a15fc108a812fa5374c31ccff16610cc6ccaccbc880dc3b\": container with ID starting with 46e87b0f62347e625a15fc108a812fa5374c31ccff16610cc6ccaccbc880dc3b not found: ID does not exist" containerID="46e87b0f62347e625a15fc108a812fa5374c31ccff16610cc6ccaccbc880dc3b" Mar 12 14:26:10 crc kubenswrapper[4778]: I0312 14:26:10.645299 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46e87b0f62347e625a15fc108a812fa5374c31ccff16610cc6ccaccbc880dc3b"} err="failed to get container status \"46e87b0f62347e625a15fc108a812fa5374c31ccff16610cc6ccaccbc880dc3b\": rpc error: code = NotFound desc = could not find container \"46e87b0f62347e625a15fc108a812fa5374c31ccff16610cc6ccaccbc880dc3b\": container with ID starting with 46e87b0f62347e625a15fc108a812fa5374c31ccff16610cc6ccaccbc880dc3b not found: ID does not exist" Mar 12 14:26:10 crc kubenswrapper[4778]: I0312 14:26:10.645312 4778 scope.go:117] "RemoveContainer" containerID="72109e62fc84d8faf8627ad866064d05143b1f8c544f417bf67c5efdac624380" Mar 12 14:26:10 crc kubenswrapper[4778]: E0312 14:26:10.645659 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72109e62fc84d8faf8627ad866064d05143b1f8c544f417bf67c5efdac624380\": container with ID starting with 72109e62fc84d8faf8627ad866064d05143b1f8c544f417bf67c5efdac624380 not found: ID does not exist" containerID="72109e62fc84d8faf8627ad866064d05143b1f8c544f417bf67c5efdac624380" Mar 12 14:26:10 crc kubenswrapper[4778]: I0312 14:26:10.645678 4778 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"72109e62fc84d8faf8627ad866064d05143b1f8c544f417bf67c5efdac624380"} err="failed to get container status \"72109e62fc84d8faf8627ad866064d05143b1f8c544f417bf67c5efdac624380\": rpc error: code = NotFound desc = could not find container \"72109e62fc84d8faf8627ad866064d05143b1f8c544f417bf67c5efdac624380\": container with ID starting with 72109e62fc84d8faf8627ad866064d05143b1f8c544f417bf67c5efdac624380 not found: ID does not exist" Mar 12 14:26:10 crc kubenswrapper[4778]: I0312 14:26:10.677981 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afc8bbf3-0297-4e5a-ba30-a8fac38c3832-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "afc8bbf3-0297-4e5a-ba30-a8fac38c3832" (UID: "afc8bbf3-0297-4e5a-ba30-a8fac38c3832"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:26:10 crc kubenswrapper[4778]: I0312 14:26:10.722954 4778 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afc8bbf3-0297-4e5a-ba30-a8fac38c3832-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 14:26:10 crc kubenswrapper[4778]: I0312 14:26:10.722992 4778 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afc8bbf3-0297-4e5a-ba30-a8fac38c3832-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 14:26:10 crc kubenswrapper[4778]: I0312 14:26:10.723003 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kf2x6\" (UniqueName: \"kubernetes.io/projected/afc8bbf3-0297-4e5a-ba30-a8fac38c3832-kube-api-access-kf2x6\") on node \"crc\" DevicePath \"\"" Mar 12 14:26:11 crc kubenswrapper[4778]: I0312 14:26:11.341909 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5k5f" 
event={"ID":"afc8bbf3-0297-4e5a-ba30-a8fac38c3832","Type":"ContainerDied","Data":"32a087503db88c25852e194ce746844351fc2c42e470484a64156247f43bc6ee"} Mar 12 14:26:11 crc kubenswrapper[4778]: I0312 14:26:11.342271 4778 scope.go:117] "RemoveContainer" containerID="7996f5c983dd5f46d694007de45d6212a50151bb7f287388b92acc9fa36b446c" Mar 12 14:26:11 crc kubenswrapper[4778]: I0312 14:26:11.341953 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d5k5f" Mar 12 14:26:11 crc kubenswrapper[4778]: I0312 14:26:11.368216 4778 scope.go:117] "RemoveContainer" containerID="089be0f6a7588cf95d5a557ff5fcbb10a29194745bfdb53617ac1aedce8267e2" Mar 12 14:26:11 crc kubenswrapper[4778]: I0312 14:26:11.383045 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d5k5f"] Mar 12 14:26:11 crc kubenswrapper[4778]: I0312 14:26:11.396732 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-d5k5f"] Mar 12 14:26:11 crc kubenswrapper[4778]: I0312 14:26:11.399617 4778 scope.go:117] "RemoveContainer" containerID="9049643c60cdd1df5b7a09b9be7298790ce9d88b86df75d5b1bc6953a508f058" Mar 12 14:26:12 crc kubenswrapper[4778]: I0312 14:26:12.278866 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21759d07-307f-4331-94fd-5e4720ef2b7f" path="/var/lib/kubelet/pods/21759d07-307f-4331-94fd-5e4720ef2b7f/volumes" Mar 12 14:26:12 crc kubenswrapper[4778]: I0312 14:26:12.279912 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afc8bbf3-0297-4e5a-ba30-a8fac38c3832" path="/var/lib/kubelet/pods/afc8bbf3-0297-4e5a-ba30-a8fac38c3832/volumes" Mar 12 14:26:17 crc kubenswrapper[4778]: I0312 14:26:17.254235 4778 scope.go:117] "RemoveContainer" containerID="994fa1e1cf0527d97bf647f1d2a50ed301bda64c2a862df7b100daec9859483a" Mar 12 14:26:17 crc kubenswrapper[4778]: E0312 14:26:17.254988 4778 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:26:22 crc kubenswrapper[4778]: I0312 14:26:22.319378 4778 scope.go:117] "RemoveContainer" containerID="45d8e22a4c6b9a2b198c09597b6bc6f24b127ce1a5abca778cee677c28671528" Mar 12 14:26:28 crc kubenswrapper[4778]: I0312 14:26:28.253821 4778 scope.go:117] "RemoveContainer" containerID="994fa1e1cf0527d97bf647f1d2a50ed301bda64c2a862df7b100daec9859483a" Mar 12 14:26:28 crc kubenswrapper[4778]: E0312 14:26:28.254621 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:26:39 crc kubenswrapper[4778]: I0312 14:26:39.254467 4778 scope.go:117] "RemoveContainer" containerID="994fa1e1cf0527d97bf647f1d2a50ed301bda64c2a862df7b100daec9859483a" Mar 12 14:26:39 crc kubenswrapper[4778]: E0312 14:26:39.255169 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:26:40 crc kubenswrapper[4778]: I0312 14:26:40.527171 4778 
pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","pod21759d07-307f-4331-94fd-5e4720ef2b7f"] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod21759d07-307f-4331-94fd-5e4720ef2b7f] : Timed out while waiting for systemd to remove kubepods-burstable-pod21759d07_307f_4331_94fd_5e4720ef2b7f.slice" Mar 12 14:26:52 crc kubenswrapper[4778]: I0312 14:26:52.253865 4778 scope.go:117] "RemoveContainer" containerID="994fa1e1cf0527d97bf647f1d2a50ed301bda64c2a862df7b100daec9859483a" Mar 12 14:26:52 crc kubenswrapper[4778]: E0312 14:26:52.254801 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:27:07 crc kubenswrapper[4778]: I0312 14:27:07.253776 4778 scope.go:117] "RemoveContainer" containerID="994fa1e1cf0527d97bf647f1d2a50ed301bda64c2a862df7b100daec9859483a" Mar 12 14:27:07 crc kubenswrapper[4778]: E0312 14:27:07.254647 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:27:22 crc kubenswrapper[4778]: I0312 14:27:22.260415 4778 scope.go:117] "RemoveContainer" containerID="994fa1e1cf0527d97bf647f1d2a50ed301bda64c2a862df7b100daec9859483a" Mar 12 14:27:22 crc kubenswrapper[4778]: E0312 14:27:22.261223 4778 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:27:33 crc kubenswrapper[4778]: I0312 14:27:33.254178 4778 scope.go:117] "RemoveContainer" containerID="994fa1e1cf0527d97bf647f1d2a50ed301bda64c2a862df7b100daec9859483a" Mar 12 14:27:34 crc kubenswrapper[4778]: I0312 14:27:34.082882 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" event={"ID":"24438fc6-dab0-4a9e-8b97-2532da76d9cd","Type":"ContainerStarted","Data":"6a1ec993be9e8a6473b90c3546089f31fa9bb55d6d9459c21a9b96e0f0006f55"} Mar 12 14:28:00 crc kubenswrapper[4778]: I0312 14:28:00.145654 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555428-88vs6"] Mar 12 14:28:00 crc kubenswrapper[4778]: E0312 14:28:00.146761 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afc8bbf3-0297-4e5a-ba30-a8fac38c3832" containerName="registry-server" Mar 12 14:28:00 crc kubenswrapper[4778]: I0312 14:28:00.146781 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="afc8bbf3-0297-4e5a-ba30-a8fac38c3832" containerName="registry-server" Mar 12 14:28:00 crc kubenswrapper[4778]: E0312 14:28:00.146802 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afc8bbf3-0297-4e5a-ba30-a8fac38c3832" containerName="extract-utilities" Mar 12 14:28:00 crc kubenswrapper[4778]: I0312 14:28:00.146811 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="afc8bbf3-0297-4e5a-ba30-a8fac38c3832" containerName="extract-utilities" Mar 12 14:28:00 crc kubenswrapper[4778]: E0312 14:28:00.146822 4778 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="21759d07-307f-4331-94fd-5e4720ef2b7f" containerName="registry-server" Mar 12 14:28:00 crc kubenswrapper[4778]: I0312 14:28:00.146831 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="21759d07-307f-4331-94fd-5e4720ef2b7f" containerName="registry-server" Mar 12 14:28:00 crc kubenswrapper[4778]: E0312 14:28:00.146852 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21759d07-307f-4331-94fd-5e4720ef2b7f" containerName="extract-content" Mar 12 14:28:00 crc kubenswrapper[4778]: I0312 14:28:00.146861 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="21759d07-307f-4331-94fd-5e4720ef2b7f" containerName="extract-content" Mar 12 14:28:00 crc kubenswrapper[4778]: E0312 14:28:00.146878 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afc8bbf3-0297-4e5a-ba30-a8fac38c3832" containerName="extract-content" Mar 12 14:28:00 crc kubenswrapper[4778]: I0312 14:28:00.146884 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="afc8bbf3-0297-4e5a-ba30-a8fac38c3832" containerName="extract-content" Mar 12 14:28:00 crc kubenswrapper[4778]: E0312 14:28:00.146901 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21759d07-307f-4331-94fd-5e4720ef2b7f" containerName="extract-utilities" Mar 12 14:28:00 crc kubenswrapper[4778]: I0312 14:28:00.146908 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="21759d07-307f-4331-94fd-5e4720ef2b7f" containerName="extract-utilities" Mar 12 14:28:00 crc kubenswrapper[4778]: E0312 14:28:00.146925 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa45db55-92a4-4a16-9455-ee110dc34fa6" containerName="oc" Mar 12 14:28:00 crc kubenswrapper[4778]: I0312 14:28:00.146936 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa45db55-92a4-4a16-9455-ee110dc34fa6" containerName="oc" Mar 12 14:28:00 crc kubenswrapper[4778]: I0312 14:28:00.147235 4778 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="21759d07-307f-4331-94fd-5e4720ef2b7f" containerName="registry-server" Mar 12 14:28:00 crc kubenswrapper[4778]: I0312 14:28:00.147254 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="afc8bbf3-0297-4e5a-ba30-a8fac38c3832" containerName="registry-server" Mar 12 14:28:00 crc kubenswrapper[4778]: I0312 14:28:00.147275 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa45db55-92a4-4a16-9455-ee110dc34fa6" containerName="oc" Mar 12 14:28:00 crc kubenswrapper[4778]: I0312 14:28:00.148021 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555428-88vs6" Mar 12 14:28:00 crc kubenswrapper[4778]: I0312 14:28:00.150075 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 14:28:00 crc kubenswrapper[4778]: I0312 14:28:00.150393 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 14:28:00 crc kubenswrapper[4778]: I0312 14:28:00.157868 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 14:28:00 crc kubenswrapper[4778]: I0312 14:28:00.160292 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555428-88vs6"] Mar 12 14:28:00 crc kubenswrapper[4778]: I0312 14:28:00.258523 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm79s\" (UniqueName: \"kubernetes.io/projected/071affdc-5584-4c5b-bfc8-f4e23c328d71-kube-api-access-jm79s\") pod \"auto-csr-approver-29555428-88vs6\" (UID: \"071affdc-5584-4c5b-bfc8-f4e23c328d71\") " pod="openshift-infra/auto-csr-approver-29555428-88vs6" Mar 12 14:28:00 crc kubenswrapper[4778]: I0312 14:28:00.360537 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jm79s\" (UniqueName: 
\"kubernetes.io/projected/071affdc-5584-4c5b-bfc8-f4e23c328d71-kube-api-access-jm79s\") pod \"auto-csr-approver-29555428-88vs6\" (UID: \"071affdc-5584-4c5b-bfc8-f4e23c328d71\") " pod="openshift-infra/auto-csr-approver-29555428-88vs6" Mar 12 14:28:00 crc kubenswrapper[4778]: I0312 14:28:00.388396 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jm79s\" (UniqueName: \"kubernetes.io/projected/071affdc-5584-4c5b-bfc8-f4e23c328d71-kube-api-access-jm79s\") pod \"auto-csr-approver-29555428-88vs6\" (UID: \"071affdc-5584-4c5b-bfc8-f4e23c328d71\") " pod="openshift-infra/auto-csr-approver-29555428-88vs6" Mar 12 14:28:00 crc kubenswrapper[4778]: I0312 14:28:00.471277 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555428-88vs6" Mar 12 14:28:00 crc kubenswrapper[4778]: I0312 14:28:00.924234 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555428-88vs6"] Mar 12 14:28:01 crc kubenswrapper[4778]: I0312 14:28:01.349179 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555428-88vs6" event={"ID":"071affdc-5584-4c5b-bfc8-f4e23c328d71","Type":"ContainerStarted","Data":"8c5ed4a7613ef08f2ef273dc42fb87d76cc0502f91ca97ea93b219c2b5f0b8da"} Mar 12 14:28:02 crc kubenswrapper[4778]: I0312 14:28:02.359236 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555428-88vs6" event={"ID":"071affdc-5584-4c5b-bfc8-f4e23c328d71","Type":"ContainerStarted","Data":"077f3532831f67d79d381fa8fb2af0e2022a1d4bb5091cf24d239727a2077516"} Mar 12 14:28:02 crc kubenswrapper[4778]: I0312 14:28:02.382002 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555428-88vs6" podStartSLOduration=1.366560612 podStartE2EDuration="2.381973524s" podCreationTimestamp="2026-03-12 14:28:00 +0000 UTC" 
firstStartedPulling="2026-03-12 14:28:00.936732708 +0000 UTC m=+4699.385428104" lastFinishedPulling="2026-03-12 14:28:01.95214562 +0000 UTC m=+4700.400841016" observedRunningTime="2026-03-12 14:28:02.37796651 +0000 UTC m=+4700.826661916" watchObservedRunningTime="2026-03-12 14:28:02.381973524 +0000 UTC m=+4700.830668940" Mar 12 14:28:03 crc kubenswrapper[4778]: I0312 14:28:03.369457 4778 generic.go:334] "Generic (PLEG): container finished" podID="071affdc-5584-4c5b-bfc8-f4e23c328d71" containerID="077f3532831f67d79d381fa8fb2af0e2022a1d4bb5091cf24d239727a2077516" exitCode=0 Mar 12 14:28:03 crc kubenswrapper[4778]: I0312 14:28:03.369519 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555428-88vs6" event={"ID":"071affdc-5584-4c5b-bfc8-f4e23c328d71","Type":"ContainerDied","Data":"077f3532831f67d79d381fa8fb2af0e2022a1d4bb5091cf24d239727a2077516"} Mar 12 14:28:04 crc kubenswrapper[4778]: I0312 14:28:04.845810 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555428-88vs6" Mar 12 14:28:04 crc kubenswrapper[4778]: I0312 14:28:04.956055 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jm79s\" (UniqueName: \"kubernetes.io/projected/071affdc-5584-4c5b-bfc8-f4e23c328d71-kube-api-access-jm79s\") pod \"071affdc-5584-4c5b-bfc8-f4e23c328d71\" (UID: \"071affdc-5584-4c5b-bfc8-f4e23c328d71\") " Mar 12 14:28:04 crc kubenswrapper[4778]: I0312 14:28:04.963360 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/071affdc-5584-4c5b-bfc8-f4e23c328d71-kube-api-access-jm79s" (OuterVolumeSpecName: "kube-api-access-jm79s") pod "071affdc-5584-4c5b-bfc8-f4e23c328d71" (UID: "071affdc-5584-4c5b-bfc8-f4e23c328d71"). InnerVolumeSpecName "kube-api-access-jm79s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:28:05 crc kubenswrapper[4778]: I0312 14:28:05.060163 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jm79s\" (UniqueName: \"kubernetes.io/projected/071affdc-5584-4c5b-bfc8-f4e23c328d71-kube-api-access-jm79s\") on node \"crc\" DevicePath \"\"" Mar 12 14:28:05 crc kubenswrapper[4778]: I0312 14:28:05.386494 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555422-kw6c8"] Mar 12 14:28:05 crc kubenswrapper[4778]: I0312 14:28:05.389947 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555428-88vs6" event={"ID":"071affdc-5584-4c5b-bfc8-f4e23c328d71","Type":"ContainerDied","Data":"8c5ed4a7613ef08f2ef273dc42fb87d76cc0502f91ca97ea93b219c2b5f0b8da"} Mar 12 14:28:05 crc kubenswrapper[4778]: I0312 14:28:05.389991 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c5ed4a7613ef08f2ef273dc42fb87d76cc0502f91ca97ea93b219c2b5f0b8da" Mar 12 14:28:05 crc kubenswrapper[4778]: I0312 14:28:05.390059 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555428-88vs6" Mar 12 14:28:05 crc kubenswrapper[4778]: I0312 14:28:05.396695 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555422-kw6c8"] Mar 12 14:28:06 crc kubenswrapper[4778]: I0312 14:28:06.267019 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecab0458-0ee6-4672-bd27-4c8aae8427bb" path="/var/lib/kubelet/pods/ecab0458-0ee6-4672-bd27-4c8aae8427bb/volumes" Mar 12 14:28:22 crc kubenswrapper[4778]: I0312 14:28:22.434951 4778 scope.go:117] "RemoveContainer" containerID="460cb8eb02f9333998d559fe47fe50a7beb133708302defa156052aac3033d0e" Mar 12 14:29:58 crc kubenswrapper[4778]: I0312 14:29:58.557905 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 14:29:58 crc kubenswrapper[4778]: I0312 14:29:58.559545 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 14:30:00 crc kubenswrapper[4778]: I0312 14:30:00.168852 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555430-zhqfl"] Mar 12 14:30:00 crc kubenswrapper[4778]: E0312 14:30:00.169821 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="071affdc-5584-4c5b-bfc8-f4e23c328d71" containerName="oc" Mar 12 14:30:00 crc kubenswrapper[4778]: I0312 14:30:00.169838 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="071affdc-5584-4c5b-bfc8-f4e23c328d71" containerName="oc" Mar 12 
14:30:00 crc kubenswrapper[4778]: I0312 14:30:00.170063 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="071affdc-5584-4c5b-bfc8-f4e23c328d71" containerName="oc" Mar 12 14:30:00 crc kubenswrapper[4778]: I0312 14:30:00.170946 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555430-zhqfl" Mar 12 14:30:00 crc kubenswrapper[4778]: I0312 14:30:00.173123 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 12 14:30:00 crc kubenswrapper[4778]: I0312 14:30:00.173578 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 12 14:30:00 crc kubenswrapper[4778]: I0312 14:30:00.180401 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555430-hqmdc"] Mar 12 14:30:00 crc kubenswrapper[4778]: I0312 14:30:00.182023 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555430-hqmdc" Mar 12 14:30:00 crc kubenswrapper[4778]: I0312 14:30:00.184593 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 14:30:00 crc kubenswrapper[4778]: I0312 14:30:00.184772 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 14:30:00 crc kubenswrapper[4778]: I0312 14:30:00.184920 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 14:30:00 crc kubenswrapper[4778]: I0312 14:30:00.192357 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555430-hqmdc"] Mar 12 14:30:00 crc kubenswrapper[4778]: I0312 14:30:00.203641 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555430-zhqfl"] Mar 12 14:30:00 crc kubenswrapper[4778]: I0312 14:30:00.290606 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjt9v\" (UniqueName: \"kubernetes.io/projected/db4d57b8-99e5-4955-a6fe-9b0c0a6e61df-kube-api-access-hjt9v\") pod \"collect-profiles-29555430-zhqfl\" (UID: \"db4d57b8-99e5-4955-a6fe-9b0c0a6e61df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555430-zhqfl" Mar 12 14:30:00 crc kubenswrapper[4778]: I0312 14:30:00.290715 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db4d57b8-99e5-4955-a6fe-9b0c0a6e61df-config-volume\") pod \"collect-profiles-29555430-zhqfl\" (UID: \"db4d57b8-99e5-4955-a6fe-9b0c0a6e61df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555430-zhqfl" Mar 12 14:30:00 crc kubenswrapper[4778]: I0312 14:30:00.290759 4778 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/db4d57b8-99e5-4955-a6fe-9b0c0a6e61df-secret-volume\") pod \"collect-profiles-29555430-zhqfl\" (UID: \"db4d57b8-99e5-4955-a6fe-9b0c0a6e61df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555430-zhqfl" Mar 12 14:30:00 crc kubenswrapper[4778]: I0312 14:30:00.290850 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjpnj\" (UniqueName: \"kubernetes.io/projected/e89b5d9b-fc4f-4dd6-aae9-9d1ee7b3edb0-kube-api-access-rjpnj\") pod \"auto-csr-approver-29555430-hqmdc\" (UID: \"e89b5d9b-fc4f-4dd6-aae9-9d1ee7b3edb0\") " pod="openshift-infra/auto-csr-approver-29555430-hqmdc" Mar 12 14:30:00 crc kubenswrapper[4778]: I0312 14:30:00.393071 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjt9v\" (UniqueName: \"kubernetes.io/projected/db4d57b8-99e5-4955-a6fe-9b0c0a6e61df-kube-api-access-hjt9v\") pod \"collect-profiles-29555430-zhqfl\" (UID: \"db4d57b8-99e5-4955-a6fe-9b0c0a6e61df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555430-zhqfl" Mar 12 14:30:00 crc kubenswrapper[4778]: I0312 14:30:00.393169 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db4d57b8-99e5-4955-a6fe-9b0c0a6e61df-config-volume\") pod \"collect-profiles-29555430-zhqfl\" (UID: \"db4d57b8-99e5-4955-a6fe-9b0c0a6e61df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555430-zhqfl" Mar 12 14:30:00 crc kubenswrapper[4778]: I0312 14:30:00.393229 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/db4d57b8-99e5-4955-a6fe-9b0c0a6e61df-secret-volume\") pod \"collect-profiles-29555430-zhqfl\" (UID: \"db4d57b8-99e5-4955-a6fe-9b0c0a6e61df\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29555430-zhqfl" Mar 12 14:30:00 crc kubenswrapper[4778]: I0312 14:30:00.393318 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjpnj\" (UniqueName: \"kubernetes.io/projected/e89b5d9b-fc4f-4dd6-aae9-9d1ee7b3edb0-kube-api-access-rjpnj\") pod \"auto-csr-approver-29555430-hqmdc\" (UID: \"e89b5d9b-fc4f-4dd6-aae9-9d1ee7b3edb0\") " pod="openshift-infra/auto-csr-approver-29555430-hqmdc" Mar 12 14:30:00 crc kubenswrapper[4778]: I0312 14:30:00.394647 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db4d57b8-99e5-4955-a6fe-9b0c0a6e61df-config-volume\") pod \"collect-profiles-29555430-zhqfl\" (UID: \"db4d57b8-99e5-4955-a6fe-9b0c0a6e61df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555430-zhqfl" Mar 12 14:30:00 crc kubenswrapper[4778]: I0312 14:30:00.399975 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/db4d57b8-99e5-4955-a6fe-9b0c0a6e61df-secret-volume\") pod \"collect-profiles-29555430-zhqfl\" (UID: \"db4d57b8-99e5-4955-a6fe-9b0c0a6e61df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555430-zhqfl" Mar 12 14:30:00 crc kubenswrapper[4778]: I0312 14:30:00.413171 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjpnj\" (UniqueName: \"kubernetes.io/projected/e89b5d9b-fc4f-4dd6-aae9-9d1ee7b3edb0-kube-api-access-rjpnj\") pod \"auto-csr-approver-29555430-hqmdc\" (UID: \"e89b5d9b-fc4f-4dd6-aae9-9d1ee7b3edb0\") " pod="openshift-infra/auto-csr-approver-29555430-hqmdc" Mar 12 14:30:00 crc kubenswrapper[4778]: I0312 14:30:00.414144 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjt9v\" (UniqueName: \"kubernetes.io/projected/db4d57b8-99e5-4955-a6fe-9b0c0a6e61df-kube-api-access-hjt9v\") pod 
\"collect-profiles-29555430-zhqfl\" (UID: \"db4d57b8-99e5-4955-a6fe-9b0c0a6e61df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555430-zhqfl" Mar 12 14:30:00 crc kubenswrapper[4778]: I0312 14:30:00.511358 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555430-zhqfl" Mar 12 14:30:00 crc kubenswrapper[4778]: I0312 14:30:00.526309 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555430-hqmdc" Mar 12 14:30:01 crc kubenswrapper[4778]: I0312 14:30:01.003421 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555430-zhqfl"] Mar 12 14:30:01 crc kubenswrapper[4778]: I0312 14:30:01.012763 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555430-hqmdc"] Mar 12 14:30:01 crc kubenswrapper[4778]: I0312 14:30:01.461077 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555430-hqmdc" event={"ID":"e89b5d9b-fc4f-4dd6-aae9-9d1ee7b3edb0","Type":"ContainerStarted","Data":"3394de743c569ccb49b14f1347f2a2010456b09cd4047ab06cda916362b6f570"} Mar 12 14:30:01 crc kubenswrapper[4778]: I0312 14:30:01.463562 4778 generic.go:334] "Generic (PLEG): container finished" podID="db4d57b8-99e5-4955-a6fe-9b0c0a6e61df" containerID="57a58448ac2691d1255487422cd2ce72ba1abcb298bf6c4ed12464fdb32a532d" exitCode=0 Mar 12 14:30:01 crc kubenswrapper[4778]: I0312 14:30:01.463592 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555430-zhqfl" event={"ID":"db4d57b8-99e5-4955-a6fe-9b0c0a6e61df","Type":"ContainerDied","Data":"57a58448ac2691d1255487422cd2ce72ba1abcb298bf6c4ed12464fdb32a532d"} Mar 12 14:30:01 crc kubenswrapper[4778]: I0312 14:30:01.463614 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29555430-zhqfl" event={"ID":"db4d57b8-99e5-4955-a6fe-9b0c0a6e61df","Type":"ContainerStarted","Data":"5eb849e52a7c5c3d7b24d0990176f3f994944c8351988bfe9f8003384aceae71"} Mar 12 14:30:02 crc kubenswrapper[4778]: I0312 14:30:02.477113 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555430-hqmdc" event={"ID":"e89b5d9b-fc4f-4dd6-aae9-9d1ee7b3edb0","Type":"ContainerStarted","Data":"202ec48bafe3f3236d534da77819d177d9a1fed914f316b780eda08b2d9dcd5e"} Mar 12 14:30:02 crc kubenswrapper[4778]: I0312 14:30:02.491206 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555430-hqmdc" podStartSLOduration=1.3572657559999999 podStartE2EDuration="2.491174366s" podCreationTimestamp="2026-03-12 14:30:00 +0000 UTC" firstStartedPulling="2026-03-12 14:30:01.018942752 +0000 UTC m=+4819.467638148" lastFinishedPulling="2026-03-12 14:30:02.152851362 +0000 UTC m=+4820.601546758" observedRunningTime="2026-03-12 14:30:02.490128446 +0000 UTC m=+4820.938823852" watchObservedRunningTime="2026-03-12 14:30:02.491174366 +0000 UTC m=+4820.939869762" Mar 12 14:30:02 crc kubenswrapper[4778]: I0312 14:30:02.990415 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555430-zhqfl" Mar 12 14:30:03 crc kubenswrapper[4778]: I0312 14:30:03.150148 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db4d57b8-99e5-4955-a6fe-9b0c0a6e61df-config-volume\") pod \"db4d57b8-99e5-4955-a6fe-9b0c0a6e61df\" (UID: \"db4d57b8-99e5-4955-a6fe-9b0c0a6e61df\") " Mar 12 14:30:03 crc kubenswrapper[4778]: I0312 14:30:03.150345 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjt9v\" (UniqueName: \"kubernetes.io/projected/db4d57b8-99e5-4955-a6fe-9b0c0a6e61df-kube-api-access-hjt9v\") pod \"db4d57b8-99e5-4955-a6fe-9b0c0a6e61df\" (UID: \"db4d57b8-99e5-4955-a6fe-9b0c0a6e61df\") " Mar 12 14:30:03 crc kubenswrapper[4778]: I0312 14:30:03.150407 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/db4d57b8-99e5-4955-a6fe-9b0c0a6e61df-secret-volume\") pod \"db4d57b8-99e5-4955-a6fe-9b0c0a6e61df\" (UID: \"db4d57b8-99e5-4955-a6fe-9b0c0a6e61df\") " Mar 12 14:30:03 crc kubenswrapper[4778]: I0312 14:30:03.151138 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db4d57b8-99e5-4955-a6fe-9b0c0a6e61df-config-volume" (OuterVolumeSpecName: "config-volume") pod "db4d57b8-99e5-4955-a6fe-9b0c0a6e61df" (UID: "db4d57b8-99e5-4955-a6fe-9b0c0a6e61df"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:30:03 crc kubenswrapper[4778]: I0312 14:30:03.155961 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db4d57b8-99e5-4955-a6fe-9b0c0a6e61df-kube-api-access-hjt9v" (OuterVolumeSpecName: "kube-api-access-hjt9v") pod "db4d57b8-99e5-4955-a6fe-9b0c0a6e61df" (UID: "db4d57b8-99e5-4955-a6fe-9b0c0a6e61df"). 
InnerVolumeSpecName "kube-api-access-hjt9v". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:30:03 crc kubenswrapper[4778]: I0312 14:30:03.168487 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db4d57b8-99e5-4955-a6fe-9b0c0a6e61df-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "db4d57b8-99e5-4955-a6fe-9b0c0a6e61df" (UID: "db4d57b8-99e5-4955-a6fe-9b0c0a6e61df"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:30:03 crc kubenswrapper[4778]: I0312 14:30:03.253062 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjt9v\" (UniqueName: \"kubernetes.io/projected/db4d57b8-99e5-4955-a6fe-9b0c0a6e61df-kube-api-access-hjt9v\") on node \"crc\" DevicePath \"\"" Mar 12 14:30:03 crc kubenswrapper[4778]: I0312 14:30:03.253109 4778 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/db4d57b8-99e5-4955-a6fe-9b0c0a6e61df-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 12 14:30:03 crc kubenswrapper[4778]: I0312 14:30:03.253123 4778 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db4d57b8-99e5-4955-a6fe-9b0c0a6e61df-config-volume\") on node \"crc\" DevicePath \"\"" Mar 12 14:30:03 crc kubenswrapper[4778]: I0312 14:30:03.487913 4778 generic.go:334] "Generic (PLEG): container finished" podID="e89b5d9b-fc4f-4dd6-aae9-9d1ee7b3edb0" containerID="202ec48bafe3f3236d534da77819d177d9a1fed914f316b780eda08b2d9dcd5e" exitCode=0 Mar 12 14:30:03 crc kubenswrapper[4778]: I0312 14:30:03.488018 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555430-hqmdc" event={"ID":"e89b5d9b-fc4f-4dd6-aae9-9d1ee7b3edb0","Type":"ContainerDied","Data":"202ec48bafe3f3236d534da77819d177d9a1fed914f316b780eda08b2d9dcd5e"} Mar 12 14:30:03 crc kubenswrapper[4778]: I0312 14:30:03.490161 4778 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555430-zhqfl" event={"ID":"db4d57b8-99e5-4955-a6fe-9b0c0a6e61df","Type":"ContainerDied","Data":"5eb849e52a7c5c3d7b24d0990176f3f994944c8351988bfe9f8003384aceae71"} Mar 12 14:30:03 crc kubenswrapper[4778]: I0312 14:30:03.490303 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5eb849e52a7c5c3d7b24d0990176f3f994944c8351988bfe9f8003384aceae71" Mar 12 14:30:03 crc kubenswrapper[4778]: I0312 14:30:03.490219 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555430-zhqfl" Mar 12 14:30:04 crc kubenswrapper[4778]: I0312 14:30:04.086083 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555385-qwzwh"] Mar 12 14:30:04 crc kubenswrapper[4778]: I0312 14:30:04.099492 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555385-qwzwh"] Mar 12 14:30:04 crc kubenswrapper[4778]: I0312 14:30:04.268342 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76005d52-2d02-4a1e-89dd-c050a66fe667" path="/var/lib/kubelet/pods/76005d52-2d02-4a1e-89dd-c050a66fe667/volumes" Mar 12 14:30:04 crc kubenswrapper[4778]: I0312 14:30:04.895379 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555430-hqmdc" Mar 12 14:30:04 crc kubenswrapper[4778]: I0312 14:30:04.990473 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjpnj\" (UniqueName: \"kubernetes.io/projected/e89b5d9b-fc4f-4dd6-aae9-9d1ee7b3edb0-kube-api-access-rjpnj\") pod \"e89b5d9b-fc4f-4dd6-aae9-9d1ee7b3edb0\" (UID: \"e89b5d9b-fc4f-4dd6-aae9-9d1ee7b3edb0\") " Mar 12 14:30:05 crc kubenswrapper[4778]: I0312 14:30:05.000930 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e89b5d9b-fc4f-4dd6-aae9-9d1ee7b3edb0-kube-api-access-rjpnj" (OuterVolumeSpecName: "kube-api-access-rjpnj") pod "e89b5d9b-fc4f-4dd6-aae9-9d1ee7b3edb0" (UID: "e89b5d9b-fc4f-4dd6-aae9-9d1ee7b3edb0"). InnerVolumeSpecName "kube-api-access-rjpnj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:30:05 crc kubenswrapper[4778]: I0312 14:30:05.092294 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjpnj\" (UniqueName: \"kubernetes.io/projected/e89b5d9b-fc4f-4dd6-aae9-9d1ee7b3edb0-kube-api-access-rjpnj\") on node \"crc\" DevicePath \"\"" Mar 12 14:30:05 crc kubenswrapper[4778]: I0312 14:30:05.339857 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555424-5hp4x"] Mar 12 14:30:05 crc kubenswrapper[4778]: I0312 14:30:05.349044 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555424-5hp4x"] Mar 12 14:30:05 crc kubenswrapper[4778]: I0312 14:30:05.509426 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555430-hqmdc" event={"ID":"e89b5d9b-fc4f-4dd6-aae9-9d1ee7b3edb0","Type":"ContainerDied","Data":"3394de743c569ccb49b14f1347f2a2010456b09cd4047ab06cda916362b6f570"} Mar 12 14:30:05 crc kubenswrapper[4778]: I0312 14:30:05.509632 4778 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="3394de743c569ccb49b14f1347f2a2010456b09cd4047ab06cda916362b6f570" Mar 12 14:30:05 crc kubenswrapper[4778]: I0312 14:30:05.509734 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555430-hqmdc" Mar 12 14:30:06 crc kubenswrapper[4778]: I0312 14:30:06.265985 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78c94941-b604-4fc0-b7b3-0b6749bbf233" path="/var/lib/kubelet/pods/78c94941-b604-4fc0-b7b3-0b6749bbf233/volumes" Mar 12 14:30:20 crc kubenswrapper[4778]: I0312 14:30:20.512984 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j2264"] Mar 12 14:30:20 crc kubenswrapper[4778]: E0312 14:30:20.513874 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db4d57b8-99e5-4955-a6fe-9b0c0a6e61df" containerName="collect-profiles" Mar 12 14:30:20 crc kubenswrapper[4778]: I0312 14:30:20.513889 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="db4d57b8-99e5-4955-a6fe-9b0c0a6e61df" containerName="collect-profiles" Mar 12 14:30:20 crc kubenswrapper[4778]: E0312 14:30:20.513918 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e89b5d9b-fc4f-4dd6-aae9-9d1ee7b3edb0" containerName="oc" Mar 12 14:30:20 crc kubenswrapper[4778]: I0312 14:30:20.513925 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="e89b5d9b-fc4f-4dd6-aae9-9d1ee7b3edb0" containerName="oc" Mar 12 14:30:20 crc kubenswrapper[4778]: I0312 14:30:20.514165 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="e89b5d9b-fc4f-4dd6-aae9-9d1ee7b3edb0" containerName="oc" Mar 12 14:30:20 crc kubenswrapper[4778]: I0312 14:30:20.514203 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="db4d57b8-99e5-4955-a6fe-9b0c0a6e61df" containerName="collect-profiles" Mar 12 14:30:20 crc kubenswrapper[4778]: I0312 14:30:20.515796 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j2264" Mar 12 14:30:20 crc kubenswrapper[4778]: I0312 14:30:20.541164 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j2264"] Mar 12 14:30:20 crc kubenswrapper[4778]: I0312 14:30:20.549822 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90d6add3-4eb8-45f0-afa1-cfca4fb1e6ac-utilities\") pod \"redhat-marketplace-j2264\" (UID: \"90d6add3-4eb8-45f0-afa1-cfca4fb1e6ac\") " pod="openshift-marketplace/redhat-marketplace-j2264" Mar 12 14:30:20 crc kubenswrapper[4778]: I0312 14:30:20.549990 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90d6add3-4eb8-45f0-afa1-cfca4fb1e6ac-catalog-content\") pod \"redhat-marketplace-j2264\" (UID: \"90d6add3-4eb8-45f0-afa1-cfca4fb1e6ac\") " pod="openshift-marketplace/redhat-marketplace-j2264" Mar 12 14:30:20 crc kubenswrapper[4778]: I0312 14:30:20.550050 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68l55\" (UniqueName: \"kubernetes.io/projected/90d6add3-4eb8-45f0-afa1-cfca4fb1e6ac-kube-api-access-68l55\") pod \"redhat-marketplace-j2264\" (UID: \"90d6add3-4eb8-45f0-afa1-cfca4fb1e6ac\") " pod="openshift-marketplace/redhat-marketplace-j2264" Mar 12 14:30:20 crc kubenswrapper[4778]: I0312 14:30:20.652970 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90d6add3-4eb8-45f0-afa1-cfca4fb1e6ac-utilities\") pod \"redhat-marketplace-j2264\" (UID: \"90d6add3-4eb8-45f0-afa1-cfca4fb1e6ac\") " pod="openshift-marketplace/redhat-marketplace-j2264" Mar 12 14:30:20 crc kubenswrapper[4778]: I0312 14:30:20.653302 4778 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90d6add3-4eb8-45f0-afa1-cfca4fb1e6ac-catalog-content\") pod \"redhat-marketplace-j2264\" (UID: \"90d6add3-4eb8-45f0-afa1-cfca4fb1e6ac\") " pod="openshift-marketplace/redhat-marketplace-j2264" Mar 12 14:30:20 crc kubenswrapper[4778]: I0312 14:30:20.653630 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90d6add3-4eb8-45f0-afa1-cfca4fb1e6ac-utilities\") pod \"redhat-marketplace-j2264\" (UID: \"90d6add3-4eb8-45f0-afa1-cfca4fb1e6ac\") " pod="openshift-marketplace/redhat-marketplace-j2264" Mar 12 14:30:20 crc kubenswrapper[4778]: I0312 14:30:20.653684 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90d6add3-4eb8-45f0-afa1-cfca4fb1e6ac-catalog-content\") pod \"redhat-marketplace-j2264\" (UID: \"90d6add3-4eb8-45f0-afa1-cfca4fb1e6ac\") " pod="openshift-marketplace/redhat-marketplace-j2264" Mar 12 14:30:20 crc kubenswrapper[4778]: I0312 14:30:20.653788 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68l55\" (UniqueName: \"kubernetes.io/projected/90d6add3-4eb8-45f0-afa1-cfca4fb1e6ac-kube-api-access-68l55\") pod \"redhat-marketplace-j2264\" (UID: \"90d6add3-4eb8-45f0-afa1-cfca4fb1e6ac\") " pod="openshift-marketplace/redhat-marketplace-j2264" Mar 12 14:30:20 crc kubenswrapper[4778]: I0312 14:30:20.682544 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68l55\" (UniqueName: \"kubernetes.io/projected/90d6add3-4eb8-45f0-afa1-cfca4fb1e6ac-kube-api-access-68l55\") pod \"redhat-marketplace-j2264\" (UID: \"90d6add3-4eb8-45f0-afa1-cfca4fb1e6ac\") " pod="openshift-marketplace/redhat-marketplace-j2264" Mar 12 14:30:20 crc kubenswrapper[4778]: I0312 14:30:20.844699 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j2264" Mar 12 14:30:21 crc kubenswrapper[4778]: I0312 14:30:21.336317 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j2264"] Mar 12 14:30:21 crc kubenswrapper[4778]: I0312 14:30:21.658953 4778 generic.go:334] "Generic (PLEG): container finished" podID="90d6add3-4eb8-45f0-afa1-cfca4fb1e6ac" containerID="3d6cd7c02a9fc8f9065eb1f58c0a4587d1fd25e46f09d296a3eb1bf674d206c1" exitCode=0 Mar 12 14:30:21 crc kubenswrapper[4778]: I0312 14:30:21.659016 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j2264" event={"ID":"90d6add3-4eb8-45f0-afa1-cfca4fb1e6ac","Type":"ContainerDied","Data":"3d6cd7c02a9fc8f9065eb1f58c0a4587d1fd25e46f09d296a3eb1bf674d206c1"} Mar 12 14:30:21 crc kubenswrapper[4778]: I0312 14:30:21.659320 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j2264" event={"ID":"90d6add3-4eb8-45f0-afa1-cfca4fb1e6ac","Type":"ContainerStarted","Data":"746398f781b6a4873ac0e7ce0ef1a3ad903daadcf6e96df72afd204e8d36367d"} Mar 12 14:30:22 crc kubenswrapper[4778]: I0312 14:30:22.541705 4778 scope.go:117] "RemoveContainer" containerID="501e76905e9d2ff1f1e87040184d63ca0f219b530ef232d95f1fa4250e5ab145" Mar 12 14:30:22 crc kubenswrapper[4778]: I0312 14:30:22.574015 4778 scope.go:117] "RemoveContainer" containerID="a1c2cc27e654689e4f136031bb0129f78011ff7542f974149c494096b483a2a2" Mar 12 14:30:23 crc kubenswrapper[4778]: I0312 14:30:23.687797 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j2264" event={"ID":"90d6add3-4eb8-45f0-afa1-cfca4fb1e6ac","Type":"ContainerStarted","Data":"ac315bf62854bec40ed11865b47b388a0f0bfc385b3f9ee55b4d6a4a7c7fdfba"} Mar 12 14:30:24 crc kubenswrapper[4778]: I0312 14:30:24.697061 4778 generic.go:334] "Generic (PLEG): container finished" podID="90d6add3-4eb8-45f0-afa1-cfca4fb1e6ac" 
containerID="ac315bf62854bec40ed11865b47b388a0f0bfc385b3f9ee55b4d6a4a7c7fdfba" exitCode=0 Mar 12 14:30:24 crc kubenswrapper[4778]: I0312 14:30:24.697130 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j2264" event={"ID":"90d6add3-4eb8-45f0-afa1-cfca4fb1e6ac","Type":"ContainerDied","Data":"ac315bf62854bec40ed11865b47b388a0f0bfc385b3f9ee55b4d6a4a7c7fdfba"} Mar 12 14:30:25 crc kubenswrapper[4778]: I0312 14:30:25.716910 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j2264" event={"ID":"90d6add3-4eb8-45f0-afa1-cfca4fb1e6ac","Type":"ContainerStarted","Data":"d3cb9f2ba446c850b76c89a7cd15f986deb091a3c6401ff8166b76a3b53b0e75"} Mar 12 14:30:25 crc kubenswrapper[4778]: I0312 14:30:25.755774 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-j2264" podStartSLOduration=2.298192935 podStartE2EDuration="5.755751432s" podCreationTimestamp="2026-03-12 14:30:20 +0000 UTC" firstStartedPulling="2026-03-12 14:30:21.660988198 +0000 UTC m=+4840.109683594" lastFinishedPulling="2026-03-12 14:30:25.118546695 +0000 UTC m=+4843.567242091" observedRunningTime="2026-03-12 14:30:25.74546907 +0000 UTC m=+4844.194164486" watchObservedRunningTime="2026-03-12 14:30:25.755751432 +0000 UTC m=+4844.204446828" Mar 12 14:30:28 crc kubenswrapper[4778]: I0312 14:30:28.557878 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 14:30:28 crc kubenswrapper[4778]: I0312 14:30:28.557945 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 14:30:30 crc kubenswrapper[4778]: I0312 14:30:30.845397 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-j2264" Mar 12 14:30:30 crc kubenswrapper[4778]: I0312 14:30:30.846842 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-j2264" Mar 12 14:30:30 crc kubenswrapper[4778]: I0312 14:30:30.903027 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-j2264" Mar 12 14:30:31 crc kubenswrapper[4778]: I0312 14:30:31.846619 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-j2264" Mar 12 14:30:31 crc kubenswrapper[4778]: I0312 14:30:31.901598 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j2264"] Mar 12 14:30:33 crc kubenswrapper[4778]: I0312 14:30:33.818288 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j2264" podUID="90d6add3-4eb8-45f0-afa1-cfca4fb1e6ac" containerName="registry-server" containerID="cri-o://d3cb9f2ba446c850b76c89a7cd15f986deb091a3c6401ff8166b76a3b53b0e75" gracePeriod=2 Mar 12 14:30:34 crc kubenswrapper[4778]: I0312 14:30:34.460580 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j2264" Mar 12 14:30:34 crc kubenswrapper[4778]: I0312 14:30:34.558088 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90d6add3-4eb8-45f0-afa1-cfca4fb1e6ac-utilities\") pod \"90d6add3-4eb8-45f0-afa1-cfca4fb1e6ac\" (UID: \"90d6add3-4eb8-45f0-afa1-cfca4fb1e6ac\") " Mar 12 14:30:34 crc kubenswrapper[4778]: I0312 14:30:34.558139 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90d6add3-4eb8-45f0-afa1-cfca4fb1e6ac-catalog-content\") pod \"90d6add3-4eb8-45f0-afa1-cfca4fb1e6ac\" (UID: \"90d6add3-4eb8-45f0-afa1-cfca4fb1e6ac\") " Mar 12 14:30:34 crc kubenswrapper[4778]: I0312 14:30:34.558261 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68l55\" (UniqueName: \"kubernetes.io/projected/90d6add3-4eb8-45f0-afa1-cfca4fb1e6ac-kube-api-access-68l55\") pod \"90d6add3-4eb8-45f0-afa1-cfca4fb1e6ac\" (UID: \"90d6add3-4eb8-45f0-afa1-cfca4fb1e6ac\") " Mar 12 14:30:34 crc kubenswrapper[4778]: I0312 14:30:34.558874 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90d6add3-4eb8-45f0-afa1-cfca4fb1e6ac-utilities" (OuterVolumeSpecName: "utilities") pod "90d6add3-4eb8-45f0-afa1-cfca4fb1e6ac" (UID: "90d6add3-4eb8-45f0-afa1-cfca4fb1e6ac"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:30:34 crc kubenswrapper[4778]: I0312 14:30:34.565280 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90d6add3-4eb8-45f0-afa1-cfca4fb1e6ac-kube-api-access-68l55" (OuterVolumeSpecName: "kube-api-access-68l55") pod "90d6add3-4eb8-45f0-afa1-cfca4fb1e6ac" (UID: "90d6add3-4eb8-45f0-afa1-cfca4fb1e6ac"). InnerVolumeSpecName "kube-api-access-68l55". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:30:34 crc kubenswrapper[4778]: I0312 14:30:34.584199 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90d6add3-4eb8-45f0-afa1-cfca4fb1e6ac-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "90d6add3-4eb8-45f0-afa1-cfca4fb1e6ac" (UID: "90d6add3-4eb8-45f0-afa1-cfca4fb1e6ac"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:30:34 crc kubenswrapper[4778]: I0312 14:30:34.662032 4778 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90d6add3-4eb8-45f0-afa1-cfca4fb1e6ac-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 14:30:34 crc kubenswrapper[4778]: I0312 14:30:34.662102 4778 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90d6add3-4eb8-45f0-afa1-cfca4fb1e6ac-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 14:30:34 crc kubenswrapper[4778]: I0312 14:30:34.662117 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68l55\" (UniqueName: \"kubernetes.io/projected/90d6add3-4eb8-45f0-afa1-cfca4fb1e6ac-kube-api-access-68l55\") on node \"crc\" DevicePath \"\"" Mar 12 14:30:34 crc kubenswrapper[4778]: I0312 14:30:34.826649 4778 generic.go:334] "Generic (PLEG): container finished" podID="90d6add3-4eb8-45f0-afa1-cfca4fb1e6ac" containerID="d3cb9f2ba446c850b76c89a7cd15f986deb091a3c6401ff8166b76a3b53b0e75" exitCode=0 Mar 12 14:30:34 crc kubenswrapper[4778]: I0312 14:30:34.826693 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j2264" event={"ID":"90d6add3-4eb8-45f0-afa1-cfca4fb1e6ac","Type":"ContainerDied","Data":"d3cb9f2ba446c850b76c89a7cd15f986deb091a3c6401ff8166b76a3b53b0e75"} Mar 12 14:30:34 crc kubenswrapper[4778]: I0312 14:30:34.826726 4778 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-j2264" event={"ID":"90d6add3-4eb8-45f0-afa1-cfca4fb1e6ac","Type":"ContainerDied","Data":"746398f781b6a4873ac0e7ce0ef1a3ad903daadcf6e96df72afd204e8d36367d"} Mar 12 14:30:34 crc kubenswrapper[4778]: I0312 14:30:34.826742 4778 scope.go:117] "RemoveContainer" containerID="d3cb9f2ba446c850b76c89a7cd15f986deb091a3c6401ff8166b76a3b53b0e75" Mar 12 14:30:34 crc kubenswrapper[4778]: I0312 14:30:34.826863 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j2264" Mar 12 14:30:34 crc kubenswrapper[4778]: I0312 14:30:34.875316 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j2264"] Mar 12 14:30:34 crc kubenswrapper[4778]: I0312 14:30:34.876900 4778 scope.go:117] "RemoveContainer" containerID="ac315bf62854bec40ed11865b47b388a0f0bfc385b3f9ee55b4d6a4a7c7fdfba" Mar 12 14:30:34 crc kubenswrapper[4778]: I0312 14:30:34.893024 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j2264"] Mar 12 14:30:34 crc kubenswrapper[4778]: I0312 14:30:34.898309 4778 scope.go:117] "RemoveContainer" containerID="3d6cd7c02a9fc8f9065eb1f58c0a4587d1fd25e46f09d296a3eb1bf674d206c1" Mar 12 14:30:34 crc kubenswrapper[4778]: I0312 14:30:34.943457 4778 scope.go:117] "RemoveContainer" containerID="d3cb9f2ba446c850b76c89a7cd15f986deb091a3c6401ff8166b76a3b53b0e75" Mar 12 14:30:34 crc kubenswrapper[4778]: E0312 14:30:34.945404 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3cb9f2ba446c850b76c89a7cd15f986deb091a3c6401ff8166b76a3b53b0e75\": container with ID starting with d3cb9f2ba446c850b76c89a7cd15f986deb091a3c6401ff8166b76a3b53b0e75 not found: ID does not exist" containerID="d3cb9f2ba446c850b76c89a7cd15f986deb091a3c6401ff8166b76a3b53b0e75" Mar 12 14:30:34 crc kubenswrapper[4778]: I0312 14:30:34.945444 4778 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3cb9f2ba446c850b76c89a7cd15f986deb091a3c6401ff8166b76a3b53b0e75"} err="failed to get container status \"d3cb9f2ba446c850b76c89a7cd15f986deb091a3c6401ff8166b76a3b53b0e75\": rpc error: code = NotFound desc = could not find container \"d3cb9f2ba446c850b76c89a7cd15f986deb091a3c6401ff8166b76a3b53b0e75\": container with ID starting with d3cb9f2ba446c850b76c89a7cd15f986deb091a3c6401ff8166b76a3b53b0e75 not found: ID does not exist" Mar 12 14:30:34 crc kubenswrapper[4778]: I0312 14:30:34.945470 4778 scope.go:117] "RemoveContainer" containerID="ac315bf62854bec40ed11865b47b388a0f0bfc385b3f9ee55b4d6a4a7c7fdfba" Mar 12 14:30:34 crc kubenswrapper[4778]: E0312 14:30:34.945765 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac315bf62854bec40ed11865b47b388a0f0bfc385b3f9ee55b4d6a4a7c7fdfba\": container with ID starting with ac315bf62854bec40ed11865b47b388a0f0bfc385b3f9ee55b4d6a4a7c7fdfba not found: ID does not exist" containerID="ac315bf62854bec40ed11865b47b388a0f0bfc385b3f9ee55b4d6a4a7c7fdfba" Mar 12 14:30:34 crc kubenswrapper[4778]: I0312 14:30:34.945787 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac315bf62854bec40ed11865b47b388a0f0bfc385b3f9ee55b4d6a4a7c7fdfba"} err="failed to get container status \"ac315bf62854bec40ed11865b47b388a0f0bfc385b3f9ee55b4d6a4a7c7fdfba\": rpc error: code = NotFound desc = could not find container \"ac315bf62854bec40ed11865b47b388a0f0bfc385b3f9ee55b4d6a4a7c7fdfba\": container with ID starting with ac315bf62854bec40ed11865b47b388a0f0bfc385b3f9ee55b4d6a4a7c7fdfba not found: ID does not exist" Mar 12 14:30:34 crc kubenswrapper[4778]: I0312 14:30:34.945799 4778 scope.go:117] "RemoveContainer" containerID="3d6cd7c02a9fc8f9065eb1f58c0a4587d1fd25e46f09d296a3eb1bf674d206c1" Mar 12 14:30:34 crc kubenswrapper[4778]: E0312 
14:30:34.946069 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d6cd7c02a9fc8f9065eb1f58c0a4587d1fd25e46f09d296a3eb1bf674d206c1\": container with ID starting with 3d6cd7c02a9fc8f9065eb1f58c0a4587d1fd25e46f09d296a3eb1bf674d206c1 not found: ID does not exist" containerID="3d6cd7c02a9fc8f9065eb1f58c0a4587d1fd25e46f09d296a3eb1bf674d206c1" Mar 12 14:30:34 crc kubenswrapper[4778]: I0312 14:30:34.946091 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d6cd7c02a9fc8f9065eb1f58c0a4587d1fd25e46f09d296a3eb1bf674d206c1"} err="failed to get container status \"3d6cd7c02a9fc8f9065eb1f58c0a4587d1fd25e46f09d296a3eb1bf674d206c1\": rpc error: code = NotFound desc = could not find container \"3d6cd7c02a9fc8f9065eb1f58c0a4587d1fd25e46f09d296a3eb1bf674d206c1\": container with ID starting with 3d6cd7c02a9fc8f9065eb1f58c0a4587d1fd25e46f09d296a3eb1bf674d206c1 not found: ID does not exist" Mar 12 14:30:36 crc kubenswrapper[4778]: I0312 14:30:36.267390 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90d6add3-4eb8-45f0-afa1-cfca4fb1e6ac" path="/var/lib/kubelet/pods/90d6add3-4eb8-45f0-afa1-cfca4fb1e6ac/volumes" Mar 12 14:30:58 crc kubenswrapper[4778]: I0312 14:30:58.558354 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 14:30:58 crc kubenswrapper[4778]: I0312 14:30:58.558979 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Mar 12 14:30:58 crc kubenswrapper[4778]: I0312 14:30:58.559038 4778 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" Mar 12 14:30:58 crc kubenswrapper[4778]: I0312 14:30:58.559935 4778 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6a1ec993be9e8a6473b90c3546089f31fa9bb55d6d9459c21a9b96e0f0006f55"} pod="openshift-machine-config-operator/machine-config-daemon-2qx88" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 14:30:58 crc kubenswrapper[4778]: I0312 14:30:58.560005 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" containerID="cri-o://6a1ec993be9e8a6473b90c3546089f31fa9bb55d6d9459c21a9b96e0f0006f55" gracePeriod=600 Mar 12 14:30:59 crc kubenswrapper[4778]: I0312 14:30:59.096854 4778 generic.go:334] "Generic (PLEG): container finished" podID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerID="6a1ec993be9e8a6473b90c3546089f31fa9bb55d6d9459c21a9b96e0f0006f55" exitCode=0 Mar 12 14:30:59 crc kubenswrapper[4778]: I0312 14:30:59.097367 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" event={"ID":"24438fc6-dab0-4a9e-8b97-2532da76d9cd","Type":"ContainerDied","Data":"6a1ec993be9e8a6473b90c3546089f31fa9bb55d6d9459c21a9b96e0f0006f55"} Mar 12 14:30:59 crc kubenswrapper[4778]: I0312 14:30:59.097439 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" event={"ID":"24438fc6-dab0-4a9e-8b97-2532da76d9cd","Type":"ContainerStarted","Data":"bbb7cd318ed9aaf8c81b44eaf9e441283227b353d96ce94d2989c6c892e1351c"} Mar 12 14:30:59 crc 
kubenswrapper[4778]: I0312 14:30:59.097462 4778 scope.go:117] "RemoveContainer" containerID="994fa1e1cf0527d97bf647f1d2a50ed301bda64c2a862df7b100daec9859483a" Mar 12 14:31:32 crc kubenswrapper[4778]: I0312 14:31:32.324398 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wrns6"] Mar 12 14:31:32 crc kubenswrapper[4778]: E0312 14:31:32.326408 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90d6add3-4eb8-45f0-afa1-cfca4fb1e6ac" containerName="extract-content" Mar 12 14:31:32 crc kubenswrapper[4778]: I0312 14:31:32.326432 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="90d6add3-4eb8-45f0-afa1-cfca4fb1e6ac" containerName="extract-content" Mar 12 14:31:32 crc kubenswrapper[4778]: E0312 14:31:32.326458 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90d6add3-4eb8-45f0-afa1-cfca4fb1e6ac" containerName="registry-server" Mar 12 14:31:32 crc kubenswrapper[4778]: I0312 14:31:32.326465 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="90d6add3-4eb8-45f0-afa1-cfca4fb1e6ac" containerName="registry-server" Mar 12 14:31:32 crc kubenswrapper[4778]: E0312 14:31:32.326477 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90d6add3-4eb8-45f0-afa1-cfca4fb1e6ac" containerName="extract-utilities" Mar 12 14:31:32 crc kubenswrapper[4778]: I0312 14:31:32.326485 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="90d6add3-4eb8-45f0-afa1-cfca4fb1e6ac" containerName="extract-utilities" Mar 12 14:31:32 crc kubenswrapper[4778]: I0312 14:31:32.327109 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="90d6add3-4eb8-45f0-afa1-cfca4fb1e6ac" containerName="registry-server" Mar 12 14:31:32 crc kubenswrapper[4778]: I0312 14:31:32.329843 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wrns6" Mar 12 14:31:32 crc kubenswrapper[4778]: I0312 14:31:32.332106 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wrns6"] Mar 12 14:31:32 crc kubenswrapper[4778]: I0312 14:31:32.436502 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75d88a85-f027-434a-9c11-e5c40cb64d16-catalog-content\") pod \"redhat-operators-wrns6\" (UID: \"75d88a85-f027-434a-9c11-e5c40cb64d16\") " pod="openshift-marketplace/redhat-operators-wrns6" Mar 12 14:31:32 crc kubenswrapper[4778]: I0312 14:31:32.436563 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4969\" (UniqueName: \"kubernetes.io/projected/75d88a85-f027-434a-9c11-e5c40cb64d16-kube-api-access-c4969\") pod \"redhat-operators-wrns6\" (UID: \"75d88a85-f027-434a-9c11-e5c40cb64d16\") " pod="openshift-marketplace/redhat-operators-wrns6" Mar 12 14:31:32 crc kubenswrapper[4778]: I0312 14:31:32.436606 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75d88a85-f027-434a-9c11-e5c40cb64d16-utilities\") pod \"redhat-operators-wrns6\" (UID: \"75d88a85-f027-434a-9c11-e5c40cb64d16\") " pod="openshift-marketplace/redhat-operators-wrns6" Mar 12 14:31:32 crc kubenswrapper[4778]: I0312 14:31:32.537869 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75d88a85-f027-434a-9c11-e5c40cb64d16-catalog-content\") pod \"redhat-operators-wrns6\" (UID: \"75d88a85-f027-434a-9c11-e5c40cb64d16\") " pod="openshift-marketplace/redhat-operators-wrns6" Mar 12 14:31:32 crc kubenswrapper[4778]: I0312 14:31:32.538368 4778 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-c4969\" (UniqueName: \"kubernetes.io/projected/75d88a85-f027-434a-9c11-e5c40cb64d16-kube-api-access-c4969\") pod \"redhat-operators-wrns6\" (UID: \"75d88a85-f027-434a-9c11-e5c40cb64d16\") " pod="openshift-marketplace/redhat-operators-wrns6" Mar 12 14:31:32 crc kubenswrapper[4778]: I0312 14:31:32.538438 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75d88a85-f027-434a-9c11-e5c40cb64d16-utilities\") pod \"redhat-operators-wrns6\" (UID: \"75d88a85-f027-434a-9c11-e5c40cb64d16\") " pod="openshift-marketplace/redhat-operators-wrns6" Mar 12 14:31:32 crc kubenswrapper[4778]: I0312 14:31:32.538437 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75d88a85-f027-434a-9c11-e5c40cb64d16-catalog-content\") pod \"redhat-operators-wrns6\" (UID: \"75d88a85-f027-434a-9c11-e5c40cb64d16\") " pod="openshift-marketplace/redhat-operators-wrns6" Mar 12 14:31:32 crc kubenswrapper[4778]: I0312 14:31:32.538674 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75d88a85-f027-434a-9c11-e5c40cb64d16-utilities\") pod \"redhat-operators-wrns6\" (UID: \"75d88a85-f027-434a-9c11-e5c40cb64d16\") " pod="openshift-marketplace/redhat-operators-wrns6" Mar 12 14:31:32 crc kubenswrapper[4778]: I0312 14:31:32.565077 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4969\" (UniqueName: \"kubernetes.io/projected/75d88a85-f027-434a-9c11-e5c40cb64d16-kube-api-access-c4969\") pod \"redhat-operators-wrns6\" (UID: \"75d88a85-f027-434a-9c11-e5c40cb64d16\") " pod="openshift-marketplace/redhat-operators-wrns6" Mar 12 14:31:32 crc kubenswrapper[4778]: I0312 14:31:32.694411 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wrns6" Mar 12 14:31:33 crc kubenswrapper[4778]: I0312 14:31:33.129886 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wrns6"] Mar 12 14:31:33 crc kubenswrapper[4778]: I0312 14:31:33.418998 4778 generic.go:334] "Generic (PLEG): container finished" podID="75d88a85-f027-434a-9c11-e5c40cb64d16" containerID="c68406c6882aec720e9611447e1ccc8988ea75684e147b1609224dcddd31d538" exitCode=0 Mar 12 14:31:33 crc kubenswrapper[4778]: I0312 14:31:33.419094 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wrns6" event={"ID":"75d88a85-f027-434a-9c11-e5c40cb64d16","Type":"ContainerDied","Data":"c68406c6882aec720e9611447e1ccc8988ea75684e147b1609224dcddd31d538"} Mar 12 14:31:33 crc kubenswrapper[4778]: I0312 14:31:33.419350 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wrns6" event={"ID":"75d88a85-f027-434a-9c11-e5c40cb64d16","Type":"ContainerStarted","Data":"1c7c3eb15ff3c903a138dcd8c30e7f50b560dc5fe6819c4ba964d113870db2f2"} Mar 12 14:31:33 crc kubenswrapper[4778]: I0312 14:31:33.420843 4778 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 14:31:34 crc kubenswrapper[4778]: I0312 14:31:34.430601 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wrns6" event={"ID":"75d88a85-f027-434a-9c11-e5c40cb64d16","Type":"ContainerStarted","Data":"8db1f0fdafeaee661df2b6ada567ab3179ef90cc4d4e57d242e9a091b1bfe351"} Mar 12 14:31:39 crc kubenswrapper[4778]: I0312 14:31:39.479214 4778 generic.go:334] "Generic (PLEG): container finished" podID="75d88a85-f027-434a-9c11-e5c40cb64d16" containerID="8db1f0fdafeaee661df2b6ada567ab3179ef90cc4d4e57d242e9a091b1bfe351" exitCode=0 Mar 12 14:31:39 crc kubenswrapper[4778]: I0312 14:31:39.479306 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-wrns6" event={"ID":"75d88a85-f027-434a-9c11-e5c40cb64d16","Type":"ContainerDied","Data":"8db1f0fdafeaee661df2b6ada567ab3179ef90cc4d4e57d242e9a091b1bfe351"} Mar 12 14:31:40 crc kubenswrapper[4778]: I0312 14:31:40.494308 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wrns6" event={"ID":"75d88a85-f027-434a-9c11-e5c40cb64d16","Type":"ContainerStarted","Data":"d303c9bc7a74d64545141c564ba3f0be945f88b7f92e9057d960eb47fc781a54"} Mar 12 14:31:40 crc kubenswrapper[4778]: I0312 14:31:40.525122 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wrns6" podStartSLOduration=1.9610731750000001 podStartE2EDuration="8.525097252s" podCreationTimestamp="2026-03-12 14:31:32 +0000 UTC" firstStartedPulling="2026-03-12 14:31:33.420611627 +0000 UTC m=+4911.869307023" lastFinishedPulling="2026-03-12 14:31:39.984635704 +0000 UTC m=+4918.433331100" observedRunningTime="2026-03-12 14:31:40.511609639 +0000 UTC m=+4918.960305045" watchObservedRunningTime="2026-03-12 14:31:40.525097252 +0000 UTC m=+4918.973792668" Mar 12 14:31:42 crc kubenswrapper[4778]: I0312 14:31:42.695159 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wrns6" Mar 12 14:31:42 crc kubenswrapper[4778]: I0312 14:31:42.696225 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wrns6" Mar 12 14:31:43 crc kubenswrapper[4778]: I0312 14:31:43.744106 4778 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wrns6" podUID="75d88a85-f027-434a-9c11-e5c40cb64d16" containerName="registry-server" probeResult="failure" output=< Mar 12 14:31:43 crc kubenswrapper[4778]: timeout: failed to connect service ":50051" within 1s Mar 12 14:31:43 crc kubenswrapper[4778]: > Mar 12 14:31:53 crc kubenswrapper[4778]: I0312 
14:31:53.741263 4778 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wrns6" podUID="75d88a85-f027-434a-9c11-e5c40cb64d16" containerName="registry-server" probeResult="failure" output=< Mar 12 14:31:53 crc kubenswrapper[4778]: timeout: failed to connect service ":50051" within 1s Mar 12 14:31:53 crc kubenswrapper[4778]: > Mar 12 14:32:00 crc kubenswrapper[4778]: I0312 14:32:00.152744 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555432-94dlz"] Mar 12 14:32:00 crc kubenswrapper[4778]: I0312 14:32:00.154692 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555432-94dlz" Mar 12 14:32:00 crc kubenswrapper[4778]: I0312 14:32:00.156463 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 14:32:00 crc kubenswrapper[4778]: I0312 14:32:00.156840 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 14:32:00 crc kubenswrapper[4778]: I0312 14:32:00.157048 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 14:32:00 crc kubenswrapper[4778]: I0312 14:32:00.166330 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555432-94dlz"] Mar 12 14:32:00 crc kubenswrapper[4778]: I0312 14:32:00.261220 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5md2x\" (UniqueName: \"kubernetes.io/projected/4b6391c3-533c-4b44-b1be-2a5c9752ba4b-kube-api-access-5md2x\") pod \"auto-csr-approver-29555432-94dlz\" (UID: \"4b6391c3-533c-4b44-b1be-2a5c9752ba4b\") " pod="openshift-infra/auto-csr-approver-29555432-94dlz" Mar 12 14:32:00 crc kubenswrapper[4778]: I0312 14:32:00.363646 4778 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-5md2x\" (UniqueName: \"kubernetes.io/projected/4b6391c3-533c-4b44-b1be-2a5c9752ba4b-kube-api-access-5md2x\") pod \"auto-csr-approver-29555432-94dlz\" (UID: \"4b6391c3-533c-4b44-b1be-2a5c9752ba4b\") " pod="openshift-infra/auto-csr-approver-29555432-94dlz" Mar 12 14:32:00 crc kubenswrapper[4778]: I0312 14:32:00.383372 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5md2x\" (UniqueName: \"kubernetes.io/projected/4b6391c3-533c-4b44-b1be-2a5c9752ba4b-kube-api-access-5md2x\") pod \"auto-csr-approver-29555432-94dlz\" (UID: \"4b6391c3-533c-4b44-b1be-2a5c9752ba4b\") " pod="openshift-infra/auto-csr-approver-29555432-94dlz" Mar 12 14:32:00 crc kubenswrapper[4778]: I0312 14:32:00.473873 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555432-94dlz" Mar 12 14:32:00 crc kubenswrapper[4778]: I0312 14:32:00.965310 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555432-94dlz"] Mar 12 14:32:01 crc kubenswrapper[4778]: I0312 14:32:01.668479 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555432-94dlz" event={"ID":"4b6391c3-533c-4b44-b1be-2a5c9752ba4b","Type":"ContainerStarted","Data":"d4b6c9d11eb3ff84a8caa2505ac01209847a1935674926fa5a2116afc7245310"} Mar 12 14:32:02 crc kubenswrapper[4778]: I0312 14:32:02.744465 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wrns6" Mar 12 14:32:02 crc kubenswrapper[4778]: I0312 14:32:02.812727 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wrns6" Mar 12 14:32:03 crc kubenswrapper[4778]: I0312 14:32:03.510136 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wrns6"] Mar 12 14:32:05 crc kubenswrapper[4778]: I0312 
14:32:04.696291 4778 generic.go:334] "Generic (PLEG): container finished" podID="4b6391c3-533c-4b44-b1be-2a5c9752ba4b" containerID="f68c8ed6b7c2e6259023580b179d97b5ef4d89ae76842473f005cc28f0933cea" exitCode=0 Mar 12 14:32:05 crc kubenswrapper[4778]: I0312 14:32:04.697037 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wrns6" podUID="75d88a85-f027-434a-9c11-e5c40cb64d16" containerName="registry-server" containerID="cri-o://d303c9bc7a74d64545141c564ba3f0be945f88b7f92e9057d960eb47fc781a54" gracePeriod=2 Mar 12 14:32:05 crc kubenswrapper[4778]: I0312 14:32:04.697173 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555432-94dlz" event={"ID":"4b6391c3-533c-4b44-b1be-2a5c9752ba4b","Type":"ContainerDied","Data":"f68c8ed6b7c2e6259023580b179d97b5ef4d89ae76842473f005cc28f0933cea"} Mar 12 14:32:05 crc kubenswrapper[4778]: I0312 14:32:05.284896 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wrns6" Mar 12 14:32:05 crc kubenswrapper[4778]: I0312 14:32:05.456439 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4969\" (UniqueName: \"kubernetes.io/projected/75d88a85-f027-434a-9c11-e5c40cb64d16-kube-api-access-c4969\") pod \"75d88a85-f027-434a-9c11-e5c40cb64d16\" (UID: \"75d88a85-f027-434a-9c11-e5c40cb64d16\") " Mar 12 14:32:05 crc kubenswrapper[4778]: I0312 14:32:05.456792 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75d88a85-f027-434a-9c11-e5c40cb64d16-catalog-content\") pod \"75d88a85-f027-434a-9c11-e5c40cb64d16\" (UID: \"75d88a85-f027-434a-9c11-e5c40cb64d16\") " Mar 12 14:32:05 crc kubenswrapper[4778]: I0312 14:32:05.456932 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75d88a85-f027-434a-9c11-e5c40cb64d16-utilities\") pod \"75d88a85-f027-434a-9c11-e5c40cb64d16\" (UID: \"75d88a85-f027-434a-9c11-e5c40cb64d16\") " Mar 12 14:32:05 crc kubenswrapper[4778]: I0312 14:32:05.458530 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75d88a85-f027-434a-9c11-e5c40cb64d16-utilities" (OuterVolumeSpecName: "utilities") pod "75d88a85-f027-434a-9c11-e5c40cb64d16" (UID: "75d88a85-f027-434a-9c11-e5c40cb64d16"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:32:05 crc kubenswrapper[4778]: I0312 14:32:05.463260 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75d88a85-f027-434a-9c11-e5c40cb64d16-kube-api-access-c4969" (OuterVolumeSpecName: "kube-api-access-c4969") pod "75d88a85-f027-434a-9c11-e5c40cb64d16" (UID: "75d88a85-f027-434a-9c11-e5c40cb64d16"). InnerVolumeSpecName "kube-api-access-c4969". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:32:05 crc kubenswrapper[4778]: I0312 14:32:05.558987 4778 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75d88a85-f027-434a-9c11-e5c40cb64d16-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 14:32:05 crc kubenswrapper[4778]: I0312 14:32:05.559022 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4969\" (UniqueName: \"kubernetes.io/projected/75d88a85-f027-434a-9c11-e5c40cb64d16-kube-api-access-c4969\") on node \"crc\" DevicePath \"\"" Mar 12 14:32:05 crc kubenswrapper[4778]: I0312 14:32:05.587252 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75d88a85-f027-434a-9c11-e5c40cb64d16-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "75d88a85-f027-434a-9c11-e5c40cb64d16" (UID: "75d88a85-f027-434a-9c11-e5c40cb64d16"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:32:05 crc kubenswrapper[4778]: I0312 14:32:05.660766 4778 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75d88a85-f027-434a-9c11-e5c40cb64d16-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 14:32:05 crc kubenswrapper[4778]: I0312 14:32:05.719870 4778 generic.go:334] "Generic (PLEG): container finished" podID="75d88a85-f027-434a-9c11-e5c40cb64d16" containerID="d303c9bc7a74d64545141c564ba3f0be945f88b7f92e9057d960eb47fc781a54" exitCode=0 Mar 12 14:32:05 crc kubenswrapper[4778]: I0312 14:32:05.720062 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wrns6" Mar 12 14:32:05 crc kubenswrapper[4778]: I0312 14:32:05.722841 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wrns6" event={"ID":"75d88a85-f027-434a-9c11-e5c40cb64d16","Type":"ContainerDied","Data":"d303c9bc7a74d64545141c564ba3f0be945f88b7f92e9057d960eb47fc781a54"} Mar 12 14:32:05 crc kubenswrapper[4778]: I0312 14:32:05.722898 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wrns6" event={"ID":"75d88a85-f027-434a-9c11-e5c40cb64d16","Type":"ContainerDied","Data":"1c7c3eb15ff3c903a138dcd8c30e7f50b560dc5fe6819c4ba964d113870db2f2"} Mar 12 14:32:05 crc kubenswrapper[4778]: I0312 14:32:05.722927 4778 scope.go:117] "RemoveContainer" containerID="d303c9bc7a74d64545141c564ba3f0be945f88b7f92e9057d960eb47fc781a54" Mar 12 14:32:05 crc kubenswrapper[4778]: I0312 14:32:05.762247 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wrns6"] Mar 12 14:32:05 crc kubenswrapper[4778]: I0312 14:32:05.771258 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wrns6"] Mar 12 14:32:05 crc kubenswrapper[4778]: I0312 14:32:05.773227 4778 scope.go:117] "RemoveContainer" containerID="8db1f0fdafeaee661df2b6ada567ab3179ef90cc4d4e57d242e9a091b1bfe351" Mar 12 14:32:06 crc kubenswrapper[4778]: I0312 14:32:06.154011 4778 scope.go:117] "RemoveContainer" containerID="c68406c6882aec720e9611447e1ccc8988ea75684e147b1609224dcddd31d538" Mar 12 14:32:06 crc kubenswrapper[4778]: I0312 14:32:06.236120 4778 scope.go:117] "RemoveContainer" containerID="d303c9bc7a74d64545141c564ba3f0be945f88b7f92e9057d960eb47fc781a54" Mar 12 14:32:06 crc kubenswrapper[4778]: E0312 14:32:06.236678 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"d303c9bc7a74d64545141c564ba3f0be945f88b7f92e9057d960eb47fc781a54\": container with ID starting with d303c9bc7a74d64545141c564ba3f0be945f88b7f92e9057d960eb47fc781a54 not found: ID does not exist" containerID="d303c9bc7a74d64545141c564ba3f0be945f88b7f92e9057d960eb47fc781a54" Mar 12 14:32:06 crc kubenswrapper[4778]: I0312 14:32:06.236747 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d303c9bc7a74d64545141c564ba3f0be945f88b7f92e9057d960eb47fc781a54"} err="failed to get container status \"d303c9bc7a74d64545141c564ba3f0be945f88b7f92e9057d960eb47fc781a54\": rpc error: code = NotFound desc = could not find container \"d303c9bc7a74d64545141c564ba3f0be945f88b7f92e9057d960eb47fc781a54\": container with ID starting with d303c9bc7a74d64545141c564ba3f0be945f88b7f92e9057d960eb47fc781a54 not found: ID does not exist" Mar 12 14:32:06 crc kubenswrapper[4778]: I0312 14:32:06.236788 4778 scope.go:117] "RemoveContainer" containerID="8db1f0fdafeaee661df2b6ada567ab3179ef90cc4d4e57d242e9a091b1bfe351" Mar 12 14:32:06 crc kubenswrapper[4778]: E0312 14:32:06.237139 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8db1f0fdafeaee661df2b6ada567ab3179ef90cc4d4e57d242e9a091b1bfe351\": container with ID starting with 8db1f0fdafeaee661df2b6ada567ab3179ef90cc4d4e57d242e9a091b1bfe351 not found: ID does not exist" containerID="8db1f0fdafeaee661df2b6ada567ab3179ef90cc4d4e57d242e9a091b1bfe351" Mar 12 14:32:06 crc kubenswrapper[4778]: I0312 14:32:06.237176 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8db1f0fdafeaee661df2b6ada567ab3179ef90cc4d4e57d242e9a091b1bfe351"} err="failed to get container status \"8db1f0fdafeaee661df2b6ada567ab3179ef90cc4d4e57d242e9a091b1bfe351\": rpc error: code = NotFound desc = could not find container \"8db1f0fdafeaee661df2b6ada567ab3179ef90cc4d4e57d242e9a091b1bfe351\": container with ID 
starting with 8db1f0fdafeaee661df2b6ada567ab3179ef90cc4d4e57d242e9a091b1bfe351 not found: ID does not exist" Mar 12 14:32:06 crc kubenswrapper[4778]: I0312 14:32:06.237209 4778 scope.go:117] "RemoveContainer" containerID="c68406c6882aec720e9611447e1ccc8988ea75684e147b1609224dcddd31d538" Mar 12 14:32:06 crc kubenswrapper[4778]: E0312 14:32:06.237446 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c68406c6882aec720e9611447e1ccc8988ea75684e147b1609224dcddd31d538\": container with ID starting with c68406c6882aec720e9611447e1ccc8988ea75684e147b1609224dcddd31d538 not found: ID does not exist" containerID="c68406c6882aec720e9611447e1ccc8988ea75684e147b1609224dcddd31d538" Mar 12 14:32:06 crc kubenswrapper[4778]: I0312 14:32:06.237473 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c68406c6882aec720e9611447e1ccc8988ea75684e147b1609224dcddd31d538"} err="failed to get container status \"c68406c6882aec720e9611447e1ccc8988ea75684e147b1609224dcddd31d538\": rpc error: code = NotFound desc = could not find container \"c68406c6882aec720e9611447e1ccc8988ea75684e147b1609224dcddd31d538\": container with ID starting with c68406c6882aec720e9611447e1ccc8988ea75684e147b1609224dcddd31d538 not found: ID does not exist" Mar 12 14:32:06 crc kubenswrapper[4778]: I0312 14:32:06.264495 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75d88a85-f027-434a-9c11-e5c40cb64d16" path="/var/lib/kubelet/pods/75d88a85-f027-434a-9c11-e5c40cb64d16/volumes" Mar 12 14:32:06 crc kubenswrapper[4778]: I0312 14:32:06.276419 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555432-94dlz" Mar 12 14:32:06 crc kubenswrapper[4778]: I0312 14:32:06.374576 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5md2x\" (UniqueName: \"kubernetes.io/projected/4b6391c3-533c-4b44-b1be-2a5c9752ba4b-kube-api-access-5md2x\") pod \"4b6391c3-533c-4b44-b1be-2a5c9752ba4b\" (UID: \"4b6391c3-533c-4b44-b1be-2a5c9752ba4b\") " Mar 12 14:32:06 crc kubenswrapper[4778]: I0312 14:32:06.379521 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b6391c3-533c-4b44-b1be-2a5c9752ba4b-kube-api-access-5md2x" (OuterVolumeSpecName: "kube-api-access-5md2x") pod "4b6391c3-533c-4b44-b1be-2a5c9752ba4b" (UID: "4b6391c3-533c-4b44-b1be-2a5c9752ba4b"). InnerVolumeSpecName "kube-api-access-5md2x". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:32:06 crc kubenswrapper[4778]: I0312 14:32:06.477890 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5md2x\" (UniqueName: \"kubernetes.io/projected/4b6391c3-533c-4b44-b1be-2a5c9752ba4b-kube-api-access-5md2x\") on node \"crc\" DevicePath \"\"" Mar 12 14:32:06 crc kubenswrapper[4778]: I0312 14:32:06.729933 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555432-94dlz" event={"ID":"4b6391c3-533c-4b44-b1be-2a5c9752ba4b","Type":"ContainerDied","Data":"d4b6c9d11eb3ff84a8caa2505ac01209847a1935674926fa5a2116afc7245310"} Mar 12 14:32:06 crc kubenswrapper[4778]: I0312 14:32:06.730567 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4b6c9d11eb3ff84a8caa2505ac01209847a1935674926fa5a2116afc7245310" Mar 12 14:32:06 crc kubenswrapper[4778]: I0312 14:32:06.729978 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555432-94dlz" Mar 12 14:32:07 crc kubenswrapper[4778]: I0312 14:32:07.358894 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555426-btwds"] Mar 12 14:32:07 crc kubenswrapper[4778]: I0312 14:32:07.368527 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555426-btwds"] Mar 12 14:32:08 crc kubenswrapper[4778]: I0312 14:32:08.264976 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa45db55-92a4-4a16-9455-ee110dc34fa6" path="/var/lib/kubelet/pods/fa45db55-92a4-4a16-9455-ee110dc34fa6/volumes" Mar 12 14:32:22 crc kubenswrapper[4778]: I0312 14:32:22.735310 4778 scope.go:117] "RemoveContainer" containerID="188dd1cb886e6788ffd8398573fda57dc92b1fe481e6f2ffdc97a0e049e9348c" Mar 12 14:32:58 crc kubenswrapper[4778]: I0312 14:32:58.557599 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 14:32:58 crc kubenswrapper[4778]: I0312 14:32:58.558307 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 14:33:28 crc kubenswrapper[4778]: I0312 14:33:28.557333 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 14:33:28 crc kubenswrapper[4778]: 
I0312 14:33:28.557966 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 14:33:58 crc kubenswrapper[4778]: I0312 14:33:58.558459 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 14:33:58 crc kubenswrapper[4778]: I0312 14:33:58.559308 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 14:33:58 crc kubenswrapper[4778]: I0312 14:33:58.559397 4778 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" Mar 12 14:33:58 crc kubenswrapper[4778]: I0312 14:33:58.560751 4778 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bbb7cd318ed9aaf8c81b44eaf9e441283227b353d96ce94d2989c6c892e1351c"} pod="openshift-machine-config-operator/machine-config-daemon-2qx88" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 14:33:58 crc kubenswrapper[4778]: I0312 14:33:58.560892 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" 
containerName="machine-config-daemon" containerID="cri-o://bbb7cd318ed9aaf8c81b44eaf9e441283227b353d96ce94d2989c6c892e1351c" gracePeriod=600 Mar 12 14:33:58 crc kubenswrapper[4778]: E0312 14:33:58.701486 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:33:58 crc kubenswrapper[4778]: I0312 14:33:58.721352 4778 generic.go:334] "Generic (PLEG): container finished" podID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerID="bbb7cd318ed9aaf8c81b44eaf9e441283227b353d96ce94d2989c6c892e1351c" exitCode=0 Mar 12 14:33:58 crc kubenswrapper[4778]: I0312 14:33:58.721393 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" event={"ID":"24438fc6-dab0-4a9e-8b97-2532da76d9cd","Type":"ContainerDied","Data":"bbb7cd318ed9aaf8c81b44eaf9e441283227b353d96ce94d2989c6c892e1351c"} Mar 12 14:33:58 crc kubenswrapper[4778]: I0312 14:33:58.721425 4778 scope.go:117] "RemoveContainer" containerID="6a1ec993be9e8a6473b90c3546089f31fa9bb55d6d9459c21a9b96e0f0006f55" Mar 12 14:33:58 crc kubenswrapper[4778]: I0312 14:33:58.722367 4778 scope.go:117] "RemoveContainer" containerID="bbb7cd318ed9aaf8c81b44eaf9e441283227b353d96ce94d2989c6c892e1351c" Mar 12 14:33:58 crc kubenswrapper[4778]: E0312 14:33:58.722890 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:34:00 crc kubenswrapper[4778]: I0312 14:34:00.147870 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555434-hzddc"] Mar 12 14:34:00 crc kubenswrapper[4778]: E0312 14:34:00.149560 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75d88a85-f027-434a-9c11-e5c40cb64d16" containerName="extract-utilities" Mar 12 14:34:00 crc kubenswrapper[4778]: I0312 14:34:00.149578 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="75d88a85-f027-434a-9c11-e5c40cb64d16" containerName="extract-utilities" Mar 12 14:34:00 crc kubenswrapper[4778]: E0312 14:34:00.149618 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75d88a85-f027-434a-9c11-e5c40cb64d16" containerName="extract-content" Mar 12 14:34:00 crc kubenswrapper[4778]: I0312 14:34:00.149628 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="75d88a85-f027-434a-9c11-e5c40cb64d16" containerName="extract-content" Mar 12 14:34:00 crc kubenswrapper[4778]: E0312 14:34:00.149648 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75d88a85-f027-434a-9c11-e5c40cb64d16" containerName="registry-server" Mar 12 14:34:00 crc kubenswrapper[4778]: I0312 14:34:00.149656 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="75d88a85-f027-434a-9c11-e5c40cb64d16" containerName="registry-server" Mar 12 14:34:00 crc kubenswrapper[4778]: E0312 14:34:00.149667 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b6391c3-533c-4b44-b1be-2a5c9752ba4b" containerName="oc" Mar 12 14:34:00 crc kubenswrapper[4778]: I0312 14:34:00.149674 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b6391c3-533c-4b44-b1be-2a5c9752ba4b" containerName="oc" Mar 12 14:34:00 crc kubenswrapper[4778]: I0312 14:34:00.149910 4778 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="4b6391c3-533c-4b44-b1be-2a5c9752ba4b" containerName="oc" Mar 12 14:34:00 crc kubenswrapper[4778]: I0312 14:34:00.149942 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="75d88a85-f027-434a-9c11-e5c40cb64d16" containerName="registry-server" Mar 12 14:34:00 crc kubenswrapper[4778]: I0312 14:34:00.150723 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555434-hzddc" Mar 12 14:34:00 crc kubenswrapper[4778]: I0312 14:34:00.155392 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 14:34:00 crc kubenswrapper[4778]: I0312 14:34:00.155967 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 14:34:00 crc kubenswrapper[4778]: I0312 14:34:00.157992 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 14:34:00 crc kubenswrapper[4778]: I0312 14:34:00.181981 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555434-hzddc"] Mar 12 14:34:00 crc kubenswrapper[4778]: I0312 14:34:00.272242 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6td7\" (UniqueName: \"kubernetes.io/projected/e7855ae5-9f57-4d62-ab01-d16ae9f5a037-kube-api-access-w6td7\") pod \"auto-csr-approver-29555434-hzddc\" (UID: \"e7855ae5-9f57-4d62-ab01-d16ae9f5a037\") " pod="openshift-infra/auto-csr-approver-29555434-hzddc" Mar 12 14:34:00 crc kubenswrapper[4778]: I0312 14:34:00.374457 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6td7\" (UniqueName: \"kubernetes.io/projected/e7855ae5-9f57-4d62-ab01-d16ae9f5a037-kube-api-access-w6td7\") pod \"auto-csr-approver-29555434-hzddc\" (UID: \"e7855ae5-9f57-4d62-ab01-d16ae9f5a037\") " 
pod="openshift-infra/auto-csr-approver-29555434-hzddc" Mar 12 14:34:00 crc kubenswrapper[4778]: I0312 14:34:00.397850 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6td7\" (UniqueName: \"kubernetes.io/projected/e7855ae5-9f57-4d62-ab01-d16ae9f5a037-kube-api-access-w6td7\") pod \"auto-csr-approver-29555434-hzddc\" (UID: \"e7855ae5-9f57-4d62-ab01-d16ae9f5a037\") " pod="openshift-infra/auto-csr-approver-29555434-hzddc" Mar 12 14:34:00 crc kubenswrapper[4778]: I0312 14:34:00.473509 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555434-hzddc" Mar 12 14:34:00 crc kubenswrapper[4778]: I0312 14:34:00.945167 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555434-hzddc"] Mar 12 14:34:01 crc kubenswrapper[4778]: I0312 14:34:01.751558 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555434-hzddc" event={"ID":"e7855ae5-9f57-4d62-ab01-d16ae9f5a037","Type":"ContainerStarted","Data":"d4fe264c15cb5db14849184e4b38afe0dbec8fa34e51673b8ea8ad0e2910d3db"} Mar 12 14:34:02 crc kubenswrapper[4778]: I0312 14:34:02.765141 4778 generic.go:334] "Generic (PLEG): container finished" podID="e7855ae5-9f57-4d62-ab01-d16ae9f5a037" containerID="ecf6cfdc210df01866b5bda8e874db3a9407a84531517ea05fb802b9d57bcdb0" exitCode=0 Mar 12 14:34:02 crc kubenswrapper[4778]: I0312 14:34:02.765210 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555434-hzddc" event={"ID":"e7855ae5-9f57-4d62-ab01-d16ae9f5a037","Type":"ContainerDied","Data":"ecf6cfdc210df01866b5bda8e874db3a9407a84531517ea05fb802b9d57bcdb0"} Mar 12 14:34:04 crc kubenswrapper[4778]: I0312 14:34:04.645662 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555434-hzddc" Mar 12 14:34:04 crc kubenswrapper[4778]: I0312 14:34:04.764274 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6td7\" (UniqueName: \"kubernetes.io/projected/e7855ae5-9f57-4d62-ab01-d16ae9f5a037-kube-api-access-w6td7\") pod \"e7855ae5-9f57-4d62-ab01-d16ae9f5a037\" (UID: \"e7855ae5-9f57-4d62-ab01-d16ae9f5a037\") " Mar 12 14:34:04 crc kubenswrapper[4778]: I0312 14:34:04.770219 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7855ae5-9f57-4d62-ab01-d16ae9f5a037-kube-api-access-w6td7" (OuterVolumeSpecName: "kube-api-access-w6td7") pod "e7855ae5-9f57-4d62-ab01-d16ae9f5a037" (UID: "e7855ae5-9f57-4d62-ab01-d16ae9f5a037"). InnerVolumeSpecName "kube-api-access-w6td7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:34:04 crc kubenswrapper[4778]: I0312 14:34:04.784606 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555434-hzddc" event={"ID":"e7855ae5-9f57-4d62-ab01-d16ae9f5a037","Type":"ContainerDied","Data":"d4fe264c15cb5db14849184e4b38afe0dbec8fa34e51673b8ea8ad0e2910d3db"} Mar 12 14:34:04 crc kubenswrapper[4778]: I0312 14:34:04.784650 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4fe264c15cb5db14849184e4b38afe0dbec8fa34e51673b8ea8ad0e2910d3db" Mar 12 14:34:04 crc kubenswrapper[4778]: I0312 14:34:04.784724 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555434-hzddc" Mar 12 14:34:04 crc kubenswrapper[4778]: I0312 14:34:04.867542 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6td7\" (UniqueName: \"kubernetes.io/projected/e7855ae5-9f57-4d62-ab01-d16ae9f5a037-kube-api-access-w6td7\") on node \"crc\" DevicePath \"\"" Mar 12 14:34:05 crc kubenswrapper[4778]: I0312 14:34:05.723216 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555428-88vs6"] Mar 12 14:34:05 crc kubenswrapper[4778]: I0312 14:34:05.735022 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555428-88vs6"] Mar 12 14:34:06 crc kubenswrapper[4778]: I0312 14:34:06.264994 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="071affdc-5584-4c5b-bfc8-f4e23c328d71" path="/var/lib/kubelet/pods/071affdc-5584-4c5b-bfc8-f4e23c328d71/volumes" Mar 12 14:34:14 crc kubenswrapper[4778]: I0312 14:34:14.253581 4778 scope.go:117] "RemoveContainer" containerID="bbb7cd318ed9aaf8c81b44eaf9e441283227b353d96ce94d2989c6c892e1351c" Mar 12 14:34:14 crc kubenswrapper[4778]: E0312 14:34:14.254421 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:34:22 crc kubenswrapper[4778]: I0312 14:34:22.844694 4778 scope.go:117] "RemoveContainer" containerID="077f3532831f67d79d381fa8fb2af0e2022a1d4bb5091cf24d239727a2077516" Mar 12 14:34:26 crc kubenswrapper[4778]: I0312 14:34:26.254560 4778 scope.go:117] "RemoveContainer" containerID="bbb7cd318ed9aaf8c81b44eaf9e441283227b353d96ce94d2989c6c892e1351c" Mar 12 
14:34:26 crc kubenswrapper[4778]: E0312 14:34:26.255387 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:34:40 crc kubenswrapper[4778]: I0312 14:34:40.256230 4778 scope.go:117] "RemoveContainer" containerID="bbb7cd318ed9aaf8c81b44eaf9e441283227b353d96ce94d2989c6c892e1351c" Mar 12 14:34:40 crc kubenswrapper[4778]: E0312 14:34:40.256915 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:34:52 crc kubenswrapper[4778]: I0312 14:34:52.261284 4778 scope.go:117] "RemoveContainer" containerID="bbb7cd318ed9aaf8c81b44eaf9e441283227b353d96ce94d2989c6c892e1351c" Mar 12 14:34:52 crc kubenswrapper[4778]: E0312 14:34:52.262071 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:35:04 crc kubenswrapper[4778]: I0312 14:35:04.254677 4778 scope.go:117] "RemoveContainer" 
containerID="bbb7cd318ed9aaf8c81b44eaf9e441283227b353d96ce94d2989c6c892e1351c" Mar 12 14:35:04 crc kubenswrapper[4778]: E0312 14:35:04.255520 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:35:15 crc kubenswrapper[4778]: I0312 14:35:15.255499 4778 scope.go:117] "RemoveContainer" containerID="bbb7cd318ed9aaf8c81b44eaf9e441283227b353d96ce94d2989c6c892e1351c" Mar 12 14:35:15 crc kubenswrapper[4778]: E0312 14:35:15.257752 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:35:27 crc kubenswrapper[4778]: I0312 14:35:27.254718 4778 scope.go:117] "RemoveContainer" containerID="bbb7cd318ed9aaf8c81b44eaf9e441283227b353d96ce94d2989c6c892e1351c" Mar 12 14:35:27 crc kubenswrapper[4778]: E0312 14:35:27.255948 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:35:38 crc kubenswrapper[4778]: I0312 14:35:38.254467 4778 scope.go:117] 
"RemoveContainer" containerID="bbb7cd318ed9aaf8c81b44eaf9e441283227b353d96ce94d2989c6c892e1351c" Mar 12 14:35:38 crc kubenswrapper[4778]: E0312 14:35:38.255124 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:35:50 crc kubenswrapper[4778]: I0312 14:35:50.254601 4778 scope.go:117] "RemoveContainer" containerID="bbb7cd318ed9aaf8c81b44eaf9e441283227b353d96ce94d2989c6c892e1351c" Mar 12 14:35:50 crc kubenswrapper[4778]: E0312 14:35:50.255537 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:36:00 crc kubenswrapper[4778]: I0312 14:36:00.144042 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555436-9bs4m"] Mar 12 14:36:00 crc kubenswrapper[4778]: E0312 14:36:00.145114 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7855ae5-9f57-4d62-ab01-d16ae9f5a037" containerName="oc" Mar 12 14:36:00 crc kubenswrapper[4778]: I0312 14:36:00.145131 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7855ae5-9f57-4d62-ab01-d16ae9f5a037" containerName="oc" Mar 12 14:36:00 crc kubenswrapper[4778]: I0312 14:36:00.145449 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7855ae5-9f57-4d62-ab01-d16ae9f5a037" containerName="oc" Mar 
12 14:36:00 crc kubenswrapper[4778]: I0312 14:36:00.146220 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555436-9bs4m" Mar 12 14:36:00 crc kubenswrapper[4778]: I0312 14:36:00.148031 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 14:36:00 crc kubenswrapper[4778]: I0312 14:36:00.148292 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 14:36:00 crc kubenswrapper[4778]: I0312 14:36:00.150347 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 14:36:00 crc kubenswrapper[4778]: I0312 14:36:00.153773 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555436-9bs4m"] Mar 12 14:36:00 crc kubenswrapper[4778]: I0312 14:36:00.280289 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwgwh\" (UniqueName: \"kubernetes.io/projected/45d1f962-b71e-4473-b387-137a395e1a39-kube-api-access-wwgwh\") pod \"auto-csr-approver-29555436-9bs4m\" (UID: \"45d1f962-b71e-4473-b387-137a395e1a39\") " pod="openshift-infra/auto-csr-approver-29555436-9bs4m" Mar 12 14:36:00 crc kubenswrapper[4778]: I0312 14:36:00.381951 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwgwh\" (UniqueName: \"kubernetes.io/projected/45d1f962-b71e-4473-b387-137a395e1a39-kube-api-access-wwgwh\") pod \"auto-csr-approver-29555436-9bs4m\" (UID: \"45d1f962-b71e-4473-b387-137a395e1a39\") " pod="openshift-infra/auto-csr-approver-29555436-9bs4m" Mar 12 14:36:00 crc kubenswrapper[4778]: I0312 14:36:00.404042 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwgwh\" (UniqueName: 
\"kubernetes.io/projected/45d1f962-b71e-4473-b387-137a395e1a39-kube-api-access-wwgwh\") pod \"auto-csr-approver-29555436-9bs4m\" (UID: \"45d1f962-b71e-4473-b387-137a395e1a39\") " pod="openshift-infra/auto-csr-approver-29555436-9bs4m" Mar 12 14:36:00 crc kubenswrapper[4778]: I0312 14:36:00.467920 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555436-9bs4m" Mar 12 14:36:00 crc kubenswrapper[4778]: I0312 14:36:00.910802 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555436-9bs4m"] Mar 12 14:36:01 crc kubenswrapper[4778]: W0312 14:36:01.032498 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45d1f962_b71e_4473_b387_137a395e1a39.slice/crio-46c218281802b0c8f5d068642d635879a43981b837f5f8787e72274312e00084 WatchSource:0}: Error finding container 46c218281802b0c8f5d068642d635879a43981b837f5f8787e72274312e00084: Status 404 returned error can't find the container with id 46c218281802b0c8f5d068642d635879a43981b837f5f8787e72274312e00084 Mar 12 14:36:01 crc kubenswrapper[4778]: I0312 14:36:01.885436 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555436-9bs4m" event={"ID":"45d1f962-b71e-4473-b387-137a395e1a39","Type":"ContainerStarted","Data":"46c218281802b0c8f5d068642d635879a43981b837f5f8787e72274312e00084"} Mar 12 14:36:02 crc kubenswrapper[4778]: I0312 14:36:02.896758 4778 generic.go:334] "Generic (PLEG): container finished" podID="45d1f962-b71e-4473-b387-137a395e1a39" containerID="b4d039fad9b993f652c5f6f0f661f085d4f93b47467c47a0fe13959b9f367b5d" exitCode=0 Mar 12 14:36:02 crc kubenswrapper[4778]: I0312 14:36:02.896850 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555436-9bs4m" 
event={"ID":"45d1f962-b71e-4473-b387-137a395e1a39","Type":"ContainerDied","Data":"b4d039fad9b993f652c5f6f0f661f085d4f93b47467c47a0fe13959b9f367b5d"} Mar 12 14:36:04 crc kubenswrapper[4778]: I0312 14:36:04.342941 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555436-9bs4m" Mar 12 14:36:04 crc kubenswrapper[4778]: I0312 14:36:04.468793 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwgwh\" (UniqueName: \"kubernetes.io/projected/45d1f962-b71e-4473-b387-137a395e1a39-kube-api-access-wwgwh\") pod \"45d1f962-b71e-4473-b387-137a395e1a39\" (UID: \"45d1f962-b71e-4473-b387-137a395e1a39\") " Mar 12 14:36:04 crc kubenswrapper[4778]: I0312 14:36:04.477080 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45d1f962-b71e-4473-b387-137a395e1a39-kube-api-access-wwgwh" (OuterVolumeSpecName: "kube-api-access-wwgwh") pod "45d1f962-b71e-4473-b387-137a395e1a39" (UID: "45d1f962-b71e-4473-b387-137a395e1a39"). InnerVolumeSpecName "kube-api-access-wwgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:36:04 crc kubenswrapper[4778]: I0312 14:36:04.571873 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwgwh\" (UniqueName: \"kubernetes.io/projected/45d1f962-b71e-4473-b387-137a395e1a39-kube-api-access-wwgwh\") on node \"crc\" DevicePath \"\"" Mar 12 14:36:04 crc kubenswrapper[4778]: I0312 14:36:04.939885 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555436-9bs4m" event={"ID":"45d1f962-b71e-4473-b387-137a395e1a39","Type":"ContainerDied","Data":"46c218281802b0c8f5d068642d635879a43981b837f5f8787e72274312e00084"} Mar 12 14:36:04 crc kubenswrapper[4778]: I0312 14:36:04.939936 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46c218281802b0c8f5d068642d635879a43981b837f5f8787e72274312e00084" Mar 12 14:36:04 crc kubenswrapper[4778]: I0312 14:36:04.940000 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555436-9bs4m" Mar 12 14:36:05 crc kubenswrapper[4778]: I0312 14:36:05.254262 4778 scope.go:117] "RemoveContainer" containerID="bbb7cd318ed9aaf8c81b44eaf9e441283227b353d96ce94d2989c6c892e1351c" Mar 12 14:36:05 crc kubenswrapper[4778]: E0312 14:36:05.254475 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:36:05 crc kubenswrapper[4778]: I0312 14:36:05.410834 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555430-hqmdc"] Mar 12 14:36:05 crc kubenswrapper[4778]: I0312 14:36:05.425369 4778 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555430-hqmdc"] Mar 12 14:36:06 crc kubenswrapper[4778]: I0312 14:36:06.267114 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e89b5d9b-fc4f-4dd6-aae9-9d1ee7b3edb0" path="/var/lib/kubelet/pods/e89b5d9b-fc4f-4dd6-aae9-9d1ee7b3edb0/volumes" Mar 12 14:36:16 crc kubenswrapper[4778]: I0312 14:36:16.254567 4778 scope.go:117] "RemoveContainer" containerID="bbb7cd318ed9aaf8c81b44eaf9e441283227b353d96ce94d2989c6c892e1351c" Mar 12 14:36:16 crc kubenswrapper[4778]: E0312 14:36:16.256922 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:36:22 crc kubenswrapper[4778]: I0312 14:36:22.936109 4778 scope.go:117] "RemoveContainer" containerID="202ec48bafe3f3236d534da77819d177d9a1fed914f316b780eda08b2d9dcd5e" Mar 12 14:36:29 crc kubenswrapper[4778]: I0312 14:36:29.870286 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bthl5"] Mar 12 14:36:29 crc kubenswrapper[4778]: E0312 14:36:29.871496 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45d1f962-b71e-4473-b387-137a395e1a39" containerName="oc" Mar 12 14:36:29 crc kubenswrapper[4778]: I0312 14:36:29.871512 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="45d1f962-b71e-4473-b387-137a395e1a39" containerName="oc" Mar 12 14:36:29 crc kubenswrapper[4778]: I0312 14:36:29.871713 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="45d1f962-b71e-4473-b387-137a395e1a39" containerName="oc" Mar 12 14:36:29 crc kubenswrapper[4778]: I0312 
14:36:29.873483 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bthl5" Mar 12 14:36:29 crc kubenswrapper[4778]: I0312 14:36:29.885773 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bthl5"] Mar 12 14:36:29 crc kubenswrapper[4778]: I0312 14:36:29.983984 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9098edbc-6c4b-444b-8214-5848756ec94b-utilities\") pod \"community-operators-bthl5\" (UID: \"9098edbc-6c4b-444b-8214-5848756ec94b\") " pod="openshift-marketplace/community-operators-bthl5" Mar 12 14:36:29 crc kubenswrapper[4778]: I0312 14:36:29.984294 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9098edbc-6c4b-444b-8214-5848756ec94b-catalog-content\") pod \"community-operators-bthl5\" (UID: \"9098edbc-6c4b-444b-8214-5848756ec94b\") " pod="openshift-marketplace/community-operators-bthl5" Mar 12 14:36:29 crc kubenswrapper[4778]: I0312 14:36:29.984391 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2cq2\" (UniqueName: \"kubernetes.io/projected/9098edbc-6c4b-444b-8214-5848756ec94b-kube-api-access-p2cq2\") pod \"community-operators-bthl5\" (UID: \"9098edbc-6c4b-444b-8214-5848756ec94b\") " pod="openshift-marketplace/community-operators-bthl5" Mar 12 14:36:30 crc kubenswrapper[4778]: I0312 14:36:30.087418 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9098edbc-6c4b-444b-8214-5848756ec94b-utilities\") pod \"community-operators-bthl5\" (UID: \"9098edbc-6c4b-444b-8214-5848756ec94b\") " pod="openshift-marketplace/community-operators-bthl5" Mar 12 14:36:30 crc kubenswrapper[4778]: I0312 
14:36:30.087892 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9098edbc-6c4b-444b-8214-5848756ec94b-catalog-content\") pod \"community-operators-bthl5\" (UID: \"9098edbc-6c4b-444b-8214-5848756ec94b\") " pod="openshift-marketplace/community-operators-bthl5" Mar 12 14:36:30 crc kubenswrapper[4778]: I0312 14:36:30.088045 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9098edbc-6c4b-444b-8214-5848756ec94b-utilities\") pod \"community-operators-bthl5\" (UID: \"9098edbc-6c4b-444b-8214-5848756ec94b\") " pod="openshift-marketplace/community-operators-bthl5" Mar 12 14:36:30 crc kubenswrapper[4778]: I0312 14:36:30.088144 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2cq2\" (UniqueName: \"kubernetes.io/projected/9098edbc-6c4b-444b-8214-5848756ec94b-kube-api-access-p2cq2\") pod \"community-operators-bthl5\" (UID: \"9098edbc-6c4b-444b-8214-5848756ec94b\") " pod="openshift-marketplace/community-operators-bthl5" Mar 12 14:36:30 crc kubenswrapper[4778]: I0312 14:36:30.088511 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9098edbc-6c4b-444b-8214-5848756ec94b-catalog-content\") pod \"community-operators-bthl5\" (UID: \"9098edbc-6c4b-444b-8214-5848756ec94b\") " pod="openshift-marketplace/community-operators-bthl5" Mar 12 14:36:30 crc kubenswrapper[4778]: I0312 14:36:30.110318 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2cq2\" (UniqueName: \"kubernetes.io/projected/9098edbc-6c4b-444b-8214-5848756ec94b-kube-api-access-p2cq2\") pod \"community-operators-bthl5\" (UID: \"9098edbc-6c4b-444b-8214-5848756ec94b\") " pod="openshift-marketplace/community-operators-bthl5" Mar 12 14:36:30 crc kubenswrapper[4778]: I0312 14:36:30.205925 4778 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bthl5" Mar 12 14:36:30 crc kubenswrapper[4778]: I0312 14:36:30.730052 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bthl5"] Mar 12 14:36:31 crc kubenswrapper[4778]: I0312 14:36:31.231770 4778 generic.go:334] "Generic (PLEG): container finished" podID="9098edbc-6c4b-444b-8214-5848756ec94b" containerID="f6d9f45cc4cf5a401a409f223998e4fac1829a853cedf559f6486e5de7a6a682" exitCode=0 Mar 12 14:36:31 crc kubenswrapper[4778]: I0312 14:36:31.231828 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bthl5" event={"ID":"9098edbc-6c4b-444b-8214-5848756ec94b","Type":"ContainerDied","Data":"f6d9f45cc4cf5a401a409f223998e4fac1829a853cedf559f6486e5de7a6a682"} Mar 12 14:36:31 crc kubenswrapper[4778]: I0312 14:36:31.232073 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bthl5" event={"ID":"9098edbc-6c4b-444b-8214-5848756ec94b","Type":"ContainerStarted","Data":"7e43af4c8ac9f109aea2498c7d43bec693ffd79761be06aa8860c32373c46a08"} Mar 12 14:36:31 crc kubenswrapper[4778]: I0312 14:36:31.253831 4778 scope.go:117] "RemoveContainer" containerID="bbb7cd318ed9aaf8c81b44eaf9e441283227b353d96ce94d2989c6c892e1351c" Mar 12 14:36:31 crc kubenswrapper[4778]: E0312 14:36:31.254080 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:36:36 crc kubenswrapper[4778]: I0312 14:36:36.304212 4778 generic.go:334] "Generic (PLEG): container finished" 
podID="9098edbc-6c4b-444b-8214-5848756ec94b" containerID="f5209881605c74797474a49d590f6fd719f3b29aca37efbdd12b057d5f338a88" exitCode=0 Mar 12 14:36:36 crc kubenswrapper[4778]: I0312 14:36:36.304669 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bthl5" event={"ID":"9098edbc-6c4b-444b-8214-5848756ec94b","Type":"ContainerDied","Data":"f5209881605c74797474a49d590f6fd719f3b29aca37efbdd12b057d5f338a88"} Mar 12 14:36:36 crc kubenswrapper[4778]: I0312 14:36:36.306525 4778 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 14:36:37 crc kubenswrapper[4778]: I0312 14:36:37.316882 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bthl5" event={"ID":"9098edbc-6c4b-444b-8214-5848756ec94b","Type":"ContainerStarted","Data":"db9178efd3232af4d713b97808176864833cbcacd596ac79e639c4e1dcb27c64"} Mar 12 14:36:37 crc kubenswrapper[4778]: I0312 14:36:37.343855 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bthl5" podStartSLOduration=2.786556618 podStartE2EDuration="8.343835425s" podCreationTimestamp="2026-03-12 14:36:29 +0000 UTC" firstStartedPulling="2026-03-12 14:36:31.233724718 +0000 UTC m=+5209.682420114" lastFinishedPulling="2026-03-12 14:36:36.791003515 +0000 UTC m=+5215.239698921" observedRunningTime="2026-03-12 14:36:37.336555938 +0000 UTC m=+5215.785251354" watchObservedRunningTime="2026-03-12 14:36:37.343835425 +0000 UTC m=+5215.792530821" Mar 12 14:36:40 crc kubenswrapper[4778]: I0312 14:36:40.206071 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bthl5" Mar 12 14:36:40 crc kubenswrapper[4778]: I0312 14:36:40.206416 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bthl5" Mar 12 14:36:40 crc kubenswrapper[4778]: 
I0312 14:36:40.263659 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bthl5" Mar 12 14:36:44 crc kubenswrapper[4778]: I0312 14:36:44.254286 4778 scope.go:117] "RemoveContainer" containerID="bbb7cd318ed9aaf8c81b44eaf9e441283227b353d96ce94d2989c6c892e1351c" Mar 12 14:36:44 crc kubenswrapper[4778]: E0312 14:36:44.255062 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:36:50 crc kubenswrapper[4778]: I0312 14:36:50.266697 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bthl5" Mar 12 14:36:50 crc kubenswrapper[4778]: I0312 14:36:50.352931 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bthl5"] Mar 12 14:36:50 crc kubenswrapper[4778]: I0312 14:36:50.400511 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zbbfg"] Mar 12 14:36:50 crc kubenswrapper[4778]: I0312 14:36:50.400991 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zbbfg" podUID="c8cc55b1-e6ed-4790-886c-fabe5917bf27" containerName="registry-server" containerID="cri-o://ed078967e33cb44c74a365a9804f9a8509ee01d3f7a8039f9f7b8f3366ab7aae" gracePeriod=2 Mar 12 14:36:50 crc kubenswrapper[4778]: I0312 14:36:50.544454 4778 generic.go:334] "Generic (PLEG): container finished" podID="c8cc55b1-e6ed-4790-886c-fabe5917bf27" containerID="ed078967e33cb44c74a365a9804f9a8509ee01d3f7a8039f9f7b8f3366ab7aae" exitCode=0 Mar 12 
14:36:50 crc kubenswrapper[4778]: I0312 14:36:50.545177 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zbbfg" event={"ID":"c8cc55b1-e6ed-4790-886c-fabe5917bf27","Type":"ContainerDied","Data":"ed078967e33cb44c74a365a9804f9a8509ee01d3f7a8039f9f7b8f3366ab7aae"} Mar 12 14:36:51 crc kubenswrapper[4778]: I0312 14:36:51.272086 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zbbfg" Mar 12 14:36:51 crc kubenswrapper[4778]: I0312 14:36:51.440510 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8cc55b1-e6ed-4790-886c-fabe5917bf27-catalog-content\") pod \"c8cc55b1-e6ed-4790-886c-fabe5917bf27\" (UID: \"c8cc55b1-e6ed-4790-886c-fabe5917bf27\") " Mar 12 14:36:51 crc kubenswrapper[4778]: I0312 14:36:51.440601 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8d6fn\" (UniqueName: \"kubernetes.io/projected/c8cc55b1-e6ed-4790-886c-fabe5917bf27-kube-api-access-8d6fn\") pod \"c8cc55b1-e6ed-4790-886c-fabe5917bf27\" (UID: \"c8cc55b1-e6ed-4790-886c-fabe5917bf27\") " Mar 12 14:36:51 crc kubenswrapper[4778]: I0312 14:36:51.440647 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8cc55b1-e6ed-4790-886c-fabe5917bf27-utilities\") pod \"c8cc55b1-e6ed-4790-886c-fabe5917bf27\" (UID: \"c8cc55b1-e6ed-4790-886c-fabe5917bf27\") " Mar 12 14:36:51 crc kubenswrapper[4778]: I0312 14:36:51.445421 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8cc55b1-e6ed-4790-886c-fabe5917bf27-utilities" (OuterVolumeSpecName: "utilities") pod "c8cc55b1-e6ed-4790-886c-fabe5917bf27" (UID: "c8cc55b1-e6ed-4790-886c-fabe5917bf27"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:36:51 crc kubenswrapper[4778]: I0312 14:36:51.448485 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8cc55b1-e6ed-4790-886c-fabe5917bf27-kube-api-access-8d6fn" (OuterVolumeSpecName: "kube-api-access-8d6fn") pod "c8cc55b1-e6ed-4790-886c-fabe5917bf27" (UID: "c8cc55b1-e6ed-4790-886c-fabe5917bf27"). InnerVolumeSpecName "kube-api-access-8d6fn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:36:51 crc kubenswrapper[4778]: I0312 14:36:51.525237 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8cc55b1-e6ed-4790-886c-fabe5917bf27-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c8cc55b1-e6ed-4790-886c-fabe5917bf27" (UID: "c8cc55b1-e6ed-4790-886c-fabe5917bf27"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:36:51 crc kubenswrapper[4778]: I0312 14:36:51.543547 4778 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8cc55b1-e6ed-4790-886c-fabe5917bf27-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 14:36:51 crc kubenswrapper[4778]: I0312 14:36:51.543577 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8d6fn\" (UniqueName: \"kubernetes.io/projected/c8cc55b1-e6ed-4790-886c-fabe5917bf27-kube-api-access-8d6fn\") on node \"crc\" DevicePath \"\"" Mar 12 14:36:51 crc kubenswrapper[4778]: I0312 14:36:51.543589 4778 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8cc55b1-e6ed-4790-886c-fabe5917bf27-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 14:36:51 crc kubenswrapper[4778]: I0312 14:36:51.559442 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zbbfg" 
event={"ID":"c8cc55b1-e6ed-4790-886c-fabe5917bf27","Type":"ContainerDied","Data":"7cfbf75bc1bea8190b4fd8a7b4f36c4f8056d3512bf0a0494d17fb32c82abce1"} Mar 12 14:36:51 crc kubenswrapper[4778]: I0312 14:36:51.559493 4778 scope.go:117] "RemoveContainer" containerID="ed078967e33cb44c74a365a9804f9a8509ee01d3f7a8039f9f7b8f3366ab7aae" Mar 12 14:36:51 crc kubenswrapper[4778]: I0312 14:36:51.559657 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zbbfg" Mar 12 14:36:51 crc kubenswrapper[4778]: I0312 14:36:51.594274 4778 scope.go:117] "RemoveContainer" containerID="bdf54c6d37ca16db7981b38aa8bdf481e8ce434ef1861261a6875f0a169c6607" Mar 12 14:36:51 crc kubenswrapper[4778]: I0312 14:36:51.618268 4778 scope.go:117] "RemoveContainer" containerID="a44a31875240c27026c8d5b3562efaf0a4ac960ee6a568ff9dac9567138bfecd" Mar 12 14:36:51 crc kubenswrapper[4778]: I0312 14:36:51.642244 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zbbfg"] Mar 12 14:36:51 crc kubenswrapper[4778]: I0312 14:36:51.653952 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zbbfg"] Mar 12 14:36:52 crc kubenswrapper[4778]: I0312 14:36:52.268637 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8cc55b1-e6ed-4790-886c-fabe5917bf27" path="/var/lib/kubelet/pods/c8cc55b1-e6ed-4790-886c-fabe5917bf27/volumes" Mar 12 14:36:55 crc kubenswrapper[4778]: I0312 14:36:55.254088 4778 scope.go:117] "RemoveContainer" containerID="bbb7cd318ed9aaf8c81b44eaf9e441283227b353d96ce94d2989c6c892e1351c" Mar 12 14:36:55 crc kubenswrapper[4778]: E0312 14:36:55.255062 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:36:57 crc kubenswrapper[4778]: I0312 14:36:57.539428 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6xr4w"] Mar 12 14:36:57 crc kubenswrapper[4778]: E0312 14:36:57.540256 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8cc55b1-e6ed-4790-886c-fabe5917bf27" containerName="extract-content" Mar 12 14:36:57 crc kubenswrapper[4778]: I0312 14:36:57.540273 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8cc55b1-e6ed-4790-886c-fabe5917bf27" containerName="extract-content" Mar 12 14:36:57 crc kubenswrapper[4778]: E0312 14:36:57.540291 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8cc55b1-e6ed-4790-886c-fabe5917bf27" containerName="extract-utilities" Mar 12 14:36:57 crc kubenswrapper[4778]: I0312 14:36:57.540300 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8cc55b1-e6ed-4790-886c-fabe5917bf27" containerName="extract-utilities" Mar 12 14:36:57 crc kubenswrapper[4778]: E0312 14:36:57.540340 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8cc55b1-e6ed-4790-886c-fabe5917bf27" containerName="registry-server" Mar 12 14:36:57 crc kubenswrapper[4778]: I0312 14:36:57.540349 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8cc55b1-e6ed-4790-886c-fabe5917bf27" containerName="registry-server" Mar 12 14:36:57 crc kubenswrapper[4778]: I0312 14:36:57.540568 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8cc55b1-e6ed-4790-886c-fabe5917bf27" containerName="registry-server" Mar 12 14:36:57 crc kubenswrapper[4778]: I0312 14:36:57.541930 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6xr4w" Mar 12 14:36:57 crc kubenswrapper[4778]: I0312 14:36:57.567824 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6xr4w"] Mar 12 14:36:57 crc kubenswrapper[4778]: I0312 14:36:57.718135 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cc6e361-05ad-401e-b5ab-2070ca8ec46c-catalog-content\") pod \"certified-operators-6xr4w\" (UID: \"8cc6e361-05ad-401e-b5ab-2070ca8ec46c\") " pod="openshift-marketplace/certified-operators-6xr4w" Mar 12 14:36:57 crc kubenswrapper[4778]: I0312 14:36:57.718223 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cc6e361-05ad-401e-b5ab-2070ca8ec46c-utilities\") pod \"certified-operators-6xr4w\" (UID: \"8cc6e361-05ad-401e-b5ab-2070ca8ec46c\") " pod="openshift-marketplace/certified-operators-6xr4w" Mar 12 14:36:57 crc kubenswrapper[4778]: I0312 14:36:57.718269 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf5jk\" (UniqueName: \"kubernetes.io/projected/8cc6e361-05ad-401e-b5ab-2070ca8ec46c-kube-api-access-sf5jk\") pod \"certified-operators-6xr4w\" (UID: \"8cc6e361-05ad-401e-b5ab-2070ca8ec46c\") " pod="openshift-marketplace/certified-operators-6xr4w" Mar 12 14:36:57 crc kubenswrapper[4778]: I0312 14:36:57.819821 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cc6e361-05ad-401e-b5ab-2070ca8ec46c-catalog-content\") pod \"certified-operators-6xr4w\" (UID: \"8cc6e361-05ad-401e-b5ab-2070ca8ec46c\") " pod="openshift-marketplace/certified-operators-6xr4w" Mar 12 14:36:57 crc kubenswrapper[4778]: I0312 14:36:57.819894 4778 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cc6e361-05ad-401e-b5ab-2070ca8ec46c-utilities\") pod \"certified-operators-6xr4w\" (UID: \"8cc6e361-05ad-401e-b5ab-2070ca8ec46c\") " pod="openshift-marketplace/certified-operators-6xr4w" Mar 12 14:36:57 crc kubenswrapper[4778]: I0312 14:36:57.819929 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sf5jk\" (UniqueName: \"kubernetes.io/projected/8cc6e361-05ad-401e-b5ab-2070ca8ec46c-kube-api-access-sf5jk\") pod \"certified-operators-6xr4w\" (UID: \"8cc6e361-05ad-401e-b5ab-2070ca8ec46c\") " pod="openshift-marketplace/certified-operators-6xr4w" Mar 12 14:36:57 crc kubenswrapper[4778]: I0312 14:36:57.820519 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cc6e361-05ad-401e-b5ab-2070ca8ec46c-catalog-content\") pod \"certified-operators-6xr4w\" (UID: \"8cc6e361-05ad-401e-b5ab-2070ca8ec46c\") " pod="openshift-marketplace/certified-operators-6xr4w" Mar 12 14:36:57 crc kubenswrapper[4778]: I0312 14:36:57.820531 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cc6e361-05ad-401e-b5ab-2070ca8ec46c-utilities\") pod \"certified-operators-6xr4w\" (UID: \"8cc6e361-05ad-401e-b5ab-2070ca8ec46c\") " pod="openshift-marketplace/certified-operators-6xr4w" Mar 12 14:36:57 crc kubenswrapper[4778]: I0312 14:36:57.838778 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sf5jk\" (UniqueName: \"kubernetes.io/projected/8cc6e361-05ad-401e-b5ab-2070ca8ec46c-kube-api-access-sf5jk\") pod \"certified-operators-6xr4w\" (UID: \"8cc6e361-05ad-401e-b5ab-2070ca8ec46c\") " pod="openshift-marketplace/certified-operators-6xr4w" Mar 12 14:36:57 crc kubenswrapper[4778]: I0312 14:36:57.859904 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6xr4w" Mar 12 14:36:58 crc kubenswrapper[4778]: I0312 14:36:58.452386 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6xr4w"] Mar 12 14:36:58 crc kubenswrapper[4778]: I0312 14:36:58.622543 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xr4w" event={"ID":"8cc6e361-05ad-401e-b5ab-2070ca8ec46c","Type":"ContainerStarted","Data":"64e2681b11aea9607bf55a800b77d410d04225bde783d6a66b217bb9ae3cf27c"} Mar 12 14:36:59 crc kubenswrapper[4778]: I0312 14:36:59.641719 4778 generic.go:334] "Generic (PLEG): container finished" podID="8cc6e361-05ad-401e-b5ab-2070ca8ec46c" containerID="71033b90ec1c23b5bc0edc20bbd939347abc398d65871787dd9105b4cea19144" exitCode=0 Mar 12 14:36:59 crc kubenswrapper[4778]: I0312 14:36:59.641956 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xr4w" event={"ID":"8cc6e361-05ad-401e-b5ab-2070ca8ec46c","Type":"ContainerDied","Data":"71033b90ec1c23b5bc0edc20bbd939347abc398d65871787dd9105b4cea19144"} Mar 12 14:37:00 crc kubenswrapper[4778]: I0312 14:37:00.652647 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xr4w" event={"ID":"8cc6e361-05ad-401e-b5ab-2070ca8ec46c","Type":"ContainerStarted","Data":"3f6e8c4ba226104d47c3abd66b373a0394b62e7f03fdaa0463457d44bb5412ea"} Mar 12 14:37:02 crc kubenswrapper[4778]: I0312 14:37:02.671278 4778 generic.go:334] "Generic (PLEG): container finished" podID="8cc6e361-05ad-401e-b5ab-2070ca8ec46c" containerID="3f6e8c4ba226104d47c3abd66b373a0394b62e7f03fdaa0463457d44bb5412ea" exitCode=0 Mar 12 14:37:02 crc kubenswrapper[4778]: I0312 14:37:02.671327 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xr4w" 
event={"ID":"8cc6e361-05ad-401e-b5ab-2070ca8ec46c","Type":"ContainerDied","Data":"3f6e8c4ba226104d47c3abd66b373a0394b62e7f03fdaa0463457d44bb5412ea"} Mar 12 14:37:04 crc kubenswrapper[4778]: I0312 14:37:04.693276 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xr4w" event={"ID":"8cc6e361-05ad-401e-b5ab-2070ca8ec46c","Type":"ContainerStarted","Data":"0291482883d3905bcf0f1677176123126d7b804149fde1e81c7ae29d92b03d53"} Mar 12 14:37:04 crc kubenswrapper[4778]: I0312 14:37:04.757445 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6xr4w" podStartSLOduration=4.242362802 podStartE2EDuration="7.757414081s" podCreationTimestamp="2026-03-12 14:36:57 +0000 UTC" firstStartedPulling="2026-03-12 14:36:59.644024474 +0000 UTC m=+5238.092719880" lastFinishedPulling="2026-03-12 14:37:03.159075763 +0000 UTC m=+5241.607771159" observedRunningTime="2026-03-12 14:37:04.708922392 +0000 UTC m=+5243.157617808" watchObservedRunningTime="2026-03-12 14:37:04.757414081 +0000 UTC m=+5243.206109507" Mar 12 14:37:07 crc kubenswrapper[4778]: I0312 14:37:07.860368 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6xr4w" Mar 12 14:37:07 crc kubenswrapper[4778]: I0312 14:37:07.860728 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6xr4w" Mar 12 14:37:07 crc kubenswrapper[4778]: I0312 14:37:07.912958 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6xr4w" Mar 12 14:37:08 crc kubenswrapper[4778]: I0312 14:37:08.254076 4778 scope.go:117] "RemoveContainer" containerID="bbb7cd318ed9aaf8c81b44eaf9e441283227b353d96ce94d2989c6c892e1351c" Mar 12 14:37:08 crc kubenswrapper[4778]: E0312 14:37:08.254496 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:37:08 crc kubenswrapper[4778]: I0312 14:37:08.807876 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6xr4w" Mar 12 14:37:08 crc kubenswrapper[4778]: I0312 14:37:08.866544 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6xr4w"] Mar 12 14:37:10 crc kubenswrapper[4778]: I0312 14:37:10.749243 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6xr4w" podUID="8cc6e361-05ad-401e-b5ab-2070ca8ec46c" containerName="registry-server" containerID="cri-o://0291482883d3905bcf0f1677176123126d7b804149fde1e81c7ae29d92b03d53" gracePeriod=2 Mar 12 14:37:11 crc kubenswrapper[4778]: I0312 14:37:11.362015 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6xr4w"
Mar 12 14:37:11 crc kubenswrapper[4778]: I0312 14:37:11.418743 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cc6e361-05ad-401e-b5ab-2070ca8ec46c-catalog-content\") pod \"8cc6e361-05ad-401e-b5ab-2070ca8ec46c\" (UID: \"8cc6e361-05ad-401e-b5ab-2070ca8ec46c\") "
Mar 12 14:37:11 crc kubenswrapper[4778]: I0312 14:37:11.419072 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cc6e361-05ad-401e-b5ab-2070ca8ec46c-utilities\") pod \"8cc6e361-05ad-401e-b5ab-2070ca8ec46c\" (UID: \"8cc6e361-05ad-401e-b5ab-2070ca8ec46c\") "
Mar 12 14:37:11 crc kubenswrapper[4778]: I0312 14:37:11.419305 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sf5jk\" (UniqueName: \"kubernetes.io/projected/8cc6e361-05ad-401e-b5ab-2070ca8ec46c-kube-api-access-sf5jk\") pod \"8cc6e361-05ad-401e-b5ab-2070ca8ec46c\" (UID: \"8cc6e361-05ad-401e-b5ab-2070ca8ec46c\") "
Mar 12 14:37:11 crc kubenswrapper[4778]: I0312 14:37:11.421599 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cc6e361-05ad-401e-b5ab-2070ca8ec46c-utilities" (OuterVolumeSpecName: "utilities") pod "8cc6e361-05ad-401e-b5ab-2070ca8ec46c" (UID: "8cc6e361-05ad-401e-b5ab-2070ca8ec46c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 14:37:11 crc kubenswrapper[4778]: I0312 14:37:11.440495 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cc6e361-05ad-401e-b5ab-2070ca8ec46c-kube-api-access-sf5jk" (OuterVolumeSpecName: "kube-api-access-sf5jk") pod "8cc6e361-05ad-401e-b5ab-2070ca8ec46c" (UID: "8cc6e361-05ad-401e-b5ab-2070ca8ec46c"). InnerVolumeSpecName "kube-api-access-sf5jk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:37:11 crc kubenswrapper[4778]: I0312 14:37:11.522378 4778 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cc6e361-05ad-401e-b5ab-2070ca8ec46c-utilities\") on node \"crc\" DevicePath \"\""
Mar 12 14:37:11 crc kubenswrapper[4778]: I0312 14:37:11.522425 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sf5jk\" (UniqueName: \"kubernetes.io/projected/8cc6e361-05ad-401e-b5ab-2070ca8ec46c-kube-api-access-sf5jk\") on node \"crc\" DevicePath \"\""
Mar 12 14:37:11 crc kubenswrapper[4778]: I0312 14:37:11.586955 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cc6e361-05ad-401e-b5ab-2070ca8ec46c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8cc6e361-05ad-401e-b5ab-2070ca8ec46c" (UID: "8cc6e361-05ad-401e-b5ab-2070ca8ec46c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 14:37:11 crc kubenswrapper[4778]: I0312 14:37:11.625172 4778 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cc6e361-05ad-401e-b5ab-2070ca8ec46c-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 12 14:37:11 crc kubenswrapper[4778]: I0312 14:37:11.762435 4778 generic.go:334] "Generic (PLEG): container finished" podID="8cc6e361-05ad-401e-b5ab-2070ca8ec46c" containerID="0291482883d3905bcf0f1677176123126d7b804149fde1e81c7ae29d92b03d53" exitCode=0
Mar 12 14:37:11 crc kubenswrapper[4778]: I0312 14:37:11.762478 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xr4w" event={"ID":"8cc6e361-05ad-401e-b5ab-2070ca8ec46c","Type":"ContainerDied","Data":"0291482883d3905bcf0f1677176123126d7b804149fde1e81c7ae29d92b03d53"}
Mar 12 14:37:11 crc kubenswrapper[4778]: I0312 14:37:11.762505 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6xr4w" event={"ID":"8cc6e361-05ad-401e-b5ab-2070ca8ec46c","Type":"ContainerDied","Data":"64e2681b11aea9607bf55a800b77d410d04225bde783d6a66b217bb9ae3cf27c"}
Mar 12 14:37:11 crc kubenswrapper[4778]: I0312 14:37:11.762530 4778 scope.go:117] "RemoveContainer" containerID="0291482883d3905bcf0f1677176123126d7b804149fde1e81c7ae29d92b03d53"
Mar 12 14:37:11 crc kubenswrapper[4778]: I0312 14:37:11.762565 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6xr4w"
Mar 12 14:37:11 crc kubenswrapper[4778]: I0312 14:37:11.801918 4778 scope.go:117] "RemoveContainer" containerID="3f6e8c4ba226104d47c3abd66b373a0394b62e7f03fdaa0463457d44bb5412ea"
Mar 12 14:37:11 crc kubenswrapper[4778]: I0312 14:37:11.812599 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6xr4w"]
Mar 12 14:37:11 crc kubenswrapper[4778]: I0312 14:37:11.820490 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6xr4w"]
Mar 12 14:37:12 crc kubenswrapper[4778]: I0312 14:37:12.264632 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cc6e361-05ad-401e-b5ab-2070ca8ec46c" path="/var/lib/kubelet/pods/8cc6e361-05ad-401e-b5ab-2070ca8ec46c/volumes"
Mar 12 14:37:12 crc kubenswrapper[4778]: I0312 14:37:12.437767 4778 scope.go:117] "RemoveContainer" containerID="71033b90ec1c23b5bc0edc20bbd939347abc398d65871787dd9105b4cea19144"
Mar 12 14:37:12 crc kubenswrapper[4778]: I0312 14:37:12.503041 4778 scope.go:117] "RemoveContainer" containerID="0291482883d3905bcf0f1677176123126d7b804149fde1e81c7ae29d92b03d53"
Mar 12 14:37:12 crc kubenswrapper[4778]: E0312 14:37:12.503495 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0291482883d3905bcf0f1677176123126d7b804149fde1e81c7ae29d92b03d53\": container with ID starting with 0291482883d3905bcf0f1677176123126d7b804149fde1e81c7ae29d92b03d53 not found: ID does not exist" containerID="0291482883d3905bcf0f1677176123126d7b804149fde1e81c7ae29d92b03d53"
Mar 12 14:37:12 crc kubenswrapper[4778]: I0312 14:37:12.503548 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0291482883d3905bcf0f1677176123126d7b804149fde1e81c7ae29d92b03d53"} err="failed to get container status \"0291482883d3905bcf0f1677176123126d7b804149fde1e81c7ae29d92b03d53\": rpc error: code = NotFound desc = could not find container \"0291482883d3905bcf0f1677176123126d7b804149fde1e81c7ae29d92b03d53\": container with ID starting with 0291482883d3905bcf0f1677176123126d7b804149fde1e81c7ae29d92b03d53 not found: ID does not exist"
Mar 12 14:37:12 crc kubenswrapper[4778]: I0312 14:37:12.503580 4778 scope.go:117] "RemoveContainer" containerID="3f6e8c4ba226104d47c3abd66b373a0394b62e7f03fdaa0463457d44bb5412ea"
Mar 12 14:37:12 crc kubenswrapper[4778]: E0312 14:37:12.503879 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f6e8c4ba226104d47c3abd66b373a0394b62e7f03fdaa0463457d44bb5412ea\": container with ID starting with 3f6e8c4ba226104d47c3abd66b373a0394b62e7f03fdaa0463457d44bb5412ea not found: ID does not exist" containerID="3f6e8c4ba226104d47c3abd66b373a0394b62e7f03fdaa0463457d44bb5412ea"
Mar 12 14:37:12 crc kubenswrapper[4778]: I0312 14:37:12.503910 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f6e8c4ba226104d47c3abd66b373a0394b62e7f03fdaa0463457d44bb5412ea"} err="failed to get container status \"3f6e8c4ba226104d47c3abd66b373a0394b62e7f03fdaa0463457d44bb5412ea\": rpc error: code = NotFound desc = could not find container \"3f6e8c4ba226104d47c3abd66b373a0394b62e7f03fdaa0463457d44bb5412ea\": container with ID starting with 3f6e8c4ba226104d47c3abd66b373a0394b62e7f03fdaa0463457d44bb5412ea not found: ID does not exist"
Mar 12 14:37:12 crc kubenswrapper[4778]: I0312 14:37:12.503929 4778 scope.go:117] "RemoveContainer" containerID="71033b90ec1c23b5bc0edc20bbd939347abc398d65871787dd9105b4cea19144"
Mar 12 14:37:12 crc kubenswrapper[4778]: E0312 14:37:12.504150 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71033b90ec1c23b5bc0edc20bbd939347abc398d65871787dd9105b4cea19144\": container with ID starting with 71033b90ec1c23b5bc0edc20bbd939347abc398d65871787dd9105b4cea19144 not found: ID does not exist" containerID="71033b90ec1c23b5bc0edc20bbd939347abc398d65871787dd9105b4cea19144"
Mar 12 14:37:12 crc kubenswrapper[4778]: I0312 14:37:12.504193 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71033b90ec1c23b5bc0edc20bbd939347abc398d65871787dd9105b4cea19144"} err="failed to get container status \"71033b90ec1c23b5bc0edc20bbd939347abc398d65871787dd9105b4cea19144\": rpc error: code = NotFound desc = could not find container \"71033b90ec1c23b5bc0edc20bbd939347abc398d65871787dd9105b4cea19144\": container with ID starting with 71033b90ec1c23b5bc0edc20bbd939347abc398d65871787dd9105b4cea19144 not found: ID does not exist"
Mar 12 14:37:19 crc kubenswrapper[4778]: I0312 14:37:19.254766 4778 scope.go:117] "RemoveContainer" containerID="bbb7cd318ed9aaf8c81b44eaf9e441283227b353d96ce94d2989c6c892e1351c"
Mar 12 14:37:19 crc kubenswrapper[4778]: E0312 14:37:19.256902 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd"
Mar 12 14:37:32 crc kubenswrapper[4778]: I0312 14:37:32.260319 4778 scope.go:117] "RemoveContainer" containerID="bbb7cd318ed9aaf8c81b44eaf9e441283227b353d96ce94d2989c6c892e1351c"
Mar 12 14:37:32 crc kubenswrapper[4778]: E0312 14:37:32.261232 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd"
Mar 12 14:37:45 crc kubenswrapper[4778]: I0312 14:37:45.253756 4778 scope.go:117] "RemoveContainer" containerID="bbb7cd318ed9aaf8c81b44eaf9e441283227b353d96ce94d2989c6c892e1351c"
Mar 12 14:37:45 crc kubenswrapper[4778]: E0312 14:37:45.254559 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd"
Mar 12 14:38:00 crc kubenswrapper[4778]: I0312 14:38:00.145304 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555438-86rh2"]
Mar 12 14:38:00 crc kubenswrapper[4778]: E0312 14:38:00.146240 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cc6e361-05ad-401e-b5ab-2070ca8ec46c" containerName="extract-utilities"
Mar 12 14:38:00 crc kubenswrapper[4778]: I0312 14:38:00.146253 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cc6e361-05ad-401e-b5ab-2070ca8ec46c" containerName="extract-utilities"
Mar 12 14:38:00 crc kubenswrapper[4778]: E0312 14:38:00.146274 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cc6e361-05ad-401e-b5ab-2070ca8ec46c" containerName="extract-content"
Mar 12 14:38:00 crc kubenswrapper[4778]: I0312 14:38:00.146281 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cc6e361-05ad-401e-b5ab-2070ca8ec46c" containerName="extract-content"
Mar 12 14:38:00 crc kubenswrapper[4778]: E0312 14:38:00.146292 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cc6e361-05ad-401e-b5ab-2070ca8ec46c" containerName="registry-server"
Mar 12 14:38:00 crc kubenswrapper[4778]: I0312 14:38:00.146299 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cc6e361-05ad-401e-b5ab-2070ca8ec46c" containerName="registry-server"
Mar 12 14:38:00 crc kubenswrapper[4778]: I0312 14:38:00.146493 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cc6e361-05ad-401e-b5ab-2070ca8ec46c" containerName="registry-server"
Mar 12 14:38:00 crc kubenswrapper[4778]: I0312 14:38:00.147192 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555438-86rh2"
Mar 12 14:38:00 crc kubenswrapper[4778]: I0312 14:38:00.149861 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 12 14:38:00 crc kubenswrapper[4778]: I0312 14:38:00.149971 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl"
Mar 12 14:38:00 crc kubenswrapper[4778]: I0312 14:38:00.156903 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 12 14:38:00 crc kubenswrapper[4778]: I0312 14:38:00.158421 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555438-86rh2"]
Mar 12 14:38:00 crc kubenswrapper[4778]: I0312 14:38:00.254601 4778 scope.go:117] "RemoveContainer" containerID="bbb7cd318ed9aaf8c81b44eaf9e441283227b353d96ce94d2989c6c892e1351c"
Mar 12 14:38:00 crc kubenswrapper[4778]: E0312 14:38:00.254854 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd"
Mar 12 14:38:00 crc kubenswrapper[4778]: I0312 14:38:00.281531 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95rwl\" (UniqueName: \"kubernetes.io/projected/bfdb06b1-1cad-4645-bf9c-1859648637ea-kube-api-access-95rwl\") pod \"auto-csr-approver-29555438-86rh2\" (UID: \"bfdb06b1-1cad-4645-bf9c-1859648637ea\") " pod="openshift-infra/auto-csr-approver-29555438-86rh2"
Mar 12 14:38:00 crc kubenswrapper[4778]: I0312 14:38:00.383976 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95rwl\" (UniqueName: \"kubernetes.io/projected/bfdb06b1-1cad-4645-bf9c-1859648637ea-kube-api-access-95rwl\") pod \"auto-csr-approver-29555438-86rh2\" (UID: \"bfdb06b1-1cad-4645-bf9c-1859648637ea\") " pod="openshift-infra/auto-csr-approver-29555438-86rh2"
Mar 12 14:38:00 crc kubenswrapper[4778]: I0312 14:38:00.405238 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95rwl\" (UniqueName: \"kubernetes.io/projected/bfdb06b1-1cad-4645-bf9c-1859648637ea-kube-api-access-95rwl\") pod \"auto-csr-approver-29555438-86rh2\" (UID: \"bfdb06b1-1cad-4645-bf9c-1859648637ea\") " pod="openshift-infra/auto-csr-approver-29555438-86rh2"
Mar 12 14:38:00 crc kubenswrapper[4778]: I0312 14:38:00.474919 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555438-86rh2"
Mar 12 14:38:00 crc kubenswrapper[4778]: I0312 14:38:00.912042 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555438-86rh2"]
Mar 12 14:38:01 crc kubenswrapper[4778]: I0312 14:38:01.229196 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555438-86rh2" event={"ID":"bfdb06b1-1cad-4645-bf9c-1859648637ea","Type":"ContainerStarted","Data":"10c95db7de2a932f9562c52101bf7b1031b2e444269669dbb369190eab1fb105"}
Mar 12 14:38:03 crc kubenswrapper[4778]: I0312 14:38:03.248559 4778 generic.go:334] "Generic (PLEG): container finished" podID="bfdb06b1-1cad-4645-bf9c-1859648637ea" containerID="a83e8d7acc08020b09631971d619c027f357278529f43520703cd7515d291f17" exitCode=0
Mar 12 14:38:03 crc kubenswrapper[4778]: I0312 14:38:03.248672 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555438-86rh2" event={"ID":"bfdb06b1-1cad-4645-bf9c-1859648637ea","Type":"ContainerDied","Data":"a83e8d7acc08020b09631971d619c027f357278529f43520703cd7515d291f17"}
Mar 12 14:38:04 crc kubenswrapper[4778]: I0312 14:38:04.705687 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555438-86rh2"
Mar 12 14:38:04 crc kubenswrapper[4778]: I0312 14:38:04.867290 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95rwl\" (UniqueName: \"kubernetes.io/projected/bfdb06b1-1cad-4645-bf9c-1859648637ea-kube-api-access-95rwl\") pod \"bfdb06b1-1cad-4645-bf9c-1859648637ea\" (UID: \"bfdb06b1-1cad-4645-bf9c-1859648637ea\") "
Mar 12 14:38:04 crc kubenswrapper[4778]: I0312 14:38:04.874053 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfdb06b1-1cad-4645-bf9c-1859648637ea-kube-api-access-95rwl" (OuterVolumeSpecName: "kube-api-access-95rwl") pod "bfdb06b1-1cad-4645-bf9c-1859648637ea" (UID: "bfdb06b1-1cad-4645-bf9c-1859648637ea"). InnerVolumeSpecName "kube-api-access-95rwl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:38:04 crc kubenswrapper[4778]: I0312 14:38:04.970541 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95rwl\" (UniqueName: \"kubernetes.io/projected/bfdb06b1-1cad-4645-bf9c-1859648637ea-kube-api-access-95rwl\") on node \"crc\" DevicePath \"\""
Mar 12 14:38:05 crc kubenswrapper[4778]: I0312 14:38:05.267092 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555438-86rh2" event={"ID":"bfdb06b1-1cad-4645-bf9c-1859648637ea","Type":"ContainerDied","Data":"10c95db7de2a932f9562c52101bf7b1031b2e444269669dbb369190eab1fb105"}
Mar 12 14:38:05 crc kubenswrapper[4778]: I0312 14:38:05.267146 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10c95db7de2a932f9562c52101bf7b1031b2e444269669dbb369190eab1fb105"
Mar 12 14:38:05 crc kubenswrapper[4778]: I0312 14:38:05.267169 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555438-86rh2"
Mar 12 14:38:05 crc kubenswrapper[4778]: I0312 14:38:05.787761 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555432-94dlz"]
Mar 12 14:38:05 crc kubenswrapper[4778]: I0312 14:38:05.795770 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555432-94dlz"]
Mar 12 14:38:06 crc kubenswrapper[4778]: I0312 14:38:06.268283 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b6391c3-533c-4b44-b1be-2a5c9752ba4b" path="/var/lib/kubelet/pods/4b6391c3-533c-4b44-b1be-2a5c9752ba4b/volumes"
Mar 12 14:38:15 crc kubenswrapper[4778]: I0312 14:38:15.255212 4778 scope.go:117] "RemoveContainer" containerID="bbb7cd318ed9aaf8c81b44eaf9e441283227b353d96ce94d2989c6c892e1351c"
Mar 12 14:38:15 crc kubenswrapper[4778]: E0312 14:38:15.256259 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd"
Mar 12 14:38:23 crc kubenswrapper[4778]: I0312 14:38:23.061501 4778 scope.go:117] "RemoveContainer" containerID="f68c8ed6b7c2e6259023580b179d97b5ef4d89ae76842473f005cc28f0933cea"
Mar 12 14:38:30 crc kubenswrapper[4778]: I0312 14:38:30.254815 4778 scope.go:117] "RemoveContainer" containerID="bbb7cd318ed9aaf8c81b44eaf9e441283227b353d96ce94d2989c6c892e1351c"
Mar 12 14:38:30 crc kubenswrapper[4778]: E0312 14:38:30.255789 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd"
Mar 12 14:38:43 crc kubenswrapper[4778]: I0312 14:38:43.254087 4778 scope.go:117] "RemoveContainer" containerID="bbb7cd318ed9aaf8c81b44eaf9e441283227b353d96ce94d2989c6c892e1351c"
Mar 12 14:38:43 crc kubenswrapper[4778]: E0312 14:38:43.254769 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd"
Mar 12 14:38:54 crc kubenswrapper[4778]: I0312 14:38:54.254005 4778 scope.go:117] "RemoveContainer" containerID="bbb7cd318ed9aaf8c81b44eaf9e441283227b353d96ce94d2989c6c892e1351c"
Mar 12 14:38:54 crc kubenswrapper[4778]: E0312 14:38:54.254722 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd"
Mar 12 14:39:09 crc kubenswrapper[4778]: I0312 14:39:09.254246 4778 scope.go:117] "RemoveContainer" containerID="bbb7cd318ed9aaf8c81b44eaf9e441283227b353d96ce94d2989c6c892e1351c"
Mar 12 14:39:09 crc kubenswrapper[4778]: I0312 14:39:09.859438 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" event={"ID":"24438fc6-dab0-4a9e-8b97-2532da76d9cd","Type":"ContainerStarted","Data":"7db21caa41aaa05f213157a5fdd43b948f849acb385674235e08738a115a03fb"}
Mar 12 14:40:00 crc kubenswrapper[4778]: I0312 14:40:00.146030 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555440-prm5s"]
Mar 12 14:40:00 crc kubenswrapper[4778]: E0312 14:40:00.147699 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfdb06b1-1cad-4645-bf9c-1859648637ea" containerName="oc"
Mar 12 14:40:00 crc kubenswrapper[4778]: I0312 14:40:00.147732 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfdb06b1-1cad-4645-bf9c-1859648637ea" containerName="oc"
Mar 12 14:40:00 crc kubenswrapper[4778]: I0312 14:40:00.148273 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfdb06b1-1cad-4645-bf9c-1859648637ea" containerName="oc"
Mar 12 14:40:00 crc kubenswrapper[4778]: I0312 14:40:00.149818 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555440-prm5s"
Mar 12 14:40:00 crc kubenswrapper[4778]: I0312 14:40:00.152025 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 12 14:40:00 crc kubenswrapper[4778]: I0312 14:40:00.152703 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 12 14:40:00 crc kubenswrapper[4778]: I0312 14:40:00.152899 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl"
Mar 12 14:40:00 crc kubenswrapper[4778]: I0312 14:40:00.162493 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555440-prm5s"]
Mar 12 14:40:00 crc kubenswrapper[4778]: I0312 14:40:00.285980 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxjbr\" (UniqueName: \"kubernetes.io/projected/19be98d0-a8c0-4e30-926a-3ac799c6b576-kube-api-access-fxjbr\") pod \"auto-csr-approver-29555440-prm5s\" (UID: \"19be98d0-a8c0-4e30-926a-3ac799c6b576\") " pod="openshift-infra/auto-csr-approver-29555440-prm5s"
Mar 12 14:40:00 crc kubenswrapper[4778]: I0312 14:40:00.388415 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxjbr\" (UniqueName: \"kubernetes.io/projected/19be98d0-a8c0-4e30-926a-3ac799c6b576-kube-api-access-fxjbr\") pod \"auto-csr-approver-29555440-prm5s\" (UID: \"19be98d0-a8c0-4e30-926a-3ac799c6b576\") " pod="openshift-infra/auto-csr-approver-29555440-prm5s"
Mar 12 14:40:00 crc kubenswrapper[4778]: I0312 14:40:00.409346 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxjbr\" (UniqueName: \"kubernetes.io/projected/19be98d0-a8c0-4e30-926a-3ac799c6b576-kube-api-access-fxjbr\") pod \"auto-csr-approver-29555440-prm5s\" (UID: \"19be98d0-a8c0-4e30-926a-3ac799c6b576\") " pod="openshift-infra/auto-csr-approver-29555440-prm5s"
Mar 12 14:40:00 crc kubenswrapper[4778]: I0312 14:40:00.476936 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555440-prm5s"
Mar 12 14:40:00 crc kubenswrapper[4778]: I0312 14:40:00.914444 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555440-prm5s"]
Mar 12 14:40:01 crc kubenswrapper[4778]: I0312 14:40:01.682222 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555440-prm5s" event={"ID":"19be98d0-a8c0-4e30-926a-3ac799c6b576","Type":"ContainerStarted","Data":"7aa5a18892e344dd85e64525777060db1300a26b3b462e75255e52fe17efa0d8"}
Mar 12 14:40:02 crc kubenswrapper[4778]: I0312 14:40:02.692956 4778 generic.go:334] "Generic (PLEG): container finished" podID="19be98d0-a8c0-4e30-926a-3ac799c6b576" containerID="e11c1ed077d226d809927c1b3f1c2bde51c60c0a0093049ef2bd6c0eb78accb3" exitCode=0
Mar 12 14:40:02 crc kubenswrapper[4778]: I0312 14:40:02.693236 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555440-prm5s" event={"ID":"19be98d0-a8c0-4e30-926a-3ac799c6b576","Type":"ContainerDied","Data":"e11c1ed077d226d809927c1b3f1c2bde51c60c0a0093049ef2bd6c0eb78accb3"}
Mar 12 14:40:04 crc kubenswrapper[4778]: I0312 14:40:04.162754 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555440-prm5s"
Mar 12 14:40:04 crc kubenswrapper[4778]: I0312 14:40:04.284364 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxjbr\" (UniqueName: \"kubernetes.io/projected/19be98d0-a8c0-4e30-926a-3ac799c6b576-kube-api-access-fxjbr\") pod \"19be98d0-a8c0-4e30-926a-3ac799c6b576\" (UID: \"19be98d0-a8c0-4e30-926a-3ac799c6b576\") "
Mar 12 14:40:04 crc kubenswrapper[4778]: I0312 14:40:04.290860 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19be98d0-a8c0-4e30-926a-3ac799c6b576-kube-api-access-fxjbr" (OuterVolumeSpecName: "kube-api-access-fxjbr") pod "19be98d0-a8c0-4e30-926a-3ac799c6b576" (UID: "19be98d0-a8c0-4e30-926a-3ac799c6b576"). InnerVolumeSpecName "kube-api-access-fxjbr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 14:40:04 crc kubenswrapper[4778]: I0312 14:40:04.386864 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxjbr\" (UniqueName: \"kubernetes.io/projected/19be98d0-a8c0-4e30-926a-3ac799c6b576-kube-api-access-fxjbr\") on node \"crc\" DevicePath \"\""
Mar 12 14:40:04 crc kubenswrapper[4778]: I0312 14:40:04.712671 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555440-prm5s" event={"ID":"19be98d0-a8c0-4e30-926a-3ac799c6b576","Type":"ContainerDied","Data":"7aa5a18892e344dd85e64525777060db1300a26b3b462e75255e52fe17efa0d8"}
Mar 12 14:40:04 crc kubenswrapper[4778]: I0312 14:40:04.712710 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7aa5a18892e344dd85e64525777060db1300a26b3b462e75255e52fe17efa0d8"
Mar 12 14:40:04 crc kubenswrapper[4778]: I0312 14:40:04.712724 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555440-prm5s"
Mar 12 14:40:05 crc kubenswrapper[4778]: I0312 14:40:05.237874 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555434-hzddc"]
Mar 12 14:40:05 crc kubenswrapper[4778]: I0312 14:40:05.246651 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555434-hzddc"]
Mar 12 14:40:06 crc kubenswrapper[4778]: I0312 14:40:06.264959 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7855ae5-9f57-4d62-ab01-d16ae9f5a037" path="/var/lib/kubelet/pods/e7855ae5-9f57-4d62-ab01-d16ae9f5a037/volumes"
Mar 12 14:40:23 crc kubenswrapper[4778]: I0312 14:40:23.175875 4778 scope.go:117] "RemoveContainer" containerID="ecf6cfdc210df01866b5bda8e874db3a9407a84531517ea05fb802b9d57bcdb0"
Mar 12 14:41:28 crc kubenswrapper[4778]: I0312 14:41:28.557636 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 12 14:41:28 crc kubenswrapper[4778]: I0312 14:41:28.558286 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 12 14:41:41 crc kubenswrapper[4778]: I0312 14:41:41.382918 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ghxk4"]
Mar 12 14:41:41 crc kubenswrapper[4778]: E0312 14:41:41.384119 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19be98d0-a8c0-4e30-926a-3ac799c6b576" containerName="oc"
Mar 12 14:41:41 crc kubenswrapper[4778]: I0312 14:41:41.384243 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="19be98d0-a8c0-4e30-926a-3ac799c6b576" containerName="oc"
Mar 12 14:41:41 crc kubenswrapper[4778]: I0312 14:41:41.384510 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="19be98d0-a8c0-4e30-926a-3ac799c6b576" containerName="oc"
Mar 12 14:41:41 crc kubenswrapper[4778]: I0312 14:41:41.386228 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ghxk4"
Mar 12 14:41:41 crc kubenswrapper[4778]: I0312 14:41:41.400976 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ghxk4"]
Mar 12 14:41:41 crc kubenswrapper[4778]: I0312 14:41:41.521978 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08484b9b-c7e4-4119-b306-ba766bfdab7c-utilities\") pod \"redhat-marketplace-ghxk4\" (UID: \"08484b9b-c7e4-4119-b306-ba766bfdab7c\") " pod="openshift-marketplace/redhat-marketplace-ghxk4"
Mar 12 14:41:41 crc kubenswrapper[4778]: I0312 14:41:41.522066 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdrj9\" (UniqueName: \"kubernetes.io/projected/08484b9b-c7e4-4119-b306-ba766bfdab7c-kube-api-access-pdrj9\") pod \"redhat-marketplace-ghxk4\" (UID: \"08484b9b-c7e4-4119-b306-ba766bfdab7c\") " pod="openshift-marketplace/redhat-marketplace-ghxk4"
Mar 12 14:41:41 crc kubenswrapper[4778]: I0312 14:41:41.522173 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08484b9b-c7e4-4119-b306-ba766bfdab7c-catalog-content\") pod \"redhat-marketplace-ghxk4\" (UID: \"08484b9b-c7e4-4119-b306-ba766bfdab7c\") " pod="openshift-marketplace/redhat-marketplace-ghxk4"
Mar 12 14:41:41 crc kubenswrapper[4778]: I0312 14:41:41.624824 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08484b9b-c7e4-4119-b306-ba766bfdab7c-catalog-content\") pod \"redhat-marketplace-ghxk4\" (UID: \"08484b9b-c7e4-4119-b306-ba766bfdab7c\") " pod="openshift-marketplace/redhat-marketplace-ghxk4"
Mar 12 14:41:41 crc kubenswrapper[4778]: I0312 14:41:41.624927 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08484b9b-c7e4-4119-b306-ba766bfdab7c-utilities\") pod \"redhat-marketplace-ghxk4\" (UID: \"08484b9b-c7e4-4119-b306-ba766bfdab7c\") " pod="openshift-marketplace/redhat-marketplace-ghxk4"
Mar 12 14:41:41 crc kubenswrapper[4778]: I0312 14:41:41.624990 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdrj9\" (UniqueName: \"kubernetes.io/projected/08484b9b-c7e4-4119-b306-ba766bfdab7c-kube-api-access-pdrj9\") pod \"redhat-marketplace-ghxk4\" (UID: \"08484b9b-c7e4-4119-b306-ba766bfdab7c\") " pod="openshift-marketplace/redhat-marketplace-ghxk4"
Mar 12 14:41:41 crc kubenswrapper[4778]: I0312 14:41:41.625525 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08484b9b-c7e4-4119-b306-ba766bfdab7c-catalog-content\") pod \"redhat-marketplace-ghxk4\" (UID: \"08484b9b-c7e4-4119-b306-ba766bfdab7c\") " pod="openshift-marketplace/redhat-marketplace-ghxk4"
Mar 12 14:41:41 crc kubenswrapper[4778]: I0312 14:41:41.625568 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08484b9b-c7e4-4119-b306-ba766bfdab7c-utilities\") pod \"redhat-marketplace-ghxk4\" (UID: \"08484b9b-c7e4-4119-b306-ba766bfdab7c\") " pod="openshift-marketplace/redhat-marketplace-ghxk4"
Mar 12 14:41:41 crc kubenswrapper[4778]: I0312 14:41:41.652962 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdrj9\" (UniqueName: \"kubernetes.io/projected/08484b9b-c7e4-4119-b306-ba766bfdab7c-kube-api-access-pdrj9\") pod \"redhat-marketplace-ghxk4\" (UID: \"08484b9b-c7e4-4119-b306-ba766bfdab7c\") " pod="openshift-marketplace/redhat-marketplace-ghxk4"
Mar 12 14:41:41 crc kubenswrapper[4778]: I0312 14:41:41.715818 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ghxk4"
Mar 12 14:41:42 crc kubenswrapper[4778]: I0312 14:41:42.247584 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ghxk4"]
Mar 12 14:41:43 crc kubenswrapper[4778]: I0312 14:41:43.212291 4778 generic.go:334] "Generic (PLEG): container finished" podID="08484b9b-c7e4-4119-b306-ba766bfdab7c" containerID="fbeb0cd20ee047ff32cbe65ff8cd88b9cd6a713b1ac825933fe8d93c01221de0" exitCode=0
Mar 12 14:41:43 crc kubenswrapper[4778]: I0312 14:41:43.212395 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ghxk4" event={"ID":"08484b9b-c7e4-4119-b306-ba766bfdab7c","Type":"ContainerDied","Data":"fbeb0cd20ee047ff32cbe65ff8cd88b9cd6a713b1ac825933fe8d93c01221de0"}
Mar 12 14:41:43 crc kubenswrapper[4778]: I0312 14:41:43.212664 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ghxk4" event={"ID":"08484b9b-c7e4-4119-b306-ba766bfdab7c","Type":"ContainerStarted","Data":"257844a9141576a3172cff81ed4e5381828b5b4bc86357e1a4f01ef234ab7c96"}
Mar 12 14:41:43 crc kubenswrapper[4778]: I0312 14:41:43.215368 4778 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 12 14:41:44 crc kubenswrapper[4778]: I0312 14:41:44.224938 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ghxk4" event={"ID":"08484b9b-c7e4-4119-b306-ba766bfdab7c","Type":"ContainerStarted","Data":"9cc32ffde6bb29a0fb51938eff56d0c36bc993785d3b7a77680e754bca9602ff"}
Mar 12 14:41:45 crc kubenswrapper[4778]: E0312 14:41:45.112268 4778 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08484b9b_c7e4_4119_b306_ba766bfdab7c.slice/crio-conmon-9cc32ffde6bb29a0fb51938eff56d0c36bc993785d3b7a77680e754bca9602ff.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08484b9b_c7e4_4119_b306_ba766bfdab7c.slice/crio-9cc32ffde6bb29a0fb51938eff56d0c36bc993785d3b7a77680e754bca9602ff.scope\": RecentStats: unable to find data in memory cache]"
Mar 12 14:41:45 crc kubenswrapper[4778]: I0312 14:41:45.235099 4778 generic.go:334] "Generic (PLEG): container finished" podID="08484b9b-c7e4-4119-b306-ba766bfdab7c" containerID="9cc32ffde6bb29a0fb51938eff56d0c36bc993785d3b7a77680e754bca9602ff" exitCode=0
Mar 12 14:41:45 crc kubenswrapper[4778]: I0312 14:41:45.235156 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ghxk4" event={"ID":"08484b9b-c7e4-4119-b306-ba766bfdab7c","Type":"ContainerDied","Data":"9cc32ffde6bb29a0fb51938eff56d0c36bc993785d3b7a77680e754bca9602ff"}
Mar 12 14:41:46 crc kubenswrapper[4778]: I0312 14:41:46.243978 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ghxk4" event={"ID":"08484b9b-c7e4-4119-b306-ba766bfdab7c","Type":"ContainerStarted","Data":"63ec0182dbfdb3076f9264015703df928906cfd80a27db11b18bb38d83978763"}
Mar 12 14:41:46 crc kubenswrapper[4778]: I0312 14:41:46.259625 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ghxk4" podStartSLOduration=2.595066854 podStartE2EDuration="5.259608188s" podCreationTimestamp="2026-03-12
14:41:41 +0000 UTC" firstStartedPulling="2026-03-12 14:41:43.215071249 +0000 UTC m=+5521.663766645" lastFinishedPulling="2026-03-12 14:41:45.879612563 +0000 UTC m=+5524.328307979" observedRunningTime="2026-03-12 14:41:46.25895518 +0000 UTC m=+5524.707650576" watchObservedRunningTime="2026-03-12 14:41:46.259608188 +0000 UTC m=+5524.708303584" Mar 12 14:41:51 crc kubenswrapper[4778]: I0312 14:41:51.716488 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ghxk4" Mar 12 14:41:51 crc kubenswrapper[4778]: I0312 14:41:51.718381 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ghxk4" Mar 12 14:41:51 crc kubenswrapper[4778]: I0312 14:41:51.773746 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ghxk4" Mar 12 14:41:52 crc kubenswrapper[4778]: I0312 14:41:52.356705 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ghxk4" Mar 12 14:41:52 crc kubenswrapper[4778]: I0312 14:41:52.401252 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ghxk4"] Mar 12 14:41:54 crc kubenswrapper[4778]: I0312 14:41:54.317413 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ghxk4" podUID="08484b9b-c7e4-4119-b306-ba766bfdab7c" containerName="registry-server" containerID="cri-o://63ec0182dbfdb3076f9264015703df928906cfd80a27db11b18bb38d83978763" gracePeriod=2 Mar 12 14:41:54 crc kubenswrapper[4778]: I0312 14:41:54.753245 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ghxk4" Mar 12 14:41:54 crc kubenswrapper[4778]: I0312 14:41:54.787609 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08484b9b-c7e4-4119-b306-ba766bfdab7c-utilities\") pod \"08484b9b-c7e4-4119-b306-ba766bfdab7c\" (UID: \"08484b9b-c7e4-4119-b306-ba766bfdab7c\") " Mar 12 14:41:54 crc kubenswrapper[4778]: I0312 14:41:54.787697 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdrj9\" (UniqueName: \"kubernetes.io/projected/08484b9b-c7e4-4119-b306-ba766bfdab7c-kube-api-access-pdrj9\") pod \"08484b9b-c7e4-4119-b306-ba766bfdab7c\" (UID: \"08484b9b-c7e4-4119-b306-ba766bfdab7c\") " Mar 12 14:41:54 crc kubenswrapper[4778]: I0312 14:41:54.787766 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08484b9b-c7e4-4119-b306-ba766bfdab7c-catalog-content\") pod \"08484b9b-c7e4-4119-b306-ba766bfdab7c\" (UID: \"08484b9b-c7e4-4119-b306-ba766bfdab7c\") " Mar 12 14:41:54 crc kubenswrapper[4778]: I0312 14:41:54.794116 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08484b9b-c7e4-4119-b306-ba766bfdab7c-utilities" (OuterVolumeSpecName: "utilities") pod "08484b9b-c7e4-4119-b306-ba766bfdab7c" (UID: "08484b9b-c7e4-4119-b306-ba766bfdab7c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:41:54 crc kubenswrapper[4778]: I0312 14:41:54.805485 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08484b9b-c7e4-4119-b306-ba766bfdab7c-kube-api-access-pdrj9" (OuterVolumeSpecName: "kube-api-access-pdrj9") pod "08484b9b-c7e4-4119-b306-ba766bfdab7c" (UID: "08484b9b-c7e4-4119-b306-ba766bfdab7c"). InnerVolumeSpecName "kube-api-access-pdrj9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:41:54 crc kubenswrapper[4778]: I0312 14:41:54.848586 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08484b9b-c7e4-4119-b306-ba766bfdab7c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "08484b9b-c7e4-4119-b306-ba766bfdab7c" (UID: "08484b9b-c7e4-4119-b306-ba766bfdab7c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:41:54 crc kubenswrapper[4778]: I0312 14:41:54.890767 4778 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08484b9b-c7e4-4119-b306-ba766bfdab7c-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 14:41:54 crc kubenswrapper[4778]: I0312 14:41:54.890802 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdrj9\" (UniqueName: \"kubernetes.io/projected/08484b9b-c7e4-4119-b306-ba766bfdab7c-kube-api-access-pdrj9\") on node \"crc\" DevicePath \"\"" Mar 12 14:41:54 crc kubenswrapper[4778]: I0312 14:41:54.890816 4778 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08484b9b-c7e4-4119-b306-ba766bfdab7c-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 14:41:55 crc kubenswrapper[4778]: I0312 14:41:55.328472 4778 generic.go:334] "Generic (PLEG): container finished" podID="08484b9b-c7e4-4119-b306-ba766bfdab7c" containerID="63ec0182dbfdb3076f9264015703df928906cfd80a27db11b18bb38d83978763" exitCode=0 Mar 12 14:41:55 crc kubenswrapper[4778]: I0312 14:41:55.328516 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ghxk4" event={"ID":"08484b9b-c7e4-4119-b306-ba766bfdab7c","Type":"ContainerDied","Data":"63ec0182dbfdb3076f9264015703df928906cfd80a27db11b18bb38d83978763"} Mar 12 14:41:55 crc kubenswrapper[4778]: I0312 14:41:55.328540 4778 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-ghxk4" event={"ID":"08484b9b-c7e4-4119-b306-ba766bfdab7c","Type":"ContainerDied","Data":"257844a9141576a3172cff81ed4e5381828b5b4bc86357e1a4f01ef234ab7c96"} Mar 12 14:41:55 crc kubenswrapper[4778]: I0312 14:41:55.328554 4778 scope.go:117] "RemoveContainer" containerID="63ec0182dbfdb3076f9264015703df928906cfd80a27db11b18bb38d83978763" Mar 12 14:41:55 crc kubenswrapper[4778]: I0312 14:41:55.328670 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ghxk4" Mar 12 14:41:55 crc kubenswrapper[4778]: I0312 14:41:55.350026 4778 scope.go:117] "RemoveContainer" containerID="9cc32ffde6bb29a0fb51938eff56d0c36bc993785d3b7a77680e754bca9602ff" Mar 12 14:41:55 crc kubenswrapper[4778]: I0312 14:41:55.365577 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ghxk4"] Mar 12 14:41:55 crc kubenswrapper[4778]: I0312 14:41:55.374490 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ghxk4"] Mar 12 14:41:55 crc kubenswrapper[4778]: I0312 14:41:55.399821 4778 scope.go:117] "RemoveContainer" containerID="fbeb0cd20ee047ff32cbe65ff8cd88b9cd6a713b1ac825933fe8d93c01221de0" Mar 12 14:41:55 crc kubenswrapper[4778]: I0312 14:41:55.426544 4778 scope.go:117] "RemoveContainer" containerID="63ec0182dbfdb3076f9264015703df928906cfd80a27db11b18bb38d83978763" Mar 12 14:41:55 crc kubenswrapper[4778]: E0312 14:41:55.426764 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63ec0182dbfdb3076f9264015703df928906cfd80a27db11b18bb38d83978763\": container with ID starting with 63ec0182dbfdb3076f9264015703df928906cfd80a27db11b18bb38d83978763 not found: ID does not exist" containerID="63ec0182dbfdb3076f9264015703df928906cfd80a27db11b18bb38d83978763" Mar 12 14:41:55 crc kubenswrapper[4778]: I0312 14:41:55.426795 4778 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63ec0182dbfdb3076f9264015703df928906cfd80a27db11b18bb38d83978763"} err="failed to get container status \"63ec0182dbfdb3076f9264015703df928906cfd80a27db11b18bb38d83978763\": rpc error: code = NotFound desc = could not find container \"63ec0182dbfdb3076f9264015703df928906cfd80a27db11b18bb38d83978763\": container with ID starting with 63ec0182dbfdb3076f9264015703df928906cfd80a27db11b18bb38d83978763 not found: ID does not exist" Mar 12 14:41:55 crc kubenswrapper[4778]: I0312 14:41:55.426817 4778 scope.go:117] "RemoveContainer" containerID="9cc32ffde6bb29a0fb51938eff56d0c36bc993785d3b7a77680e754bca9602ff" Mar 12 14:41:55 crc kubenswrapper[4778]: E0312 14:41:55.426961 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cc32ffde6bb29a0fb51938eff56d0c36bc993785d3b7a77680e754bca9602ff\": container with ID starting with 9cc32ffde6bb29a0fb51938eff56d0c36bc993785d3b7a77680e754bca9602ff not found: ID does not exist" containerID="9cc32ffde6bb29a0fb51938eff56d0c36bc993785d3b7a77680e754bca9602ff" Mar 12 14:41:55 crc kubenswrapper[4778]: I0312 14:41:55.426982 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cc32ffde6bb29a0fb51938eff56d0c36bc993785d3b7a77680e754bca9602ff"} err="failed to get container status \"9cc32ffde6bb29a0fb51938eff56d0c36bc993785d3b7a77680e754bca9602ff\": rpc error: code = NotFound desc = could not find container \"9cc32ffde6bb29a0fb51938eff56d0c36bc993785d3b7a77680e754bca9602ff\": container with ID starting with 9cc32ffde6bb29a0fb51938eff56d0c36bc993785d3b7a77680e754bca9602ff not found: ID does not exist" Mar 12 14:41:55 crc kubenswrapper[4778]: I0312 14:41:55.426992 4778 scope.go:117] "RemoveContainer" containerID="fbeb0cd20ee047ff32cbe65ff8cd88b9cd6a713b1ac825933fe8d93c01221de0" Mar 12 14:41:55 crc kubenswrapper[4778]: E0312 
14:41:55.427205 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbeb0cd20ee047ff32cbe65ff8cd88b9cd6a713b1ac825933fe8d93c01221de0\": container with ID starting with fbeb0cd20ee047ff32cbe65ff8cd88b9cd6a713b1ac825933fe8d93c01221de0 not found: ID does not exist" containerID="fbeb0cd20ee047ff32cbe65ff8cd88b9cd6a713b1ac825933fe8d93c01221de0" Mar 12 14:41:55 crc kubenswrapper[4778]: I0312 14:41:55.427222 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbeb0cd20ee047ff32cbe65ff8cd88b9cd6a713b1ac825933fe8d93c01221de0"} err="failed to get container status \"fbeb0cd20ee047ff32cbe65ff8cd88b9cd6a713b1ac825933fe8d93c01221de0\": rpc error: code = NotFound desc = could not find container \"fbeb0cd20ee047ff32cbe65ff8cd88b9cd6a713b1ac825933fe8d93c01221de0\": container with ID starting with fbeb0cd20ee047ff32cbe65ff8cd88b9cd6a713b1ac825933fe8d93c01221de0 not found: ID does not exist" Mar 12 14:41:56 crc kubenswrapper[4778]: I0312 14:41:56.264121 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08484b9b-c7e4-4119-b306-ba766bfdab7c" path="/var/lib/kubelet/pods/08484b9b-c7e4-4119-b306-ba766bfdab7c/volumes" Mar 12 14:41:58 crc kubenswrapper[4778]: I0312 14:41:58.558123 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 14:41:58 crc kubenswrapper[4778]: I0312 14:41:58.558467 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Mar 12 14:42:00 crc kubenswrapper[4778]: I0312 14:42:00.146166 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555442-c58f5"] Mar 12 14:42:00 crc kubenswrapper[4778]: E0312 14:42:00.147031 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08484b9b-c7e4-4119-b306-ba766bfdab7c" containerName="extract-utilities" Mar 12 14:42:00 crc kubenswrapper[4778]: I0312 14:42:00.147050 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="08484b9b-c7e4-4119-b306-ba766bfdab7c" containerName="extract-utilities" Mar 12 14:42:00 crc kubenswrapper[4778]: E0312 14:42:00.147087 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08484b9b-c7e4-4119-b306-ba766bfdab7c" containerName="registry-server" Mar 12 14:42:00 crc kubenswrapper[4778]: I0312 14:42:00.147095 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="08484b9b-c7e4-4119-b306-ba766bfdab7c" containerName="registry-server" Mar 12 14:42:00 crc kubenswrapper[4778]: E0312 14:42:00.147131 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08484b9b-c7e4-4119-b306-ba766bfdab7c" containerName="extract-content" Mar 12 14:42:00 crc kubenswrapper[4778]: I0312 14:42:00.147138 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="08484b9b-c7e4-4119-b306-ba766bfdab7c" containerName="extract-content" Mar 12 14:42:00 crc kubenswrapper[4778]: I0312 14:42:00.147383 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="08484b9b-c7e4-4119-b306-ba766bfdab7c" containerName="registry-server" Mar 12 14:42:00 crc kubenswrapper[4778]: I0312 14:42:00.148199 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555442-c58f5" Mar 12 14:42:00 crc kubenswrapper[4778]: I0312 14:42:00.153475 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 14:42:00 crc kubenswrapper[4778]: I0312 14:42:00.153686 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 14:42:00 crc kubenswrapper[4778]: I0312 14:42:00.153792 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 14:42:00 crc kubenswrapper[4778]: I0312 14:42:00.171543 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555442-c58f5"] Mar 12 14:42:00 crc kubenswrapper[4778]: I0312 14:42:00.192853 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp8b6\" (UniqueName: \"kubernetes.io/projected/86de30c2-8699-4966-8c1b-da67cdacae42-kube-api-access-dp8b6\") pod \"auto-csr-approver-29555442-c58f5\" (UID: \"86de30c2-8699-4966-8c1b-da67cdacae42\") " pod="openshift-infra/auto-csr-approver-29555442-c58f5" Mar 12 14:42:00 crc kubenswrapper[4778]: I0312 14:42:00.295046 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dp8b6\" (UniqueName: \"kubernetes.io/projected/86de30c2-8699-4966-8c1b-da67cdacae42-kube-api-access-dp8b6\") pod \"auto-csr-approver-29555442-c58f5\" (UID: \"86de30c2-8699-4966-8c1b-da67cdacae42\") " pod="openshift-infra/auto-csr-approver-29555442-c58f5" Mar 12 14:42:00 crc kubenswrapper[4778]: I0312 14:42:00.317395 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dp8b6\" (UniqueName: \"kubernetes.io/projected/86de30c2-8699-4966-8c1b-da67cdacae42-kube-api-access-dp8b6\") pod \"auto-csr-approver-29555442-c58f5\" (UID: \"86de30c2-8699-4966-8c1b-da67cdacae42\") " 
pod="openshift-infra/auto-csr-approver-29555442-c58f5" Mar 12 14:42:00 crc kubenswrapper[4778]: I0312 14:42:00.469410 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555442-c58f5" Mar 12 14:42:00 crc kubenswrapper[4778]: I0312 14:42:00.930593 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555442-c58f5"] Mar 12 14:42:01 crc kubenswrapper[4778]: I0312 14:42:01.375160 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555442-c58f5" event={"ID":"86de30c2-8699-4966-8c1b-da67cdacae42","Type":"ContainerStarted","Data":"e9ca4027e65d1eece4dde85d28cad4df3531606fa0be1367675a67dabdc4867a"} Mar 12 14:42:03 crc kubenswrapper[4778]: I0312 14:42:03.395896 4778 generic.go:334] "Generic (PLEG): container finished" podID="86de30c2-8699-4966-8c1b-da67cdacae42" containerID="629d070304f0ca91f60ef09f2871ae160406fbd39c685129feb137e8e63e7888" exitCode=0 Mar 12 14:42:03 crc kubenswrapper[4778]: I0312 14:42:03.395934 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555442-c58f5" event={"ID":"86de30c2-8699-4966-8c1b-da67cdacae42","Type":"ContainerDied","Data":"629d070304f0ca91f60ef09f2871ae160406fbd39c685129feb137e8e63e7888"} Mar 12 14:42:04 crc kubenswrapper[4778]: I0312 14:42:04.735378 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555442-c58f5" Mar 12 14:42:04 crc kubenswrapper[4778]: I0312 14:42:04.785241 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dp8b6\" (UniqueName: \"kubernetes.io/projected/86de30c2-8699-4966-8c1b-da67cdacae42-kube-api-access-dp8b6\") pod \"86de30c2-8699-4966-8c1b-da67cdacae42\" (UID: \"86de30c2-8699-4966-8c1b-da67cdacae42\") " Mar 12 14:42:04 crc kubenswrapper[4778]: I0312 14:42:04.795117 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86de30c2-8699-4966-8c1b-da67cdacae42-kube-api-access-dp8b6" (OuterVolumeSpecName: "kube-api-access-dp8b6") pod "86de30c2-8699-4966-8c1b-da67cdacae42" (UID: "86de30c2-8699-4966-8c1b-da67cdacae42"). InnerVolumeSpecName "kube-api-access-dp8b6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:42:04 crc kubenswrapper[4778]: I0312 14:42:04.889670 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dp8b6\" (UniqueName: \"kubernetes.io/projected/86de30c2-8699-4966-8c1b-da67cdacae42-kube-api-access-dp8b6\") on node \"crc\" DevicePath \"\"" Mar 12 14:42:05 crc kubenswrapper[4778]: I0312 14:42:05.414472 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555442-c58f5" event={"ID":"86de30c2-8699-4966-8c1b-da67cdacae42","Type":"ContainerDied","Data":"e9ca4027e65d1eece4dde85d28cad4df3531606fa0be1367675a67dabdc4867a"} Mar 12 14:42:05 crc kubenswrapper[4778]: I0312 14:42:05.414511 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9ca4027e65d1eece4dde85d28cad4df3531606fa0be1367675a67dabdc4867a" Mar 12 14:42:05 crc kubenswrapper[4778]: I0312 14:42:05.414565 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555442-c58f5" Mar 12 14:42:05 crc kubenswrapper[4778]: E0312 14:42:05.601954 4778 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod86de30c2_8699_4966_8c1b_da67cdacae42.slice/crio-e9ca4027e65d1eece4dde85d28cad4df3531606fa0be1367675a67dabdc4867a\": RecentStats: unable to find data in memory cache]" Mar 12 14:42:05 crc kubenswrapper[4778]: I0312 14:42:05.859197 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555436-9bs4m"] Mar 12 14:42:05 crc kubenswrapper[4778]: I0312 14:42:05.878897 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555436-9bs4m"] Mar 12 14:42:06 crc kubenswrapper[4778]: I0312 14:42:06.266253 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45d1f962-b71e-4473-b387-137a395e1a39" path="/var/lib/kubelet/pods/45d1f962-b71e-4473-b387-137a395e1a39/volumes" Mar 12 14:42:23 crc kubenswrapper[4778]: I0312 14:42:23.270420 4778 scope.go:117] "RemoveContainer" containerID="b4d039fad9b993f652c5f6f0f661f085d4f93b47467c47a0fe13959b9f367b5d" Mar 12 14:42:28 crc kubenswrapper[4778]: I0312 14:42:28.557810 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 14:42:28 crc kubenswrapper[4778]: I0312 14:42:28.559510 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Mar 12 14:42:28 crc kubenswrapper[4778]: I0312 14:42:28.559684 4778 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" Mar 12 14:42:28 crc kubenswrapper[4778]: I0312 14:42:28.560604 4778 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7db21caa41aaa05f213157a5fdd43b948f849acb385674235e08738a115a03fb"} pod="openshift-machine-config-operator/machine-config-daemon-2qx88" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 14:42:28 crc kubenswrapper[4778]: I0312 14:42:28.560751 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" containerID="cri-o://7db21caa41aaa05f213157a5fdd43b948f849acb385674235e08738a115a03fb" gracePeriod=600 Mar 12 14:42:29 crc kubenswrapper[4778]: I0312 14:42:29.047329 4778 generic.go:334] "Generic (PLEG): container finished" podID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerID="7db21caa41aaa05f213157a5fdd43b948f849acb385674235e08738a115a03fb" exitCode=0 Mar 12 14:42:29 crc kubenswrapper[4778]: I0312 14:42:29.047826 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" event={"ID":"24438fc6-dab0-4a9e-8b97-2532da76d9cd","Type":"ContainerDied","Data":"7db21caa41aaa05f213157a5fdd43b948f849acb385674235e08738a115a03fb"} Mar 12 14:42:29 crc kubenswrapper[4778]: I0312 14:42:29.047855 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" event={"ID":"24438fc6-dab0-4a9e-8b97-2532da76d9cd","Type":"ContainerStarted","Data":"e714113346a3db81a8ab4456acd91be95b7042ec696820890f89fb14190436c4"} Mar 12 14:42:29 
crc kubenswrapper[4778]: I0312 14:42:29.047872 4778 scope.go:117] "RemoveContainer" containerID="bbb7cd318ed9aaf8c81b44eaf9e441283227b353d96ce94d2989c6c892e1351c" Mar 12 14:42:44 crc kubenswrapper[4778]: I0312 14:42:44.169868 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mqtgd"] Mar 12 14:42:44 crc kubenswrapper[4778]: E0312 14:42:44.171392 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86de30c2-8699-4966-8c1b-da67cdacae42" containerName="oc" Mar 12 14:42:44 crc kubenswrapper[4778]: I0312 14:42:44.171414 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="86de30c2-8699-4966-8c1b-da67cdacae42" containerName="oc" Mar 12 14:42:44 crc kubenswrapper[4778]: I0312 14:42:44.171750 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="86de30c2-8699-4966-8c1b-da67cdacae42" containerName="oc" Mar 12 14:42:44 crc kubenswrapper[4778]: I0312 14:42:44.173520 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mqtgd" Mar 12 14:42:44 crc kubenswrapper[4778]: I0312 14:42:44.185736 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mqtgd"] Mar 12 14:42:44 crc kubenswrapper[4778]: I0312 14:42:44.221927 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcd4d515-136b-47e8-92ba-9e79ed98e8ec-utilities\") pod \"redhat-operators-mqtgd\" (UID: \"bcd4d515-136b-47e8-92ba-9e79ed98e8ec\") " pod="openshift-marketplace/redhat-operators-mqtgd" Mar 12 14:42:44 crc kubenswrapper[4778]: I0312 14:42:44.221998 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcd4d515-136b-47e8-92ba-9e79ed98e8ec-catalog-content\") pod \"redhat-operators-mqtgd\" (UID: \"bcd4d515-136b-47e8-92ba-9e79ed98e8ec\") " pod="openshift-marketplace/redhat-operators-mqtgd" Mar 12 14:42:44 crc kubenswrapper[4778]: I0312 14:42:44.222288 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86g7c\" (UniqueName: \"kubernetes.io/projected/bcd4d515-136b-47e8-92ba-9e79ed98e8ec-kube-api-access-86g7c\") pod \"redhat-operators-mqtgd\" (UID: \"bcd4d515-136b-47e8-92ba-9e79ed98e8ec\") " pod="openshift-marketplace/redhat-operators-mqtgd" Mar 12 14:42:44 crc kubenswrapper[4778]: I0312 14:42:44.324841 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86g7c\" (UniqueName: \"kubernetes.io/projected/bcd4d515-136b-47e8-92ba-9e79ed98e8ec-kube-api-access-86g7c\") pod \"redhat-operators-mqtgd\" (UID: \"bcd4d515-136b-47e8-92ba-9e79ed98e8ec\") " pod="openshift-marketplace/redhat-operators-mqtgd" Mar 12 14:42:44 crc kubenswrapper[4778]: I0312 14:42:44.325048 4778 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcd4d515-136b-47e8-92ba-9e79ed98e8ec-utilities\") pod \"redhat-operators-mqtgd\" (UID: \"bcd4d515-136b-47e8-92ba-9e79ed98e8ec\") " pod="openshift-marketplace/redhat-operators-mqtgd" Mar 12 14:42:44 crc kubenswrapper[4778]: I0312 14:42:44.325077 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcd4d515-136b-47e8-92ba-9e79ed98e8ec-catalog-content\") pod \"redhat-operators-mqtgd\" (UID: \"bcd4d515-136b-47e8-92ba-9e79ed98e8ec\") " pod="openshift-marketplace/redhat-operators-mqtgd" Mar 12 14:42:44 crc kubenswrapper[4778]: I0312 14:42:44.325996 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcd4d515-136b-47e8-92ba-9e79ed98e8ec-catalog-content\") pod \"redhat-operators-mqtgd\" (UID: \"bcd4d515-136b-47e8-92ba-9e79ed98e8ec\") " pod="openshift-marketplace/redhat-operators-mqtgd" Mar 12 14:42:44 crc kubenswrapper[4778]: I0312 14:42:44.326074 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcd4d515-136b-47e8-92ba-9e79ed98e8ec-utilities\") pod \"redhat-operators-mqtgd\" (UID: \"bcd4d515-136b-47e8-92ba-9e79ed98e8ec\") " pod="openshift-marketplace/redhat-operators-mqtgd" Mar 12 14:42:44 crc kubenswrapper[4778]: I0312 14:42:44.356068 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86g7c\" (UniqueName: \"kubernetes.io/projected/bcd4d515-136b-47e8-92ba-9e79ed98e8ec-kube-api-access-86g7c\") pod \"redhat-operators-mqtgd\" (UID: \"bcd4d515-136b-47e8-92ba-9e79ed98e8ec\") " pod="openshift-marketplace/redhat-operators-mqtgd" Mar 12 14:42:44 crc kubenswrapper[4778]: I0312 14:42:44.493691 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mqtgd" Mar 12 14:42:45 crc kubenswrapper[4778]: I0312 14:42:44.998782 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mqtgd"] Mar 12 14:42:45 crc kubenswrapper[4778]: I0312 14:42:45.187242 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mqtgd" event={"ID":"bcd4d515-136b-47e8-92ba-9e79ed98e8ec","Type":"ContainerStarted","Data":"f6a4fb35fc6694b18950498d5d09e6c3816748fe0f6054bbd9a78d05234e573b"} Mar 12 14:42:46 crc kubenswrapper[4778]: I0312 14:42:46.196727 4778 generic.go:334] "Generic (PLEG): container finished" podID="bcd4d515-136b-47e8-92ba-9e79ed98e8ec" containerID="58941a5bd25d7707244796e0457ad9e6c2eecea41a390bebb99271f9e229c8ad" exitCode=0 Mar 12 14:42:46 crc kubenswrapper[4778]: I0312 14:42:46.196773 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mqtgd" event={"ID":"bcd4d515-136b-47e8-92ba-9e79ed98e8ec","Type":"ContainerDied","Data":"58941a5bd25d7707244796e0457ad9e6c2eecea41a390bebb99271f9e229c8ad"} Mar 12 14:42:47 crc kubenswrapper[4778]: I0312 14:42:47.210090 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mqtgd" event={"ID":"bcd4d515-136b-47e8-92ba-9e79ed98e8ec","Type":"ContainerStarted","Data":"cbd788e3627ea03805a19638b323850395483e01b39b06317c94c025d0b17623"} Mar 12 14:42:52 crc kubenswrapper[4778]: I0312 14:42:52.265452 4778 generic.go:334] "Generic (PLEG): container finished" podID="bcd4d515-136b-47e8-92ba-9e79ed98e8ec" containerID="cbd788e3627ea03805a19638b323850395483e01b39b06317c94c025d0b17623" exitCode=0 Mar 12 14:42:52 crc kubenswrapper[4778]: I0312 14:42:52.267610 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mqtgd" 
event={"ID":"bcd4d515-136b-47e8-92ba-9e79ed98e8ec","Type":"ContainerDied","Data":"cbd788e3627ea03805a19638b323850395483e01b39b06317c94c025d0b17623"} Mar 12 14:42:53 crc kubenswrapper[4778]: I0312 14:42:53.280573 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mqtgd" event={"ID":"bcd4d515-136b-47e8-92ba-9e79ed98e8ec","Type":"ContainerStarted","Data":"dacb248b27337d412ad2a8d93ae1a608ba1411ee25700ca3e1e7a3f5ff29d631"} Mar 12 14:42:53 crc kubenswrapper[4778]: I0312 14:42:53.321827 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mqtgd" podStartSLOduration=2.763196979 podStartE2EDuration="9.321802669s" podCreationTimestamp="2026-03-12 14:42:44 +0000 UTC" firstStartedPulling="2026-03-12 14:42:46.19911342 +0000 UTC m=+5584.647808816" lastFinishedPulling="2026-03-12 14:42:52.75771907 +0000 UTC m=+5591.206414506" observedRunningTime="2026-03-12 14:42:53.307254396 +0000 UTC m=+5591.755949842" watchObservedRunningTime="2026-03-12 14:42:53.321802669 +0000 UTC m=+5591.770498075" Mar 12 14:42:54 crc kubenswrapper[4778]: I0312 14:42:54.494234 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mqtgd" Mar 12 14:42:54 crc kubenswrapper[4778]: I0312 14:42:54.494561 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mqtgd" Mar 12 14:42:55 crc kubenswrapper[4778]: I0312 14:42:55.553774 4778 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mqtgd" podUID="bcd4d515-136b-47e8-92ba-9e79ed98e8ec" containerName="registry-server" probeResult="failure" output=< Mar 12 14:42:55 crc kubenswrapper[4778]: timeout: failed to connect service ":50051" within 1s Mar 12 14:42:55 crc kubenswrapper[4778]: > Mar 12 14:43:04 crc kubenswrapper[4778]: I0312 14:43:04.543991 4778 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mqtgd" Mar 12 14:43:04 crc kubenswrapper[4778]: I0312 14:43:04.591705 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mqtgd" Mar 12 14:43:04 crc kubenswrapper[4778]: I0312 14:43:04.784222 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mqtgd"] Mar 12 14:43:06 crc kubenswrapper[4778]: I0312 14:43:06.395918 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mqtgd" podUID="bcd4d515-136b-47e8-92ba-9e79ed98e8ec" containerName="registry-server" containerID="cri-o://dacb248b27337d412ad2a8d93ae1a608ba1411ee25700ca3e1e7a3f5ff29d631" gracePeriod=2 Mar 12 14:43:06 crc kubenswrapper[4778]: I0312 14:43:06.942101 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mqtgd" Mar 12 14:43:07 crc kubenswrapper[4778]: I0312 14:43:07.010239 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcd4d515-136b-47e8-92ba-9e79ed98e8ec-catalog-content\") pod \"bcd4d515-136b-47e8-92ba-9e79ed98e8ec\" (UID: \"bcd4d515-136b-47e8-92ba-9e79ed98e8ec\") " Mar 12 14:43:07 crc kubenswrapper[4778]: I0312 14:43:07.010531 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcd4d515-136b-47e8-92ba-9e79ed98e8ec-utilities\") pod \"bcd4d515-136b-47e8-92ba-9e79ed98e8ec\" (UID: \"bcd4d515-136b-47e8-92ba-9e79ed98e8ec\") " Mar 12 14:43:07 crc kubenswrapper[4778]: I0312 14:43:07.010654 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86g7c\" (UniqueName: \"kubernetes.io/projected/bcd4d515-136b-47e8-92ba-9e79ed98e8ec-kube-api-access-86g7c\") pod 
\"bcd4d515-136b-47e8-92ba-9e79ed98e8ec\" (UID: \"bcd4d515-136b-47e8-92ba-9e79ed98e8ec\") " Mar 12 14:43:07 crc kubenswrapper[4778]: I0312 14:43:07.012351 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcd4d515-136b-47e8-92ba-9e79ed98e8ec-utilities" (OuterVolumeSpecName: "utilities") pod "bcd4d515-136b-47e8-92ba-9e79ed98e8ec" (UID: "bcd4d515-136b-47e8-92ba-9e79ed98e8ec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:43:07 crc kubenswrapper[4778]: I0312 14:43:07.021327 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcd4d515-136b-47e8-92ba-9e79ed98e8ec-kube-api-access-86g7c" (OuterVolumeSpecName: "kube-api-access-86g7c") pod "bcd4d515-136b-47e8-92ba-9e79ed98e8ec" (UID: "bcd4d515-136b-47e8-92ba-9e79ed98e8ec"). InnerVolumeSpecName "kube-api-access-86g7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:43:07 crc kubenswrapper[4778]: I0312 14:43:07.113018 4778 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcd4d515-136b-47e8-92ba-9e79ed98e8ec-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 14:43:07 crc kubenswrapper[4778]: I0312 14:43:07.113055 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86g7c\" (UniqueName: \"kubernetes.io/projected/bcd4d515-136b-47e8-92ba-9e79ed98e8ec-kube-api-access-86g7c\") on node \"crc\" DevicePath \"\"" Mar 12 14:43:07 crc kubenswrapper[4778]: I0312 14:43:07.172963 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcd4d515-136b-47e8-92ba-9e79ed98e8ec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bcd4d515-136b-47e8-92ba-9e79ed98e8ec" (UID: "bcd4d515-136b-47e8-92ba-9e79ed98e8ec"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:43:07 crc kubenswrapper[4778]: I0312 14:43:07.214870 4778 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcd4d515-136b-47e8-92ba-9e79ed98e8ec-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 14:43:07 crc kubenswrapper[4778]: I0312 14:43:07.405559 4778 generic.go:334] "Generic (PLEG): container finished" podID="bcd4d515-136b-47e8-92ba-9e79ed98e8ec" containerID="dacb248b27337d412ad2a8d93ae1a608ba1411ee25700ca3e1e7a3f5ff29d631" exitCode=0 Mar 12 14:43:07 crc kubenswrapper[4778]: I0312 14:43:07.405619 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mqtgd" event={"ID":"bcd4d515-136b-47e8-92ba-9e79ed98e8ec","Type":"ContainerDied","Data":"dacb248b27337d412ad2a8d93ae1a608ba1411ee25700ca3e1e7a3f5ff29d631"} Mar 12 14:43:07 crc kubenswrapper[4778]: I0312 14:43:07.405645 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mqtgd" event={"ID":"bcd4d515-136b-47e8-92ba-9e79ed98e8ec","Type":"ContainerDied","Data":"f6a4fb35fc6694b18950498d5d09e6c3816748fe0f6054bbd9a78d05234e573b"} Mar 12 14:43:07 crc kubenswrapper[4778]: I0312 14:43:07.405662 4778 scope.go:117] "RemoveContainer" containerID="dacb248b27337d412ad2a8d93ae1a608ba1411ee25700ca3e1e7a3f5ff29d631" Mar 12 14:43:07 crc kubenswrapper[4778]: I0312 14:43:07.405624 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mqtgd" Mar 12 14:43:07 crc kubenswrapper[4778]: I0312 14:43:07.433915 4778 scope.go:117] "RemoveContainer" containerID="cbd788e3627ea03805a19638b323850395483e01b39b06317c94c025d0b17623" Mar 12 14:43:07 crc kubenswrapper[4778]: I0312 14:43:07.466777 4778 scope.go:117] "RemoveContainer" containerID="58941a5bd25d7707244796e0457ad9e6c2eecea41a390bebb99271f9e229c8ad" Mar 12 14:43:07 crc kubenswrapper[4778]: I0312 14:43:07.468960 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mqtgd"] Mar 12 14:43:07 crc kubenswrapper[4778]: I0312 14:43:07.480819 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mqtgd"] Mar 12 14:43:07 crc kubenswrapper[4778]: I0312 14:43:07.520792 4778 scope.go:117] "RemoveContainer" containerID="dacb248b27337d412ad2a8d93ae1a608ba1411ee25700ca3e1e7a3f5ff29d631" Mar 12 14:43:07 crc kubenswrapper[4778]: E0312 14:43:07.521277 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dacb248b27337d412ad2a8d93ae1a608ba1411ee25700ca3e1e7a3f5ff29d631\": container with ID starting with dacb248b27337d412ad2a8d93ae1a608ba1411ee25700ca3e1e7a3f5ff29d631 not found: ID does not exist" containerID="dacb248b27337d412ad2a8d93ae1a608ba1411ee25700ca3e1e7a3f5ff29d631" Mar 12 14:43:07 crc kubenswrapper[4778]: I0312 14:43:07.521405 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dacb248b27337d412ad2a8d93ae1a608ba1411ee25700ca3e1e7a3f5ff29d631"} err="failed to get container status \"dacb248b27337d412ad2a8d93ae1a608ba1411ee25700ca3e1e7a3f5ff29d631\": rpc error: code = NotFound desc = could not find container \"dacb248b27337d412ad2a8d93ae1a608ba1411ee25700ca3e1e7a3f5ff29d631\": container with ID starting with dacb248b27337d412ad2a8d93ae1a608ba1411ee25700ca3e1e7a3f5ff29d631 not found: ID does 
not exist" Mar 12 14:43:07 crc kubenswrapper[4778]: I0312 14:43:07.521529 4778 scope.go:117] "RemoveContainer" containerID="cbd788e3627ea03805a19638b323850395483e01b39b06317c94c025d0b17623" Mar 12 14:43:07 crc kubenswrapper[4778]: E0312 14:43:07.521953 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbd788e3627ea03805a19638b323850395483e01b39b06317c94c025d0b17623\": container with ID starting with cbd788e3627ea03805a19638b323850395483e01b39b06317c94c025d0b17623 not found: ID does not exist" containerID="cbd788e3627ea03805a19638b323850395483e01b39b06317c94c025d0b17623" Mar 12 14:43:07 crc kubenswrapper[4778]: I0312 14:43:07.521988 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbd788e3627ea03805a19638b323850395483e01b39b06317c94c025d0b17623"} err="failed to get container status \"cbd788e3627ea03805a19638b323850395483e01b39b06317c94c025d0b17623\": rpc error: code = NotFound desc = could not find container \"cbd788e3627ea03805a19638b323850395483e01b39b06317c94c025d0b17623\": container with ID starting with cbd788e3627ea03805a19638b323850395483e01b39b06317c94c025d0b17623 not found: ID does not exist" Mar 12 14:43:07 crc kubenswrapper[4778]: I0312 14:43:07.522011 4778 scope.go:117] "RemoveContainer" containerID="58941a5bd25d7707244796e0457ad9e6c2eecea41a390bebb99271f9e229c8ad" Mar 12 14:43:07 crc kubenswrapper[4778]: E0312 14:43:07.522236 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58941a5bd25d7707244796e0457ad9e6c2eecea41a390bebb99271f9e229c8ad\": container with ID starting with 58941a5bd25d7707244796e0457ad9e6c2eecea41a390bebb99271f9e229c8ad not found: ID does not exist" containerID="58941a5bd25d7707244796e0457ad9e6c2eecea41a390bebb99271f9e229c8ad" Mar 12 14:43:07 crc kubenswrapper[4778]: I0312 14:43:07.522267 4778 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58941a5bd25d7707244796e0457ad9e6c2eecea41a390bebb99271f9e229c8ad"} err="failed to get container status \"58941a5bd25d7707244796e0457ad9e6c2eecea41a390bebb99271f9e229c8ad\": rpc error: code = NotFound desc = could not find container \"58941a5bd25d7707244796e0457ad9e6c2eecea41a390bebb99271f9e229c8ad\": container with ID starting with 58941a5bd25d7707244796e0457ad9e6c2eecea41a390bebb99271f9e229c8ad not found: ID does not exist" Mar 12 14:43:08 crc kubenswrapper[4778]: I0312 14:43:08.268280 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcd4d515-136b-47e8-92ba-9e79ed98e8ec" path="/var/lib/kubelet/pods/bcd4d515-136b-47e8-92ba-9e79ed98e8ec/volumes" Mar 12 14:44:00 crc kubenswrapper[4778]: I0312 14:44:00.152250 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555444-qlvql"] Mar 12 14:44:00 crc kubenswrapper[4778]: E0312 14:44:00.153584 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcd4d515-136b-47e8-92ba-9e79ed98e8ec" containerName="extract-content" Mar 12 14:44:00 crc kubenswrapper[4778]: I0312 14:44:00.153603 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcd4d515-136b-47e8-92ba-9e79ed98e8ec" containerName="extract-content" Mar 12 14:44:00 crc kubenswrapper[4778]: E0312 14:44:00.153627 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcd4d515-136b-47e8-92ba-9e79ed98e8ec" containerName="extract-utilities" Mar 12 14:44:00 crc kubenswrapper[4778]: I0312 14:44:00.153636 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcd4d515-136b-47e8-92ba-9e79ed98e8ec" containerName="extract-utilities" Mar 12 14:44:00 crc kubenswrapper[4778]: E0312 14:44:00.153667 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcd4d515-136b-47e8-92ba-9e79ed98e8ec" containerName="registry-server" Mar 12 14:44:00 crc kubenswrapper[4778]: I0312 14:44:00.153678 4778 
state_mem.go:107] "Deleted CPUSet assignment" podUID="bcd4d515-136b-47e8-92ba-9e79ed98e8ec" containerName="registry-server" Mar 12 14:44:00 crc kubenswrapper[4778]: I0312 14:44:00.153948 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcd4d515-136b-47e8-92ba-9e79ed98e8ec" containerName="registry-server" Mar 12 14:44:00 crc kubenswrapper[4778]: I0312 14:44:00.154822 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555444-qlvql" Mar 12 14:44:00 crc kubenswrapper[4778]: I0312 14:44:00.158130 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 14:44:00 crc kubenswrapper[4778]: I0312 14:44:00.158607 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 14:44:00 crc kubenswrapper[4778]: I0312 14:44:00.160463 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 14:44:00 crc kubenswrapper[4778]: I0312 14:44:00.161969 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555444-qlvql"] Mar 12 14:44:00 crc kubenswrapper[4778]: I0312 14:44:00.287475 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42fdw\" (UniqueName: \"kubernetes.io/projected/0da8caed-335b-4730-a69b-b724585893f0-kube-api-access-42fdw\") pod \"auto-csr-approver-29555444-qlvql\" (UID: \"0da8caed-335b-4730-a69b-b724585893f0\") " pod="openshift-infra/auto-csr-approver-29555444-qlvql" Mar 12 14:44:00 crc kubenswrapper[4778]: I0312 14:44:00.389896 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42fdw\" (UniqueName: \"kubernetes.io/projected/0da8caed-335b-4730-a69b-b724585893f0-kube-api-access-42fdw\") pod \"auto-csr-approver-29555444-qlvql\" (UID: 
\"0da8caed-335b-4730-a69b-b724585893f0\") " pod="openshift-infra/auto-csr-approver-29555444-qlvql" Mar 12 14:44:00 crc kubenswrapper[4778]: I0312 14:44:00.412072 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42fdw\" (UniqueName: \"kubernetes.io/projected/0da8caed-335b-4730-a69b-b724585893f0-kube-api-access-42fdw\") pod \"auto-csr-approver-29555444-qlvql\" (UID: \"0da8caed-335b-4730-a69b-b724585893f0\") " pod="openshift-infra/auto-csr-approver-29555444-qlvql" Mar 12 14:44:00 crc kubenswrapper[4778]: I0312 14:44:00.484672 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555444-qlvql" Mar 12 14:44:00 crc kubenswrapper[4778]: I0312 14:44:00.935574 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555444-qlvql"] Mar 12 14:44:01 crc kubenswrapper[4778]: I0312 14:44:01.890270 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555444-qlvql" event={"ID":"0da8caed-335b-4730-a69b-b724585893f0","Type":"ContainerStarted","Data":"b6dbcb7a022a068a1c8c79cecb2f2c3a82311a6d05a961f203bb6f31cc1279bb"} Mar 12 14:44:02 crc kubenswrapper[4778]: I0312 14:44:02.899235 4778 generic.go:334] "Generic (PLEG): container finished" podID="0da8caed-335b-4730-a69b-b724585893f0" containerID="1c045d93602ba35174db559893247fb3da6916ea0a208015b77cd0f47781b091" exitCode=0 Mar 12 14:44:02 crc kubenswrapper[4778]: I0312 14:44:02.899322 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555444-qlvql" event={"ID":"0da8caed-335b-4730-a69b-b724585893f0","Type":"ContainerDied","Data":"1c045d93602ba35174db559893247fb3da6916ea0a208015b77cd0f47781b091"} Mar 12 14:44:04 crc kubenswrapper[4778]: I0312 14:44:04.296437 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555444-qlvql" Mar 12 14:44:04 crc kubenswrapper[4778]: I0312 14:44:04.370878 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42fdw\" (UniqueName: \"kubernetes.io/projected/0da8caed-335b-4730-a69b-b724585893f0-kube-api-access-42fdw\") pod \"0da8caed-335b-4730-a69b-b724585893f0\" (UID: \"0da8caed-335b-4730-a69b-b724585893f0\") " Mar 12 14:44:04 crc kubenswrapper[4778]: I0312 14:44:04.377695 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0da8caed-335b-4730-a69b-b724585893f0-kube-api-access-42fdw" (OuterVolumeSpecName: "kube-api-access-42fdw") pod "0da8caed-335b-4730-a69b-b724585893f0" (UID: "0da8caed-335b-4730-a69b-b724585893f0"). InnerVolumeSpecName "kube-api-access-42fdw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:44:04 crc kubenswrapper[4778]: I0312 14:44:04.473500 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42fdw\" (UniqueName: \"kubernetes.io/projected/0da8caed-335b-4730-a69b-b724585893f0-kube-api-access-42fdw\") on node \"crc\" DevicePath \"\"" Mar 12 14:44:04 crc kubenswrapper[4778]: I0312 14:44:04.919257 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555444-qlvql" event={"ID":"0da8caed-335b-4730-a69b-b724585893f0","Type":"ContainerDied","Data":"b6dbcb7a022a068a1c8c79cecb2f2c3a82311a6d05a961f203bb6f31cc1279bb"} Mar 12 14:44:04 crc kubenswrapper[4778]: I0312 14:44:04.919297 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6dbcb7a022a068a1c8c79cecb2f2c3a82311a6d05a961f203bb6f31cc1279bb" Mar 12 14:44:04 crc kubenswrapper[4778]: I0312 14:44:04.919581 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555444-qlvql" Mar 12 14:44:05 crc kubenswrapper[4778]: I0312 14:44:05.394523 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555438-86rh2"] Mar 12 14:44:05 crc kubenswrapper[4778]: I0312 14:44:05.406321 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555438-86rh2"] Mar 12 14:44:06 crc kubenswrapper[4778]: I0312 14:44:06.269330 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfdb06b1-1cad-4645-bf9c-1859648637ea" path="/var/lib/kubelet/pods/bfdb06b1-1cad-4645-bf9c-1859648637ea/volumes" Mar 12 14:44:23 crc kubenswrapper[4778]: I0312 14:44:23.414120 4778 scope.go:117] "RemoveContainer" containerID="a83e8d7acc08020b09631971d619c027f357278529f43520703cd7515d291f17" Mar 12 14:44:28 crc kubenswrapper[4778]: I0312 14:44:28.557579 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 14:44:28 crc kubenswrapper[4778]: I0312 14:44:28.558053 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 14:44:58 crc kubenswrapper[4778]: I0312 14:44:58.694631 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 14:44:58 crc kubenswrapper[4778]: 
I0312 14:44:58.695230 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 14:45:00 crc kubenswrapper[4778]: I0312 14:45:00.173317 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555445-h75tc"] Mar 12 14:45:00 crc kubenswrapper[4778]: E0312 14:45:00.175317 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0da8caed-335b-4730-a69b-b724585893f0" containerName="oc" Mar 12 14:45:00 crc kubenswrapper[4778]: I0312 14:45:00.175430 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="0da8caed-335b-4730-a69b-b724585893f0" containerName="oc" Mar 12 14:45:00 crc kubenswrapper[4778]: I0312 14:45:00.175794 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="0da8caed-335b-4730-a69b-b724585893f0" containerName="oc" Mar 12 14:45:00 crc kubenswrapper[4778]: I0312 14:45:00.176751 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555445-h75tc" Mar 12 14:45:00 crc kubenswrapper[4778]: I0312 14:45:00.179895 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 12 14:45:00 crc kubenswrapper[4778]: I0312 14:45:00.180301 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 12 14:45:00 crc kubenswrapper[4778]: I0312 14:45:00.186298 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555445-h75tc"] Mar 12 14:45:00 crc kubenswrapper[4778]: I0312 14:45:00.325740 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a937d3c8-3521-400c-9703-ff806cd36e1f-secret-volume\") pod \"collect-profiles-29555445-h75tc\" (UID: \"a937d3c8-3521-400c-9703-ff806cd36e1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555445-h75tc" Mar 12 14:45:00 crc kubenswrapper[4778]: I0312 14:45:00.325867 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a937d3c8-3521-400c-9703-ff806cd36e1f-config-volume\") pod \"collect-profiles-29555445-h75tc\" (UID: \"a937d3c8-3521-400c-9703-ff806cd36e1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555445-h75tc" Mar 12 14:45:00 crc kubenswrapper[4778]: I0312 14:45:00.325982 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtf5n\" (UniqueName: \"kubernetes.io/projected/a937d3c8-3521-400c-9703-ff806cd36e1f-kube-api-access-vtf5n\") pod \"collect-profiles-29555445-h75tc\" (UID: \"a937d3c8-3521-400c-9703-ff806cd36e1f\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29555445-h75tc" Mar 12 14:45:00 crc kubenswrapper[4778]: I0312 14:45:00.427437 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtf5n\" (UniqueName: \"kubernetes.io/projected/a937d3c8-3521-400c-9703-ff806cd36e1f-kube-api-access-vtf5n\") pod \"collect-profiles-29555445-h75tc\" (UID: \"a937d3c8-3521-400c-9703-ff806cd36e1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555445-h75tc" Mar 12 14:45:00 crc kubenswrapper[4778]: I0312 14:45:00.427657 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a937d3c8-3521-400c-9703-ff806cd36e1f-secret-volume\") pod \"collect-profiles-29555445-h75tc\" (UID: \"a937d3c8-3521-400c-9703-ff806cd36e1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555445-h75tc" Mar 12 14:45:00 crc kubenswrapper[4778]: I0312 14:45:00.427699 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a937d3c8-3521-400c-9703-ff806cd36e1f-config-volume\") pod \"collect-profiles-29555445-h75tc\" (UID: \"a937d3c8-3521-400c-9703-ff806cd36e1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555445-h75tc" Mar 12 14:45:00 crc kubenswrapper[4778]: I0312 14:45:00.428825 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a937d3c8-3521-400c-9703-ff806cd36e1f-config-volume\") pod \"collect-profiles-29555445-h75tc\" (UID: \"a937d3c8-3521-400c-9703-ff806cd36e1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555445-h75tc" Mar 12 14:45:00 crc kubenswrapper[4778]: I0312 14:45:00.436491 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/a937d3c8-3521-400c-9703-ff806cd36e1f-secret-volume\") pod \"collect-profiles-29555445-h75tc\" (UID: \"a937d3c8-3521-400c-9703-ff806cd36e1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555445-h75tc" Mar 12 14:45:00 crc kubenswrapper[4778]: I0312 14:45:00.444400 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtf5n\" (UniqueName: \"kubernetes.io/projected/a937d3c8-3521-400c-9703-ff806cd36e1f-kube-api-access-vtf5n\") pod \"collect-profiles-29555445-h75tc\" (UID: \"a937d3c8-3521-400c-9703-ff806cd36e1f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555445-h75tc" Mar 12 14:45:00 crc kubenswrapper[4778]: I0312 14:45:00.509076 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555445-h75tc" Mar 12 14:45:00 crc kubenswrapper[4778]: I0312 14:45:00.990570 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555445-h75tc"] Mar 12 14:45:01 crc kubenswrapper[4778]: I0312 14:45:01.436857 4778 generic.go:334] "Generic (PLEG): container finished" podID="a937d3c8-3521-400c-9703-ff806cd36e1f" containerID="04cbbe7f2d03e6a5c907c38fa94f13fc54432108a3a87359ee7c5a5ffeb55d0e" exitCode=0 Mar 12 14:45:01 crc kubenswrapper[4778]: I0312 14:45:01.436922 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555445-h75tc" event={"ID":"a937d3c8-3521-400c-9703-ff806cd36e1f","Type":"ContainerDied","Data":"04cbbe7f2d03e6a5c907c38fa94f13fc54432108a3a87359ee7c5a5ffeb55d0e"} Mar 12 14:45:01 crc kubenswrapper[4778]: I0312 14:45:01.436951 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555445-h75tc" 
event={"ID":"a937d3c8-3521-400c-9703-ff806cd36e1f","Type":"ContainerStarted","Data":"a43137fc72539a88663622748f24aaf1646eda8d5c73c48ba4c842b379097cc1"} Mar 12 14:45:02 crc kubenswrapper[4778]: I0312 14:45:02.833433 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555445-h75tc" Mar 12 14:45:02 crc kubenswrapper[4778]: I0312 14:45:02.976086 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a937d3c8-3521-400c-9703-ff806cd36e1f-secret-volume\") pod \"a937d3c8-3521-400c-9703-ff806cd36e1f\" (UID: \"a937d3c8-3521-400c-9703-ff806cd36e1f\") " Mar 12 14:45:02 crc kubenswrapper[4778]: I0312 14:45:02.976648 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a937d3c8-3521-400c-9703-ff806cd36e1f-config-volume\") pod \"a937d3c8-3521-400c-9703-ff806cd36e1f\" (UID: \"a937d3c8-3521-400c-9703-ff806cd36e1f\") " Mar 12 14:45:02 crc kubenswrapper[4778]: I0312 14:45:02.977022 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtf5n\" (UniqueName: \"kubernetes.io/projected/a937d3c8-3521-400c-9703-ff806cd36e1f-kube-api-access-vtf5n\") pod \"a937d3c8-3521-400c-9703-ff806cd36e1f\" (UID: \"a937d3c8-3521-400c-9703-ff806cd36e1f\") " Mar 12 14:45:02 crc kubenswrapper[4778]: I0312 14:45:02.977256 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a937d3c8-3521-400c-9703-ff806cd36e1f-config-volume" (OuterVolumeSpecName: "config-volume") pod "a937d3c8-3521-400c-9703-ff806cd36e1f" (UID: "a937d3c8-3521-400c-9703-ff806cd36e1f"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 14:45:02 crc kubenswrapper[4778]: I0312 14:45:02.978064 4778 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a937d3c8-3521-400c-9703-ff806cd36e1f-config-volume\") on node \"crc\" DevicePath \"\"" Mar 12 14:45:02 crc kubenswrapper[4778]: I0312 14:45:02.982121 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a937d3c8-3521-400c-9703-ff806cd36e1f-kube-api-access-vtf5n" (OuterVolumeSpecName: "kube-api-access-vtf5n") pod "a937d3c8-3521-400c-9703-ff806cd36e1f" (UID: "a937d3c8-3521-400c-9703-ff806cd36e1f"). InnerVolumeSpecName "kube-api-access-vtf5n". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:45:02 crc kubenswrapper[4778]: I0312 14:45:02.982564 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a937d3c8-3521-400c-9703-ff806cd36e1f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a937d3c8-3521-400c-9703-ff806cd36e1f" (UID: "a937d3c8-3521-400c-9703-ff806cd36e1f"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 14:45:03 crc kubenswrapper[4778]: I0312 14:45:03.080379 4778 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a937d3c8-3521-400c-9703-ff806cd36e1f-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 12 14:45:03 crc kubenswrapper[4778]: I0312 14:45:03.080420 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtf5n\" (UniqueName: \"kubernetes.io/projected/a937d3c8-3521-400c-9703-ff806cd36e1f-kube-api-access-vtf5n\") on node \"crc\" DevicePath \"\"" Mar 12 14:45:03 crc kubenswrapper[4778]: I0312 14:45:03.462638 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555445-h75tc" event={"ID":"a937d3c8-3521-400c-9703-ff806cd36e1f","Type":"ContainerDied","Data":"a43137fc72539a88663622748f24aaf1646eda8d5c73c48ba4c842b379097cc1"} Mar 12 14:45:03 crc kubenswrapper[4778]: I0312 14:45:03.462695 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a43137fc72539a88663622748f24aaf1646eda8d5c73c48ba4c842b379097cc1" Mar 12 14:45:03 crc kubenswrapper[4778]: I0312 14:45:03.462696 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555445-h75tc" Mar 12 14:45:03 crc kubenswrapper[4778]: I0312 14:45:03.923613 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555400-lrxd8"] Mar 12 14:45:03 crc kubenswrapper[4778]: I0312 14:45:03.934424 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555400-lrxd8"] Mar 12 14:45:04 crc kubenswrapper[4778]: I0312 14:45:04.266582 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d85560c-89e4-4723-beb0-aeda87d0791a" path="/var/lib/kubelet/pods/5d85560c-89e4-4723-beb0-aeda87d0791a/volumes" Mar 12 14:45:23 crc kubenswrapper[4778]: I0312 14:45:23.550111 4778 scope.go:117] "RemoveContainer" containerID="96aa4949ff208afd2c193ba8303ee15ee08731bdac5eecb0faaa4ff029a2c93a" Mar 12 14:45:28 crc kubenswrapper[4778]: I0312 14:45:28.557846 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 14:45:28 crc kubenswrapper[4778]: I0312 14:45:28.558551 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 14:45:28 crc kubenswrapper[4778]: I0312 14:45:28.558637 4778 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" Mar 12 14:45:28 crc kubenswrapper[4778]: I0312 14:45:28.559850 4778 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e714113346a3db81a8ab4456acd91be95b7042ec696820890f89fb14190436c4"} pod="openshift-machine-config-operator/machine-config-daemon-2qx88" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 14:45:28 crc kubenswrapper[4778]: I0312 14:45:28.559972 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" containerID="cri-o://e714113346a3db81a8ab4456acd91be95b7042ec696820890f89fb14190436c4" gracePeriod=600 Mar 12 14:45:28 crc kubenswrapper[4778]: E0312 14:45:28.689636 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:45:28 crc kubenswrapper[4778]: I0312 14:45:28.732983 4778 generic.go:334] "Generic (PLEG): container finished" podID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerID="e714113346a3db81a8ab4456acd91be95b7042ec696820890f89fb14190436c4" exitCode=0 Mar 12 14:45:28 crc kubenswrapper[4778]: I0312 14:45:28.733028 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" event={"ID":"24438fc6-dab0-4a9e-8b97-2532da76d9cd","Type":"ContainerDied","Data":"e714113346a3db81a8ab4456acd91be95b7042ec696820890f89fb14190436c4"} Mar 12 14:45:28 crc kubenswrapper[4778]: I0312 14:45:28.733094 4778 scope.go:117] "RemoveContainer" containerID="7db21caa41aaa05f213157a5fdd43b948f849acb385674235e08738a115a03fb" Mar 12 14:45:28 crc 
kubenswrapper[4778]: I0312 14:45:28.733622 4778 scope.go:117] "RemoveContainer" containerID="e714113346a3db81a8ab4456acd91be95b7042ec696820890f89fb14190436c4" Mar 12 14:45:28 crc kubenswrapper[4778]: E0312 14:45:28.733907 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:45:43 crc kubenswrapper[4778]: I0312 14:45:43.254564 4778 scope.go:117] "RemoveContainer" containerID="e714113346a3db81a8ab4456acd91be95b7042ec696820890f89fb14190436c4" Mar 12 14:45:43 crc kubenswrapper[4778]: E0312 14:45:43.255823 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:45:54 crc kubenswrapper[4778]: I0312 14:45:54.253784 4778 scope.go:117] "RemoveContainer" containerID="e714113346a3db81a8ab4456acd91be95b7042ec696820890f89fb14190436c4" Mar 12 14:45:54 crc kubenswrapper[4778]: E0312 14:45:54.254615 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 
12 14:46:00 crc kubenswrapper[4778]: I0312 14:46:00.151145 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555446-8n794"] Mar 12 14:46:00 crc kubenswrapper[4778]: E0312 14:46:00.152126 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a937d3c8-3521-400c-9703-ff806cd36e1f" containerName="collect-profiles" Mar 12 14:46:00 crc kubenswrapper[4778]: I0312 14:46:00.152141 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="a937d3c8-3521-400c-9703-ff806cd36e1f" containerName="collect-profiles" Mar 12 14:46:00 crc kubenswrapper[4778]: I0312 14:46:00.152364 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="a937d3c8-3521-400c-9703-ff806cd36e1f" containerName="collect-profiles" Mar 12 14:46:00 crc kubenswrapper[4778]: I0312 14:46:00.152943 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555446-8n794" Mar 12 14:46:00 crc kubenswrapper[4778]: I0312 14:46:00.155716 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 14:46:00 crc kubenswrapper[4778]: I0312 14:46:00.158728 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 14:46:00 crc kubenswrapper[4778]: I0312 14:46:00.158855 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 14:46:00 crc kubenswrapper[4778]: I0312 14:46:00.164712 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555446-8n794"] Mar 12 14:46:00 crc kubenswrapper[4778]: I0312 14:46:00.218621 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8dxf\" (UniqueName: \"kubernetes.io/projected/39c23ecc-c75b-450b-a8ff-351acf5384eb-kube-api-access-b8dxf\") pod \"auto-csr-approver-29555446-8n794\" 
(UID: \"39c23ecc-c75b-450b-a8ff-351acf5384eb\") " pod="openshift-infra/auto-csr-approver-29555446-8n794" Mar 12 14:46:00 crc kubenswrapper[4778]: I0312 14:46:00.321175 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8dxf\" (UniqueName: \"kubernetes.io/projected/39c23ecc-c75b-450b-a8ff-351acf5384eb-kube-api-access-b8dxf\") pod \"auto-csr-approver-29555446-8n794\" (UID: \"39c23ecc-c75b-450b-a8ff-351acf5384eb\") " pod="openshift-infra/auto-csr-approver-29555446-8n794" Mar 12 14:46:00 crc kubenswrapper[4778]: I0312 14:46:00.348038 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8dxf\" (UniqueName: \"kubernetes.io/projected/39c23ecc-c75b-450b-a8ff-351acf5384eb-kube-api-access-b8dxf\") pod \"auto-csr-approver-29555446-8n794\" (UID: \"39c23ecc-c75b-450b-a8ff-351acf5384eb\") " pod="openshift-infra/auto-csr-approver-29555446-8n794" Mar 12 14:46:00 crc kubenswrapper[4778]: I0312 14:46:00.475082 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555446-8n794" Mar 12 14:46:00 crc kubenswrapper[4778]: I0312 14:46:00.925115 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555446-8n794"] Mar 12 14:46:01 crc kubenswrapper[4778]: I0312 14:46:01.055512 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555446-8n794" event={"ID":"39c23ecc-c75b-450b-a8ff-351acf5384eb","Type":"ContainerStarted","Data":"595d06f338bb91adb2e450722d1ef8c57ba66b32da1700432f1c19563c6812ef"} Mar 12 14:46:03 crc kubenswrapper[4778]: I0312 14:46:03.074430 4778 generic.go:334] "Generic (PLEG): container finished" podID="39c23ecc-c75b-450b-a8ff-351acf5384eb" containerID="7fa3212d8016436bf15f0a3b9362ee13f653d69a236beda9f2c5ab4f28324438" exitCode=0 Mar 12 14:46:03 crc kubenswrapper[4778]: I0312 14:46:03.074490 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555446-8n794" event={"ID":"39c23ecc-c75b-450b-a8ff-351acf5384eb","Type":"ContainerDied","Data":"7fa3212d8016436bf15f0a3b9362ee13f653d69a236beda9f2c5ab4f28324438"} Mar 12 14:46:04 crc kubenswrapper[4778]: I0312 14:46:04.465165 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555446-8n794" Mar 12 14:46:04 crc kubenswrapper[4778]: I0312 14:46:04.501086 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8dxf\" (UniqueName: \"kubernetes.io/projected/39c23ecc-c75b-450b-a8ff-351acf5384eb-kube-api-access-b8dxf\") pod \"39c23ecc-c75b-450b-a8ff-351acf5384eb\" (UID: \"39c23ecc-c75b-450b-a8ff-351acf5384eb\") " Mar 12 14:46:04 crc kubenswrapper[4778]: I0312 14:46:04.509627 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39c23ecc-c75b-450b-a8ff-351acf5384eb-kube-api-access-b8dxf" (OuterVolumeSpecName: "kube-api-access-b8dxf") pod "39c23ecc-c75b-450b-a8ff-351acf5384eb" (UID: "39c23ecc-c75b-450b-a8ff-351acf5384eb"). InnerVolumeSpecName "kube-api-access-b8dxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:46:04 crc kubenswrapper[4778]: I0312 14:46:04.604117 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8dxf\" (UniqueName: \"kubernetes.io/projected/39c23ecc-c75b-450b-a8ff-351acf5384eb-kube-api-access-b8dxf\") on node \"crc\" DevicePath \"\"" Mar 12 14:46:05 crc kubenswrapper[4778]: I0312 14:46:05.093830 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555446-8n794" event={"ID":"39c23ecc-c75b-450b-a8ff-351acf5384eb","Type":"ContainerDied","Data":"595d06f338bb91adb2e450722d1ef8c57ba66b32da1700432f1c19563c6812ef"} Mar 12 14:46:05 crc kubenswrapper[4778]: I0312 14:46:05.094225 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="595d06f338bb91adb2e450722d1ef8c57ba66b32da1700432f1c19563c6812ef" Mar 12 14:46:05 crc kubenswrapper[4778]: I0312 14:46:05.093876 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555446-8n794" Mar 12 14:46:05 crc kubenswrapper[4778]: I0312 14:46:05.556169 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555440-prm5s"] Mar 12 14:46:05 crc kubenswrapper[4778]: I0312 14:46:05.568588 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555440-prm5s"] Mar 12 14:46:06 crc kubenswrapper[4778]: I0312 14:46:06.268839 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19be98d0-a8c0-4e30-926a-3ac799c6b576" path="/var/lib/kubelet/pods/19be98d0-a8c0-4e30-926a-3ac799c6b576/volumes" Mar 12 14:46:07 crc kubenswrapper[4778]: I0312 14:46:07.254068 4778 scope.go:117] "RemoveContainer" containerID="e714113346a3db81a8ab4456acd91be95b7042ec696820890f89fb14190436c4" Mar 12 14:46:07 crc kubenswrapper[4778]: E0312 14:46:07.254629 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:46:22 crc kubenswrapper[4778]: I0312 14:46:22.254529 4778 scope.go:117] "RemoveContainer" containerID="e714113346a3db81a8ab4456acd91be95b7042ec696820890f89fb14190436c4" Mar 12 14:46:22 crc kubenswrapper[4778]: E0312 14:46:22.255420 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" 
podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:46:23 crc kubenswrapper[4778]: I0312 14:46:23.647211 4778 scope.go:117] "RemoveContainer" containerID="e11c1ed077d226d809927c1b3f1c2bde51c60c0a0093049ef2bd6c0eb78accb3" Mar 12 14:46:36 crc kubenswrapper[4778]: I0312 14:46:36.254825 4778 scope.go:117] "RemoveContainer" containerID="e714113346a3db81a8ab4456acd91be95b7042ec696820890f89fb14190436c4" Mar 12 14:46:36 crc kubenswrapper[4778]: E0312 14:46:36.255608 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:46:38 crc kubenswrapper[4778]: I0312 14:46:38.015906 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wd89s"] Mar 12 14:46:38 crc kubenswrapper[4778]: E0312 14:46:38.016576 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39c23ecc-c75b-450b-a8ff-351acf5384eb" containerName="oc" Mar 12 14:46:38 crc kubenswrapper[4778]: I0312 14:46:38.016592 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="39c23ecc-c75b-450b-a8ff-351acf5384eb" containerName="oc" Mar 12 14:46:38 crc kubenswrapper[4778]: I0312 14:46:38.016823 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="39c23ecc-c75b-450b-a8ff-351acf5384eb" containerName="oc" Mar 12 14:46:38 crc kubenswrapper[4778]: I0312 14:46:38.018617 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wd89s" Mar 12 14:46:38 crc kubenswrapper[4778]: I0312 14:46:38.038618 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wd89s"] Mar 12 14:46:38 crc kubenswrapper[4778]: I0312 14:46:38.116101 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55405211-a853-42e1-b97d-f876cfe8571e-catalog-content\") pod \"community-operators-wd89s\" (UID: \"55405211-a853-42e1-b97d-f876cfe8571e\") " pod="openshift-marketplace/community-operators-wd89s" Mar 12 14:46:38 crc kubenswrapper[4778]: I0312 14:46:38.116225 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55405211-a853-42e1-b97d-f876cfe8571e-utilities\") pod \"community-operators-wd89s\" (UID: \"55405211-a853-42e1-b97d-f876cfe8571e\") " pod="openshift-marketplace/community-operators-wd89s" Mar 12 14:46:38 crc kubenswrapper[4778]: I0312 14:46:38.116440 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt7jp\" (UniqueName: \"kubernetes.io/projected/55405211-a853-42e1-b97d-f876cfe8571e-kube-api-access-tt7jp\") pod \"community-operators-wd89s\" (UID: \"55405211-a853-42e1-b97d-f876cfe8571e\") " pod="openshift-marketplace/community-operators-wd89s" Mar 12 14:46:38 crc kubenswrapper[4778]: I0312 14:46:38.217738 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55405211-a853-42e1-b97d-f876cfe8571e-utilities\") pod \"community-operators-wd89s\" (UID: \"55405211-a853-42e1-b97d-f876cfe8571e\") " pod="openshift-marketplace/community-operators-wd89s" Mar 12 14:46:38 crc kubenswrapper[4778]: I0312 14:46:38.218101 4778 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-tt7jp\" (UniqueName: \"kubernetes.io/projected/55405211-a853-42e1-b97d-f876cfe8571e-kube-api-access-tt7jp\") pod \"community-operators-wd89s\" (UID: \"55405211-a853-42e1-b97d-f876cfe8571e\") " pod="openshift-marketplace/community-operators-wd89s" Mar 12 14:46:38 crc kubenswrapper[4778]: I0312 14:46:38.218361 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55405211-a853-42e1-b97d-f876cfe8571e-catalog-content\") pod \"community-operators-wd89s\" (UID: \"55405211-a853-42e1-b97d-f876cfe8571e\") " pod="openshift-marketplace/community-operators-wd89s" Mar 12 14:46:38 crc kubenswrapper[4778]: I0312 14:46:38.218446 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55405211-a853-42e1-b97d-f876cfe8571e-utilities\") pod \"community-operators-wd89s\" (UID: \"55405211-a853-42e1-b97d-f876cfe8571e\") " pod="openshift-marketplace/community-operators-wd89s" Mar 12 14:46:38 crc kubenswrapper[4778]: I0312 14:46:38.218688 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55405211-a853-42e1-b97d-f876cfe8571e-catalog-content\") pod \"community-operators-wd89s\" (UID: \"55405211-a853-42e1-b97d-f876cfe8571e\") " pod="openshift-marketplace/community-operators-wd89s" Mar 12 14:46:38 crc kubenswrapper[4778]: I0312 14:46:38.241165 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tt7jp\" (UniqueName: \"kubernetes.io/projected/55405211-a853-42e1-b97d-f876cfe8571e-kube-api-access-tt7jp\") pod \"community-operators-wd89s\" (UID: \"55405211-a853-42e1-b97d-f876cfe8571e\") " pod="openshift-marketplace/community-operators-wd89s" Mar 12 14:46:38 crc kubenswrapper[4778]: I0312 14:46:38.338787 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wd89s" Mar 12 14:46:38 crc kubenswrapper[4778]: I0312 14:46:38.892097 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wd89s"] Mar 12 14:46:39 crc kubenswrapper[4778]: I0312 14:46:39.383706 4778 generic.go:334] "Generic (PLEG): container finished" podID="55405211-a853-42e1-b97d-f876cfe8571e" containerID="e5e5916972ff38e811789e88f3b01f57b526d86d6bbc87acdd5636aadde5a15f" exitCode=0 Mar 12 14:46:39 crc kubenswrapper[4778]: I0312 14:46:39.383926 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wd89s" event={"ID":"55405211-a853-42e1-b97d-f876cfe8571e","Type":"ContainerDied","Data":"e5e5916972ff38e811789e88f3b01f57b526d86d6bbc87acdd5636aadde5a15f"} Mar 12 14:46:39 crc kubenswrapper[4778]: I0312 14:46:39.383978 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wd89s" event={"ID":"55405211-a853-42e1-b97d-f876cfe8571e","Type":"ContainerStarted","Data":"e21a0b422cf52b7acf33c1dc0980a93ccaf4f052a0f0d2e75d0729ce68636faa"} Mar 12 14:46:41 crc kubenswrapper[4778]: I0312 14:46:41.402158 4778 generic.go:334] "Generic (PLEG): container finished" podID="55405211-a853-42e1-b97d-f876cfe8571e" containerID="267c4937eac5a95221a52aee7d8893da6636a3f2179067570591e31e2d3e940d" exitCode=0 Mar 12 14:46:41 crc kubenswrapper[4778]: I0312 14:46:41.402216 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wd89s" event={"ID":"55405211-a853-42e1-b97d-f876cfe8571e","Type":"ContainerDied","Data":"267c4937eac5a95221a52aee7d8893da6636a3f2179067570591e31e2d3e940d"} Mar 12 14:46:42 crc kubenswrapper[4778]: I0312 14:46:42.414288 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wd89s" 
event={"ID":"55405211-a853-42e1-b97d-f876cfe8571e","Type":"ContainerStarted","Data":"66a2cd8cc923470afb472e47dd40e5182ac80ecfec264a4d0f93ac2f50632174"} Mar 12 14:46:42 crc kubenswrapper[4778]: I0312 14:46:42.435326 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wd89s" podStartSLOduration=2.862101639 podStartE2EDuration="5.435311607s" podCreationTimestamp="2026-03-12 14:46:37 +0000 UTC" firstStartedPulling="2026-03-12 14:46:39.385927809 +0000 UTC m=+5817.834623205" lastFinishedPulling="2026-03-12 14:46:41.959137777 +0000 UTC m=+5820.407833173" observedRunningTime="2026-03-12 14:46:42.43083781 +0000 UTC m=+5820.879533216" watchObservedRunningTime="2026-03-12 14:46:42.435311607 +0000 UTC m=+5820.884007003" Mar 12 14:46:48 crc kubenswrapper[4778]: I0312 14:46:48.338933 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wd89s" Mar 12 14:46:48 crc kubenswrapper[4778]: I0312 14:46:48.339482 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wd89s" Mar 12 14:46:48 crc kubenswrapper[4778]: I0312 14:46:48.400291 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wd89s" Mar 12 14:46:48 crc kubenswrapper[4778]: I0312 14:46:48.580039 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wd89s" Mar 12 14:46:48 crc kubenswrapper[4778]: I0312 14:46:48.653580 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wd89s"] Mar 12 14:46:49 crc kubenswrapper[4778]: I0312 14:46:49.253546 4778 scope.go:117] "RemoveContainer" containerID="e714113346a3db81a8ab4456acd91be95b7042ec696820890f89fb14190436c4" Mar 12 14:46:49 crc kubenswrapper[4778]: E0312 14:46:49.254024 4778 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:46:50 crc kubenswrapper[4778]: I0312 14:46:50.480406 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wd89s" podUID="55405211-a853-42e1-b97d-f876cfe8571e" containerName="registry-server" containerID="cri-o://66a2cd8cc923470afb472e47dd40e5182ac80ecfec264a4d0f93ac2f50632174" gracePeriod=2 Mar 12 14:46:50 crc kubenswrapper[4778]: I0312 14:46:50.946163 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wd89s" Mar 12 14:46:51 crc kubenswrapper[4778]: I0312 14:46:51.100826 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55405211-a853-42e1-b97d-f876cfe8571e-catalog-content\") pod \"55405211-a853-42e1-b97d-f876cfe8571e\" (UID: \"55405211-a853-42e1-b97d-f876cfe8571e\") " Mar 12 14:46:51 crc kubenswrapper[4778]: I0312 14:46:51.104489 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tt7jp\" (UniqueName: \"kubernetes.io/projected/55405211-a853-42e1-b97d-f876cfe8571e-kube-api-access-tt7jp\") pod \"55405211-a853-42e1-b97d-f876cfe8571e\" (UID: \"55405211-a853-42e1-b97d-f876cfe8571e\") " Mar 12 14:46:51 crc kubenswrapper[4778]: I0312 14:46:51.104663 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55405211-a853-42e1-b97d-f876cfe8571e-utilities\") pod \"55405211-a853-42e1-b97d-f876cfe8571e\" (UID: 
\"55405211-a853-42e1-b97d-f876cfe8571e\") " Mar 12 14:46:51 crc kubenswrapper[4778]: I0312 14:46:51.105278 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55405211-a853-42e1-b97d-f876cfe8571e-utilities" (OuterVolumeSpecName: "utilities") pod "55405211-a853-42e1-b97d-f876cfe8571e" (UID: "55405211-a853-42e1-b97d-f876cfe8571e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:46:51 crc kubenswrapper[4778]: I0312 14:46:51.111419 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55405211-a853-42e1-b97d-f876cfe8571e-kube-api-access-tt7jp" (OuterVolumeSpecName: "kube-api-access-tt7jp") pod "55405211-a853-42e1-b97d-f876cfe8571e" (UID: "55405211-a853-42e1-b97d-f876cfe8571e"). InnerVolumeSpecName "kube-api-access-tt7jp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:46:51 crc kubenswrapper[4778]: I0312 14:46:51.207586 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tt7jp\" (UniqueName: \"kubernetes.io/projected/55405211-a853-42e1-b97d-f876cfe8571e-kube-api-access-tt7jp\") on node \"crc\" DevicePath \"\"" Mar 12 14:46:51 crc kubenswrapper[4778]: I0312 14:46:51.207626 4778 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55405211-a853-42e1-b97d-f876cfe8571e-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 14:46:51 crc kubenswrapper[4778]: I0312 14:46:51.322094 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55405211-a853-42e1-b97d-f876cfe8571e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "55405211-a853-42e1-b97d-f876cfe8571e" (UID: "55405211-a853-42e1-b97d-f876cfe8571e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:46:51 crc kubenswrapper[4778]: I0312 14:46:51.412953 4778 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55405211-a853-42e1-b97d-f876cfe8571e-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 14:46:51 crc kubenswrapper[4778]: I0312 14:46:51.492527 4778 generic.go:334] "Generic (PLEG): container finished" podID="55405211-a853-42e1-b97d-f876cfe8571e" containerID="66a2cd8cc923470afb472e47dd40e5182ac80ecfec264a4d0f93ac2f50632174" exitCode=0 Mar 12 14:46:51 crc kubenswrapper[4778]: I0312 14:46:51.492576 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wd89s" event={"ID":"55405211-a853-42e1-b97d-f876cfe8571e","Type":"ContainerDied","Data":"66a2cd8cc923470afb472e47dd40e5182ac80ecfec264a4d0f93ac2f50632174"} Mar 12 14:46:51 crc kubenswrapper[4778]: I0312 14:46:51.492611 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wd89s" event={"ID":"55405211-a853-42e1-b97d-f876cfe8571e","Type":"ContainerDied","Data":"e21a0b422cf52b7acf33c1dc0980a93ccaf4f052a0f0d2e75d0729ce68636faa"} Mar 12 14:46:51 crc kubenswrapper[4778]: I0312 14:46:51.492639 4778 scope.go:117] "RemoveContainer" containerID="66a2cd8cc923470afb472e47dd40e5182ac80ecfec264a4d0f93ac2f50632174" Mar 12 14:46:51 crc kubenswrapper[4778]: I0312 14:46:51.492799 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wd89s" Mar 12 14:46:51 crc kubenswrapper[4778]: I0312 14:46:51.515170 4778 scope.go:117] "RemoveContainer" containerID="267c4937eac5a95221a52aee7d8893da6636a3f2179067570591e31e2d3e940d" Mar 12 14:46:51 crc kubenswrapper[4778]: I0312 14:46:51.541576 4778 scope.go:117] "RemoveContainer" containerID="e5e5916972ff38e811789e88f3b01f57b526d86d6bbc87acdd5636aadde5a15f" Mar 12 14:46:51 crc kubenswrapper[4778]: I0312 14:46:51.545920 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wd89s"] Mar 12 14:46:51 crc kubenswrapper[4778]: I0312 14:46:51.558858 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wd89s"] Mar 12 14:46:51 crc kubenswrapper[4778]: I0312 14:46:51.591083 4778 scope.go:117] "RemoveContainer" containerID="66a2cd8cc923470afb472e47dd40e5182ac80ecfec264a4d0f93ac2f50632174" Mar 12 14:46:51 crc kubenswrapper[4778]: E0312 14:46:51.591808 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66a2cd8cc923470afb472e47dd40e5182ac80ecfec264a4d0f93ac2f50632174\": container with ID starting with 66a2cd8cc923470afb472e47dd40e5182ac80ecfec264a4d0f93ac2f50632174 not found: ID does not exist" containerID="66a2cd8cc923470afb472e47dd40e5182ac80ecfec264a4d0f93ac2f50632174" Mar 12 14:46:51 crc kubenswrapper[4778]: I0312 14:46:51.591849 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66a2cd8cc923470afb472e47dd40e5182ac80ecfec264a4d0f93ac2f50632174"} err="failed to get container status \"66a2cd8cc923470afb472e47dd40e5182ac80ecfec264a4d0f93ac2f50632174\": rpc error: code = NotFound desc = could not find container \"66a2cd8cc923470afb472e47dd40e5182ac80ecfec264a4d0f93ac2f50632174\": container with ID starting with 66a2cd8cc923470afb472e47dd40e5182ac80ecfec264a4d0f93ac2f50632174 not 
found: ID does not exist" Mar 12 14:46:51 crc kubenswrapper[4778]: I0312 14:46:51.591878 4778 scope.go:117] "RemoveContainer" containerID="267c4937eac5a95221a52aee7d8893da6636a3f2179067570591e31e2d3e940d" Mar 12 14:46:51 crc kubenswrapper[4778]: E0312 14:46:51.592098 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"267c4937eac5a95221a52aee7d8893da6636a3f2179067570591e31e2d3e940d\": container with ID starting with 267c4937eac5a95221a52aee7d8893da6636a3f2179067570591e31e2d3e940d not found: ID does not exist" containerID="267c4937eac5a95221a52aee7d8893da6636a3f2179067570591e31e2d3e940d" Mar 12 14:46:51 crc kubenswrapper[4778]: I0312 14:46:51.592124 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"267c4937eac5a95221a52aee7d8893da6636a3f2179067570591e31e2d3e940d"} err="failed to get container status \"267c4937eac5a95221a52aee7d8893da6636a3f2179067570591e31e2d3e940d\": rpc error: code = NotFound desc = could not find container \"267c4937eac5a95221a52aee7d8893da6636a3f2179067570591e31e2d3e940d\": container with ID starting with 267c4937eac5a95221a52aee7d8893da6636a3f2179067570591e31e2d3e940d not found: ID does not exist" Mar 12 14:46:51 crc kubenswrapper[4778]: I0312 14:46:51.592138 4778 scope.go:117] "RemoveContainer" containerID="e5e5916972ff38e811789e88f3b01f57b526d86d6bbc87acdd5636aadde5a15f" Mar 12 14:46:51 crc kubenswrapper[4778]: E0312 14:46:51.592544 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5e5916972ff38e811789e88f3b01f57b526d86d6bbc87acdd5636aadde5a15f\": container with ID starting with e5e5916972ff38e811789e88f3b01f57b526d86d6bbc87acdd5636aadde5a15f not found: ID does not exist" containerID="e5e5916972ff38e811789e88f3b01f57b526d86d6bbc87acdd5636aadde5a15f" Mar 12 14:46:51 crc kubenswrapper[4778]: I0312 14:46:51.592574 4778 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5e5916972ff38e811789e88f3b01f57b526d86d6bbc87acdd5636aadde5a15f"} err="failed to get container status \"e5e5916972ff38e811789e88f3b01f57b526d86d6bbc87acdd5636aadde5a15f\": rpc error: code = NotFound desc = could not find container \"e5e5916972ff38e811789e88f3b01f57b526d86d6bbc87acdd5636aadde5a15f\": container with ID starting with e5e5916972ff38e811789e88f3b01f57b526d86d6bbc87acdd5636aadde5a15f not found: ID does not exist" Mar 12 14:46:52 crc kubenswrapper[4778]: I0312 14:46:52.273369 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55405211-a853-42e1-b97d-f876cfe8571e" path="/var/lib/kubelet/pods/55405211-a853-42e1-b97d-f876cfe8571e/volumes" Mar 12 14:47:00 crc kubenswrapper[4778]: I0312 14:47:00.254612 4778 scope.go:117] "RemoveContainer" containerID="e714113346a3db81a8ab4456acd91be95b7042ec696820890f89fb14190436c4" Mar 12 14:47:00 crc kubenswrapper[4778]: E0312 14:47:00.255565 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:47:12 crc kubenswrapper[4778]: I0312 14:47:12.261485 4778 scope.go:117] "RemoveContainer" containerID="e714113346a3db81a8ab4456acd91be95b7042ec696820890f89fb14190436c4" Mar 12 14:47:12 crc kubenswrapper[4778]: E0312 14:47:12.262747 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:47:13 crc kubenswrapper[4778]: I0312 14:47:13.366683 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fsnnv"] Mar 12 14:47:13 crc kubenswrapper[4778]: E0312 14:47:13.367301 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55405211-a853-42e1-b97d-f876cfe8571e" containerName="extract-utilities" Mar 12 14:47:13 crc kubenswrapper[4778]: I0312 14:47:13.367325 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="55405211-a853-42e1-b97d-f876cfe8571e" containerName="extract-utilities" Mar 12 14:47:13 crc kubenswrapper[4778]: E0312 14:47:13.367378 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55405211-a853-42e1-b97d-f876cfe8571e" containerName="extract-content" Mar 12 14:47:13 crc kubenswrapper[4778]: I0312 14:47:13.367391 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="55405211-a853-42e1-b97d-f876cfe8571e" containerName="extract-content" Mar 12 14:47:13 crc kubenswrapper[4778]: E0312 14:47:13.367416 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55405211-a853-42e1-b97d-f876cfe8571e" containerName="registry-server" Mar 12 14:47:13 crc kubenswrapper[4778]: I0312 14:47:13.367430 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="55405211-a853-42e1-b97d-f876cfe8571e" containerName="registry-server" Mar 12 14:47:13 crc kubenswrapper[4778]: I0312 14:47:13.367732 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="55405211-a853-42e1-b97d-f876cfe8571e" containerName="registry-server" Mar 12 14:47:13 crc kubenswrapper[4778]: I0312 14:47:13.369923 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fsnnv" Mar 12 14:47:13 crc kubenswrapper[4778]: I0312 14:47:13.389919 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fsnnv"] Mar 12 14:47:13 crc kubenswrapper[4778]: I0312 14:47:13.446321 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26c2f0ef-3580-429e-8fc5-3ece50c4f023-catalog-content\") pod \"certified-operators-fsnnv\" (UID: \"26c2f0ef-3580-429e-8fc5-3ece50c4f023\") " pod="openshift-marketplace/certified-operators-fsnnv" Mar 12 14:47:13 crc kubenswrapper[4778]: I0312 14:47:13.446598 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfhbb\" (UniqueName: \"kubernetes.io/projected/26c2f0ef-3580-429e-8fc5-3ece50c4f023-kube-api-access-rfhbb\") pod \"certified-operators-fsnnv\" (UID: \"26c2f0ef-3580-429e-8fc5-3ece50c4f023\") " pod="openshift-marketplace/certified-operators-fsnnv" Mar 12 14:47:13 crc kubenswrapper[4778]: I0312 14:47:13.446862 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26c2f0ef-3580-429e-8fc5-3ece50c4f023-utilities\") pod \"certified-operators-fsnnv\" (UID: \"26c2f0ef-3580-429e-8fc5-3ece50c4f023\") " pod="openshift-marketplace/certified-operators-fsnnv" Mar 12 14:47:13 crc kubenswrapper[4778]: I0312 14:47:13.549107 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26c2f0ef-3580-429e-8fc5-3ece50c4f023-utilities\") pod \"certified-operators-fsnnv\" (UID: \"26c2f0ef-3580-429e-8fc5-3ece50c4f023\") " pod="openshift-marketplace/certified-operators-fsnnv" Mar 12 14:47:13 crc kubenswrapper[4778]: I0312 14:47:13.549568 4778 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26c2f0ef-3580-429e-8fc5-3ece50c4f023-catalog-content\") pod \"certified-operators-fsnnv\" (UID: \"26c2f0ef-3580-429e-8fc5-3ece50c4f023\") " pod="openshift-marketplace/certified-operators-fsnnv" Mar 12 14:47:13 crc kubenswrapper[4778]: I0312 14:47:13.549689 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfhbb\" (UniqueName: \"kubernetes.io/projected/26c2f0ef-3580-429e-8fc5-3ece50c4f023-kube-api-access-rfhbb\") pod \"certified-operators-fsnnv\" (UID: \"26c2f0ef-3580-429e-8fc5-3ece50c4f023\") " pod="openshift-marketplace/certified-operators-fsnnv" Mar 12 14:47:13 crc kubenswrapper[4778]: I0312 14:47:13.549569 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26c2f0ef-3580-429e-8fc5-3ece50c4f023-utilities\") pod \"certified-operators-fsnnv\" (UID: \"26c2f0ef-3580-429e-8fc5-3ece50c4f023\") " pod="openshift-marketplace/certified-operators-fsnnv" Mar 12 14:47:13 crc kubenswrapper[4778]: I0312 14:47:13.549958 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26c2f0ef-3580-429e-8fc5-3ece50c4f023-catalog-content\") pod \"certified-operators-fsnnv\" (UID: \"26c2f0ef-3580-429e-8fc5-3ece50c4f023\") " pod="openshift-marketplace/certified-operators-fsnnv" Mar 12 14:47:13 crc kubenswrapper[4778]: I0312 14:47:13.569250 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfhbb\" (UniqueName: \"kubernetes.io/projected/26c2f0ef-3580-429e-8fc5-3ece50c4f023-kube-api-access-rfhbb\") pod \"certified-operators-fsnnv\" (UID: \"26c2f0ef-3580-429e-8fc5-3ece50c4f023\") " pod="openshift-marketplace/certified-operators-fsnnv" Mar 12 14:47:13 crc kubenswrapper[4778]: I0312 14:47:13.714482 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fsnnv" Mar 12 14:47:14 crc kubenswrapper[4778]: I0312 14:47:14.198716 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fsnnv"] Mar 12 14:47:14 crc kubenswrapper[4778]: I0312 14:47:14.703097 4778 generic.go:334] "Generic (PLEG): container finished" podID="26c2f0ef-3580-429e-8fc5-3ece50c4f023" containerID="231eb24c5f060d0284f56a7b47085c3333eabe57f70491b0a128a5966c6669b1" exitCode=0 Mar 12 14:47:14 crc kubenswrapper[4778]: I0312 14:47:14.703142 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsnnv" event={"ID":"26c2f0ef-3580-429e-8fc5-3ece50c4f023","Type":"ContainerDied","Data":"231eb24c5f060d0284f56a7b47085c3333eabe57f70491b0a128a5966c6669b1"} Mar 12 14:47:14 crc kubenswrapper[4778]: I0312 14:47:14.703431 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsnnv" event={"ID":"26c2f0ef-3580-429e-8fc5-3ece50c4f023","Type":"ContainerStarted","Data":"8b1acbfd6f53aab1c6cf025f822a3ebe1171651a9a7c34804adfad48d82717bc"} Mar 12 14:47:14 crc kubenswrapper[4778]: I0312 14:47:14.705040 4778 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 14:47:15 crc kubenswrapper[4778]: I0312 14:47:15.724869 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsnnv" event={"ID":"26c2f0ef-3580-429e-8fc5-3ece50c4f023","Type":"ContainerStarted","Data":"b2b21cf4b6fbb46f6ba37f27234667ec7d7d7258895ed6cd412a23268a2bbaca"} Mar 12 14:47:17 crc kubenswrapper[4778]: I0312 14:47:17.754130 4778 generic.go:334] "Generic (PLEG): container finished" podID="26c2f0ef-3580-429e-8fc5-3ece50c4f023" containerID="b2b21cf4b6fbb46f6ba37f27234667ec7d7d7258895ed6cd412a23268a2bbaca" exitCode=0 Mar 12 14:47:17 crc kubenswrapper[4778]: I0312 14:47:17.754207 4778 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-fsnnv" event={"ID":"26c2f0ef-3580-429e-8fc5-3ece50c4f023","Type":"ContainerDied","Data":"b2b21cf4b6fbb46f6ba37f27234667ec7d7d7258895ed6cd412a23268a2bbaca"} Mar 12 14:47:18 crc kubenswrapper[4778]: I0312 14:47:18.764882 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsnnv" event={"ID":"26c2f0ef-3580-429e-8fc5-3ece50c4f023","Type":"ContainerStarted","Data":"c72b4f2bff4ba063399bb11f18084a9908fb796531a42d291dab6f2ebe39f85b"} Mar 12 14:47:18 crc kubenswrapper[4778]: I0312 14:47:18.786128 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fsnnv" podStartSLOduration=2.322515064 podStartE2EDuration="5.786104129s" podCreationTimestamp="2026-03-12 14:47:13 +0000 UTC" firstStartedPulling="2026-03-12 14:47:14.704795809 +0000 UTC m=+5853.153491205" lastFinishedPulling="2026-03-12 14:47:18.168384864 +0000 UTC m=+5856.617080270" observedRunningTime="2026-03-12 14:47:18.780074887 +0000 UTC m=+5857.228770293" watchObservedRunningTime="2026-03-12 14:47:18.786104129 +0000 UTC m=+5857.234799525" Mar 12 14:47:23 crc kubenswrapper[4778]: I0312 14:47:23.254424 4778 scope.go:117] "RemoveContainer" containerID="e714113346a3db81a8ab4456acd91be95b7042ec696820890f89fb14190436c4" Mar 12 14:47:23 crc kubenswrapper[4778]: E0312 14:47:23.254939 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:47:23 crc kubenswrapper[4778]: I0312 14:47:23.714859 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-fsnnv" Mar 12 14:47:23 crc kubenswrapper[4778]: I0312 14:47:23.716229 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fsnnv" Mar 12 14:47:23 crc kubenswrapper[4778]: I0312 14:47:23.763670 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fsnnv" Mar 12 14:47:23 crc kubenswrapper[4778]: I0312 14:47:23.882433 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fsnnv" Mar 12 14:47:24 crc kubenswrapper[4778]: I0312 14:47:23.999983 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fsnnv"] Mar 12 14:47:25 crc kubenswrapper[4778]: I0312 14:47:25.830665 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fsnnv" podUID="26c2f0ef-3580-429e-8fc5-3ece50c4f023" containerName="registry-server" containerID="cri-o://c72b4f2bff4ba063399bb11f18084a9908fb796531a42d291dab6f2ebe39f85b" gracePeriod=2 Mar 12 14:47:26 crc kubenswrapper[4778]: I0312 14:47:26.847801 4778 generic.go:334] "Generic (PLEG): container finished" podID="26c2f0ef-3580-429e-8fc5-3ece50c4f023" containerID="c72b4f2bff4ba063399bb11f18084a9908fb796531a42d291dab6f2ebe39f85b" exitCode=0 Mar 12 14:47:26 crc kubenswrapper[4778]: I0312 14:47:26.847937 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsnnv" event={"ID":"26c2f0ef-3580-429e-8fc5-3ece50c4f023","Type":"ContainerDied","Data":"c72b4f2bff4ba063399bb11f18084a9908fb796531a42d291dab6f2ebe39f85b"} Mar 12 14:47:27 crc kubenswrapper[4778]: I0312 14:47:27.581358 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fsnnv" Mar 12 14:47:27 crc kubenswrapper[4778]: I0312 14:47:27.639557 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfhbb\" (UniqueName: \"kubernetes.io/projected/26c2f0ef-3580-429e-8fc5-3ece50c4f023-kube-api-access-rfhbb\") pod \"26c2f0ef-3580-429e-8fc5-3ece50c4f023\" (UID: \"26c2f0ef-3580-429e-8fc5-3ece50c4f023\") " Mar 12 14:47:27 crc kubenswrapper[4778]: I0312 14:47:27.639636 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26c2f0ef-3580-429e-8fc5-3ece50c4f023-utilities\") pod \"26c2f0ef-3580-429e-8fc5-3ece50c4f023\" (UID: \"26c2f0ef-3580-429e-8fc5-3ece50c4f023\") " Mar 12 14:47:27 crc kubenswrapper[4778]: I0312 14:47:27.639750 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26c2f0ef-3580-429e-8fc5-3ece50c4f023-catalog-content\") pod \"26c2f0ef-3580-429e-8fc5-3ece50c4f023\" (UID: \"26c2f0ef-3580-429e-8fc5-3ece50c4f023\") " Mar 12 14:47:27 crc kubenswrapper[4778]: I0312 14:47:27.640643 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26c2f0ef-3580-429e-8fc5-3ece50c4f023-utilities" (OuterVolumeSpecName: "utilities") pod "26c2f0ef-3580-429e-8fc5-3ece50c4f023" (UID: "26c2f0ef-3580-429e-8fc5-3ece50c4f023"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:47:27 crc kubenswrapper[4778]: I0312 14:47:27.645713 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26c2f0ef-3580-429e-8fc5-3ece50c4f023-kube-api-access-rfhbb" (OuterVolumeSpecName: "kube-api-access-rfhbb") pod "26c2f0ef-3580-429e-8fc5-3ece50c4f023" (UID: "26c2f0ef-3580-429e-8fc5-3ece50c4f023"). InnerVolumeSpecName "kube-api-access-rfhbb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:47:27 crc kubenswrapper[4778]: I0312 14:47:27.697632 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26c2f0ef-3580-429e-8fc5-3ece50c4f023-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "26c2f0ef-3580-429e-8fc5-3ece50c4f023" (UID: "26c2f0ef-3580-429e-8fc5-3ece50c4f023"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:47:27 crc kubenswrapper[4778]: I0312 14:47:27.742521 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfhbb\" (UniqueName: \"kubernetes.io/projected/26c2f0ef-3580-429e-8fc5-3ece50c4f023-kube-api-access-rfhbb\") on node \"crc\" DevicePath \"\"" Mar 12 14:47:27 crc kubenswrapper[4778]: I0312 14:47:27.742555 4778 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26c2f0ef-3580-429e-8fc5-3ece50c4f023-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 14:47:27 crc kubenswrapper[4778]: I0312 14:47:27.742564 4778 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26c2f0ef-3580-429e-8fc5-3ece50c4f023-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 14:47:27 crc kubenswrapper[4778]: I0312 14:47:27.859856 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsnnv" event={"ID":"26c2f0ef-3580-429e-8fc5-3ece50c4f023","Type":"ContainerDied","Data":"8b1acbfd6f53aab1c6cf025f822a3ebe1171651a9a7c34804adfad48d82717bc"} Mar 12 14:47:27 crc kubenswrapper[4778]: I0312 14:47:27.859906 4778 scope.go:117] "RemoveContainer" containerID="c72b4f2bff4ba063399bb11f18084a9908fb796531a42d291dab6f2ebe39f85b" Mar 12 14:47:27 crc kubenswrapper[4778]: I0312 14:47:27.860036 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fsnnv" Mar 12 14:47:27 crc kubenswrapper[4778]: I0312 14:47:27.885129 4778 scope.go:117] "RemoveContainer" containerID="b2b21cf4b6fbb46f6ba37f27234667ec7d7d7258895ed6cd412a23268a2bbaca" Mar 12 14:47:27 crc kubenswrapper[4778]: I0312 14:47:27.908039 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fsnnv"] Mar 12 14:47:27 crc kubenswrapper[4778]: I0312 14:47:27.910963 4778 scope.go:117] "RemoveContainer" containerID="231eb24c5f060d0284f56a7b47085c3333eabe57f70491b0a128a5966c6669b1" Mar 12 14:47:27 crc kubenswrapper[4778]: I0312 14:47:27.926936 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fsnnv"] Mar 12 14:47:28 crc kubenswrapper[4778]: I0312 14:47:28.264165 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26c2f0ef-3580-429e-8fc5-3ece50c4f023" path="/var/lib/kubelet/pods/26c2f0ef-3580-429e-8fc5-3ece50c4f023/volumes" Mar 12 14:47:37 crc kubenswrapper[4778]: I0312 14:47:37.254480 4778 scope.go:117] "RemoveContainer" containerID="e714113346a3db81a8ab4456acd91be95b7042ec696820890f89fb14190436c4" Mar 12 14:47:37 crc kubenswrapper[4778]: E0312 14:47:37.255317 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:47:50 crc kubenswrapper[4778]: I0312 14:47:50.254992 4778 scope.go:117] "RemoveContainer" containerID="e714113346a3db81a8ab4456acd91be95b7042ec696820890f89fb14190436c4" Mar 12 14:47:50 crc kubenswrapper[4778]: E0312 14:47:50.256108 4778 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:48:00 crc kubenswrapper[4778]: I0312 14:48:00.154331 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555448-g9lqj"] Mar 12 14:48:00 crc kubenswrapper[4778]: E0312 14:48:00.155391 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26c2f0ef-3580-429e-8fc5-3ece50c4f023" containerName="extract-utilities" Mar 12 14:48:00 crc kubenswrapper[4778]: I0312 14:48:00.155407 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="26c2f0ef-3580-429e-8fc5-3ece50c4f023" containerName="extract-utilities" Mar 12 14:48:00 crc kubenswrapper[4778]: E0312 14:48:00.155451 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26c2f0ef-3580-429e-8fc5-3ece50c4f023" containerName="registry-server" Mar 12 14:48:00 crc kubenswrapper[4778]: I0312 14:48:00.155460 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="26c2f0ef-3580-429e-8fc5-3ece50c4f023" containerName="registry-server" Mar 12 14:48:00 crc kubenswrapper[4778]: E0312 14:48:00.155483 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26c2f0ef-3580-429e-8fc5-3ece50c4f023" containerName="extract-content" Mar 12 14:48:00 crc kubenswrapper[4778]: I0312 14:48:00.155491 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="26c2f0ef-3580-429e-8fc5-3ece50c4f023" containerName="extract-content" Mar 12 14:48:00 crc kubenswrapper[4778]: I0312 14:48:00.155716 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="26c2f0ef-3580-429e-8fc5-3ece50c4f023" containerName="registry-server" Mar 12 14:48:00 crc kubenswrapper[4778]: I0312 
14:48:00.156592 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555448-g9lqj" Mar 12 14:48:00 crc kubenswrapper[4778]: I0312 14:48:00.159769 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 14:48:00 crc kubenswrapper[4778]: I0312 14:48:00.159994 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 14:48:00 crc kubenswrapper[4778]: I0312 14:48:00.160145 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 14:48:00 crc kubenswrapper[4778]: I0312 14:48:00.167254 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555448-g9lqj"] Mar 12 14:48:00 crc kubenswrapper[4778]: I0312 14:48:00.296438 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqb64\" (UniqueName: \"kubernetes.io/projected/53a65a66-de4e-413e-a175-4d12db4e3f26-kube-api-access-zqb64\") pod \"auto-csr-approver-29555448-g9lqj\" (UID: \"53a65a66-de4e-413e-a175-4d12db4e3f26\") " pod="openshift-infra/auto-csr-approver-29555448-g9lqj" Mar 12 14:48:00 crc kubenswrapper[4778]: I0312 14:48:00.398770 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqb64\" (UniqueName: \"kubernetes.io/projected/53a65a66-de4e-413e-a175-4d12db4e3f26-kube-api-access-zqb64\") pod \"auto-csr-approver-29555448-g9lqj\" (UID: \"53a65a66-de4e-413e-a175-4d12db4e3f26\") " pod="openshift-infra/auto-csr-approver-29555448-g9lqj" Mar 12 14:48:00 crc kubenswrapper[4778]: I0312 14:48:00.428318 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqb64\" (UniqueName: \"kubernetes.io/projected/53a65a66-de4e-413e-a175-4d12db4e3f26-kube-api-access-zqb64\") pod 
\"auto-csr-approver-29555448-g9lqj\" (UID: \"53a65a66-de4e-413e-a175-4d12db4e3f26\") " pod="openshift-infra/auto-csr-approver-29555448-g9lqj" Mar 12 14:48:00 crc kubenswrapper[4778]: I0312 14:48:00.488607 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555448-g9lqj" Mar 12 14:48:00 crc kubenswrapper[4778]: W0312 14:48:00.959655 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53a65a66_de4e_413e_a175_4d12db4e3f26.slice/crio-c4c03bcf2cbf592eb2e1d01dd3fcd63207b161e20b0137bceffe7a12716b1809 WatchSource:0}: Error finding container c4c03bcf2cbf592eb2e1d01dd3fcd63207b161e20b0137bceffe7a12716b1809: Status 404 returned error can't find the container with id c4c03bcf2cbf592eb2e1d01dd3fcd63207b161e20b0137bceffe7a12716b1809 Mar 12 14:48:00 crc kubenswrapper[4778]: I0312 14:48:00.963096 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555448-g9lqj"] Mar 12 14:48:01 crc kubenswrapper[4778]: I0312 14:48:01.178587 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555448-g9lqj" event={"ID":"53a65a66-de4e-413e-a175-4d12db4e3f26","Type":"ContainerStarted","Data":"c4c03bcf2cbf592eb2e1d01dd3fcd63207b161e20b0137bceffe7a12716b1809"} Mar 12 14:48:03 crc kubenswrapper[4778]: I0312 14:48:03.200403 4778 generic.go:334] "Generic (PLEG): container finished" podID="53a65a66-de4e-413e-a175-4d12db4e3f26" containerID="26a11a81934702ff4eaece8862eb99dd5a6954a851baea01b2b49d973eba34bc" exitCode=0 Mar 12 14:48:03 crc kubenswrapper[4778]: I0312 14:48:03.200465 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555448-g9lqj" event={"ID":"53a65a66-de4e-413e-a175-4d12db4e3f26","Type":"ContainerDied","Data":"26a11a81934702ff4eaece8862eb99dd5a6954a851baea01b2b49d973eba34bc"} Mar 12 14:48:03 crc kubenswrapper[4778]: 
I0312 14:48:03.254604 4778 scope.go:117] "RemoveContainer" containerID="e714113346a3db81a8ab4456acd91be95b7042ec696820890f89fb14190436c4" Mar 12 14:48:03 crc kubenswrapper[4778]: E0312 14:48:03.255403 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:48:04 crc kubenswrapper[4778]: I0312 14:48:04.610333 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555448-g9lqj" Mar 12 14:48:04 crc kubenswrapper[4778]: I0312 14:48:04.791614 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqb64\" (UniqueName: \"kubernetes.io/projected/53a65a66-de4e-413e-a175-4d12db4e3f26-kube-api-access-zqb64\") pod \"53a65a66-de4e-413e-a175-4d12db4e3f26\" (UID: \"53a65a66-de4e-413e-a175-4d12db4e3f26\") " Mar 12 14:48:04 crc kubenswrapper[4778]: I0312 14:48:04.807420 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53a65a66-de4e-413e-a175-4d12db4e3f26-kube-api-access-zqb64" (OuterVolumeSpecName: "kube-api-access-zqb64") pod "53a65a66-de4e-413e-a175-4d12db4e3f26" (UID: "53a65a66-de4e-413e-a175-4d12db4e3f26"). InnerVolumeSpecName "kube-api-access-zqb64". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:48:04 crc kubenswrapper[4778]: I0312 14:48:04.894435 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqb64\" (UniqueName: \"kubernetes.io/projected/53a65a66-de4e-413e-a175-4d12db4e3f26-kube-api-access-zqb64\") on node \"crc\" DevicePath \"\"" Mar 12 14:48:05 crc kubenswrapper[4778]: I0312 14:48:05.217840 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555448-g9lqj" event={"ID":"53a65a66-de4e-413e-a175-4d12db4e3f26","Type":"ContainerDied","Data":"c4c03bcf2cbf592eb2e1d01dd3fcd63207b161e20b0137bceffe7a12716b1809"} Mar 12 14:48:05 crc kubenswrapper[4778]: I0312 14:48:05.218139 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4c03bcf2cbf592eb2e1d01dd3fcd63207b161e20b0137bceffe7a12716b1809" Mar 12 14:48:05 crc kubenswrapper[4778]: I0312 14:48:05.218068 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555448-g9lqj" Mar 12 14:48:05 crc kubenswrapper[4778]: I0312 14:48:05.687090 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555442-c58f5"] Mar 12 14:48:05 crc kubenswrapper[4778]: I0312 14:48:05.695315 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555442-c58f5"] Mar 12 14:48:06 crc kubenswrapper[4778]: I0312 14:48:06.264794 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86de30c2-8699-4966-8c1b-da67cdacae42" path="/var/lib/kubelet/pods/86de30c2-8699-4966-8c1b-da67cdacae42/volumes" Mar 12 14:48:14 crc kubenswrapper[4778]: I0312 14:48:14.254053 4778 scope.go:117] "RemoveContainer" containerID="e714113346a3db81a8ab4456acd91be95b7042ec696820890f89fb14190436c4" Mar 12 14:48:14 crc kubenswrapper[4778]: E0312 14:48:14.254820 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:48:23 crc kubenswrapper[4778]: I0312 14:48:23.757786 4778 scope.go:117] "RemoveContainer" containerID="629d070304f0ca91f60ef09f2871ae160406fbd39c685129feb137e8e63e7888" Mar 12 14:48:28 crc kubenswrapper[4778]: I0312 14:48:28.254447 4778 scope.go:117] "RemoveContainer" containerID="e714113346a3db81a8ab4456acd91be95b7042ec696820890f89fb14190436c4" Mar 12 14:48:28 crc kubenswrapper[4778]: E0312 14:48:28.255447 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:48:40 crc kubenswrapper[4778]: I0312 14:48:40.254555 4778 scope.go:117] "RemoveContainer" containerID="e714113346a3db81a8ab4456acd91be95b7042ec696820890f89fb14190436c4" Mar 12 14:48:40 crc kubenswrapper[4778]: E0312 14:48:40.255412 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:48:53 crc kubenswrapper[4778]: I0312 14:48:53.255035 4778 scope.go:117] "RemoveContainer" 
containerID="e714113346a3db81a8ab4456acd91be95b7042ec696820890f89fb14190436c4" Mar 12 14:48:53 crc kubenswrapper[4778]: E0312 14:48:53.256360 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:49:04 crc kubenswrapper[4778]: I0312 14:49:04.254708 4778 scope.go:117] "RemoveContainer" containerID="e714113346a3db81a8ab4456acd91be95b7042ec696820890f89fb14190436c4" Mar 12 14:49:04 crc kubenswrapper[4778]: E0312 14:49:04.255708 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:49:16 crc kubenswrapper[4778]: I0312 14:49:16.254396 4778 scope.go:117] "RemoveContainer" containerID="e714113346a3db81a8ab4456acd91be95b7042ec696820890f89fb14190436c4" Mar 12 14:49:16 crc kubenswrapper[4778]: E0312 14:49:16.255424 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:49:29 crc kubenswrapper[4778]: I0312 14:49:29.260744 4778 scope.go:117] 
"RemoveContainer" containerID="e714113346a3db81a8ab4456acd91be95b7042ec696820890f89fb14190436c4" Mar 12 14:49:29 crc kubenswrapper[4778]: E0312 14:49:29.261904 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:49:43 crc kubenswrapper[4778]: I0312 14:49:43.255027 4778 scope.go:117] "RemoveContainer" containerID="e714113346a3db81a8ab4456acd91be95b7042ec696820890f89fb14190436c4" Mar 12 14:49:43 crc kubenswrapper[4778]: E0312 14:49:43.259360 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:49:55 crc kubenswrapper[4778]: I0312 14:49:55.254849 4778 scope.go:117] "RemoveContainer" containerID="e714113346a3db81a8ab4456acd91be95b7042ec696820890f89fb14190436c4" Mar 12 14:49:55 crc kubenswrapper[4778]: E0312 14:49:55.256299 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:50:00 crc kubenswrapper[4778]: I0312 14:50:00.175237 
4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555450-wng5r"] Mar 12 14:50:00 crc kubenswrapper[4778]: E0312 14:50:00.176136 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53a65a66-de4e-413e-a175-4d12db4e3f26" containerName="oc" Mar 12 14:50:00 crc kubenswrapper[4778]: I0312 14:50:00.176152 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="53a65a66-de4e-413e-a175-4d12db4e3f26" containerName="oc" Mar 12 14:50:00 crc kubenswrapper[4778]: I0312 14:50:00.176359 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="53a65a66-de4e-413e-a175-4d12db4e3f26" containerName="oc" Mar 12 14:50:00 crc kubenswrapper[4778]: I0312 14:50:00.176994 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555450-wng5r" Mar 12 14:50:00 crc kubenswrapper[4778]: I0312 14:50:00.181010 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 14:50:00 crc kubenswrapper[4778]: I0312 14:50:00.181071 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 14:50:00 crc kubenswrapper[4778]: I0312 14:50:00.181010 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 14:50:00 crc kubenswrapper[4778]: I0312 14:50:00.190367 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555450-wng5r"] Mar 12 14:50:00 crc kubenswrapper[4778]: I0312 14:50:00.358669 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44dqg\" (UniqueName: \"kubernetes.io/projected/58bc575b-de62-42e8-8393-0cdebe8a1ec7-kube-api-access-44dqg\") pod \"auto-csr-approver-29555450-wng5r\" (UID: \"58bc575b-de62-42e8-8393-0cdebe8a1ec7\") " 
pod="openshift-infra/auto-csr-approver-29555450-wng5r" Mar 12 14:50:00 crc kubenswrapper[4778]: I0312 14:50:00.461588 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44dqg\" (UniqueName: \"kubernetes.io/projected/58bc575b-de62-42e8-8393-0cdebe8a1ec7-kube-api-access-44dqg\") pod \"auto-csr-approver-29555450-wng5r\" (UID: \"58bc575b-de62-42e8-8393-0cdebe8a1ec7\") " pod="openshift-infra/auto-csr-approver-29555450-wng5r" Mar 12 14:50:00 crc kubenswrapper[4778]: I0312 14:50:00.485120 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44dqg\" (UniqueName: \"kubernetes.io/projected/58bc575b-de62-42e8-8393-0cdebe8a1ec7-kube-api-access-44dqg\") pod \"auto-csr-approver-29555450-wng5r\" (UID: \"58bc575b-de62-42e8-8393-0cdebe8a1ec7\") " pod="openshift-infra/auto-csr-approver-29555450-wng5r" Mar 12 14:50:00 crc kubenswrapper[4778]: I0312 14:50:00.507522 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555450-wng5r" Mar 12 14:50:01 crc kubenswrapper[4778]: I0312 14:50:01.029309 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555450-wng5r"] Mar 12 14:50:01 crc kubenswrapper[4778]: I0312 14:50:01.427057 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555450-wng5r" event={"ID":"58bc575b-de62-42e8-8393-0cdebe8a1ec7","Type":"ContainerStarted","Data":"9ed451c9d70977425643f13fc74daea81c277d539cb67ad214466aceb2d38ff6"} Mar 12 14:50:03 crc kubenswrapper[4778]: I0312 14:50:03.458458 4778 generic.go:334] "Generic (PLEG): container finished" podID="58bc575b-de62-42e8-8393-0cdebe8a1ec7" containerID="b673bf4baccca3cbc88953e0302f6d44002e09551d5876af0fec26f563392bf0" exitCode=0 Mar 12 14:50:03 crc kubenswrapper[4778]: I0312 14:50:03.458709 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29555450-wng5r" event={"ID":"58bc575b-de62-42e8-8393-0cdebe8a1ec7","Type":"ContainerDied","Data":"b673bf4baccca3cbc88953e0302f6d44002e09551d5876af0fec26f563392bf0"} Mar 12 14:50:04 crc kubenswrapper[4778]: I0312 14:50:04.881132 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555450-wng5r" Mar 12 14:50:04 crc kubenswrapper[4778]: I0312 14:50:04.953823 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44dqg\" (UniqueName: \"kubernetes.io/projected/58bc575b-de62-42e8-8393-0cdebe8a1ec7-kube-api-access-44dqg\") pod \"58bc575b-de62-42e8-8393-0cdebe8a1ec7\" (UID: \"58bc575b-de62-42e8-8393-0cdebe8a1ec7\") " Mar 12 14:50:04 crc kubenswrapper[4778]: I0312 14:50:04.959564 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58bc575b-de62-42e8-8393-0cdebe8a1ec7-kube-api-access-44dqg" (OuterVolumeSpecName: "kube-api-access-44dqg") pod "58bc575b-de62-42e8-8393-0cdebe8a1ec7" (UID: "58bc575b-de62-42e8-8393-0cdebe8a1ec7"). InnerVolumeSpecName "kube-api-access-44dqg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:50:05 crc kubenswrapper[4778]: I0312 14:50:05.056508 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44dqg\" (UniqueName: \"kubernetes.io/projected/58bc575b-de62-42e8-8393-0cdebe8a1ec7-kube-api-access-44dqg\") on node \"crc\" DevicePath \"\"" Mar 12 14:50:05 crc kubenswrapper[4778]: I0312 14:50:05.476527 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555450-wng5r" event={"ID":"58bc575b-de62-42e8-8393-0cdebe8a1ec7","Type":"ContainerDied","Data":"9ed451c9d70977425643f13fc74daea81c277d539cb67ad214466aceb2d38ff6"} Mar 12 14:50:05 crc kubenswrapper[4778]: I0312 14:50:05.476570 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ed451c9d70977425643f13fc74daea81c277d539cb67ad214466aceb2d38ff6" Mar 12 14:50:05 crc kubenswrapper[4778]: I0312 14:50:05.476597 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555450-wng5r" Mar 12 14:50:05 crc kubenswrapper[4778]: I0312 14:50:05.954707 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555444-qlvql"] Mar 12 14:50:05 crc kubenswrapper[4778]: I0312 14:50:05.966173 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555444-qlvql"] Mar 12 14:50:06 crc kubenswrapper[4778]: I0312 14:50:06.254936 4778 scope.go:117] "RemoveContainer" containerID="e714113346a3db81a8ab4456acd91be95b7042ec696820890f89fb14190436c4" Mar 12 14:50:06 crc kubenswrapper[4778]: E0312 14:50:06.255756 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:50:06 crc kubenswrapper[4778]: I0312 14:50:06.273900 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0da8caed-335b-4730-a69b-b724585893f0" path="/var/lib/kubelet/pods/0da8caed-335b-4730-a69b-b724585893f0/volumes" Mar 12 14:50:21 crc kubenswrapper[4778]: I0312 14:50:21.254650 4778 scope.go:117] "RemoveContainer" containerID="e714113346a3db81a8ab4456acd91be95b7042ec696820890f89fb14190436c4" Mar 12 14:50:21 crc kubenswrapper[4778]: E0312 14:50:21.255815 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:50:23 crc kubenswrapper[4778]: I0312 14:50:23.921073 4778 scope.go:117] "RemoveContainer" containerID="1c045d93602ba35174db559893247fb3da6916ea0a208015b77cd0f47781b091" Mar 12 14:50:32 crc kubenswrapper[4778]: I0312 14:50:32.263551 4778 scope.go:117] "RemoveContainer" containerID="e714113346a3db81a8ab4456acd91be95b7042ec696820890f89fb14190436c4" Mar 12 14:50:32 crc kubenswrapper[4778]: I0312 14:50:32.757428 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" event={"ID":"24438fc6-dab0-4a9e-8b97-2532da76d9cd","Type":"ContainerStarted","Data":"f9dc5323f20567a96d1ddcd61f28e57c1fb446407246116e9b85f41f7b862a79"} Mar 12 14:52:00 crc kubenswrapper[4778]: I0312 14:52:00.187356 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555452-crwhx"] Mar 12 14:52:00 crc kubenswrapper[4778]: E0312 14:52:00.188224 4778 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="58bc575b-de62-42e8-8393-0cdebe8a1ec7" containerName="oc" Mar 12 14:52:00 crc kubenswrapper[4778]: I0312 14:52:00.188237 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="58bc575b-de62-42e8-8393-0cdebe8a1ec7" containerName="oc" Mar 12 14:52:00 crc kubenswrapper[4778]: I0312 14:52:00.188437 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="58bc575b-de62-42e8-8393-0cdebe8a1ec7" containerName="oc" Mar 12 14:52:00 crc kubenswrapper[4778]: I0312 14:52:00.189036 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555452-crwhx" Mar 12 14:52:00 crc kubenswrapper[4778]: I0312 14:52:00.194268 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 14:52:00 crc kubenswrapper[4778]: I0312 14:52:00.194300 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 14:52:00 crc kubenswrapper[4778]: I0312 14:52:00.195123 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 14:52:00 crc kubenswrapper[4778]: I0312 14:52:00.198012 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555452-crwhx"] Mar 12 14:52:00 crc kubenswrapper[4778]: I0312 14:52:00.256507 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg2pl\" (UniqueName: \"kubernetes.io/projected/db3a8d80-262e-4c92-b07a-dcff65e0cd47-kube-api-access-bg2pl\") pod \"auto-csr-approver-29555452-crwhx\" (UID: \"db3a8d80-262e-4c92-b07a-dcff65e0cd47\") " pod="openshift-infra/auto-csr-approver-29555452-crwhx" Mar 12 14:52:00 crc kubenswrapper[4778]: I0312 14:52:00.357815 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bg2pl\" (UniqueName: 
\"kubernetes.io/projected/db3a8d80-262e-4c92-b07a-dcff65e0cd47-kube-api-access-bg2pl\") pod \"auto-csr-approver-29555452-crwhx\" (UID: \"db3a8d80-262e-4c92-b07a-dcff65e0cd47\") " pod="openshift-infra/auto-csr-approver-29555452-crwhx" Mar 12 14:52:00 crc kubenswrapper[4778]: I0312 14:52:00.379596 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bg2pl\" (UniqueName: \"kubernetes.io/projected/db3a8d80-262e-4c92-b07a-dcff65e0cd47-kube-api-access-bg2pl\") pod \"auto-csr-approver-29555452-crwhx\" (UID: \"db3a8d80-262e-4c92-b07a-dcff65e0cd47\") " pod="openshift-infra/auto-csr-approver-29555452-crwhx" Mar 12 14:52:00 crc kubenswrapper[4778]: I0312 14:52:00.506917 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555452-crwhx" Mar 12 14:52:01 crc kubenswrapper[4778]: I0312 14:52:01.010091 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555452-crwhx"] Mar 12 14:52:01 crc kubenswrapper[4778]: I0312 14:52:01.667715 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555452-crwhx" event={"ID":"db3a8d80-262e-4c92-b07a-dcff65e0cd47","Type":"ContainerStarted","Data":"f38c768e364d8f115a87d0bf5c94ef1dff3ea8fb8be912ee295bb78ac0faa144"} Mar 12 14:52:02 crc kubenswrapper[4778]: I0312 14:52:02.682076 4778 generic.go:334] "Generic (PLEG): container finished" podID="db3a8d80-262e-4c92-b07a-dcff65e0cd47" containerID="2e201785308313aa155d17696c3a92cd860cbcfcbc51f75878f68248fd82d5d8" exitCode=0 Mar 12 14:52:02 crc kubenswrapper[4778]: I0312 14:52:02.682145 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555452-crwhx" event={"ID":"db3a8d80-262e-4c92-b07a-dcff65e0cd47","Type":"ContainerDied","Data":"2e201785308313aa155d17696c3a92cd860cbcfcbc51f75878f68248fd82d5d8"} Mar 12 14:52:04 crc kubenswrapper[4778]: I0312 14:52:04.090311 4778 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555452-crwhx" Mar 12 14:52:04 crc kubenswrapper[4778]: I0312 14:52:04.157344 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bg2pl\" (UniqueName: \"kubernetes.io/projected/db3a8d80-262e-4c92-b07a-dcff65e0cd47-kube-api-access-bg2pl\") pod \"db3a8d80-262e-4c92-b07a-dcff65e0cd47\" (UID: \"db3a8d80-262e-4c92-b07a-dcff65e0cd47\") " Mar 12 14:52:04 crc kubenswrapper[4778]: I0312 14:52:04.163551 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db3a8d80-262e-4c92-b07a-dcff65e0cd47-kube-api-access-bg2pl" (OuterVolumeSpecName: "kube-api-access-bg2pl") pod "db3a8d80-262e-4c92-b07a-dcff65e0cd47" (UID: "db3a8d80-262e-4c92-b07a-dcff65e0cd47"). InnerVolumeSpecName "kube-api-access-bg2pl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:52:04 crc kubenswrapper[4778]: I0312 14:52:04.259648 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bg2pl\" (UniqueName: \"kubernetes.io/projected/db3a8d80-262e-4c92-b07a-dcff65e0cd47-kube-api-access-bg2pl\") on node \"crc\" DevicePath \"\"" Mar 12 14:52:04 crc kubenswrapper[4778]: I0312 14:52:04.710554 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555452-crwhx" event={"ID":"db3a8d80-262e-4c92-b07a-dcff65e0cd47","Type":"ContainerDied","Data":"f38c768e364d8f115a87d0bf5c94ef1dff3ea8fb8be912ee295bb78ac0faa144"} Mar 12 14:52:04 crc kubenswrapper[4778]: I0312 14:52:04.710628 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f38c768e364d8f115a87d0bf5c94ef1dff3ea8fb8be912ee295bb78ac0faa144" Mar 12 14:52:04 crc kubenswrapper[4778]: I0312 14:52:04.710658 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555452-crwhx" Mar 12 14:52:05 crc kubenswrapper[4778]: I0312 14:52:05.169743 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555446-8n794"] Mar 12 14:52:05 crc kubenswrapper[4778]: I0312 14:52:05.179631 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555446-8n794"] Mar 12 14:52:06 crc kubenswrapper[4778]: I0312 14:52:06.274691 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39c23ecc-c75b-450b-a8ff-351acf5384eb" path="/var/lib/kubelet/pods/39c23ecc-c75b-450b-a8ff-351acf5384eb/volumes" Mar 12 14:52:24 crc kubenswrapper[4778]: I0312 14:52:24.066541 4778 scope.go:117] "RemoveContainer" containerID="7fa3212d8016436bf15f0a3b9362ee13f653d69a236beda9f2c5ab4f28324438" Mar 12 14:52:33 crc kubenswrapper[4778]: I0312 14:52:33.362762 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5ksjs"] Mar 12 14:52:33 crc kubenswrapper[4778]: E0312 14:52:33.363866 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db3a8d80-262e-4c92-b07a-dcff65e0cd47" containerName="oc" Mar 12 14:52:33 crc kubenswrapper[4778]: I0312 14:52:33.363882 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="db3a8d80-262e-4c92-b07a-dcff65e0cd47" containerName="oc" Mar 12 14:52:33 crc kubenswrapper[4778]: I0312 14:52:33.364131 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="db3a8d80-262e-4c92-b07a-dcff65e0cd47" containerName="oc" Mar 12 14:52:33 crc kubenswrapper[4778]: I0312 14:52:33.365707 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5ksjs" Mar 12 14:52:33 crc kubenswrapper[4778]: I0312 14:52:33.382832 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5ksjs"] Mar 12 14:52:33 crc kubenswrapper[4778]: I0312 14:52:33.495962 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ad61852-35ce-4f63-8876-d1231244f3a2-catalog-content\") pod \"redhat-marketplace-5ksjs\" (UID: \"5ad61852-35ce-4f63-8876-d1231244f3a2\") " pod="openshift-marketplace/redhat-marketplace-5ksjs" Mar 12 14:52:33 crc kubenswrapper[4778]: I0312 14:52:33.496056 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ad61852-35ce-4f63-8876-d1231244f3a2-utilities\") pod \"redhat-marketplace-5ksjs\" (UID: \"5ad61852-35ce-4f63-8876-d1231244f3a2\") " pod="openshift-marketplace/redhat-marketplace-5ksjs" Mar 12 14:52:33 crc kubenswrapper[4778]: I0312 14:52:33.496542 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x49kn\" (UniqueName: \"kubernetes.io/projected/5ad61852-35ce-4f63-8876-d1231244f3a2-kube-api-access-x49kn\") pod \"redhat-marketplace-5ksjs\" (UID: \"5ad61852-35ce-4f63-8876-d1231244f3a2\") " pod="openshift-marketplace/redhat-marketplace-5ksjs" Mar 12 14:52:33 crc kubenswrapper[4778]: I0312 14:52:33.598183 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x49kn\" (UniqueName: \"kubernetes.io/projected/5ad61852-35ce-4f63-8876-d1231244f3a2-kube-api-access-x49kn\") pod \"redhat-marketplace-5ksjs\" (UID: \"5ad61852-35ce-4f63-8876-d1231244f3a2\") " pod="openshift-marketplace/redhat-marketplace-5ksjs" Mar 12 14:52:33 crc kubenswrapper[4778]: I0312 14:52:33.598499 4778 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ad61852-35ce-4f63-8876-d1231244f3a2-catalog-content\") pod \"redhat-marketplace-5ksjs\" (UID: \"5ad61852-35ce-4f63-8876-d1231244f3a2\") " pod="openshift-marketplace/redhat-marketplace-5ksjs" Mar 12 14:52:33 crc kubenswrapper[4778]: I0312 14:52:33.598566 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ad61852-35ce-4f63-8876-d1231244f3a2-utilities\") pod \"redhat-marketplace-5ksjs\" (UID: \"5ad61852-35ce-4f63-8876-d1231244f3a2\") " pod="openshift-marketplace/redhat-marketplace-5ksjs" Mar 12 14:52:33 crc kubenswrapper[4778]: I0312 14:52:33.599114 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ad61852-35ce-4f63-8876-d1231244f3a2-utilities\") pod \"redhat-marketplace-5ksjs\" (UID: \"5ad61852-35ce-4f63-8876-d1231244f3a2\") " pod="openshift-marketplace/redhat-marketplace-5ksjs" Mar 12 14:52:33 crc kubenswrapper[4778]: I0312 14:52:33.599215 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ad61852-35ce-4f63-8876-d1231244f3a2-catalog-content\") pod \"redhat-marketplace-5ksjs\" (UID: \"5ad61852-35ce-4f63-8876-d1231244f3a2\") " pod="openshift-marketplace/redhat-marketplace-5ksjs" Mar 12 14:52:33 crc kubenswrapper[4778]: I0312 14:52:33.619107 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x49kn\" (UniqueName: \"kubernetes.io/projected/5ad61852-35ce-4f63-8876-d1231244f3a2-kube-api-access-x49kn\") pod \"redhat-marketplace-5ksjs\" (UID: \"5ad61852-35ce-4f63-8876-d1231244f3a2\") " pod="openshift-marketplace/redhat-marketplace-5ksjs" Mar 12 14:52:33 crc kubenswrapper[4778]: I0312 14:52:33.687456 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5ksjs" Mar 12 14:52:34 crc kubenswrapper[4778]: I0312 14:52:34.168674 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5ksjs"] Mar 12 14:52:35 crc kubenswrapper[4778]: I0312 14:52:35.093851 4778 generic.go:334] "Generic (PLEG): container finished" podID="5ad61852-35ce-4f63-8876-d1231244f3a2" containerID="a7c672d4b0c14852ce23d5decfb704e09ea74bef2e325221b4069fcc117c4976" exitCode=0 Mar 12 14:52:35 crc kubenswrapper[4778]: I0312 14:52:35.093920 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5ksjs" event={"ID":"5ad61852-35ce-4f63-8876-d1231244f3a2","Type":"ContainerDied","Data":"a7c672d4b0c14852ce23d5decfb704e09ea74bef2e325221b4069fcc117c4976"} Mar 12 14:52:35 crc kubenswrapper[4778]: I0312 14:52:35.093985 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5ksjs" event={"ID":"5ad61852-35ce-4f63-8876-d1231244f3a2","Type":"ContainerStarted","Data":"7b0e362a57eae3105936688cad6d42fcb7298e40ea92f03730e3ac287d78abff"} Mar 12 14:52:35 crc kubenswrapper[4778]: I0312 14:52:35.097963 4778 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 14:52:36 crc kubenswrapper[4778]: I0312 14:52:36.105328 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5ksjs" event={"ID":"5ad61852-35ce-4f63-8876-d1231244f3a2","Type":"ContainerStarted","Data":"6a4c01059a32908d762eeffa04e27aa1d1e32ba2e0e5fac8ef80086f88864c0c"} Mar 12 14:52:38 crc kubenswrapper[4778]: I0312 14:52:38.032264 4778 generic.go:334] "Generic (PLEG): container finished" podID="5ad61852-35ce-4f63-8876-d1231244f3a2" containerID="6a4c01059a32908d762eeffa04e27aa1d1e32ba2e0e5fac8ef80086f88864c0c" exitCode=0 Mar 12 14:52:38 crc kubenswrapper[4778]: I0312 14:52:38.034806 4778 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-5ksjs" event={"ID":"5ad61852-35ce-4f63-8876-d1231244f3a2","Type":"ContainerDied","Data":"6a4c01059a32908d762eeffa04e27aa1d1e32ba2e0e5fac8ef80086f88864c0c"} Mar 12 14:52:38 crc kubenswrapper[4778]: E0312 14:52:38.153615 4778 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad: Requesting bearer token: invalid status code from registry 502 (Bad Gateway)" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Mar 12 14:52:38 crc kubenswrapper[4778]: E0312 14:52:38.153791 4778 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:20MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x49kn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-5ksjs_openshift-marketplace(5ad61852-35ce-4f63-8876-d1231244f3a2): ErrImagePull: initializing source docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad: Requesting bearer token: invalid status code from registry 502 (Bad Gateway)" logger="UnhandledError" Mar 12 14:52:38 crc kubenswrapper[4778]: E0312 14:52:38.156101 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source 
docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad: Requesting bearer token: invalid status code from registry 502 (Bad Gateway)\"" pod="openshift-marketplace/redhat-marketplace-5ksjs" podUID="5ad61852-35ce-4f63-8876-d1231244f3a2" Mar 12 14:52:39 crc kubenswrapper[4778]: E0312 14:52:39.051526 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-marketplace-5ksjs" podUID="5ad61852-35ce-4f63-8876-d1231244f3a2" Mar 12 14:52:55 crc kubenswrapper[4778]: I0312 14:52:55.225224 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5ksjs" event={"ID":"5ad61852-35ce-4f63-8876-d1231244f3a2","Type":"ContainerStarted","Data":"a07c96607ff2d89770d88eed7cd8da0599a0a5a80c5f7751bcfe6d3fc67c632e"} Mar 12 14:52:55 crc kubenswrapper[4778]: I0312 14:52:55.261249 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5ksjs" podStartSLOduration=2.631164154 podStartE2EDuration="22.261222032s" podCreationTimestamp="2026-03-12 14:52:33 +0000 UTC" firstStartedPulling="2026-03-12 14:52:35.09763692 +0000 UTC m=+6173.546332326" lastFinishedPulling="2026-03-12 14:52:54.727694778 +0000 UTC m=+6193.176390204" observedRunningTime="2026-03-12 14:52:55.250172978 +0000 UTC m=+6193.698868374" watchObservedRunningTime="2026-03-12 14:52:55.261222032 +0000 UTC m=+6193.709917458" Mar 12 14:52:58 crc kubenswrapper[4778]: I0312 14:52:58.557409 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 14:52:58 crc kubenswrapper[4778]: I0312 14:52:58.558044 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 14:53:03 crc kubenswrapper[4778]: I0312 14:53:03.688460 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5ksjs" Mar 12 14:53:03 crc kubenswrapper[4778]: I0312 14:53:03.688792 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5ksjs" Mar 12 14:53:03 crc kubenswrapper[4778]: I0312 14:53:03.776541 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5ksjs" Mar 12 14:53:04 crc kubenswrapper[4778]: I0312 14:53:04.377113 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5ksjs" Mar 12 14:53:07 crc kubenswrapper[4778]: I0312 14:53:07.923631 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5ksjs"] Mar 12 14:53:07 crc kubenswrapper[4778]: I0312 14:53:07.928969 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5ksjs" podUID="5ad61852-35ce-4f63-8876-d1231244f3a2" containerName="registry-server" containerID="cri-o://a07c96607ff2d89770d88eed7cd8da0599a0a5a80c5f7751bcfe6d3fc67c632e" gracePeriod=2 Mar 12 14:53:08 crc kubenswrapper[4778]: I0312 14:53:08.918734 4778 generic.go:334] "Generic (PLEG): container finished" podID="5ad61852-35ce-4f63-8876-d1231244f3a2" 
containerID="a07c96607ff2d89770d88eed7cd8da0599a0a5a80c5f7751bcfe6d3fc67c632e" exitCode=0 Mar 12 14:53:08 crc kubenswrapper[4778]: I0312 14:53:08.918776 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5ksjs" event={"ID":"5ad61852-35ce-4f63-8876-d1231244f3a2","Type":"ContainerDied","Data":"a07c96607ff2d89770d88eed7cd8da0599a0a5a80c5f7751bcfe6d3fc67c632e"} Mar 12 14:53:09 crc kubenswrapper[4778]: I0312 14:53:09.137656 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5ksjs" Mar 12 14:53:09 crc kubenswrapper[4778]: I0312 14:53:09.259499 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ad61852-35ce-4f63-8876-d1231244f3a2-catalog-content\") pod \"5ad61852-35ce-4f63-8876-d1231244f3a2\" (UID: \"5ad61852-35ce-4f63-8876-d1231244f3a2\") " Mar 12 14:53:09 crc kubenswrapper[4778]: I0312 14:53:09.259554 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ad61852-35ce-4f63-8876-d1231244f3a2-utilities\") pod \"5ad61852-35ce-4f63-8876-d1231244f3a2\" (UID: \"5ad61852-35ce-4f63-8876-d1231244f3a2\") " Mar 12 14:53:09 crc kubenswrapper[4778]: I0312 14:53:09.259633 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x49kn\" (UniqueName: \"kubernetes.io/projected/5ad61852-35ce-4f63-8876-d1231244f3a2-kube-api-access-x49kn\") pod \"5ad61852-35ce-4f63-8876-d1231244f3a2\" (UID: \"5ad61852-35ce-4f63-8876-d1231244f3a2\") " Mar 12 14:53:09 crc kubenswrapper[4778]: I0312 14:53:09.260408 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ad61852-35ce-4f63-8876-d1231244f3a2-utilities" (OuterVolumeSpecName: "utilities") pod "5ad61852-35ce-4f63-8876-d1231244f3a2" (UID: 
"5ad61852-35ce-4f63-8876-d1231244f3a2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:53:09 crc kubenswrapper[4778]: I0312 14:53:09.264947 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ad61852-35ce-4f63-8876-d1231244f3a2-kube-api-access-x49kn" (OuterVolumeSpecName: "kube-api-access-x49kn") pod "5ad61852-35ce-4f63-8876-d1231244f3a2" (UID: "5ad61852-35ce-4f63-8876-d1231244f3a2"). InnerVolumeSpecName "kube-api-access-x49kn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:53:09 crc kubenswrapper[4778]: I0312 14:53:09.289067 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ad61852-35ce-4f63-8876-d1231244f3a2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5ad61852-35ce-4f63-8876-d1231244f3a2" (UID: "5ad61852-35ce-4f63-8876-d1231244f3a2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:53:09 crc kubenswrapper[4778]: I0312 14:53:09.362631 4778 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ad61852-35ce-4f63-8876-d1231244f3a2-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 14:53:09 crc kubenswrapper[4778]: I0312 14:53:09.362667 4778 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ad61852-35ce-4f63-8876-d1231244f3a2-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 14:53:09 crc kubenswrapper[4778]: I0312 14:53:09.362698 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x49kn\" (UniqueName: \"kubernetes.io/projected/5ad61852-35ce-4f63-8876-d1231244f3a2-kube-api-access-x49kn\") on node \"crc\" DevicePath \"\"" Mar 12 14:53:09 crc kubenswrapper[4778]: I0312 14:53:09.933174 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-5ksjs" event={"ID":"5ad61852-35ce-4f63-8876-d1231244f3a2","Type":"ContainerDied","Data":"7b0e362a57eae3105936688cad6d42fcb7298e40ea92f03730e3ac287d78abff"} Mar 12 14:53:09 crc kubenswrapper[4778]: I0312 14:53:09.933561 4778 scope.go:117] "RemoveContainer" containerID="a07c96607ff2d89770d88eed7cd8da0599a0a5a80c5f7751bcfe6d3fc67c632e" Mar 12 14:53:09 crc kubenswrapper[4778]: I0312 14:53:09.933261 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5ksjs" Mar 12 14:53:09 crc kubenswrapper[4778]: I0312 14:53:09.967009 4778 scope.go:117] "RemoveContainer" containerID="6a4c01059a32908d762eeffa04e27aa1d1e32ba2e0e5fac8ef80086f88864c0c" Mar 12 14:53:09 crc kubenswrapper[4778]: I0312 14:53:09.973718 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5ksjs"] Mar 12 14:53:09 crc kubenswrapper[4778]: I0312 14:53:09.982080 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5ksjs"] Mar 12 14:53:10 crc kubenswrapper[4778]: I0312 14:53:10.016777 4778 scope.go:117] "RemoveContainer" containerID="a7c672d4b0c14852ce23d5decfb704e09ea74bef2e325221b4069fcc117c4976" Mar 12 14:53:10 crc kubenswrapper[4778]: I0312 14:53:10.268648 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ad61852-35ce-4f63-8876-d1231244f3a2" path="/var/lib/kubelet/pods/5ad61852-35ce-4f63-8876-d1231244f3a2/volumes" Mar 12 14:53:25 crc kubenswrapper[4778]: I0312 14:53:25.079812 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-x7g7g"] Mar 12 14:53:25 crc kubenswrapper[4778]: E0312 14:53:25.081130 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ad61852-35ce-4f63-8876-d1231244f3a2" containerName="registry-server" Mar 12 14:53:25 crc kubenswrapper[4778]: I0312 14:53:25.081153 4778 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="5ad61852-35ce-4f63-8876-d1231244f3a2" containerName="registry-server" Mar 12 14:53:25 crc kubenswrapper[4778]: E0312 14:53:25.081225 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ad61852-35ce-4f63-8876-d1231244f3a2" containerName="extract-utilities" Mar 12 14:53:25 crc kubenswrapper[4778]: I0312 14:53:25.081237 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ad61852-35ce-4f63-8876-d1231244f3a2" containerName="extract-utilities" Mar 12 14:53:25 crc kubenswrapper[4778]: E0312 14:53:25.081271 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ad61852-35ce-4f63-8876-d1231244f3a2" containerName="extract-content" Mar 12 14:53:25 crc kubenswrapper[4778]: I0312 14:53:25.081281 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ad61852-35ce-4f63-8876-d1231244f3a2" containerName="extract-content" Mar 12 14:53:25 crc kubenswrapper[4778]: I0312 14:53:25.081603 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ad61852-35ce-4f63-8876-d1231244f3a2" containerName="registry-server" Mar 12 14:53:25 crc kubenswrapper[4778]: I0312 14:53:25.084095 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-x7g7g" Mar 12 14:53:25 crc kubenswrapper[4778]: I0312 14:53:25.106236 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x7g7g"] Mar 12 14:53:25 crc kubenswrapper[4778]: I0312 14:53:25.198295 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a389bb4-d9de-42e4-911c-1a07358309b3-utilities\") pod \"redhat-operators-x7g7g\" (UID: \"3a389bb4-d9de-42e4-911c-1a07358309b3\") " pod="openshift-marketplace/redhat-operators-x7g7g" Mar 12 14:53:25 crc kubenswrapper[4778]: I0312 14:53:25.199364 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a389bb4-d9de-42e4-911c-1a07358309b3-catalog-content\") pod \"redhat-operators-x7g7g\" (UID: \"3a389bb4-d9de-42e4-911c-1a07358309b3\") " pod="openshift-marketplace/redhat-operators-x7g7g" Mar 12 14:53:25 crc kubenswrapper[4778]: I0312 14:53:25.199811 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdtk5\" (UniqueName: \"kubernetes.io/projected/3a389bb4-d9de-42e4-911c-1a07358309b3-kube-api-access-fdtk5\") pod \"redhat-operators-x7g7g\" (UID: \"3a389bb4-d9de-42e4-911c-1a07358309b3\") " pod="openshift-marketplace/redhat-operators-x7g7g" Mar 12 14:53:25 crc kubenswrapper[4778]: I0312 14:53:25.302045 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a389bb4-d9de-42e4-911c-1a07358309b3-utilities\") pod \"redhat-operators-x7g7g\" (UID: \"3a389bb4-d9de-42e4-911c-1a07358309b3\") " pod="openshift-marketplace/redhat-operators-x7g7g" Mar 12 14:53:25 crc kubenswrapper[4778]: I0312 14:53:25.302139 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a389bb4-d9de-42e4-911c-1a07358309b3-catalog-content\") pod \"redhat-operators-x7g7g\" (UID: \"3a389bb4-d9de-42e4-911c-1a07358309b3\") " pod="openshift-marketplace/redhat-operators-x7g7g" Mar 12 14:53:25 crc kubenswrapper[4778]: I0312 14:53:25.302264 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdtk5\" (UniqueName: \"kubernetes.io/projected/3a389bb4-d9de-42e4-911c-1a07358309b3-kube-api-access-fdtk5\") pod \"redhat-operators-x7g7g\" (UID: \"3a389bb4-d9de-42e4-911c-1a07358309b3\") " pod="openshift-marketplace/redhat-operators-x7g7g" Mar 12 14:53:25 crc kubenswrapper[4778]: I0312 14:53:25.305943 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a389bb4-d9de-42e4-911c-1a07358309b3-utilities\") pod \"redhat-operators-x7g7g\" (UID: \"3a389bb4-d9de-42e4-911c-1a07358309b3\") " pod="openshift-marketplace/redhat-operators-x7g7g" Mar 12 14:53:25 crc kubenswrapper[4778]: I0312 14:53:25.310986 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a389bb4-d9de-42e4-911c-1a07358309b3-catalog-content\") pod \"redhat-operators-x7g7g\" (UID: \"3a389bb4-d9de-42e4-911c-1a07358309b3\") " pod="openshift-marketplace/redhat-operators-x7g7g" Mar 12 14:53:25 crc kubenswrapper[4778]: I0312 14:53:25.322882 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdtk5\" (UniqueName: \"kubernetes.io/projected/3a389bb4-d9de-42e4-911c-1a07358309b3-kube-api-access-fdtk5\") pod \"redhat-operators-x7g7g\" (UID: \"3a389bb4-d9de-42e4-911c-1a07358309b3\") " pod="openshift-marketplace/redhat-operators-x7g7g" Mar 12 14:53:25 crc kubenswrapper[4778]: I0312 14:53:25.406346 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-x7g7g" Mar 12 14:53:25 crc kubenswrapper[4778]: I0312 14:53:25.867137 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x7g7g"] Mar 12 14:53:26 crc kubenswrapper[4778]: I0312 14:53:26.240559 4778 generic.go:334] "Generic (PLEG): container finished" podID="3a389bb4-d9de-42e4-911c-1a07358309b3" containerID="9ed6fef4e5b3590168e0a6e17950a074fe47863fe6c84cee2e3d716bc4a91c73" exitCode=0 Mar 12 14:53:26 crc kubenswrapper[4778]: I0312 14:53:26.240618 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7g7g" event={"ID":"3a389bb4-d9de-42e4-911c-1a07358309b3","Type":"ContainerDied","Data":"9ed6fef4e5b3590168e0a6e17950a074fe47863fe6c84cee2e3d716bc4a91c73"} Mar 12 14:53:26 crc kubenswrapper[4778]: I0312 14:53:26.240902 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7g7g" event={"ID":"3a389bb4-d9de-42e4-911c-1a07358309b3","Type":"ContainerStarted","Data":"3f81bb08789404e82e902669b50736be3048d7835a6c22fd2dce8060ad7a7309"} Mar 12 14:53:28 crc kubenswrapper[4778]: I0312 14:53:28.557905 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 14:53:28 crc kubenswrapper[4778]: I0312 14:53:28.558616 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 14:53:29 crc kubenswrapper[4778]: I0312 14:53:29.268124 4778 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-operators-x7g7g" event={"ID":"3a389bb4-d9de-42e4-911c-1a07358309b3","Type":"ContainerStarted","Data":"98ec30106f08de9786eaaf89b5f0ef4b60b88d5f013083bc3efa87082bbf30df"} Mar 12 14:53:38 crc kubenswrapper[4778]: I0312 14:53:38.352783 4778 generic.go:334] "Generic (PLEG): container finished" podID="3a389bb4-d9de-42e4-911c-1a07358309b3" containerID="98ec30106f08de9786eaaf89b5f0ef4b60b88d5f013083bc3efa87082bbf30df" exitCode=0 Mar 12 14:53:38 crc kubenswrapper[4778]: I0312 14:53:38.353000 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7g7g" event={"ID":"3a389bb4-d9de-42e4-911c-1a07358309b3","Type":"ContainerDied","Data":"98ec30106f08de9786eaaf89b5f0ef4b60b88d5f013083bc3efa87082bbf30df"} Mar 12 14:53:39 crc kubenswrapper[4778]: I0312 14:53:39.363708 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7g7g" event={"ID":"3a389bb4-d9de-42e4-911c-1a07358309b3","Type":"ContainerStarted","Data":"b50b6859d478e73a4f11f0a0e6fc939da1da9c5141ae0e6e8fbde7c1628ea200"} Mar 12 14:53:39 crc kubenswrapper[4778]: I0312 14:53:39.399910 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-x7g7g" podStartSLOduration=1.8473961829999999 podStartE2EDuration="14.399889708s" podCreationTimestamp="2026-03-12 14:53:25 +0000 UTC" firstStartedPulling="2026-03-12 14:53:26.242282794 +0000 UTC m=+6224.690978180" lastFinishedPulling="2026-03-12 14:53:38.794776309 +0000 UTC m=+6237.243471705" observedRunningTime="2026-03-12 14:53:39.397994974 +0000 UTC m=+6237.846690370" watchObservedRunningTime="2026-03-12 14:53:39.399889708 +0000 UTC m=+6237.848585104" Mar 12 14:53:45 crc kubenswrapper[4778]: I0312 14:53:45.406990 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-x7g7g" Mar 12 14:53:45 crc kubenswrapper[4778]: I0312 14:53:45.407622 4778 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-x7g7g" Mar 12 14:53:46 crc kubenswrapper[4778]: I0312 14:53:46.462431 4778 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-x7g7g" podUID="3a389bb4-d9de-42e4-911c-1a07358309b3" containerName="registry-server" probeResult="failure" output=< Mar 12 14:53:46 crc kubenswrapper[4778]: timeout: failed to connect service ":50051" within 1s Mar 12 14:53:46 crc kubenswrapper[4778]: > Mar 12 14:53:56 crc kubenswrapper[4778]: I0312 14:53:56.451556 4778 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-x7g7g" podUID="3a389bb4-d9de-42e4-911c-1a07358309b3" containerName="registry-server" probeResult="failure" output=< Mar 12 14:53:56 crc kubenswrapper[4778]: timeout: failed to connect service ":50051" within 1s Mar 12 14:53:56 crc kubenswrapper[4778]: > Mar 12 14:53:58 crc kubenswrapper[4778]: I0312 14:53:58.558307 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 14:53:58 crc kubenswrapper[4778]: I0312 14:53:58.558631 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 14:53:58 crc kubenswrapper[4778]: I0312 14:53:58.558685 4778 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" Mar 12 14:53:58 crc kubenswrapper[4778]: I0312 14:53:58.559458 4778 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f9dc5323f20567a96d1ddcd61f28e57c1fb446407246116e9b85f41f7b862a79"} pod="openshift-machine-config-operator/machine-config-daemon-2qx88" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 14:53:58 crc kubenswrapper[4778]: I0312 14:53:58.559510 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" containerID="cri-o://f9dc5323f20567a96d1ddcd61f28e57c1fb446407246116e9b85f41f7b862a79" gracePeriod=600 Mar 12 14:53:59 crc kubenswrapper[4778]: I0312 14:53:59.535582 4778 generic.go:334] "Generic (PLEG): container finished" podID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerID="f9dc5323f20567a96d1ddcd61f28e57c1fb446407246116e9b85f41f7b862a79" exitCode=0 Mar 12 14:53:59 crc kubenswrapper[4778]: I0312 14:53:59.535836 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" event={"ID":"24438fc6-dab0-4a9e-8b97-2532da76d9cd","Type":"ContainerDied","Data":"f9dc5323f20567a96d1ddcd61f28e57c1fb446407246116e9b85f41f7b862a79"} Mar 12 14:53:59 crc kubenswrapper[4778]: I0312 14:53:59.536147 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" event={"ID":"24438fc6-dab0-4a9e-8b97-2532da76d9cd","Type":"ContainerStarted","Data":"505b7ca3387092da837254cfad64e23448af9dbba84199bbb89de928d39d31e3"} Mar 12 14:53:59 crc kubenswrapper[4778]: I0312 14:53:59.536172 4778 scope.go:117] "RemoveContainer" containerID="e714113346a3db81a8ab4456acd91be95b7042ec696820890f89fb14190436c4" Mar 12 14:54:00 crc kubenswrapper[4778]: I0312 14:54:00.144928 4778 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29555454-45jhq"] Mar 12 14:54:00 crc kubenswrapper[4778]: I0312 14:54:00.147149 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555454-45jhq" Mar 12 14:54:00 crc kubenswrapper[4778]: I0312 14:54:00.149142 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 14:54:00 crc kubenswrapper[4778]: I0312 14:54:00.149164 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 14:54:00 crc kubenswrapper[4778]: I0312 14:54:00.150222 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 14:54:00 crc kubenswrapper[4778]: I0312 14:54:00.159291 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555454-45jhq"] Mar 12 14:54:00 crc kubenswrapper[4778]: I0312 14:54:00.249996 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l84cm\" (UniqueName: \"kubernetes.io/projected/3c838df0-ebc6-482a-8c4a-54e2650a121a-kube-api-access-l84cm\") pod \"auto-csr-approver-29555454-45jhq\" (UID: \"3c838df0-ebc6-482a-8c4a-54e2650a121a\") " pod="openshift-infra/auto-csr-approver-29555454-45jhq" Mar 12 14:54:00 crc kubenswrapper[4778]: I0312 14:54:00.352372 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l84cm\" (UniqueName: \"kubernetes.io/projected/3c838df0-ebc6-482a-8c4a-54e2650a121a-kube-api-access-l84cm\") pod \"auto-csr-approver-29555454-45jhq\" (UID: \"3c838df0-ebc6-482a-8c4a-54e2650a121a\") " pod="openshift-infra/auto-csr-approver-29555454-45jhq" Mar 12 14:54:00 crc kubenswrapper[4778]: I0312 14:54:00.374379 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l84cm\" (UniqueName: 
\"kubernetes.io/projected/3c838df0-ebc6-482a-8c4a-54e2650a121a-kube-api-access-l84cm\") pod \"auto-csr-approver-29555454-45jhq\" (UID: \"3c838df0-ebc6-482a-8c4a-54e2650a121a\") " pod="openshift-infra/auto-csr-approver-29555454-45jhq" Mar 12 14:54:00 crc kubenswrapper[4778]: I0312 14:54:00.467436 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555454-45jhq" Mar 12 14:54:01 crc kubenswrapper[4778]: I0312 14:54:01.003774 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555454-45jhq"] Mar 12 14:54:01 crc kubenswrapper[4778]: W0312 14:54:01.003802 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c838df0_ebc6_482a_8c4a_54e2650a121a.slice/crio-74a5f56be069b7c9eeb83716a3c1db85f0876c5d29e54cb7bf731b147dd7cb92 WatchSource:0}: Error finding container 74a5f56be069b7c9eeb83716a3c1db85f0876c5d29e54cb7bf731b147dd7cb92: Status 404 returned error can't find the container with id 74a5f56be069b7c9eeb83716a3c1db85f0876c5d29e54cb7bf731b147dd7cb92 Mar 12 14:54:01 crc kubenswrapper[4778]: I0312 14:54:01.566068 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555454-45jhq" event={"ID":"3c838df0-ebc6-482a-8c4a-54e2650a121a","Type":"ContainerStarted","Data":"74a5f56be069b7c9eeb83716a3c1db85f0876c5d29e54cb7bf731b147dd7cb92"} Mar 12 14:54:02 crc kubenswrapper[4778]: I0312 14:54:02.576280 4778 generic.go:334] "Generic (PLEG): container finished" podID="3c838df0-ebc6-482a-8c4a-54e2650a121a" containerID="a9ba0939d14aff103f5b46662e1f25d349a1d48a1eb2501077c92e0d8ad3aee1" exitCode=0 Mar 12 14:54:02 crc kubenswrapper[4778]: I0312 14:54:02.576353 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555454-45jhq" 
event={"ID":"3c838df0-ebc6-482a-8c4a-54e2650a121a","Type":"ContainerDied","Data":"a9ba0939d14aff103f5b46662e1f25d349a1d48a1eb2501077c92e0d8ad3aee1"} Mar 12 14:54:04 crc kubenswrapper[4778]: I0312 14:54:04.105684 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555454-45jhq" Mar 12 14:54:04 crc kubenswrapper[4778]: I0312 14:54:04.238599 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l84cm\" (UniqueName: \"kubernetes.io/projected/3c838df0-ebc6-482a-8c4a-54e2650a121a-kube-api-access-l84cm\") pod \"3c838df0-ebc6-482a-8c4a-54e2650a121a\" (UID: \"3c838df0-ebc6-482a-8c4a-54e2650a121a\") " Mar 12 14:54:04 crc kubenswrapper[4778]: I0312 14:54:04.252226 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c838df0-ebc6-482a-8c4a-54e2650a121a-kube-api-access-l84cm" (OuterVolumeSpecName: "kube-api-access-l84cm") pod "3c838df0-ebc6-482a-8c4a-54e2650a121a" (UID: "3c838df0-ebc6-482a-8c4a-54e2650a121a"). InnerVolumeSpecName "kube-api-access-l84cm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:54:04 crc kubenswrapper[4778]: I0312 14:54:04.343066 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l84cm\" (UniqueName: \"kubernetes.io/projected/3c838df0-ebc6-482a-8c4a-54e2650a121a-kube-api-access-l84cm\") on node \"crc\" DevicePath \"\"" Mar 12 14:54:04 crc kubenswrapper[4778]: I0312 14:54:04.599077 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555454-45jhq" event={"ID":"3c838df0-ebc6-482a-8c4a-54e2650a121a","Type":"ContainerDied","Data":"74a5f56be069b7c9eeb83716a3c1db85f0876c5d29e54cb7bf731b147dd7cb92"} Mar 12 14:54:04 crc kubenswrapper[4778]: I0312 14:54:04.599121 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74a5f56be069b7c9eeb83716a3c1db85f0876c5d29e54cb7bf731b147dd7cb92" Mar 12 14:54:04 crc kubenswrapper[4778]: I0312 14:54:04.599214 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555454-45jhq" Mar 12 14:54:05 crc kubenswrapper[4778]: I0312 14:54:05.188361 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555448-g9lqj"] Mar 12 14:54:05 crc kubenswrapper[4778]: I0312 14:54:05.195839 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555448-g9lqj"] Mar 12 14:54:06 crc kubenswrapper[4778]: I0312 14:54:06.272482 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53a65a66-de4e-413e-a175-4d12db4e3f26" path="/var/lib/kubelet/pods/53a65a66-de4e-413e-a175-4d12db4e3f26/volumes" Mar 12 14:54:06 crc kubenswrapper[4778]: I0312 14:54:06.469895 4778 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-x7g7g" podUID="3a389bb4-d9de-42e4-911c-1a07358309b3" containerName="registry-server" probeResult="failure" output=< Mar 12 14:54:06 crc kubenswrapper[4778]: 
timeout: failed to connect service ":50051" within 1s Mar 12 14:54:06 crc kubenswrapper[4778]: > Mar 12 14:54:15 crc kubenswrapper[4778]: I0312 14:54:15.459557 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-x7g7g" Mar 12 14:54:15 crc kubenswrapper[4778]: I0312 14:54:15.523894 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-x7g7g" Mar 12 14:54:15 crc kubenswrapper[4778]: I0312 14:54:15.716144 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x7g7g"] Mar 12 14:54:16 crc kubenswrapper[4778]: I0312 14:54:16.728295 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-x7g7g" podUID="3a389bb4-d9de-42e4-911c-1a07358309b3" containerName="registry-server" containerID="cri-o://b50b6859d478e73a4f11f0a0e6fc939da1da9c5141ae0e6e8fbde7c1628ea200" gracePeriod=2 Mar 12 14:54:17 crc kubenswrapper[4778]: I0312 14:54:17.266599 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-x7g7g" Mar 12 14:54:17 crc kubenswrapper[4778]: I0312 14:54:17.316048 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a389bb4-d9de-42e4-911c-1a07358309b3-catalog-content\") pod \"3a389bb4-d9de-42e4-911c-1a07358309b3\" (UID: \"3a389bb4-d9de-42e4-911c-1a07358309b3\") " Mar 12 14:54:17 crc kubenswrapper[4778]: I0312 14:54:17.316123 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdtk5\" (UniqueName: \"kubernetes.io/projected/3a389bb4-d9de-42e4-911c-1a07358309b3-kube-api-access-fdtk5\") pod \"3a389bb4-d9de-42e4-911c-1a07358309b3\" (UID: \"3a389bb4-d9de-42e4-911c-1a07358309b3\") " Mar 12 14:54:17 crc kubenswrapper[4778]: I0312 14:54:17.316173 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a389bb4-d9de-42e4-911c-1a07358309b3-utilities\") pod \"3a389bb4-d9de-42e4-911c-1a07358309b3\" (UID: \"3a389bb4-d9de-42e4-911c-1a07358309b3\") " Mar 12 14:54:17 crc kubenswrapper[4778]: I0312 14:54:17.317864 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a389bb4-d9de-42e4-911c-1a07358309b3-utilities" (OuterVolumeSpecName: "utilities") pod "3a389bb4-d9de-42e4-911c-1a07358309b3" (UID: "3a389bb4-d9de-42e4-911c-1a07358309b3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:54:17 crc kubenswrapper[4778]: I0312 14:54:17.326544 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a389bb4-d9de-42e4-911c-1a07358309b3-kube-api-access-fdtk5" (OuterVolumeSpecName: "kube-api-access-fdtk5") pod "3a389bb4-d9de-42e4-911c-1a07358309b3" (UID: "3a389bb4-d9de-42e4-911c-1a07358309b3"). InnerVolumeSpecName "kube-api-access-fdtk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:54:17 crc kubenswrapper[4778]: I0312 14:54:17.419247 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdtk5\" (UniqueName: \"kubernetes.io/projected/3a389bb4-d9de-42e4-911c-1a07358309b3-kube-api-access-fdtk5\") on node \"crc\" DevicePath \"\"" Mar 12 14:54:17 crc kubenswrapper[4778]: I0312 14:54:17.419291 4778 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a389bb4-d9de-42e4-911c-1a07358309b3-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 14:54:17 crc kubenswrapper[4778]: I0312 14:54:17.454124 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a389bb4-d9de-42e4-911c-1a07358309b3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3a389bb4-d9de-42e4-911c-1a07358309b3" (UID: "3a389bb4-d9de-42e4-911c-1a07358309b3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:54:17 crc kubenswrapper[4778]: I0312 14:54:17.520877 4778 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a389bb4-d9de-42e4-911c-1a07358309b3-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 14:54:17 crc kubenswrapper[4778]: I0312 14:54:17.745329 4778 generic.go:334] "Generic (PLEG): container finished" podID="3a389bb4-d9de-42e4-911c-1a07358309b3" containerID="b50b6859d478e73a4f11f0a0e6fc939da1da9c5141ae0e6e8fbde7c1628ea200" exitCode=0 Mar 12 14:54:17 crc kubenswrapper[4778]: I0312 14:54:17.745410 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-x7g7g" Mar 12 14:54:17 crc kubenswrapper[4778]: I0312 14:54:17.745440 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7g7g" event={"ID":"3a389bb4-d9de-42e4-911c-1a07358309b3","Type":"ContainerDied","Data":"b50b6859d478e73a4f11f0a0e6fc939da1da9c5141ae0e6e8fbde7c1628ea200"} Mar 12 14:54:17 crc kubenswrapper[4778]: I0312 14:54:17.745825 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7g7g" event={"ID":"3a389bb4-d9de-42e4-911c-1a07358309b3","Type":"ContainerDied","Data":"3f81bb08789404e82e902669b50736be3048d7835a6c22fd2dce8060ad7a7309"} Mar 12 14:54:17 crc kubenswrapper[4778]: I0312 14:54:17.745875 4778 scope.go:117] "RemoveContainer" containerID="b50b6859d478e73a4f11f0a0e6fc939da1da9c5141ae0e6e8fbde7c1628ea200" Mar 12 14:54:17 crc kubenswrapper[4778]: I0312 14:54:17.781805 4778 scope.go:117] "RemoveContainer" containerID="98ec30106f08de9786eaaf89b5f0ef4b60b88d5f013083bc3efa87082bbf30df" Mar 12 14:54:17 crc kubenswrapper[4778]: I0312 14:54:17.805131 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x7g7g"] Mar 12 14:54:17 crc kubenswrapper[4778]: I0312 14:54:17.819784 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-x7g7g"] Mar 12 14:54:17 crc kubenswrapper[4778]: I0312 14:54:17.849903 4778 scope.go:117] "RemoveContainer" containerID="9ed6fef4e5b3590168e0a6e17950a074fe47863fe6c84cee2e3d716bc4a91c73" Mar 12 14:54:17 crc kubenswrapper[4778]: I0312 14:54:17.882327 4778 scope.go:117] "RemoveContainer" containerID="b50b6859d478e73a4f11f0a0e6fc939da1da9c5141ae0e6e8fbde7c1628ea200" Mar 12 14:54:17 crc kubenswrapper[4778]: E0312 14:54:17.882917 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b50b6859d478e73a4f11f0a0e6fc939da1da9c5141ae0e6e8fbde7c1628ea200\": container with ID starting with b50b6859d478e73a4f11f0a0e6fc939da1da9c5141ae0e6e8fbde7c1628ea200 not found: ID does not exist" containerID="b50b6859d478e73a4f11f0a0e6fc939da1da9c5141ae0e6e8fbde7c1628ea200" Mar 12 14:54:17 crc kubenswrapper[4778]: I0312 14:54:17.882962 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b50b6859d478e73a4f11f0a0e6fc939da1da9c5141ae0e6e8fbde7c1628ea200"} err="failed to get container status \"b50b6859d478e73a4f11f0a0e6fc939da1da9c5141ae0e6e8fbde7c1628ea200\": rpc error: code = NotFound desc = could not find container \"b50b6859d478e73a4f11f0a0e6fc939da1da9c5141ae0e6e8fbde7c1628ea200\": container with ID starting with b50b6859d478e73a4f11f0a0e6fc939da1da9c5141ae0e6e8fbde7c1628ea200 not found: ID does not exist" Mar 12 14:54:17 crc kubenswrapper[4778]: I0312 14:54:17.882995 4778 scope.go:117] "RemoveContainer" containerID="98ec30106f08de9786eaaf89b5f0ef4b60b88d5f013083bc3efa87082bbf30df" Mar 12 14:54:17 crc kubenswrapper[4778]: E0312 14:54:17.883800 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98ec30106f08de9786eaaf89b5f0ef4b60b88d5f013083bc3efa87082bbf30df\": container with ID starting with 98ec30106f08de9786eaaf89b5f0ef4b60b88d5f013083bc3efa87082bbf30df not found: ID does not exist" containerID="98ec30106f08de9786eaaf89b5f0ef4b60b88d5f013083bc3efa87082bbf30df" Mar 12 14:54:17 crc kubenswrapper[4778]: I0312 14:54:17.883886 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98ec30106f08de9786eaaf89b5f0ef4b60b88d5f013083bc3efa87082bbf30df"} err="failed to get container status \"98ec30106f08de9786eaaf89b5f0ef4b60b88d5f013083bc3efa87082bbf30df\": rpc error: code = NotFound desc = could not find container \"98ec30106f08de9786eaaf89b5f0ef4b60b88d5f013083bc3efa87082bbf30df\": container with ID 
starting with 98ec30106f08de9786eaaf89b5f0ef4b60b88d5f013083bc3efa87082bbf30df not found: ID does not exist" Mar 12 14:54:17 crc kubenswrapper[4778]: I0312 14:54:17.883952 4778 scope.go:117] "RemoveContainer" containerID="9ed6fef4e5b3590168e0a6e17950a074fe47863fe6c84cee2e3d716bc4a91c73" Mar 12 14:54:17 crc kubenswrapper[4778]: E0312 14:54:17.884555 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ed6fef4e5b3590168e0a6e17950a074fe47863fe6c84cee2e3d716bc4a91c73\": container with ID starting with 9ed6fef4e5b3590168e0a6e17950a074fe47863fe6c84cee2e3d716bc4a91c73 not found: ID does not exist" containerID="9ed6fef4e5b3590168e0a6e17950a074fe47863fe6c84cee2e3d716bc4a91c73" Mar 12 14:54:17 crc kubenswrapper[4778]: I0312 14:54:17.884599 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ed6fef4e5b3590168e0a6e17950a074fe47863fe6c84cee2e3d716bc4a91c73"} err="failed to get container status \"9ed6fef4e5b3590168e0a6e17950a074fe47863fe6c84cee2e3d716bc4a91c73\": rpc error: code = NotFound desc = could not find container \"9ed6fef4e5b3590168e0a6e17950a074fe47863fe6c84cee2e3d716bc4a91c73\": container with ID starting with 9ed6fef4e5b3590168e0a6e17950a074fe47863fe6c84cee2e3d716bc4a91c73 not found: ID does not exist" Mar 12 14:54:18 crc kubenswrapper[4778]: I0312 14:54:18.265120 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a389bb4-d9de-42e4-911c-1a07358309b3" path="/var/lib/kubelet/pods/3a389bb4-d9de-42e4-911c-1a07358309b3/volumes" Mar 12 14:54:24 crc kubenswrapper[4778]: I0312 14:54:24.218000 4778 scope.go:117] "RemoveContainer" containerID="26a11a81934702ff4eaece8862eb99dd5a6954a851baea01b2b49d973eba34bc" Mar 12 14:55:58 crc kubenswrapper[4778]: I0312 14:55:58.557531 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 14:55:58 crc kubenswrapper[4778]: I0312 14:55:58.558071 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 14:56:00 crc kubenswrapper[4778]: I0312 14:56:00.146613 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555456-vtfp6"] Mar 12 14:56:00 crc kubenswrapper[4778]: E0312 14:56:00.147423 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a389bb4-d9de-42e4-911c-1a07358309b3" containerName="extract-utilities" Mar 12 14:56:00 crc kubenswrapper[4778]: I0312 14:56:00.147438 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a389bb4-d9de-42e4-911c-1a07358309b3" containerName="extract-utilities" Mar 12 14:56:00 crc kubenswrapper[4778]: E0312 14:56:00.147453 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a389bb4-d9de-42e4-911c-1a07358309b3" containerName="extract-content" Mar 12 14:56:00 crc kubenswrapper[4778]: I0312 14:56:00.147461 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a389bb4-d9de-42e4-911c-1a07358309b3" containerName="extract-content" Mar 12 14:56:00 crc kubenswrapper[4778]: E0312 14:56:00.147473 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a389bb4-d9de-42e4-911c-1a07358309b3" containerName="registry-server" Mar 12 14:56:00 crc kubenswrapper[4778]: I0312 14:56:00.147481 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a389bb4-d9de-42e4-911c-1a07358309b3" containerName="registry-server" Mar 12 14:56:00 crc kubenswrapper[4778]: E0312 14:56:00.147496 4778 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="3c838df0-ebc6-482a-8c4a-54e2650a121a" containerName="oc" Mar 12 14:56:00 crc kubenswrapper[4778]: I0312 14:56:00.147503 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c838df0-ebc6-482a-8c4a-54e2650a121a" containerName="oc" Mar 12 14:56:00 crc kubenswrapper[4778]: I0312 14:56:00.147713 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c838df0-ebc6-482a-8c4a-54e2650a121a" containerName="oc" Mar 12 14:56:00 crc kubenswrapper[4778]: I0312 14:56:00.147735 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a389bb4-d9de-42e4-911c-1a07358309b3" containerName="registry-server" Mar 12 14:56:00 crc kubenswrapper[4778]: I0312 14:56:00.148453 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555456-vtfp6" Mar 12 14:56:00 crc kubenswrapper[4778]: I0312 14:56:00.152677 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 14:56:00 crc kubenswrapper[4778]: I0312 14:56:00.152998 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 14:56:00 crc kubenswrapper[4778]: I0312 14:56:00.153404 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 14:56:00 crc kubenswrapper[4778]: I0312 14:56:00.159848 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555456-vtfp6"] Mar 12 14:56:00 crc kubenswrapper[4778]: I0312 14:56:00.289669 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp58k\" (UniqueName: \"kubernetes.io/projected/fc0add53-7611-4f91-bf0b-cf5fea5bb9d7-kube-api-access-sp58k\") pod \"auto-csr-approver-29555456-vtfp6\" (UID: \"fc0add53-7611-4f91-bf0b-cf5fea5bb9d7\") " pod="openshift-infra/auto-csr-approver-29555456-vtfp6" Mar 12 
14:56:00 crc kubenswrapper[4778]: I0312 14:56:00.391820 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sp58k\" (UniqueName: \"kubernetes.io/projected/fc0add53-7611-4f91-bf0b-cf5fea5bb9d7-kube-api-access-sp58k\") pod \"auto-csr-approver-29555456-vtfp6\" (UID: \"fc0add53-7611-4f91-bf0b-cf5fea5bb9d7\") " pod="openshift-infra/auto-csr-approver-29555456-vtfp6" Mar 12 14:56:00 crc kubenswrapper[4778]: I0312 14:56:00.408708 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sp58k\" (UniqueName: \"kubernetes.io/projected/fc0add53-7611-4f91-bf0b-cf5fea5bb9d7-kube-api-access-sp58k\") pod \"auto-csr-approver-29555456-vtfp6\" (UID: \"fc0add53-7611-4f91-bf0b-cf5fea5bb9d7\") " pod="openshift-infra/auto-csr-approver-29555456-vtfp6" Mar 12 14:56:00 crc kubenswrapper[4778]: I0312 14:56:00.489224 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555456-vtfp6" Mar 12 14:56:00 crc kubenswrapper[4778]: I0312 14:56:00.957865 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555456-vtfp6"] Mar 12 14:56:00 crc kubenswrapper[4778]: W0312 14:56:00.964004 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc0add53_7611_4f91_bf0b_cf5fea5bb9d7.slice/crio-b0c4898d6bd30fc8c90f320f594da5113bb471287af8bcbfb72dd799afc629e0 WatchSource:0}: Error finding container b0c4898d6bd30fc8c90f320f594da5113bb471287af8bcbfb72dd799afc629e0: Status 404 returned error can't find the container with id b0c4898d6bd30fc8c90f320f594da5113bb471287af8bcbfb72dd799afc629e0 Mar 12 14:56:01 crc kubenswrapper[4778]: I0312 14:56:01.734984 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555456-vtfp6" 
event={"ID":"fc0add53-7611-4f91-bf0b-cf5fea5bb9d7","Type":"ContainerStarted","Data":"b0c4898d6bd30fc8c90f320f594da5113bb471287af8bcbfb72dd799afc629e0"} Mar 12 14:56:04 crc kubenswrapper[4778]: I0312 14:56:04.796397 4778 generic.go:334] "Generic (PLEG): container finished" podID="fc0add53-7611-4f91-bf0b-cf5fea5bb9d7" containerID="d7d76c5b2f5b6d4767497e4e99746de9373b74f615023933a47cba956a1bacb0" exitCode=0 Mar 12 14:56:04 crc kubenswrapper[4778]: I0312 14:56:04.796471 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555456-vtfp6" event={"ID":"fc0add53-7611-4f91-bf0b-cf5fea5bb9d7","Type":"ContainerDied","Data":"d7d76c5b2f5b6d4767497e4e99746de9373b74f615023933a47cba956a1bacb0"} Mar 12 14:56:06 crc kubenswrapper[4778]: I0312 14:56:06.209607 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555456-vtfp6" Mar 12 14:56:06 crc kubenswrapper[4778]: I0312 14:56:06.304848 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sp58k\" (UniqueName: \"kubernetes.io/projected/fc0add53-7611-4f91-bf0b-cf5fea5bb9d7-kube-api-access-sp58k\") pod \"fc0add53-7611-4f91-bf0b-cf5fea5bb9d7\" (UID: \"fc0add53-7611-4f91-bf0b-cf5fea5bb9d7\") " Mar 12 14:56:06 crc kubenswrapper[4778]: I0312 14:56:06.310507 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc0add53-7611-4f91-bf0b-cf5fea5bb9d7-kube-api-access-sp58k" (OuterVolumeSpecName: "kube-api-access-sp58k") pod "fc0add53-7611-4f91-bf0b-cf5fea5bb9d7" (UID: "fc0add53-7611-4f91-bf0b-cf5fea5bb9d7"). InnerVolumeSpecName "kube-api-access-sp58k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:56:06 crc kubenswrapper[4778]: I0312 14:56:06.407598 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sp58k\" (UniqueName: \"kubernetes.io/projected/fc0add53-7611-4f91-bf0b-cf5fea5bb9d7-kube-api-access-sp58k\") on node \"crc\" DevicePath \"\"" Mar 12 14:56:06 crc kubenswrapper[4778]: I0312 14:56:06.813868 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555456-vtfp6" event={"ID":"fc0add53-7611-4f91-bf0b-cf5fea5bb9d7","Type":"ContainerDied","Data":"b0c4898d6bd30fc8c90f320f594da5113bb471287af8bcbfb72dd799afc629e0"} Mar 12 14:56:06 crc kubenswrapper[4778]: I0312 14:56:06.813914 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0c4898d6bd30fc8c90f320f594da5113bb471287af8bcbfb72dd799afc629e0" Mar 12 14:56:06 crc kubenswrapper[4778]: I0312 14:56:06.813943 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555456-vtfp6" Mar 12 14:56:07 crc kubenswrapper[4778]: I0312 14:56:07.284380 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555450-wng5r"] Mar 12 14:56:07 crc kubenswrapper[4778]: I0312 14:56:07.293457 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555450-wng5r"] Mar 12 14:56:08 crc kubenswrapper[4778]: I0312 14:56:08.265236 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58bc575b-de62-42e8-8393-0cdebe8a1ec7" path="/var/lib/kubelet/pods/58bc575b-de62-42e8-8393-0cdebe8a1ec7/volumes" Mar 12 14:56:24 crc kubenswrapper[4778]: I0312 14:56:24.323342 4778 scope.go:117] "RemoveContainer" containerID="b673bf4baccca3cbc88953e0302f6d44002e09551d5876af0fec26f563392bf0" Mar 12 14:56:28 crc kubenswrapper[4778]: I0312 14:56:28.558109 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 14:56:28 crc kubenswrapper[4778]: I0312 14:56:28.558672 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 14:56:58 crc kubenswrapper[4778]: I0312 14:56:58.558079 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 14:56:58 crc kubenswrapper[4778]: I0312 14:56:58.558889 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 14:56:58 crc kubenswrapper[4778]: I0312 14:56:58.558972 4778 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" Mar 12 14:56:58 crc kubenswrapper[4778]: I0312 14:56:58.560077 4778 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"505b7ca3387092da837254cfad64e23448af9dbba84199bbb89de928d39d31e3"} pod="openshift-machine-config-operator/machine-config-daemon-2qx88" containerMessage="Container machine-config-daemon failed liveness probe, will be 
restarted" Mar 12 14:56:58 crc kubenswrapper[4778]: I0312 14:56:58.560140 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" containerID="cri-o://505b7ca3387092da837254cfad64e23448af9dbba84199bbb89de928d39d31e3" gracePeriod=600 Mar 12 14:56:58 crc kubenswrapper[4778]: E0312 14:56:58.684224 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:56:59 crc kubenswrapper[4778]: I0312 14:56:59.296884 4778 generic.go:334] "Generic (PLEG): container finished" podID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerID="505b7ca3387092da837254cfad64e23448af9dbba84199bbb89de928d39d31e3" exitCode=0 Mar 12 14:56:59 crc kubenswrapper[4778]: I0312 14:56:59.296978 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" event={"ID":"24438fc6-dab0-4a9e-8b97-2532da76d9cd","Type":"ContainerDied","Data":"505b7ca3387092da837254cfad64e23448af9dbba84199bbb89de928d39d31e3"} Mar 12 14:56:59 crc kubenswrapper[4778]: I0312 14:56:59.297225 4778 scope.go:117] "RemoveContainer" containerID="f9dc5323f20567a96d1ddcd61f28e57c1fb446407246116e9b85f41f7b862a79" Mar 12 14:56:59 crc kubenswrapper[4778]: I0312 14:56:59.297867 4778 scope.go:117] "RemoveContainer" containerID="505b7ca3387092da837254cfad64e23448af9dbba84199bbb89de928d39d31e3" Mar 12 14:56:59 crc kubenswrapper[4778]: E0312 14:56:59.298213 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:57:10 crc kubenswrapper[4778]: I0312 14:57:10.255017 4778 scope.go:117] "RemoveContainer" containerID="505b7ca3387092da837254cfad64e23448af9dbba84199bbb89de928d39d31e3" Mar 12 14:57:10 crc kubenswrapper[4778]: E0312 14:57:10.256085 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:57:22 crc kubenswrapper[4778]: I0312 14:57:22.407816 4778 scope.go:117] "RemoveContainer" containerID="505b7ca3387092da837254cfad64e23448af9dbba84199bbb89de928d39d31e3" Mar 12 14:57:22 crc kubenswrapper[4778]: E0312 14:57:22.408628 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:57:34 crc kubenswrapper[4778]: I0312 14:57:34.253832 4778 scope.go:117] "RemoveContainer" containerID="505b7ca3387092da837254cfad64e23448af9dbba84199bbb89de928d39d31e3" Mar 12 14:57:34 crc kubenswrapper[4778]: E0312 14:57:34.254645 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:57:48 crc kubenswrapper[4778]: I0312 14:57:48.254362 4778 scope.go:117] "RemoveContainer" containerID="505b7ca3387092da837254cfad64e23448af9dbba84199bbb89de928d39d31e3" Mar 12 14:57:48 crc kubenswrapper[4778]: E0312 14:57:48.255139 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:57:55 crc kubenswrapper[4778]: I0312 14:57:55.774573 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-h6mpk"] Mar 12 14:57:55 crc kubenswrapper[4778]: E0312 14:57:55.776264 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc0add53-7611-4f91-bf0b-cf5fea5bb9d7" containerName="oc" Mar 12 14:57:55 crc kubenswrapper[4778]: I0312 14:57:55.776295 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc0add53-7611-4f91-bf0b-cf5fea5bb9d7" containerName="oc" Mar 12 14:57:55 crc kubenswrapper[4778]: I0312 14:57:55.776735 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc0add53-7611-4f91-bf0b-cf5fea5bb9d7" containerName="oc" Mar 12 14:57:55 crc kubenswrapper[4778]: I0312 14:57:55.779342 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h6mpk" Mar 12 14:57:55 crc kubenswrapper[4778]: I0312 14:57:55.792791 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h6mpk"] Mar 12 14:57:55 crc kubenswrapper[4778]: I0312 14:57:55.921874 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51704535-7590-40f1-8114-59f5032b1c86-catalog-content\") pod \"certified-operators-h6mpk\" (UID: \"51704535-7590-40f1-8114-59f5032b1c86\") " pod="openshift-marketplace/certified-operators-h6mpk" Mar 12 14:57:55 crc kubenswrapper[4778]: I0312 14:57:55.921955 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smnsf\" (UniqueName: \"kubernetes.io/projected/51704535-7590-40f1-8114-59f5032b1c86-kube-api-access-smnsf\") pod \"certified-operators-h6mpk\" (UID: \"51704535-7590-40f1-8114-59f5032b1c86\") " pod="openshift-marketplace/certified-operators-h6mpk" Mar 12 14:57:55 crc kubenswrapper[4778]: I0312 14:57:55.922198 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51704535-7590-40f1-8114-59f5032b1c86-utilities\") pod \"certified-operators-h6mpk\" (UID: \"51704535-7590-40f1-8114-59f5032b1c86\") " pod="openshift-marketplace/certified-operators-h6mpk" Mar 12 14:57:56 crc kubenswrapper[4778]: I0312 14:57:56.024207 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51704535-7590-40f1-8114-59f5032b1c86-catalog-content\") pod \"certified-operators-h6mpk\" (UID: \"51704535-7590-40f1-8114-59f5032b1c86\") " pod="openshift-marketplace/certified-operators-h6mpk" Mar 12 14:57:56 crc kubenswrapper[4778]: I0312 14:57:56.024273 4778 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-smnsf\" (UniqueName: \"kubernetes.io/projected/51704535-7590-40f1-8114-59f5032b1c86-kube-api-access-smnsf\") pod \"certified-operators-h6mpk\" (UID: \"51704535-7590-40f1-8114-59f5032b1c86\") " pod="openshift-marketplace/certified-operators-h6mpk" Mar 12 14:57:56 crc kubenswrapper[4778]: I0312 14:57:56.024378 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51704535-7590-40f1-8114-59f5032b1c86-utilities\") pod \"certified-operators-h6mpk\" (UID: \"51704535-7590-40f1-8114-59f5032b1c86\") " pod="openshift-marketplace/certified-operators-h6mpk" Mar 12 14:57:56 crc kubenswrapper[4778]: I0312 14:57:56.024834 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51704535-7590-40f1-8114-59f5032b1c86-catalog-content\") pod \"certified-operators-h6mpk\" (UID: \"51704535-7590-40f1-8114-59f5032b1c86\") " pod="openshift-marketplace/certified-operators-h6mpk" Mar 12 14:57:56 crc kubenswrapper[4778]: I0312 14:57:56.024904 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51704535-7590-40f1-8114-59f5032b1c86-utilities\") pod \"certified-operators-h6mpk\" (UID: \"51704535-7590-40f1-8114-59f5032b1c86\") " pod="openshift-marketplace/certified-operators-h6mpk" Mar 12 14:57:56 crc kubenswrapper[4778]: I0312 14:57:56.047968 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smnsf\" (UniqueName: \"kubernetes.io/projected/51704535-7590-40f1-8114-59f5032b1c86-kube-api-access-smnsf\") pod \"certified-operators-h6mpk\" (UID: \"51704535-7590-40f1-8114-59f5032b1c86\") " pod="openshift-marketplace/certified-operators-h6mpk" Mar 12 14:57:56 crc kubenswrapper[4778]: I0312 14:57:56.112277 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h6mpk" Mar 12 14:57:56 crc kubenswrapper[4778]: I0312 14:57:56.582279 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h6mpk"] Mar 12 14:57:56 crc kubenswrapper[4778]: I0312 14:57:56.911775 4778 generic.go:334] "Generic (PLEG): container finished" podID="51704535-7590-40f1-8114-59f5032b1c86" containerID="b419a433a52be5384840da5dbbba31c34114c87a32568aa1691306f3eab3966e" exitCode=0 Mar 12 14:57:56 crc kubenswrapper[4778]: I0312 14:57:56.911822 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h6mpk" event={"ID":"51704535-7590-40f1-8114-59f5032b1c86","Type":"ContainerDied","Data":"b419a433a52be5384840da5dbbba31c34114c87a32568aa1691306f3eab3966e"} Mar 12 14:57:56 crc kubenswrapper[4778]: I0312 14:57:56.911846 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h6mpk" event={"ID":"51704535-7590-40f1-8114-59f5032b1c86","Type":"ContainerStarted","Data":"4b7a7e0582fa56ba0004e62c821c92769140c78141a5756002ed142e7685d7aa"} Mar 12 14:57:56 crc kubenswrapper[4778]: I0312 14:57:56.914119 4778 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 14:57:57 crc kubenswrapper[4778]: I0312 14:57:57.925283 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h6mpk" event={"ID":"51704535-7590-40f1-8114-59f5032b1c86","Type":"ContainerStarted","Data":"5c1140e2a3da443934a38be5417957e3a9e964721156d718e7e3eeb6dc137835"} Mar 12 14:57:59 crc kubenswrapper[4778]: E0312 14:57:59.269809 4778 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51704535_7590_40f1_8114_59f5032b1c86.slice/crio-5c1140e2a3da443934a38be5417957e3a9e964721156d718e7e3eeb6dc137835.scope\": RecentStats: unable to find data in memory cache]" Mar 12 14:57:59 crc kubenswrapper[4778]: I0312 14:57:59.947713 4778 generic.go:334] "Generic (PLEG): container finished" podID="51704535-7590-40f1-8114-59f5032b1c86" containerID="5c1140e2a3da443934a38be5417957e3a9e964721156d718e7e3eeb6dc137835" exitCode=0 Mar 12 14:57:59 crc kubenswrapper[4778]: I0312 14:57:59.947822 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h6mpk" event={"ID":"51704535-7590-40f1-8114-59f5032b1c86","Type":"ContainerDied","Data":"5c1140e2a3da443934a38be5417957e3a9e964721156d718e7e3eeb6dc137835"} Mar 12 14:58:00 crc kubenswrapper[4778]: I0312 14:58:00.151853 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555458-2kqth"] Mar 12 14:58:00 crc kubenswrapper[4778]: I0312 14:58:00.154410 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555458-2kqth" Mar 12 14:58:00 crc kubenswrapper[4778]: I0312 14:58:00.157590 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 14:58:00 crc kubenswrapper[4778]: I0312 14:58:00.158149 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 14:58:00 crc kubenswrapper[4778]: I0312 14:58:00.158599 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 14:58:00 crc kubenswrapper[4778]: I0312 14:58:00.162316 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555458-2kqth"] Mar 12 14:58:00 crc kubenswrapper[4778]: I0312 14:58:00.230820 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kp75b\" (UniqueName: \"kubernetes.io/projected/0fa9dd73-2656-43b3-a6cb-634d312a166e-kube-api-access-kp75b\") pod \"auto-csr-approver-29555458-2kqth\" (UID: \"0fa9dd73-2656-43b3-a6cb-634d312a166e\") " pod="openshift-infra/auto-csr-approver-29555458-2kqth" Mar 12 14:58:00 crc kubenswrapper[4778]: I0312 14:58:00.333052 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kp75b\" (UniqueName: \"kubernetes.io/projected/0fa9dd73-2656-43b3-a6cb-634d312a166e-kube-api-access-kp75b\") pod \"auto-csr-approver-29555458-2kqth\" (UID: \"0fa9dd73-2656-43b3-a6cb-634d312a166e\") " pod="openshift-infra/auto-csr-approver-29555458-2kqth" Mar 12 14:58:00 crc kubenswrapper[4778]: I0312 14:58:00.366989 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kp75b\" (UniqueName: \"kubernetes.io/projected/0fa9dd73-2656-43b3-a6cb-634d312a166e-kube-api-access-kp75b\") pod \"auto-csr-approver-29555458-2kqth\" (UID: \"0fa9dd73-2656-43b3-a6cb-634d312a166e\") " 
pod="openshift-infra/auto-csr-approver-29555458-2kqth" Mar 12 14:58:00 crc kubenswrapper[4778]: I0312 14:58:00.477936 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555458-2kqth" Mar 12 14:58:00 crc kubenswrapper[4778]: I0312 14:58:00.930507 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555458-2kqth"] Mar 12 14:58:00 crc kubenswrapper[4778]: I0312 14:58:00.960123 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h6mpk" event={"ID":"51704535-7590-40f1-8114-59f5032b1c86","Type":"ContainerStarted","Data":"3a32f3e5d5134f200ee666a165f28c8a9bd2278b33b760b45d99e2265913bf59"} Mar 12 14:58:00 crc kubenswrapper[4778]: I0312 14:58:00.961633 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555458-2kqth" event={"ID":"0fa9dd73-2656-43b3-a6cb-634d312a166e","Type":"ContainerStarted","Data":"51039247d67b2c4a116ad3728b96577d34ccb5f1f60aba63583d2068ea1b0883"} Mar 12 14:58:00 crc kubenswrapper[4778]: I0312 14:58:00.997357 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-h6mpk" podStartSLOduration=2.5274186480000003 podStartE2EDuration="5.997341823s" podCreationTimestamp="2026-03-12 14:57:55 +0000 UTC" firstStartedPulling="2026-03-12 14:57:56.913813567 +0000 UTC m=+6495.362508963" lastFinishedPulling="2026-03-12 14:58:00.383736742 +0000 UTC m=+6498.832432138" observedRunningTime="2026-03-12 14:58:00.988604085 +0000 UTC m=+6499.437299481" watchObservedRunningTime="2026-03-12 14:58:00.997341823 +0000 UTC m=+6499.446037219" Mar 12 14:58:03 crc kubenswrapper[4778]: I0312 14:58:03.254543 4778 scope.go:117] "RemoveContainer" containerID="505b7ca3387092da837254cfad64e23448af9dbba84199bbb89de928d39d31e3" Mar 12 14:58:03 crc kubenswrapper[4778]: E0312 14:58:03.255278 4778 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:58:03 crc kubenswrapper[4778]: I0312 14:58:03.993660 4778 generic.go:334] "Generic (PLEG): container finished" podID="0fa9dd73-2656-43b3-a6cb-634d312a166e" containerID="a5663c78d0886a072205a20f2510ea67c65b15026159b43c8bf3ff0037ce7434" exitCode=0 Mar 12 14:58:03 crc kubenswrapper[4778]: I0312 14:58:03.993821 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555458-2kqth" event={"ID":"0fa9dd73-2656-43b3-a6cb-634d312a166e","Type":"ContainerDied","Data":"a5663c78d0886a072205a20f2510ea67c65b15026159b43c8bf3ff0037ce7434"} Mar 12 14:58:05 crc kubenswrapper[4778]: I0312 14:58:05.367751 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555458-2kqth" Mar 12 14:58:05 crc kubenswrapper[4778]: I0312 14:58:05.383479 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kp75b\" (UniqueName: \"kubernetes.io/projected/0fa9dd73-2656-43b3-a6cb-634d312a166e-kube-api-access-kp75b\") pod \"0fa9dd73-2656-43b3-a6cb-634d312a166e\" (UID: \"0fa9dd73-2656-43b3-a6cb-634d312a166e\") " Mar 12 14:58:05 crc kubenswrapper[4778]: I0312 14:58:05.391513 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fa9dd73-2656-43b3-a6cb-634d312a166e-kube-api-access-kp75b" (OuterVolumeSpecName: "kube-api-access-kp75b") pod "0fa9dd73-2656-43b3-a6cb-634d312a166e" (UID: "0fa9dd73-2656-43b3-a6cb-634d312a166e"). InnerVolumeSpecName "kube-api-access-kp75b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:58:05 crc kubenswrapper[4778]: I0312 14:58:05.484770 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kp75b\" (UniqueName: \"kubernetes.io/projected/0fa9dd73-2656-43b3-a6cb-634d312a166e-kube-api-access-kp75b\") on node \"crc\" DevicePath \"\"" Mar 12 14:58:06 crc kubenswrapper[4778]: I0312 14:58:06.025998 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555458-2kqth" event={"ID":"0fa9dd73-2656-43b3-a6cb-634d312a166e","Type":"ContainerDied","Data":"51039247d67b2c4a116ad3728b96577d34ccb5f1f60aba63583d2068ea1b0883"} Mar 12 14:58:06 crc kubenswrapper[4778]: I0312 14:58:06.026059 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51039247d67b2c4a116ad3728b96577d34ccb5f1f60aba63583d2068ea1b0883" Mar 12 14:58:06 crc kubenswrapper[4778]: I0312 14:58:06.026108 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555458-2kqth" Mar 12 14:58:06 crc kubenswrapper[4778]: I0312 14:58:06.113687 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-h6mpk" Mar 12 14:58:06 crc kubenswrapper[4778]: I0312 14:58:06.115308 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-h6mpk" Mar 12 14:58:06 crc kubenswrapper[4778]: I0312 14:58:06.160566 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-h6mpk" Mar 12 14:58:06 crc kubenswrapper[4778]: I0312 14:58:06.434560 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555452-crwhx"] Mar 12 14:58:06 crc kubenswrapper[4778]: I0312 14:58:06.446651 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555452-crwhx"] Mar 
12 14:58:07 crc kubenswrapper[4778]: I0312 14:58:07.091085 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-h6mpk" Mar 12 14:58:07 crc kubenswrapper[4778]: I0312 14:58:07.146797 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h6mpk"] Mar 12 14:58:08 crc kubenswrapper[4778]: I0312 14:58:08.267088 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db3a8d80-262e-4c92-b07a-dcff65e0cd47" path="/var/lib/kubelet/pods/db3a8d80-262e-4c92-b07a-dcff65e0cd47/volumes" Mar 12 14:58:09 crc kubenswrapper[4778]: I0312 14:58:09.052130 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-h6mpk" podUID="51704535-7590-40f1-8114-59f5032b1c86" containerName="registry-server" containerID="cri-o://3a32f3e5d5134f200ee666a165f28c8a9bd2278b33b760b45d99e2265913bf59" gracePeriod=2 Mar 12 14:58:09 crc kubenswrapper[4778]: I0312 14:58:09.645998 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h6mpk" Mar 12 14:58:09 crc kubenswrapper[4778]: I0312 14:58:09.773075 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51704535-7590-40f1-8114-59f5032b1c86-utilities\") pod \"51704535-7590-40f1-8114-59f5032b1c86\" (UID: \"51704535-7590-40f1-8114-59f5032b1c86\") " Mar 12 14:58:09 crc kubenswrapper[4778]: I0312 14:58:09.773132 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51704535-7590-40f1-8114-59f5032b1c86-catalog-content\") pod \"51704535-7590-40f1-8114-59f5032b1c86\" (UID: \"51704535-7590-40f1-8114-59f5032b1c86\") " Mar 12 14:58:09 crc kubenswrapper[4778]: I0312 14:58:09.773234 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smnsf\" (UniqueName: \"kubernetes.io/projected/51704535-7590-40f1-8114-59f5032b1c86-kube-api-access-smnsf\") pod \"51704535-7590-40f1-8114-59f5032b1c86\" (UID: \"51704535-7590-40f1-8114-59f5032b1c86\") " Mar 12 14:58:09 crc kubenswrapper[4778]: I0312 14:58:09.773845 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51704535-7590-40f1-8114-59f5032b1c86-utilities" (OuterVolumeSpecName: "utilities") pod "51704535-7590-40f1-8114-59f5032b1c86" (UID: "51704535-7590-40f1-8114-59f5032b1c86"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:58:09 crc kubenswrapper[4778]: I0312 14:58:09.781226 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51704535-7590-40f1-8114-59f5032b1c86-kube-api-access-smnsf" (OuterVolumeSpecName: "kube-api-access-smnsf") pod "51704535-7590-40f1-8114-59f5032b1c86" (UID: "51704535-7590-40f1-8114-59f5032b1c86"). InnerVolumeSpecName "kube-api-access-smnsf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 14:58:09 crc kubenswrapper[4778]: I0312 14:58:09.849812 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51704535-7590-40f1-8114-59f5032b1c86-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "51704535-7590-40f1-8114-59f5032b1c86" (UID: "51704535-7590-40f1-8114-59f5032b1c86"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 14:58:09 crc kubenswrapper[4778]: I0312 14:58:09.875452 4778 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51704535-7590-40f1-8114-59f5032b1c86-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 14:58:09 crc kubenswrapper[4778]: I0312 14:58:09.875487 4778 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51704535-7590-40f1-8114-59f5032b1c86-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 14:58:09 crc kubenswrapper[4778]: I0312 14:58:09.875523 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smnsf\" (UniqueName: \"kubernetes.io/projected/51704535-7590-40f1-8114-59f5032b1c86-kube-api-access-smnsf\") on node \"crc\" DevicePath \"\"" Mar 12 14:58:10 crc kubenswrapper[4778]: I0312 14:58:10.062061 4778 generic.go:334] "Generic (PLEG): container finished" podID="51704535-7590-40f1-8114-59f5032b1c86" containerID="3a32f3e5d5134f200ee666a165f28c8a9bd2278b33b760b45d99e2265913bf59" exitCode=0 Mar 12 14:58:10 crc kubenswrapper[4778]: I0312 14:58:10.062132 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h6mpk" event={"ID":"51704535-7590-40f1-8114-59f5032b1c86","Type":"ContainerDied","Data":"3a32f3e5d5134f200ee666a165f28c8a9bd2278b33b760b45d99e2265913bf59"} Mar 12 14:58:10 crc kubenswrapper[4778]: I0312 14:58:10.062156 4778 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h6mpk" Mar 12 14:58:10 crc kubenswrapper[4778]: I0312 14:58:10.062209 4778 scope.go:117] "RemoveContainer" containerID="3a32f3e5d5134f200ee666a165f28c8a9bd2278b33b760b45d99e2265913bf59" Mar 12 14:58:10 crc kubenswrapper[4778]: I0312 14:58:10.062349 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h6mpk" event={"ID":"51704535-7590-40f1-8114-59f5032b1c86","Type":"ContainerDied","Data":"4b7a7e0582fa56ba0004e62c821c92769140c78141a5756002ed142e7685d7aa"} Mar 12 14:58:10 crc kubenswrapper[4778]: I0312 14:58:10.090152 4778 scope.go:117] "RemoveContainer" containerID="5c1140e2a3da443934a38be5417957e3a9e964721156d718e7e3eeb6dc137835" Mar 12 14:58:10 crc kubenswrapper[4778]: I0312 14:58:10.108599 4778 scope.go:117] "RemoveContainer" containerID="b419a433a52be5384840da5dbbba31c34114c87a32568aa1691306f3eab3966e" Mar 12 14:58:10 crc kubenswrapper[4778]: I0312 14:58:10.112525 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h6mpk"] Mar 12 14:58:10 crc kubenswrapper[4778]: I0312 14:58:10.121927 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-h6mpk"] Mar 12 14:58:10 crc kubenswrapper[4778]: I0312 14:58:10.150111 4778 scope.go:117] "RemoveContainer" containerID="3a32f3e5d5134f200ee666a165f28c8a9bd2278b33b760b45d99e2265913bf59" Mar 12 14:58:10 crc kubenswrapper[4778]: E0312 14:58:10.150533 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a32f3e5d5134f200ee666a165f28c8a9bd2278b33b760b45d99e2265913bf59\": container with ID starting with 3a32f3e5d5134f200ee666a165f28c8a9bd2278b33b760b45d99e2265913bf59 not found: ID does not exist" containerID="3a32f3e5d5134f200ee666a165f28c8a9bd2278b33b760b45d99e2265913bf59" Mar 12 14:58:10 crc kubenswrapper[4778]: I0312 14:58:10.150562 
4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a32f3e5d5134f200ee666a165f28c8a9bd2278b33b760b45d99e2265913bf59"} err="failed to get container status \"3a32f3e5d5134f200ee666a165f28c8a9bd2278b33b760b45d99e2265913bf59\": rpc error: code = NotFound desc = could not find container \"3a32f3e5d5134f200ee666a165f28c8a9bd2278b33b760b45d99e2265913bf59\": container with ID starting with 3a32f3e5d5134f200ee666a165f28c8a9bd2278b33b760b45d99e2265913bf59 not found: ID does not exist" Mar 12 14:58:10 crc kubenswrapper[4778]: I0312 14:58:10.150584 4778 scope.go:117] "RemoveContainer" containerID="5c1140e2a3da443934a38be5417957e3a9e964721156d718e7e3eeb6dc137835" Mar 12 14:58:10 crc kubenswrapper[4778]: E0312 14:58:10.150886 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c1140e2a3da443934a38be5417957e3a9e964721156d718e7e3eeb6dc137835\": container with ID starting with 5c1140e2a3da443934a38be5417957e3a9e964721156d718e7e3eeb6dc137835 not found: ID does not exist" containerID="5c1140e2a3da443934a38be5417957e3a9e964721156d718e7e3eeb6dc137835" Mar 12 14:58:10 crc kubenswrapper[4778]: I0312 14:58:10.150905 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c1140e2a3da443934a38be5417957e3a9e964721156d718e7e3eeb6dc137835"} err="failed to get container status \"5c1140e2a3da443934a38be5417957e3a9e964721156d718e7e3eeb6dc137835\": rpc error: code = NotFound desc = could not find container \"5c1140e2a3da443934a38be5417957e3a9e964721156d718e7e3eeb6dc137835\": container with ID starting with 5c1140e2a3da443934a38be5417957e3a9e964721156d718e7e3eeb6dc137835 not found: ID does not exist" Mar 12 14:58:10 crc kubenswrapper[4778]: I0312 14:58:10.150917 4778 scope.go:117] "RemoveContainer" containerID="b419a433a52be5384840da5dbbba31c34114c87a32568aa1691306f3eab3966e" Mar 12 14:58:10 crc kubenswrapper[4778]: E0312 
14:58:10.151209 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b419a433a52be5384840da5dbbba31c34114c87a32568aa1691306f3eab3966e\": container with ID starting with b419a433a52be5384840da5dbbba31c34114c87a32568aa1691306f3eab3966e not found: ID does not exist" containerID="b419a433a52be5384840da5dbbba31c34114c87a32568aa1691306f3eab3966e" Mar 12 14:58:10 crc kubenswrapper[4778]: I0312 14:58:10.151265 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b419a433a52be5384840da5dbbba31c34114c87a32568aa1691306f3eab3966e"} err="failed to get container status \"b419a433a52be5384840da5dbbba31c34114c87a32568aa1691306f3eab3966e\": rpc error: code = NotFound desc = could not find container \"b419a433a52be5384840da5dbbba31c34114c87a32568aa1691306f3eab3966e\": container with ID starting with b419a433a52be5384840da5dbbba31c34114c87a32568aa1691306f3eab3966e not found: ID does not exist" Mar 12 14:58:10 crc kubenswrapper[4778]: I0312 14:58:10.266242 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51704535-7590-40f1-8114-59f5032b1c86" path="/var/lib/kubelet/pods/51704535-7590-40f1-8114-59f5032b1c86/volumes" Mar 12 14:58:18 crc kubenswrapper[4778]: I0312 14:58:18.253990 4778 scope.go:117] "RemoveContainer" containerID="505b7ca3387092da837254cfad64e23448af9dbba84199bbb89de928d39d31e3" Mar 12 14:58:18 crc kubenswrapper[4778]: E0312 14:58:18.254478 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:58:24 crc kubenswrapper[4778]: I0312 14:58:24.419855 
4778 scope.go:117] "RemoveContainer" containerID="2e201785308313aa155d17696c3a92cd860cbcfcbc51f75878f68248fd82d5d8" Mar 12 14:58:30 crc kubenswrapper[4778]: I0312 14:58:30.254973 4778 scope.go:117] "RemoveContainer" containerID="505b7ca3387092da837254cfad64e23448af9dbba84199bbb89de928d39d31e3" Mar 12 14:58:30 crc kubenswrapper[4778]: E0312 14:58:30.256475 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:58:42 crc kubenswrapper[4778]: I0312 14:58:42.262287 4778 scope.go:117] "RemoveContainer" containerID="505b7ca3387092da837254cfad64e23448af9dbba84199bbb89de928d39d31e3" Mar 12 14:58:42 crc kubenswrapper[4778]: E0312 14:58:42.263554 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:58:53 crc kubenswrapper[4778]: I0312 14:58:53.253986 4778 scope.go:117] "RemoveContainer" containerID="505b7ca3387092da837254cfad64e23448af9dbba84199bbb89de928d39d31e3" Mar 12 14:58:53 crc kubenswrapper[4778]: E0312 14:58:53.254870 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:59:08 crc kubenswrapper[4778]: I0312 14:59:08.254418 4778 scope.go:117] "RemoveContainer" containerID="505b7ca3387092da837254cfad64e23448af9dbba84199bbb89de928d39d31e3" Mar 12 14:59:08 crc kubenswrapper[4778]: E0312 14:59:08.255146 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:59:19 crc kubenswrapper[4778]: I0312 14:59:19.254774 4778 scope.go:117] "RemoveContainer" containerID="505b7ca3387092da837254cfad64e23448af9dbba84199bbb89de928d39d31e3" Mar 12 14:59:19 crc kubenswrapper[4778]: E0312 14:59:19.255434 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:59:32 crc kubenswrapper[4778]: I0312 14:59:32.266598 4778 scope.go:117] "RemoveContainer" containerID="505b7ca3387092da837254cfad64e23448af9dbba84199bbb89de928d39d31e3" Mar 12 14:59:32 crc kubenswrapper[4778]: E0312 14:59:32.267501 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:59:43 crc kubenswrapper[4778]: I0312 14:59:43.254130 4778 scope.go:117] "RemoveContainer" containerID="505b7ca3387092da837254cfad64e23448af9dbba84199bbb89de928d39d31e3" Mar 12 14:59:43 crc kubenswrapper[4778]: E0312 14:59:43.254916 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 14:59:55 crc kubenswrapper[4778]: I0312 14:59:55.254437 4778 scope.go:117] "RemoveContainer" containerID="505b7ca3387092da837254cfad64e23448af9dbba84199bbb89de928d39d31e3" Mar 12 14:59:55 crc kubenswrapper[4778]: E0312 14:59:55.255088 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 15:00:00 crc kubenswrapper[4778]: I0312 15:00:00.155283 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555460-6bwr2"] Mar 12 15:00:00 crc kubenswrapper[4778]: E0312 15:00:00.156341 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51704535-7590-40f1-8114-59f5032b1c86" containerName="registry-server" Mar 12 15:00:00 crc 
kubenswrapper[4778]: I0312 15:00:00.156357 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="51704535-7590-40f1-8114-59f5032b1c86" containerName="registry-server" Mar 12 15:00:00 crc kubenswrapper[4778]: E0312 15:00:00.156367 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51704535-7590-40f1-8114-59f5032b1c86" containerName="extract-content" Mar 12 15:00:00 crc kubenswrapper[4778]: I0312 15:00:00.156375 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="51704535-7590-40f1-8114-59f5032b1c86" containerName="extract-content" Mar 12 15:00:00 crc kubenswrapper[4778]: E0312 15:00:00.156390 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fa9dd73-2656-43b3-a6cb-634d312a166e" containerName="oc" Mar 12 15:00:00 crc kubenswrapper[4778]: I0312 15:00:00.156397 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fa9dd73-2656-43b3-a6cb-634d312a166e" containerName="oc" Mar 12 15:00:00 crc kubenswrapper[4778]: E0312 15:00:00.156416 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51704535-7590-40f1-8114-59f5032b1c86" containerName="extract-utilities" Mar 12 15:00:00 crc kubenswrapper[4778]: I0312 15:00:00.156425 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="51704535-7590-40f1-8114-59f5032b1c86" containerName="extract-utilities" Mar 12 15:00:00 crc kubenswrapper[4778]: I0312 15:00:00.156676 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fa9dd73-2656-43b3-a6cb-634d312a166e" containerName="oc" Mar 12 15:00:00 crc kubenswrapper[4778]: I0312 15:00:00.156704 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="51704535-7590-40f1-8114-59f5032b1c86" containerName="registry-server" Mar 12 15:00:00 crc kubenswrapper[4778]: I0312 15:00:00.157498 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555460-6bwr2" Mar 12 15:00:00 crc kubenswrapper[4778]: I0312 15:00:00.160262 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 15:00:00 crc kubenswrapper[4778]: I0312 15:00:00.160469 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 15:00:00 crc kubenswrapper[4778]: I0312 15:00:00.160856 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 15:00:00 crc kubenswrapper[4778]: I0312 15:00:00.178486 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555460-t9qnq"] Mar 12 15:00:00 crc kubenswrapper[4778]: I0312 15:00:00.179969 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555460-t9qnq" Mar 12 15:00:00 crc kubenswrapper[4778]: I0312 15:00:00.184221 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 12 15:00:00 crc kubenswrapper[4778]: I0312 15:00:00.184447 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 12 15:00:00 crc kubenswrapper[4778]: I0312 15:00:00.208230 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555460-6bwr2"] Mar 12 15:00:00 crc kubenswrapper[4778]: I0312 15:00:00.216314 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555460-t9qnq"] Mar 12 15:00:00 crc kubenswrapper[4778]: I0312 15:00:00.301617 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/faaa79f5-e391-4347-a4c3-c0a63518f540-secret-volume\") pod \"collect-profiles-29555460-t9qnq\" (UID: \"faaa79f5-e391-4347-a4c3-c0a63518f540\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555460-t9qnq" Mar 12 15:00:00 crc kubenswrapper[4778]: I0312 15:00:00.301699 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/faaa79f5-e391-4347-a4c3-c0a63518f540-config-volume\") pod \"collect-profiles-29555460-t9qnq\" (UID: \"faaa79f5-e391-4347-a4c3-c0a63518f540\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555460-t9qnq" Mar 12 15:00:00 crc kubenswrapper[4778]: I0312 15:00:00.302346 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5lx6\" (UniqueName: \"kubernetes.io/projected/7baca351-722e-4d7e-972e-04513fae6e0b-kube-api-access-t5lx6\") pod \"auto-csr-approver-29555460-6bwr2\" (UID: \"7baca351-722e-4d7e-972e-04513fae6e0b\") " pod="openshift-infra/auto-csr-approver-29555460-6bwr2" Mar 12 15:00:00 crc kubenswrapper[4778]: I0312 15:00:00.302456 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjwlt\" (UniqueName: \"kubernetes.io/projected/faaa79f5-e391-4347-a4c3-c0a63518f540-kube-api-access-wjwlt\") pod \"collect-profiles-29555460-t9qnq\" (UID: \"faaa79f5-e391-4347-a4c3-c0a63518f540\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555460-t9qnq" Mar 12 15:00:00 crc kubenswrapper[4778]: I0312 15:00:00.404214 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/faaa79f5-e391-4347-a4c3-c0a63518f540-secret-volume\") pod \"collect-profiles-29555460-t9qnq\" (UID: \"faaa79f5-e391-4347-a4c3-c0a63518f540\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555460-t9qnq" Mar 12 
15:00:00 crc kubenswrapper[4778]: I0312 15:00:00.404273 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/faaa79f5-e391-4347-a4c3-c0a63518f540-config-volume\") pod \"collect-profiles-29555460-t9qnq\" (UID: \"faaa79f5-e391-4347-a4c3-c0a63518f540\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555460-t9qnq" Mar 12 15:00:00 crc kubenswrapper[4778]: I0312 15:00:00.404344 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5lx6\" (UniqueName: \"kubernetes.io/projected/7baca351-722e-4d7e-972e-04513fae6e0b-kube-api-access-t5lx6\") pod \"auto-csr-approver-29555460-6bwr2\" (UID: \"7baca351-722e-4d7e-972e-04513fae6e0b\") " pod="openshift-infra/auto-csr-approver-29555460-6bwr2" Mar 12 15:00:00 crc kubenswrapper[4778]: I0312 15:00:00.404505 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjwlt\" (UniqueName: \"kubernetes.io/projected/faaa79f5-e391-4347-a4c3-c0a63518f540-kube-api-access-wjwlt\") pod \"collect-profiles-29555460-t9qnq\" (UID: \"faaa79f5-e391-4347-a4c3-c0a63518f540\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555460-t9qnq" Mar 12 15:00:00 crc kubenswrapper[4778]: I0312 15:00:00.406914 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/faaa79f5-e391-4347-a4c3-c0a63518f540-config-volume\") pod \"collect-profiles-29555460-t9qnq\" (UID: \"faaa79f5-e391-4347-a4c3-c0a63518f540\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555460-t9qnq" Mar 12 15:00:00 crc kubenswrapper[4778]: I0312 15:00:00.420659 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/faaa79f5-e391-4347-a4c3-c0a63518f540-secret-volume\") pod \"collect-profiles-29555460-t9qnq\" (UID: 
\"faaa79f5-e391-4347-a4c3-c0a63518f540\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555460-t9qnq" Mar 12 15:00:00 crc kubenswrapper[4778]: I0312 15:00:00.437502 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjwlt\" (UniqueName: \"kubernetes.io/projected/faaa79f5-e391-4347-a4c3-c0a63518f540-kube-api-access-wjwlt\") pod \"collect-profiles-29555460-t9qnq\" (UID: \"faaa79f5-e391-4347-a4c3-c0a63518f540\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555460-t9qnq" Mar 12 15:00:00 crc kubenswrapper[4778]: I0312 15:00:00.437972 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5lx6\" (UniqueName: \"kubernetes.io/projected/7baca351-722e-4d7e-972e-04513fae6e0b-kube-api-access-t5lx6\") pod \"auto-csr-approver-29555460-6bwr2\" (UID: \"7baca351-722e-4d7e-972e-04513fae6e0b\") " pod="openshift-infra/auto-csr-approver-29555460-6bwr2" Mar 12 15:00:00 crc kubenswrapper[4778]: I0312 15:00:00.493108 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555460-6bwr2" Mar 12 15:00:00 crc kubenswrapper[4778]: I0312 15:00:00.506694 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555460-t9qnq" Mar 12 15:00:00 crc kubenswrapper[4778]: I0312 15:00:00.952862 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555460-t9qnq"] Mar 12 15:00:01 crc kubenswrapper[4778]: I0312 15:00:01.038802 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555460-6bwr2"] Mar 12 15:00:01 crc kubenswrapper[4778]: W0312 15:00:01.053712 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7baca351_722e_4d7e_972e_04513fae6e0b.slice/crio-2b91083877674451481cea3ea78cfa01e88aafaff731fd1bd24142b45296aabd WatchSource:0}: Error finding container 2b91083877674451481cea3ea78cfa01e88aafaff731fd1bd24142b45296aabd: Status 404 returned error can't find the container with id 2b91083877674451481cea3ea78cfa01e88aafaff731fd1bd24142b45296aabd Mar 12 15:00:01 crc kubenswrapper[4778]: I0312 15:00:01.129268 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555460-t9qnq" event={"ID":"faaa79f5-e391-4347-a4c3-c0a63518f540","Type":"ContainerStarted","Data":"cc08a56b84f3ce3e074748729850a071bc430c114346dec2c3743a62ed94931b"} Mar 12 15:00:01 crc kubenswrapper[4778]: I0312 15:00:01.130598 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555460-6bwr2" event={"ID":"7baca351-722e-4d7e-972e-04513fae6e0b","Type":"ContainerStarted","Data":"2b91083877674451481cea3ea78cfa01e88aafaff731fd1bd24142b45296aabd"} Mar 12 15:00:02 crc kubenswrapper[4778]: I0312 15:00:02.140326 4778 generic.go:334] "Generic (PLEG): container finished" podID="faaa79f5-e391-4347-a4c3-c0a63518f540" containerID="3fe64b13554004be3fbf12b211482af1a85c6f10472ba77c6e1462c0d628fd9a" exitCode=0 Mar 12 15:00:02 crc kubenswrapper[4778]: I0312 15:00:02.140401 
4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555460-t9qnq" event={"ID":"faaa79f5-e391-4347-a4c3-c0a63518f540","Type":"ContainerDied","Data":"3fe64b13554004be3fbf12b211482af1a85c6f10472ba77c6e1462c0d628fd9a"} Mar 12 15:00:03 crc kubenswrapper[4778]: I0312 15:00:03.506970 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555460-t9qnq" Mar 12 15:00:03 crc kubenswrapper[4778]: I0312 15:00:03.562741 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/faaa79f5-e391-4347-a4c3-c0a63518f540-config-volume\") pod \"faaa79f5-e391-4347-a4c3-c0a63518f540\" (UID: \"faaa79f5-e391-4347-a4c3-c0a63518f540\") " Mar 12 15:00:03 crc kubenswrapper[4778]: I0312 15:00:03.562829 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/faaa79f5-e391-4347-a4c3-c0a63518f540-secret-volume\") pod \"faaa79f5-e391-4347-a4c3-c0a63518f540\" (UID: \"faaa79f5-e391-4347-a4c3-c0a63518f540\") " Mar 12 15:00:03 crc kubenswrapper[4778]: I0312 15:00:03.562873 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjwlt\" (UniqueName: \"kubernetes.io/projected/faaa79f5-e391-4347-a4c3-c0a63518f540-kube-api-access-wjwlt\") pod \"faaa79f5-e391-4347-a4c3-c0a63518f540\" (UID: \"faaa79f5-e391-4347-a4c3-c0a63518f540\") " Mar 12 15:00:03 crc kubenswrapper[4778]: I0312 15:00:03.563659 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/faaa79f5-e391-4347-a4c3-c0a63518f540-config-volume" (OuterVolumeSpecName: "config-volume") pod "faaa79f5-e391-4347-a4c3-c0a63518f540" (UID: "faaa79f5-e391-4347-a4c3-c0a63518f540"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:00:03 crc kubenswrapper[4778]: I0312 15:00:03.568971 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faaa79f5-e391-4347-a4c3-c0a63518f540-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "faaa79f5-e391-4347-a4c3-c0a63518f540" (UID: "faaa79f5-e391-4347-a4c3-c0a63518f540"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:00:03 crc kubenswrapper[4778]: I0312 15:00:03.577456 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faaa79f5-e391-4347-a4c3-c0a63518f540-kube-api-access-wjwlt" (OuterVolumeSpecName: "kube-api-access-wjwlt") pod "faaa79f5-e391-4347-a4c3-c0a63518f540" (UID: "faaa79f5-e391-4347-a4c3-c0a63518f540"). InnerVolumeSpecName "kube-api-access-wjwlt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:00:03 crc kubenswrapper[4778]: I0312 15:00:03.665658 4778 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/faaa79f5-e391-4347-a4c3-c0a63518f540-config-volume\") on node \"crc\" DevicePath \"\"" Mar 12 15:00:03 crc kubenswrapper[4778]: I0312 15:00:03.665708 4778 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/faaa79f5-e391-4347-a4c3-c0a63518f540-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 12 15:00:03 crc kubenswrapper[4778]: I0312 15:00:03.665728 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjwlt\" (UniqueName: \"kubernetes.io/projected/faaa79f5-e391-4347-a4c3-c0a63518f540-kube-api-access-wjwlt\") on node \"crc\" DevicePath \"\"" Mar 12 15:00:04 crc kubenswrapper[4778]: I0312 15:00:04.159465 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555460-t9qnq" Mar 12 15:00:04 crc kubenswrapper[4778]: I0312 15:00:04.159567 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555460-t9qnq" event={"ID":"faaa79f5-e391-4347-a4c3-c0a63518f540","Type":"ContainerDied","Data":"cc08a56b84f3ce3e074748729850a071bc430c114346dec2c3743a62ed94931b"} Mar 12 15:00:04 crc kubenswrapper[4778]: I0312 15:00:04.159869 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc08a56b84f3ce3e074748729850a071bc430c114346dec2c3743a62ed94931b" Mar 12 15:00:04 crc kubenswrapper[4778]: I0312 15:00:04.579761 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555415-jjk6r"] Mar 12 15:00:04 crc kubenswrapper[4778]: I0312 15:00:04.588485 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555415-jjk6r"] Mar 12 15:00:05 crc kubenswrapper[4778]: I0312 15:00:05.169084 4778 generic.go:334] "Generic (PLEG): container finished" podID="7baca351-722e-4d7e-972e-04513fae6e0b" containerID="64150eeb0f1f171e7d11ada7712192a8c533967a0e598d41c325a6422f027d7a" exitCode=0 Mar 12 15:00:05 crc kubenswrapper[4778]: I0312 15:00:05.169367 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555460-6bwr2" event={"ID":"7baca351-722e-4d7e-972e-04513fae6e0b","Type":"ContainerDied","Data":"64150eeb0f1f171e7d11ada7712192a8c533967a0e598d41c325a6422f027d7a"} Mar 12 15:00:06 crc kubenswrapper[4778]: I0312 15:00:06.281443 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c6027ea-d1ed-4df0-bbe7-6904d2722fbc" path="/var/lib/kubelet/pods/9c6027ea-d1ed-4df0-bbe7-6904d2722fbc/volumes" Mar 12 15:00:06 crc kubenswrapper[4778]: I0312 15:00:06.540104 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555460-6bwr2" Mar 12 15:00:06 crc kubenswrapper[4778]: I0312 15:00:06.626536 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5lx6\" (UniqueName: \"kubernetes.io/projected/7baca351-722e-4d7e-972e-04513fae6e0b-kube-api-access-t5lx6\") pod \"7baca351-722e-4d7e-972e-04513fae6e0b\" (UID: \"7baca351-722e-4d7e-972e-04513fae6e0b\") " Mar 12 15:00:06 crc kubenswrapper[4778]: I0312 15:00:06.631817 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7baca351-722e-4d7e-972e-04513fae6e0b-kube-api-access-t5lx6" (OuterVolumeSpecName: "kube-api-access-t5lx6") pod "7baca351-722e-4d7e-972e-04513fae6e0b" (UID: "7baca351-722e-4d7e-972e-04513fae6e0b"). InnerVolumeSpecName "kube-api-access-t5lx6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:00:06 crc kubenswrapper[4778]: I0312 15:00:06.727794 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5lx6\" (UniqueName: \"kubernetes.io/projected/7baca351-722e-4d7e-972e-04513fae6e0b-kube-api-access-t5lx6\") on node \"crc\" DevicePath \"\"" Mar 12 15:00:07 crc kubenswrapper[4778]: I0312 15:00:07.193246 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555460-6bwr2" event={"ID":"7baca351-722e-4d7e-972e-04513fae6e0b","Type":"ContainerDied","Data":"2b91083877674451481cea3ea78cfa01e88aafaff731fd1bd24142b45296aabd"} Mar 12 15:00:07 crc kubenswrapper[4778]: I0312 15:00:07.193285 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555460-6bwr2" Mar 12 15:00:07 crc kubenswrapper[4778]: I0312 15:00:07.193289 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b91083877674451481cea3ea78cfa01e88aafaff731fd1bd24142b45296aabd" Mar 12 15:00:07 crc kubenswrapper[4778]: I0312 15:00:07.597322 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555454-45jhq"] Mar 12 15:00:07 crc kubenswrapper[4778]: I0312 15:00:07.604845 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555454-45jhq"] Mar 12 15:00:08 crc kubenswrapper[4778]: I0312 15:00:08.254548 4778 scope.go:117] "RemoveContainer" containerID="505b7ca3387092da837254cfad64e23448af9dbba84199bbb89de928d39d31e3" Mar 12 15:00:08 crc kubenswrapper[4778]: E0312 15:00:08.254845 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 15:00:08 crc kubenswrapper[4778]: I0312 15:00:08.264119 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c838df0-ebc6-482a-8c4a-54e2650a121a" path="/var/lib/kubelet/pods/3c838df0-ebc6-482a-8c4a-54e2650a121a/volumes" Mar 12 15:00:23 crc kubenswrapper[4778]: I0312 15:00:23.254875 4778 scope.go:117] "RemoveContainer" containerID="505b7ca3387092da837254cfad64e23448af9dbba84199bbb89de928d39d31e3" Mar 12 15:00:23 crc kubenswrapper[4778]: E0312 15:00:23.256007 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 15:00:24 crc kubenswrapper[4778]: I0312 15:00:24.537992 4778 scope.go:117] "RemoveContainer" containerID="a9ba0939d14aff103f5b46662e1f25d349a1d48a1eb2501077c92e0d8ad3aee1" Mar 12 15:00:24 crc kubenswrapper[4778]: I0312 15:00:24.611250 4778 scope.go:117] "RemoveContainer" containerID="e47d44b34f9f52eb0c1249aedb361a64e96dcc50294b7036054124a9fc860b25" Mar 12 15:00:35 crc kubenswrapper[4778]: I0312 15:00:35.254646 4778 scope.go:117] "RemoveContainer" containerID="505b7ca3387092da837254cfad64e23448af9dbba84199bbb89de928d39d31e3" Mar 12 15:00:35 crc kubenswrapper[4778]: E0312 15:00:35.255476 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 15:00:46 crc kubenswrapper[4778]: I0312 15:00:46.254999 4778 scope.go:117] "RemoveContainer" containerID="505b7ca3387092da837254cfad64e23448af9dbba84199bbb89de928d39d31e3" Mar 12 15:00:46 crc kubenswrapper[4778]: E0312 15:00:46.256137 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 15:00:57 crc kubenswrapper[4778]: 
I0312 15:00:57.254835 4778 scope.go:117] "RemoveContainer" containerID="505b7ca3387092da837254cfad64e23448af9dbba84199bbb89de928d39d31e3" Mar 12 15:00:57 crc kubenswrapper[4778]: E0312 15:00:57.255739 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 15:01:00 crc kubenswrapper[4778]: I0312 15:01:00.157482 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29555461-lmqk9"] Mar 12 15:01:00 crc kubenswrapper[4778]: E0312 15:01:00.158745 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faaa79f5-e391-4347-a4c3-c0a63518f540" containerName="collect-profiles" Mar 12 15:01:00 crc kubenswrapper[4778]: I0312 15:01:00.158760 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="faaa79f5-e391-4347-a4c3-c0a63518f540" containerName="collect-profiles" Mar 12 15:01:00 crc kubenswrapper[4778]: E0312 15:01:00.158780 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7baca351-722e-4d7e-972e-04513fae6e0b" containerName="oc" Mar 12 15:01:00 crc kubenswrapper[4778]: I0312 15:01:00.158786 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="7baca351-722e-4d7e-972e-04513fae6e0b" containerName="oc" Mar 12 15:01:00 crc kubenswrapper[4778]: I0312 15:01:00.158949 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="7baca351-722e-4d7e-972e-04513fae6e0b" containerName="oc" Mar 12 15:01:00 crc kubenswrapper[4778]: I0312 15:01:00.158970 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="faaa79f5-e391-4347-a4c3-c0a63518f540" containerName="collect-profiles" Mar 12 15:01:00 crc kubenswrapper[4778]: 
I0312 15:01:00.159583 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29555461-lmqk9" Mar 12 15:01:00 crc kubenswrapper[4778]: I0312 15:01:00.178801 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ebdf3274-70cb-4083-bf12-5d1038a9b7ba-fernet-keys\") pod \"keystone-cron-29555461-lmqk9\" (UID: \"ebdf3274-70cb-4083-bf12-5d1038a9b7ba\") " pod="openstack/keystone-cron-29555461-lmqk9" Mar 12 15:01:00 crc kubenswrapper[4778]: I0312 15:01:00.178974 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebdf3274-70cb-4083-bf12-5d1038a9b7ba-combined-ca-bundle\") pod \"keystone-cron-29555461-lmqk9\" (UID: \"ebdf3274-70cb-4083-bf12-5d1038a9b7ba\") " pod="openstack/keystone-cron-29555461-lmqk9" Mar 12 15:01:00 crc kubenswrapper[4778]: I0312 15:01:00.179013 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lls6t\" (UniqueName: \"kubernetes.io/projected/ebdf3274-70cb-4083-bf12-5d1038a9b7ba-kube-api-access-lls6t\") pod \"keystone-cron-29555461-lmqk9\" (UID: \"ebdf3274-70cb-4083-bf12-5d1038a9b7ba\") " pod="openstack/keystone-cron-29555461-lmqk9" Mar 12 15:01:00 crc kubenswrapper[4778]: I0312 15:01:00.179112 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebdf3274-70cb-4083-bf12-5d1038a9b7ba-config-data\") pod \"keystone-cron-29555461-lmqk9\" (UID: \"ebdf3274-70cb-4083-bf12-5d1038a9b7ba\") " pod="openstack/keystone-cron-29555461-lmqk9" Mar 12 15:01:00 crc kubenswrapper[4778]: I0312 15:01:00.226912 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29555461-lmqk9"] Mar 12 15:01:00 crc kubenswrapper[4778]: I0312 
15:01:00.280583 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lls6t\" (UniqueName: \"kubernetes.io/projected/ebdf3274-70cb-4083-bf12-5d1038a9b7ba-kube-api-access-lls6t\") pod \"keystone-cron-29555461-lmqk9\" (UID: \"ebdf3274-70cb-4083-bf12-5d1038a9b7ba\") " pod="openstack/keystone-cron-29555461-lmqk9" Mar 12 15:01:00 crc kubenswrapper[4778]: I0312 15:01:00.281004 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebdf3274-70cb-4083-bf12-5d1038a9b7ba-config-data\") pod \"keystone-cron-29555461-lmqk9\" (UID: \"ebdf3274-70cb-4083-bf12-5d1038a9b7ba\") " pod="openstack/keystone-cron-29555461-lmqk9" Mar 12 15:01:00 crc kubenswrapper[4778]: I0312 15:01:00.281277 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ebdf3274-70cb-4083-bf12-5d1038a9b7ba-fernet-keys\") pod \"keystone-cron-29555461-lmqk9\" (UID: \"ebdf3274-70cb-4083-bf12-5d1038a9b7ba\") " pod="openstack/keystone-cron-29555461-lmqk9" Mar 12 15:01:00 crc kubenswrapper[4778]: I0312 15:01:00.281682 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebdf3274-70cb-4083-bf12-5d1038a9b7ba-combined-ca-bundle\") pod \"keystone-cron-29555461-lmqk9\" (UID: \"ebdf3274-70cb-4083-bf12-5d1038a9b7ba\") " pod="openstack/keystone-cron-29555461-lmqk9" Mar 12 15:01:00 crc kubenswrapper[4778]: I0312 15:01:00.286949 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ebdf3274-70cb-4083-bf12-5d1038a9b7ba-fernet-keys\") pod \"keystone-cron-29555461-lmqk9\" (UID: \"ebdf3274-70cb-4083-bf12-5d1038a9b7ba\") " pod="openstack/keystone-cron-29555461-lmqk9" Mar 12 15:01:00 crc kubenswrapper[4778]: I0312 15:01:00.287704 4778 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebdf3274-70cb-4083-bf12-5d1038a9b7ba-config-data\") pod \"keystone-cron-29555461-lmqk9\" (UID: \"ebdf3274-70cb-4083-bf12-5d1038a9b7ba\") " pod="openstack/keystone-cron-29555461-lmqk9" Mar 12 15:01:00 crc kubenswrapper[4778]: I0312 15:01:00.288530 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebdf3274-70cb-4083-bf12-5d1038a9b7ba-combined-ca-bundle\") pod \"keystone-cron-29555461-lmqk9\" (UID: \"ebdf3274-70cb-4083-bf12-5d1038a9b7ba\") " pod="openstack/keystone-cron-29555461-lmqk9" Mar 12 15:01:00 crc kubenswrapper[4778]: I0312 15:01:00.304862 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lls6t\" (UniqueName: \"kubernetes.io/projected/ebdf3274-70cb-4083-bf12-5d1038a9b7ba-kube-api-access-lls6t\") pod \"keystone-cron-29555461-lmqk9\" (UID: \"ebdf3274-70cb-4083-bf12-5d1038a9b7ba\") " pod="openstack/keystone-cron-29555461-lmqk9" Mar 12 15:01:00 crc kubenswrapper[4778]: I0312 15:01:00.481044 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29555461-lmqk9" Mar 12 15:01:00 crc kubenswrapper[4778]: I0312 15:01:00.967277 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29555461-lmqk9"] Mar 12 15:01:01 crc kubenswrapper[4778]: I0312 15:01:01.749431 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29555461-lmqk9" event={"ID":"ebdf3274-70cb-4083-bf12-5d1038a9b7ba","Type":"ContainerStarted","Data":"01dbc6ac7066fa5ead67e636626486ffc63409dac8a4cd6a20c003a2abfad4ff"} Mar 12 15:01:01 crc kubenswrapper[4778]: I0312 15:01:01.749741 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29555461-lmqk9" event={"ID":"ebdf3274-70cb-4083-bf12-5d1038a9b7ba","Type":"ContainerStarted","Data":"f482217845d50b2884bd5bd48ad61af1e7768b9ee05ce69facabd403638440d6"} Mar 12 15:01:04 crc kubenswrapper[4778]: E0312 15:01:04.038076 4778 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podebdf3274_70cb_4083_bf12_5d1038a9b7ba.slice/crio-conmon-01dbc6ac7066fa5ead67e636626486ffc63409dac8a4cd6a20c003a2abfad4ff.scope\": RecentStats: unable to find data in memory cache]" Mar 12 15:01:04 crc kubenswrapper[4778]: I0312 15:01:04.800276 4778 generic.go:334] "Generic (PLEG): container finished" podID="ebdf3274-70cb-4083-bf12-5d1038a9b7ba" containerID="01dbc6ac7066fa5ead67e636626486ffc63409dac8a4cd6a20c003a2abfad4ff" exitCode=0 Mar 12 15:01:04 crc kubenswrapper[4778]: I0312 15:01:04.800423 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29555461-lmqk9" event={"ID":"ebdf3274-70cb-4083-bf12-5d1038a9b7ba","Type":"ContainerDied","Data":"01dbc6ac7066fa5ead67e636626486ffc63409dac8a4cd6a20c003a2abfad4ff"} Mar 12 15:01:06 crc kubenswrapper[4778]: I0312 15:01:06.206905 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29555461-lmqk9" Mar 12 15:01:06 crc kubenswrapper[4778]: I0312 15:01:06.305099 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebdf3274-70cb-4083-bf12-5d1038a9b7ba-config-data\") pod \"ebdf3274-70cb-4083-bf12-5d1038a9b7ba\" (UID: \"ebdf3274-70cb-4083-bf12-5d1038a9b7ba\") " Mar 12 15:01:06 crc kubenswrapper[4778]: I0312 15:01:06.305232 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebdf3274-70cb-4083-bf12-5d1038a9b7ba-combined-ca-bundle\") pod \"ebdf3274-70cb-4083-bf12-5d1038a9b7ba\" (UID: \"ebdf3274-70cb-4083-bf12-5d1038a9b7ba\") " Mar 12 15:01:06 crc kubenswrapper[4778]: I0312 15:01:06.305270 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lls6t\" (UniqueName: \"kubernetes.io/projected/ebdf3274-70cb-4083-bf12-5d1038a9b7ba-kube-api-access-lls6t\") pod \"ebdf3274-70cb-4083-bf12-5d1038a9b7ba\" (UID: \"ebdf3274-70cb-4083-bf12-5d1038a9b7ba\") " Mar 12 15:01:06 crc kubenswrapper[4778]: I0312 15:01:06.305318 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ebdf3274-70cb-4083-bf12-5d1038a9b7ba-fernet-keys\") pod \"ebdf3274-70cb-4083-bf12-5d1038a9b7ba\" (UID: \"ebdf3274-70cb-4083-bf12-5d1038a9b7ba\") " Mar 12 15:01:06 crc kubenswrapper[4778]: I0312 15:01:06.310260 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebdf3274-70cb-4083-bf12-5d1038a9b7ba-kube-api-access-lls6t" (OuterVolumeSpecName: "kube-api-access-lls6t") pod "ebdf3274-70cb-4083-bf12-5d1038a9b7ba" (UID: "ebdf3274-70cb-4083-bf12-5d1038a9b7ba"). InnerVolumeSpecName "kube-api-access-lls6t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:01:06 crc kubenswrapper[4778]: I0312 15:01:06.311820 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebdf3274-70cb-4083-bf12-5d1038a9b7ba-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "ebdf3274-70cb-4083-bf12-5d1038a9b7ba" (UID: "ebdf3274-70cb-4083-bf12-5d1038a9b7ba"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:01:06 crc kubenswrapper[4778]: I0312 15:01:06.345159 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebdf3274-70cb-4083-bf12-5d1038a9b7ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ebdf3274-70cb-4083-bf12-5d1038a9b7ba" (UID: "ebdf3274-70cb-4083-bf12-5d1038a9b7ba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:01:06 crc kubenswrapper[4778]: I0312 15:01:06.366424 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebdf3274-70cb-4083-bf12-5d1038a9b7ba-config-data" (OuterVolumeSpecName: "config-data") pod "ebdf3274-70cb-4083-bf12-5d1038a9b7ba" (UID: "ebdf3274-70cb-4083-bf12-5d1038a9b7ba"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:01:06 crc kubenswrapper[4778]: I0312 15:01:06.407718 4778 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ebdf3274-70cb-4083-bf12-5d1038a9b7ba-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 12 15:01:06 crc kubenswrapper[4778]: I0312 15:01:06.407757 4778 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebdf3274-70cb-4083-bf12-5d1038a9b7ba-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 15:01:06 crc kubenswrapper[4778]: I0312 15:01:06.407769 4778 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebdf3274-70cb-4083-bf12-5d1038a9b7ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 12 15:01:06 crc kubenswrapper[4778]: I0312 15:01:06.407786 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lls6t\" (UniqueName: \"kubernetes.io/projected/ebdf3274-70cb-4083-bf12-5d1038a9b7ba-kube-api-access-lls6t\") on node \"crc\" DevicePath \"\"" Mar 12 15:01:06 crc kubenswrapper[4778]: I0312 15:01:06.822519 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29555461-lmqk9" event={"ID":"ebdf3274-70cb-4083-bf12-5d1038a9b7ba","Type":"ContainerDied","Data":"f482217845d50b2884bd5bd48ad61af1e7768b9ee05ce69facabd403638440d6"} Mar 12 15:01:06 crc kubenswrapper[4778]: I0312 15:01:06.822888 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f482217845d50b2884bd5bd48ad61af1e7768b9ee05ce69facabd403638440d6" Mar 12 15:01:06 crc kubenswrapper[4778]: I0312 15:01:06.822562 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29555461-lmqk9" Mar 12 15:01:08 crc kubenswrapper[4778]: I0312 15:01:08.253179 4778 scope.go:117] "RemoveContainer" containerID="505b7ca3387092da837254cfad64e23448af9dbba84199bbb89de928d39d31e3" Mar 12 15:01:08 crc kubenswrapper[4778]: E0312 15:01:08.253533 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 15:01:19 crc kubenswrapper[4778]: I0312 15:01:19.255047 4778 scope.go:117] "RemoveContainer" containerID="505b7ca3387092da837254cfad64e23448af9dbba84199bbb89de928d39d31e3" Mar 12 15:01:19 crc kubenswrapper[4778]: E0312 15:01:19.258208 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 15:01:31 crc kubenswrapper[4778]: I0312 15:01:31.253781 4778 scope.go:117] "RemoveContainer" containerID="505b7ca3387092da837254cfad64e23448af9dbba84199bbb89de928d39d31e3" Mar 12 15:01:31 crc kubenswrapper[4778]: E0312 15:01:31.254638 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 15:01:32 crc kubenswrapper[4778]: I0312 15:01:32.125150 4778 generic.go:334] "Generic (PLEG): container finished" podID="74897d0a-ca7b-4589-bd4c-75910c2d491c" containerID="04824fe8df9ecfce713c8136bfb0516b3d49f4264b49ad91474ebd09ae740d91" exitCode=0 Mar 12 15:01:32 crc kubenswrapper[4778]: I0312 15:01:32.125242 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"74897d0a-ca7b-4589-bd4c-75910c2d491c","Type":"ContainerDied","Data":"04824fe8df9ecfce713c8136bfb0516b3d49f4264b49ad91474ebd09ae740d91"} Mar 12 15:01:33 crc kubenswrapper[4778]: I0312 15:01:33.864755 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Mar 12 15:01:33 crc kubenswrapper[4778]: I0312 15:01:33.952980 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/74897d0a-ca7b-4589-bd4c-75910c2d491c-openstack-config-secret\") pod \"74897d0a-ca7b-4589-bd4c-75910c2d491c\" (UID: \"74897d0a-ca7b-4589-bd4c-75910c2d491c\") " Mar 12 15:01:33 crc kubenswrapper[4778]: I0312 15:01:33.953091 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ssdw\" (UniqueName: \"kubernetes.io/projected/74897d0a-ca7b-4589-bd4c-75910c2d491c-kube-api-access-4ssdw\") pod \"74897d0a-ca7b-4589-bd4c-75910c2d491c\" (UID: \"74897d0a-ca7b-4589-bd4c-75910c2d491c\") " Mar 12 15:01:33 crc kubenswrapper[4778]: I0312 15:01:33.953132 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/74897d0a-ca7b-4589-bd4c-75910c2d491c-ssh-key\") pod \"74897d0a-ca7b-4589-bd4c-75910c2d491c\" (UID: \"74897d0a-ca7b-4589-bd4c-75910c2d491c\") " Mar 12 15:01:33 crc kubenswrapper[4778]: I0312 15:01:33.953172 4778 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/74897d0a-ca7b-4589-bd4c-75910c2d491c-test-operator-ephemeral-workdir\") pod \"74897d0a-ca7b-4589-bd4c-75910c2d491c\" (UID: \"74897d0a-ca7b-4589-bd4c-75910c2d491c\") " Mar 12 15:01:33 crc kubenswrapper[4778]: I0312 15:01:33.953235 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/74897d0a-ca7b-4589-bd4c-75910c2d491c-config-data\") pod \"74897d0a-ca7b-4589-bd4c-75910c2d491c\" (UID: \"74897d0a-ca7b-4589-bd4c-75910c2d491c\") " Mar 12 15:01:33 crc kubenswrapper[4778]: I0312 15:01:33.953293 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/74897d0a-ca7b-4589-bd4c-75910c2d491c-openstack-config\") pod \"74897d0a-ca7b-4589-bd4c-75910c2d491c\" (UID: \"74897d0a-ca7b-4589-bd4c-75910c2d491c\") " Mar 12 15:01:33 crc kubenswrapper[4778]: I0312 15:01:33.953419 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"74897d0a-ca7b-4589-bd4c-75910c2d491c\" (UID: \"74897d0a-ca7b-4589-bd4c-75910c2d491c\") " Mar 12 15:01:33 crc kubenswrapper[4778]: I0312 15:01:33.953529 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/74897d0a-ca7b-4589-bd4c-75910c2d491c-test-operator-ephemeral-temporary\") pod \"74897d0a-ca7b-4589-bd4c-75910c2d491c\" (UID: \"74897d0a-ca7b-4589-bd4c-75910c2d491c\") " Mar 12 15:01:33 crc kubenswrapper[4778]: I0312 15:01:33.953590 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/74897d0a-ca7b-4589-bd4c-75910c2d491c-ca-certs\") pod 
\"74897d0a-ca7b-4589-bd4c-75910c2d491c\" (UID: \"74897d0a-ca7b-4589-bd4c-75910c2d491c\") " Mar 12 15:01:33 crc kubenswrapper[4778]: I0312 15:01:33.955227 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74897d0a-ca7b-4589-bd4c-75910c2d491c-config-data" (OuterVolumeSpecName: "config-data") pod "74897d0a-ca7b-4589-bd4c-75910c2d491c" (UID: "74897d0a-ca7b-4589-bd4c-75910c2d491c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:01:33 crc kubenswrapper[4778]: I0312 15:01:33.955427 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74897d0a-ca7b-4589-bd4c-75910c2d491c-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "74897d0a-ca7b-4589-bd4c-75910c2d491c" (UID: "74897d0a-ca7b-4589-bd4c-75910c2d491c"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:01:33 crc kubenswrapper[4778]: I0312 15:01:33.959093 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "test-operator-logs") pod "74897d0a-ca7b-4589-bd4c-75910c2d491c" (UID: "74897d0a-ca7b-4589-bd4c-75910c2d491c"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 12 15:01:33 crc kubenswrapper[4778]: I0312 15:01:33.959460 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74897d0a-ca7b-4589-bd4c-75910c2d491c-kube-api-access-4ssdw" (OuterVolumeSpecName: "kube-api-access-4ssdw") pod "74897d0a-ca7b-4589-bd4c-75910c2d491c" (UID: "74897d0a-ca7b-4589-bd4c-75910c2d491c"). InnerVolumeSpecName "kube-api-access-4ssdw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:01:33 crc kubenswrapper[4778]: I0312 15:01:33.964140 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74897d0a-ca7b-4589-bd4c-75910c2d491c-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "74897d0a-ca7b-4589-bd4c-75910c2d491c" (UID: "74897d0a-ca7b-4589-bd4c-75910c2d491c"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:01:33 crc kubenswrapper[4778]: I0312 15:01:33.983963 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74897d0a-ca7b-4589-bd4c-75910c2d491c-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "74897d0a-ca7b-4589-bd4c-75910c2d491c" (UID: "74897d0a-ca7b-4589-bd4c-75910c2d491c"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:01:33 crc kubenswrapper[4778]: I0312 15:01:33.994030 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74897d0a-ca7b-4589-bd4c-75910c2d491c-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "74897d0a-ca7b-4589-bd4c-75910c2d491c" (UID: "74897d0a-ca7b-4589-bd4c-75910c2d491c"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:01:33 crc kubenswrapper[4778]: I0312 15:01:33.998360 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74897d0a-ca7b-4589-bd4c-75910c2d491c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "74897d0a-ca7b-4589-bd4c-75910c2d491c" (UID: "74897d0a-ca7b-4589-bd4c-75910c2d491c"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:01:34 crc kubenswrapper[4778]: I0312 15:01:34.035023 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74897d0a-ca7b-4589-bd4c-75910c2d491c-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "74897d0a-ca7b-4589-bd4c-75910c2d491c" (UID: "74897d0a-ca7b-4589-bd4c-75910c2d491c"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:01:34 crc kubenswrapper[4778]: I0312 15:01:34.056631 4778 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/74897d0a-ca7b-4589-bd4c-75910c2d491c-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Mar 12 15:01:34 crc kubenswrapper[4778]: I0312 15:01:34.056676 4778 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/74897d0a-ca7b-4589-bd4c-75910c2d491c-ca-certs\") on node \"crc\" DevicePath \"\"" Mar 12 15:01:34 crc kubenswrapper[4778]: I0312 15:01:34.056692 4778 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/74897d0a-ca7b-4589-bd4c-75910c2d491c-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Mar 12 15:01:34 crc kubenswrapper[4778]: I0312 15:01:34.056706 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ssdw\" (UniqueName: \"kubernetes.io/projected/74897d0a-ca7b-4589-bd4c-75910c2d491c-kube-api-access-4ssdw\") on node \"crc\" DevicePath \"\"" Mar 12 15:01:34 crc kubenswrapper[4778]: I0312 15:01:34.056718 4778 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/74897d0a-ca7b-4589-bd4c-75910c2d491c-ssh-key\") on node \"crc\" DevicePath \"\"" Mar 12 15:01:34 crc kubenswrapper[4778]: I0312 15:01:34.056731 4778 reconciler_common.go:293] "Volume detached for volume 
\"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/74897d0a-ca7b-4589-bd4c-75910c2d491c-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Mar 12 15:01:34 crc kubenswrapper[4778]: I0312 15:01:34.056745 4778 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/74897d0a-ca7b-4589-bd4c-75910c2d491c-config-data\") on node \"crc\" DevicePath \"\"" Mar 12 15:01:34 crc kubenswrapper[4778]: I0312 15:01:34.056758 4778 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/74897d0a-ca7b-4589-bd4c-75910c2d491c-openstack-config\") on node \"crc\" DevicePath \"\"" Mar 12 15:01:34 crc kubenswrapper[4778]: I0312 15:01:34.056800 4778 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Mar 12 15:01:34 crc kubenswrapper[4778]: I0312 15:01:34.086885 4778 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Mar 12 15:01:34 crc kubenswrapper[4778]: I0312 15:01:34.155371 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"74897d0a-ca7b-4589-bd4c-75910c2d491c","Type":"ContainerDied","Data":"454ca901956127a4048551d166d33c00269e2d8a18f508b4b327654529c385c0"} Mar 12 15:01:34 crc kubenswrapper[4778]: I0312 15:01:34.155423 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Mar 12 15:01:34 crc kubenswrapper[4778]: I0312 15:01:34.155427 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="454ca901956127a4048551d166d33c00269e2d8a18f508b4b327654529c385c0" Mar 12 15:01:34 crc kubenswrapper[4778]: I0312 15:01:34.158288 4778 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Mar 12 15:01:39 crc kubenswrapper[4778]: I0312 15:01:39.768727 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Mar 12 15:01:39 crc kubenswrapper[4778]: E0312 15:01:39.769936 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74897d0a-ca7b-4589-bd4c-75910c2d491c" containerName="tempest-tests-tempest-tests-runner" Mar 12 15:01:39 crc kubenswrapper[4778]: I0312 15:01:39.769955 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="74897d0a-ca7b-4589-bd4c-75910c2d491c" containerName="tempest-tests-tempest-tests-runner" Mar 12 15:01:39 crc kubenswrapper[4778]: E0312 15:01:39.769977 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebdf3274-70cb-4083-bf12-5d1038a9b7ba" containerName="keystone-cron" Mar 12 15:01:39 crc kubenswrapper[4778]: I0312 15:01:39.769986 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebdf3274-70cb-4083-bf12-5d1038a9b7ba" containerName="keystone-cron" Mar 12 15:01:39 crc kubenswrapper[4778]: I0312 15:01:39.770234 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebdf3274-70cb-4083-bf12-5d1038a9b7ba" containerName="keystone-cron" Mar 12 15:01:39 crc kubenswrapper[4778]: I0312 15:01:39.770258 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="74897d0a-ca7b-4589-bd4c-75910c2d491c" containerName="tempest-tests-tempest-tests-runner" Mar 12 15:01:39 crc 
kubenswrapper[4778]: I0312 15:01:39.771026 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 12 15:01:39 crc kubenswrapper[4778]: I0312 15:01:39.773970 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-s8dkq" Mar 12 15:01:39 crc kubenswrapper[4778]: I0312 15:01:39.782698 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Mar 12 15:01:39 crc kubenswrapper[4778]: I0312 15:01:39.882264 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jtd4\" (UniqueName: \"kubernetes.io/projected/82246f69-2112-44e9-a783-a4a5926188b4-kube-api-access-2jtd4\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"82246f69-2112-44e9-a783-a4a5926188b4\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 12 15:01:39 crc kubenswrapper[4778]: I0312 15:01:39.882331 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"82246f69-2112-44e9-a783-a4a5926188b4\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 12 15:01:39 crc kubenswrapper[4778]: I0312 15:01:39.985476 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jtd4\" (UniqueName: \"kubernetes.io/projected/82246f69-2112-44e9-a783-a4a5926188b4-kube-api-access-2jtd4\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"82246f69-2112-44e9-a783-a4a5926188b4\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 12 15:01:39 crc kubenswrapper[4778]: I0312 15:01:39.985607 4778 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"82246f69-2112-44e9-a783-a4a5926188b4\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 12 15:01:39 crc kubenswrapper[4778]: I0312 15:01:39.986313 4778 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"82246f69-2112-44e9-a783-a4a5926188b4\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 12 15:01:40 crc kubenswrapper[4778]: I0312 15:01:40.012311 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jtd4\" (UniqueName: \"kubernetes.io/projected/82246f69-2112-44e9-a783-a4a5926188b4-kube-api-access-2jtd4\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"82246f69-2112-44e9-a783-a4a5926188b4\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 12 15:01:40 crc kubenswrapper[4778]: I0312 15:01:40.036125 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"82246f69-2112-44e9-a783-a4a5926188b4\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 12 15:01:40 crc kubenswrapper[4778]: I0312 15:01:40.115741 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 12 15:01:40 crc kubenswrapper[4778]: I0312 15:01:40.607932 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Mar 12 15:01:41 crc kubenswrapper[4778]: I0312 15:01:41.234577 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"82246f69-2112-44e9-a783-a4a5926188b4","Type":"ContainerStarted","Data":"86122024dcb612e716c8156c8914ce0d795dee3075f7fd0fe85a17b803420332"} Mar 12 15:01:42 crc kubenswrapper[4778]: I0312 15:01:42.243985 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"82246f69-2112-44e9-a783-a4a5926188b4","Type":"ContainerStarted","Data":"6a1da5a5609d7d84e85854d47472704f89202cab80268f35990e8e18b239063f"} Mar 12 15:01:42 crc kubenswrapper[4778]: I0312 15:01:42.270343 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.9905026289999999 podStartE2EDuration="3.270316627s" podCreationTimestamp="2026-03-12 15:01:39 +0000 UTC" firstStartedPulling="2026-03-12 15:01:40.607062704 +0000 UTC m=+6719.055758100" lastFinishedPulling="2026-03-12 15:01:41.886876692 +0000 UTC m=+6720.335572098" observedRunningTime="2026-03-12 15:01:42.264820341 +0000 UTC m=+6720.713515737" watchObservedRunningTime="2026-03-12 15:01:42.270316627 +0000 UTC m=+6720.719012063" Mar 12 15:01:44 crc kubenswrapper[4778]: I0312 15:01:44.254308 4778 scope.go:117] "RemoveContainer" containerID="505b7ca3387092da837254cfad64e23448af9dbba84199bbb89de928d39d31e3" Mar 12 15:01:44 crc kubenswrapper[4778]: E0312 15:01:44.255228 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 15:01:57 crc kubenswrapper[4778]: I0312 15:01:57.255528 4778 scope.go:117] "RemoveContainer" containerID="505b7ca3387092da837254cfad64e23448af9dbba84199bbb89de928d39d31e3" Mar 12 15:01:57 crc kubenswrapper[4778]: E0312 15:01:57.256339 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 15:02:00 crc kubenswrapper[4778]: I0312 15:02:00.162589 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555462-dvtsm"] Mar 12 15:02:00 crc kubenswrapper[4778]: I0312 15:02:00.165693 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555462-dvtsm" Mar 12 15:02:00 crc kubenswrapper[4778]: I0312 15:02:00.169898 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 15:02:00 crc kubenswrapper[4778]: I0312 15:02:00.170564 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 15:02:00 crc kubenswrapper[4778]: I0312 15:02:00.170882 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 15:02:00 crc kubenswrapper[4778]: I0312 15:02:00.191259 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555462-dvtsm"] Mar 12 15:02:00 crc kubenswrapper[4778]: I0312 15:02:00.243969 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxkr6\" (UniqueName: \"kubernetes.io/projected/cbb23378-6e3e-4c63-919d-47ce1d17dd7b-kube-api-access-nxkr6\") pod \"auto-csr-approver-29555462-dvtsm\" (UID: \"cbb23378-6e3e-4c63-919d-47ce1d17dd7b\") " pod="openshift-infra/auto-csr-approver-29555462-dvtsm" Mar 12 15:02:00 crc kubenswrapper[4778]: I0312 15:02:00.346048 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxkr6\" (UniqueName: \"kubernetes.io/projected/cbb23378-6e3e-4c63-919d-47ce1d17dd7b-kube-api-access-nxkr6\") pod \"auto-csr-approver-29555462-dvtsm\" (UID: \"cbb23378-6e3e-4c63-919d-47ce1d17dd7b\") " pod="openshift-infra/auto-csr-approver-29555462-dvtsm" Mar 12 15:02:00 crc kubenswrapper[4778]: I0312 15:02:00.366292 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxkr6\" (UniqueName: \"kubernetes.io/projected/cbb23378-6e3e-4c63-919d-47ce1d17dd7b-kube-api-access-nxkr6\") pod \"auto-csr-approver-29555462-dvtsm\" (UID: \"cbb23378-6e3e-4c63-919d-47ce1d17dd7b\") " 
pod="openshift-infra/auto-csr-approver-29555462-dvtsm" Mar 12 15:02:00 crc kubenswrapper[4778]: I0312 15:02:00.500270 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555462-dvtsm" Mar 12 15:02:00 crc kubenswrapper[4778]: I0312 15:02:00.976633 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555462-dvtsm"] Mar 12 15:02:01 crc kubenswrapper[4778]: I0312 15:02:01.438876 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555462-dvtsm" event={"ID":"cbb23378-6e3e-4c63-919d-47ce1d17dd7b","Type":"ContainerStarted","Data":"deaecc829dedc237b70dc46e8c5b40e55230025755e171dfd88c21e6975391c5"} Mar 12 15:02:03 crc kubenswrapper[4778]: I0312 15:02:03.402456 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rkpvq/must-gather-6d9ls"] Mar 12 15:02:03 crc kubenswrapper[4778]: I0312 15:02:03.405003 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rkpvq/must-gather-6d9ls" Mar 12 15:02:03 crc kubenswrapper[4778]: I0312 15:02:03.406915 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-rkpvq"/"default-dockercfg-fscgd" Mar 12 15:02:03 crc kubenswrapper[4778]: I0312 15:02:03.407345 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-rkpvq"/"kube-root-ca.crt" Mar 12 15:02:03 crc kubenswrapper[4778]: I0312 15:02:03.410601 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-rkpvq"/"openshift-service-ca.crt" Mar 12 15:02:03 crc kubenswrapper[4778]: I0312 15:02:03.415034 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-rkpvq/must-gather-6d9ls"] Mar 12 15:02:03 crc kubenswrapper[4778]: I0312 15:02:03.459440 4778 generic.go:334] "Generic (PLEG): container finished" podID="cbb23378-6e3e-4c63-919d-47ce1d17dd7b" containerID="bc1c69d732ac8380ce4ad84b76897a91373ec3edde2343f57d27f4105f4594eb" exitCode=0 Mar 12 15:02:03 crc kubenswrapper[4778]: I0312 15:02:03.459481 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555462-dvtsm" event={"ID":"cbb23378-6e3e-4c63-919d-47ce1d17dd7b","Type":"ContainerDied","Data":"bc1c69d732ac8380ce4ad84b76897a91373ec3edde2343f57d27f4105f4594eb"} Mar 12 15:02:03 crc kubenswrapper[4778]: I0312 15:02:03.510323 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twzlp\" (UniqueName: \"kubernetes.io/projected/dd2baa0b-6680-41af-8231-e30368cb0090-kube-api-access-twzlp\") pod \"must-gather-6d9ls\" (UID: \"dd2baa0b-6680-41af-8231-e30368cb0090\") " pod="openshift-must-gather-rkpvq/must-gather-6d9ls" Mar 12 15:02:03 crc kubenswrapper[4778]: I0312 15:02:03.510513 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" 
(UniqueName: \"kubernetes.io/empty-dir/dd2baa0b-6680-41af-8231-e30368cb0090-must-gather-output\") pod \"must-gather-6d9ls\" (UID: \"dd2baa0b-6680-41af-8231-e30368cb0090\") " pod="openshift-must-gather-rkpvq/must-gather-6d9ls" Mar 12 15:02:03 crc kubenswrapper[4778]: I0312 15:02:03.612263 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/dd2baa0b-6680-41af-8231-e30368cb0090-must-gather-output\") pod \"must-gather-6d9ls\" (UID: \"dd2baa0b-6680-41af-8231-e30368cb0090\") " pod="openshift-must-gather-rkpvq/must-gather-6d9ls" Mar 12 15:02:03 crc kubenswrapper[4778]: I0312 15:02:03.612342 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twzlp\" (UniqueName: \"kubernetes.io/projected/dd2baa0b-6680-41af-8231-e30368cb0090-kube-api-access-twzlp\") pod \"must-gather-6d9ls\" (UID: \"dd2baa0b-6680-41af-8231-e30368cb0090\") " pod="openshift-must-gather-rkpvq/must-gather-6d9ls" Mar 12 15:02:03 crc kubenswrapper[4778]: I0312 15:02:03.612759 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/dd2baa0b-6680-41af-8231-e30368cb0090-must-gather-output\") pod \"must-gather-6d9ls\" (UID: \"dd2baa0b-6680-41af-8231-e30368cb0090\") " pod="openshift-must-gather-rkpvq/must-gather-6d9ls" Mar 12 15:02:03 crc kubenswrapper[4778]: I0312 15:02:03.629707 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twzlp\" (UniqueName: \"kubernetes.io/projected/dd2baa0b-6680-41af-8231-e30368cb0090-kube-api-access-twzlp\") pod \"must-gather-6d9ls\" (UID: \"dd2baa0b-6680-41af-8231-e30368cb0090\") " pod="openshift-must-gather-rkpvq/must-gather-6d9ls" Mar 12 15:02:03 crc kubenswrapper[4778]: I0312 15:02:03.725624 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rkpvq/must-gather-6d9ls" Mar 12 15:02:04 crc kubenswrapper[4778]: I0312 15:02:04.232897 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-rkpvq/must-gather-6d9ls"] Mar 12 15:02:04 crc kubenswrapper[4778]: I0312 15:02:04.470351 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rkpvq/must-gather-6d9ls" event={"ID":"dd2baa0b-6680-41af-8231-e30368cb0090","Type":"ContainerStarted","Data":"49571a492d3a83d3d165f3c9920027fe7d0f5cf5c45cb08c90ad79451a4a973a"} Mar 12 15:02:04 crc kubenswrapper[4778]: I0312 15:02:04.842144 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555462-dvtsm" Mar 12 15:02:04 crc kubenswrapper[4778]: I0312 15:02:04.938020 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxkr6\" (UniqueName: \"kubernetes.io/projected/cbb23378-6e3e-4c63-919d-47ce1d17dd7b-kube-api-access-nxkr6\") pod \"cbb23378-6e3e-4c63-919d-47ce1d17dd7b\" (UID: \"cbb23378-6e3e-4c63-919d-47ce1d17dd7b\") " Mar 12 15:02:04 crc kubenswrapper[4778]: I0312 15:02:04.945196 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbb23378-6e3e-4c63-919d-47ce1d17dd7b-kube-api-access-nxkr6" (OuterVolumeSpecName: "kube-api-access-nxkr6") pod "cbb23378-6e3e-4c63-919d-47ce1d17dd7b" (UID: "cbb23378-6e3e-4c63-919d-47ce1d17dd7b"). InnerVolumeSpecName "kube-api-access-nxkr6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:02:05 crc kubenswrapper[4778]: I0312 15:02:05.040689 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxkr6\" (UniqueName: \"kubernetes.io/projected/cbb23378-6e3e-4c63-919d-47ce1d17dd7b-kube-api-access-nxkr6\") on node \"crc\" DevicePath \"\"" Mar 12 15:02:05 crc kubenswrapper[4778]: I0312 15:02:05.512636 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555462-dvtsm" event={"ID":"cbb23378-6e3e-4c63-919d-47ce1d17dd7b","Type":"ContainerDied","Data":"deaecc829dedc237b70dc46e8c5b40e55230025755e171dfd88c21e6975391c5"} Mar 12 15:02:05 crc kubenswrapper[4778]: I0312 15:02:05.512714 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="deaecc829dedc237b70dc46e8c5b40e55230025755e171dfd88c21e6975391c5" Mar 12 15:02:05 crc kubenswrapper[4778]: I0312 15:02:05.512717 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555462-dvtsm" Mar 12 15:02:05 crc kubenswrapper[4778]: E0312 15:02:05.724908 4778 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcbb23378_6e3e_4c63_919d_47ce1d17dd7b.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcbb23378_6e3e_4c63_919d_47ce1d17dd7b.slice/crio-deaecc829dedc237b70dc46e8c5b40e55230025755e171dfd88c21e6975391c5\": RecentStats: unable to find data in memory cache]" Mar 12 15:02:05 crc kubenswrapper[4778]: I0312 15:02:05.917455 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555456-vtfp6"] Mar 12 15:02:05 crc kubenswrapper[4778]: I0312 15:02:05.928448 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555456-vtfp6"] 
Mar 12 15:02:06 crc kubenswrapper[4778]: I0312 15:02:06.265615 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc0add53-7611-4f91-bf0b-cf5fea5bb9d7" path="/var/lib/kubelet/pods/fc0add53-7611-4f91-bf0b-cf5fea5bb9d7/volumes" Mar 12 15:02:08 crc kubenswrapper[4778]: I0312 15:02:08.254713 4778 scope.go:117] "RemoveContainer" containerID="505b7ca3387092da837254cfad64e23448af9dbba84199bbb89de928d39d31e3" Mar 12 15:02:10 crc kubenswrapper[4778]: I0312 15:02:10.561627 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rkpvq/must-gather-6d9ls" event={"ID":"dd2baa0b-6680-41af-8231-e30368cb0090","Type":"ContainerStarted","Data":"099ea78abdd9e205689d8ecdedd4eb5e53feb9c31e850ed5f759eb9dcba848eb"} Mar 12 15:02:10 crc kubenswrapper[4778]: I0312 15:02:10.563402 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" event={"ID":"24438fc6-dab0-4a9e-8b97-2532da76d9cd","Type":"ContainerStarted","Data":"009e612c3693545ba4a1988aa00993d05612427ec6eb485b08b455b35968f1ab"} Mar 12 15:02:11 crc kubenswrapper[4778]: I0312 15:02:11.574428 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rkpvq/must-gather-6d9ls" event={"ID":"dd2baa0b-6680-41af-8231-e30368cb0090","Type":"ContainerStarted","Data":"d75c39511d2814b29f7a8f3d56db17a77d40e26925d76443f57f610bafcb652b"} Mar 12 15:02:11 crc kubenswrapper[4778]: I0312 15:02:11.595226 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-rkpvq/must-gather-6d9ls" podStartSLOduration=2.649630095 podStartE2EDuration="8.595205017s" podCreationTimestamp="2026-03-12 15:02:03 +0000 UTC" firstStartedPulling="2026-03-12 15:02:04.234699723 +0000 UTC m=+6742.683395119" lastFinishedPulling="2026-03-12 15:02:10.180274625 +0000 UTC m=+6748.628970041" observedRunningTime="2026-03-12 15:02:11.587812166 +0000 UTC m=+6750.036507562" watchObservedRunningTime="2026-03-12 
15:02:11.595205017 +0000 UTC m=+6750.043900413" Mar 12 15:02:15 crc kubenswrapper[4778]: I0312 15:02:15.272817 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rkpvq/crc-debug-2n5vv"] Mar 12 15:02:15 crc kubenswrapper[4778]: E0312 15:02:15.273545 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbb23378-6e3e-4c63-919d-47ce1d17dd7b" containerName="oc" Mar 12 15:02:15 crc kubenswrapper[4778]: I0312 15:02:15.273560 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbb23378-6e3e-4c63-919d-47ce1d17dd7b" containerName="oc" Mar 12 15:02:15 crc kubenswrapper[4778]: I0312 15:02:15.273769 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbb23378-6e3e-4c63-919d-47ce1d17dd7b" containerName="oc" Mar 12 15:02:15 crc kubenswrapper[4778]: I0312 15:02:15.274357 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rkpvq/crc-debug-2n5vv" Mar 12 15:02:15 crc kubenswrapper[4778]: I0312 15:02:15.370794 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/25b8a8bd-e1c2-44d9-8a16-a1efc4aaf657-host\") pod \"crc-debug-2n5vv\" (UID: \"25b8a8bd-e1c2-44d9-8a16-a1efc4aaf657\") " pod="openshift-must-gather-rkpvq/crc-debug-2n5vv" Mar 12 15:02:15 crc kubenswrapper[4778]: I0312 15:02:15.370883 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9m8w\" (UniqueName: \"kubernetes.io/projected/25b8a8bd-e1c2-44d9-8a16-a1efc4aaf657-kube-api-access-d9m8w\") pod \"crc-debug-2n5vv\" (UID: \"25b8a8bd-e1c2-44d9-8a16-a1efc4aaf657\") " pod="openshift-must-gather-rkpvq/crc-debug-2n5vv" Mar 12 15:02:15 crc kubenswrapper[4778]: I0312 15:02:15.473365 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9m8w\" (UniqueName: 
\"kubernetes.io/projected/25b8a8bd-e1c2-44d9-8a16-a1efc4aaf657-kube-api-access-d9m8w\") pod \"crc-debug-2n5vv\" (UID: \"25b8a8bd-e1c2-44d9-8a16-a1efc4aaf657\") " pod="openshift-must-gather-rkpvq/crc-debug-2n5vv" Mar 12 15:02:15 crc kubenswrapper[4778]: I0312 15:02:15.473726 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/25b8a8bd-e1c2-44d9-8a16-a1efc4aaf657-host\") pod \"crc-debug-2n5vv\" (UID: \"25b8a8bd-e1c2-44d9-8a16-a1efc4aaf657\") " pod="openshift-must-gather-rkpvq/crc-debug-2n5vv" Mar 12 15:02:15 crc kubenswrapper[4778]: I0312 15:02:15.473827 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/25b8a8bd-e1c2-44d9-8a16-a1efc4aaf657-host\") pod \"crc-debug-2n5vv\" (UID: \"25b8a8bd-e1c2-44d9-8a16-a1efc4aaf657\") " pod="openshift-must-gather-rkpvq/crc-debug-2n5vv" Mar 12 15:02:15 crc kubenswrapper[4778]: I0312 15:02:15.497196 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9m8w\" (UniqueName: \"kubernetes.io/projected/25b8a8bd-e1c2-44d9-8a16-a1efc4aaf657-kube-api-access-d9m8w\") pod \"crc-debug-2n5vv\" (UID: \"25b8a8bd-e1c2-44d9-8a16-a1efc4aaf657\") " pod="openshift-must-gather-rkpvq/crc-debug-2n5vv" Mar 12 15:02:15 crc kubenswrapper[4778]: I0312 15:02:15.594967 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rkpvq/crc-debug-2n5vv" Mar 12 15:02:16 crc kubenswrapper[4778]: I0312 15:02:16.623680 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rkpvq/crc-debug-2n5vv" event={"ID":"25b8a8bd-e1c2-44d9-8a16-a1efc4aaf657","Type":"ContainerStarted","Data":"94f570da92f9fd8977fa9f3cbc8d1dfd2eab1360e49ff3877e319f0a0fea4cfa"} Mar 12 15:02:24 crc kubenswrapper[4778]: I0312 15:02:24.705914 4778 scope.go:117] "RemoveContainer" containerID="d7d76c5b2f5b6d4767497e4e99746de9373b74f615023933a47cba956a1bacb0" Mar 12 15:02:26 crc kubenswrapper[4778]: I0312 15:02:26.714522 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rkpvq/crc-debug-2n5vv" event={"ID":"25b8a8bd-e1c2-44d9-8a16-a1efc4aaf657","Type":"ContainerStarted","Data":"32e69f17a15da926e453ec4388e2482d274516709eb37f6124496feae6a6509f"} Mar 12 15:02:26 crc kubenswrapper[4778]: I0312 15:02:26.728553 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-rkpvq/crc-debug-2n5vv" podStartSLOduration=1.746772557 podStartE2EDuration="11.728538109s" podCreationTimestamp="2026-03-12 15:02:15 +0000 UTC" firstStartedPulling="2026-03-12 15:02:15.657244171 +0000 UTC m=+6754.105939587" lastFinishedPulling="2026-03-12 15:02:25.639009743 +0000 UTC m=+6764.087705139" observedRunningTime="2026-03-12 15:02:26.728504578 +0000 UTC m=+6765.177199974" watchObservedRunningTime="2026-03-12 15:02:26.728538109 +0000 UTC m=+6765.177233505" Mar 12 15:02:42 crc kubenswrapper[4778]: I0312 15:02:42.847810 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-h8jzw"] Mar 12 15:02:42 crc kubenswrapper[4778]: I0312 15:02:42.850714 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h8jzw" Mar 12 15:02:42 crc kubenswrapper[4778]: I0312 15:02:42.868220 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h8jzw"] Mar 12 15:02:42 crc kubenswrapper[4778]: I0312 15:02:42.963917 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c3f77a6-b73d-4572-9e3c-57622161ebab-catalog-content\") pod \"community-operators-h8jzw\" (UID: \"9c3f77a6-b73d-4572-9e3c-57622161ebab\") " pod="openshift-marketplace/community-operators-h8jzw" Mar 12 15:02:42 crc kubenswrapper[4778]: I0312 15:02:42.964414 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24lfq\" (UniqueName: \"kubernetes.io/projected/9c3f77a6-b73d-4572-9e3c-57622161ebab-kube-api-access-24lfq\") pod \"community-operators-h8jzw\" (UID: \"9c3f77a6-b73d-4572-9e3c-57622161ebab\") " pod="openshift-marketplace/community-operators-h8jzw" Mar 12 15:02:42 crc kubenswrapper[4778]: I0312 15:02:42.964644 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c3f77a6-b73d-4572-9e3c-57622161ebab-utilities\") pod \"community-operators-h8jzw\" (UID: \"9c3f77a6-b73d-4572-9e3c-57622161ebab\") " pod="openshift-marketplace/community-operators-h8jzw" Mar 12 15:02:43 crc kubenswrapper[4778]: I0312 15:02:43.066256 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24lfq\" (UniqueName: \"kubernetes.io/projected/9c3f77a6-b73d-4572-9e3c-57622161ebab-kube-api-access-24lfq\") pod \"community-operators-h8jzw\" (UID: \"9c3f77a6-b73d-4572-9e3c-57622161ebab\") " pod="openshift-marketplace/community-operators-h8jzw" Mar 12 15:02:43 crc kubenswrapper[4778]: I0312 15:02:43.066355 4778 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c3f77a6-b73d-4572-9e3c-57622161ebab-utilities\") pod \"community-operators-h8jzw\" (UID: \"9c3f77a6-b73d-4572-9e3c-57622161ebab\") " pod="openshift-marketplace/community-operators-h8jzw" Mar 12 15:02:43 crc kubenswrapper[4778]: I0312 15:02:43.066409 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c3f77a6-b73d-4572-9e3c-57622161ebab-catalog-content\") pod \"community-operators-h8jzw\" (UID: \"9c3f77a6-b73d-4572-9e3c-57622161ebab\") " pod="openshift-marketplace/community-operators-h8jzw" Mar 12 15:02:43 crc kubenswrapper[4778]: I0312 15:02:43.067046 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c3f77a6-b73d-4572-9e3c-57622161ebab-catalog-content\") pod \"community-operators-h8jzw\" (UID: \"9c3f77a6-b73d-4572-9e3c-57622161ebab\") " pod="openshift-marketplace/community-operators-h8jzw" Mar 12 15:02:43 crc kubenswrapper[4778]: I0312 15:02:43.067073 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c3f77a6-b73d-4572-9e3c-57622161ebab-utilities\") pod \"community-operators-h8jzw\" (UID: \"9c3f77a6-b73d-4572-9e3c-57622161ebab\") " pod="openshift-marketplace/community-operators-h8jzw" Mar 12 15:02:43 crc kubenswrapper[4778]: I0312 15:02:43.096584 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24lfq\" (UniqueName: \"kubernetes.io/projected/9c3f77a6-b73d-4572-9e3c-57622161ebab-kube-api-access-24lfq\") pod \"community-operators-h8jzw\" (UID: \"9c3f77a6-b73d-4572-9e3c-57622161ebab\") " pod="openshift-marketplace/community-operators-h8jzw" Mar 12 15:02:43 crc kubenswrapper[4778]: I0312 15:02:43.173445 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h8jzw" Mar 12 15:02:43 crc kubenswrapper[4778]: I0312 15:02:43.735585 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h8jzw"] Mar 12 15:02:43 crc kubenswrapper[4778]: I0312 15:02:43.883273 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h8jzw" event={"ID":"9c3f77a6-b73d-4572-9e3c-57622161ebab","Type":"ContainerStarted","Data":"bc58286ff5647c69df9b0e6066c0da0456949211defbad08ed82bfddbfdcb9d5"} Mar 12 15:02:44 crc kubenswrapper[4778]: I0312 15:02:44.652018 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wfdzw"] Mar 12 15:02:44 crc kubenswrapper[4778]: I0312 15:02:44.655949 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wfdzw" Mar 12 15:02:44 crc kubenswrapper[4778]: I0312 15:02:44.665557 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wfdzw"] Mar 12 15:02:44 crc kubenswrapper[4778]: I0312 15:02:44.700585 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8e947a7-cded-4a65-9b13-96116f14554a-catalog-content\") pod \"redhat-marketplace-wfdzw\" (UID: \"d8e947a7-cded-4a65-9b13-96116f14554a\") " pod="openshift-marketplace/redhat-marketplace-wfdzw" Mar 12 15:02:44 crc kubenswrapper[4778]: I0312 15:02:44.700680 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnxq6\" (UniqueName: \"kubernetes.io/projected/d8e947a7-cded-4a65-9b13-96116f14554a-kube-api-access-mnxq6\") pod \"redhat-marketplace-wfdzw\" (UID: \"d8e947a7-cded-4a65-9b13-96116f14554a\") " pod="openshift-marketplace/redhat-marketplace-wfdzw" Mar 12 15:02:44 crc kubenswrapper[4778]: I0312 
15:02:44.700715 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8e947a7-cded-4a65-9b13-96116f14554a-utilities\") pod \"redhat-marketplace-wfdzw\" (UID: \"d8e947a7-cded-4a65-9b13-96116f14554a\") " pod="openshift-marketplace/redhat-marketplace-wfdzw" Mar 12 15:02:44 crc kubenswrapper[4778]: I0312 15:02:44.802156 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8e947a7-cded-4a65-9b13-96116f14554a-catalog-content\") pod \"redhat-marketplace-wfdzw\" (UID: \"d8e947a7-cded-4a65-9b13-96116f14554a\") " pod="openshift-marketplace/redhat-marketplace-wfdzw" Mar 12 15:02:44 crc kubenswrapper[4778]: I0312 15:02:44.802248 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnxq6\" (UniqueName: \"kubernetes.io/projected/d8e947a7-cded-4a65-9b13-96116f14554a-kube-api-access-mnxq6\") pod \"redhat-marketplace-wfdzw\" (UID: \"d8e947a7-cded-4a65-9b13-96116f14554a\") " pod="openshift-marketplace/redhat-marketplace-wfdzw" Mar 12 15:02:44 crc kubenswrapper[4778]: I0312 15:02:44.802274 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8e947a7-cded-4a65-9b13-96116f14554a-utilities\") pod \"redhat-marketplace-wfdzw\" (UID: \"d8e947a7-cded-4a65-9b13-96116f14554a\") " pod="openshift-marketplace/redhat-marketplace-wfdzw" Mar 12 15:02:44 crc kubenswrapper[4778]: I0312 15:02:44.802773 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8e947a7-cded-4a65-9b13-96116f14554a-utilities\") pod \"redhat-marketplace-wfdzw\" (UID: \"d8e947a7-cded-4a65-9b13-96116f14554a\") " pod="openshift-marketplace/redhat-marketplace-wfdzw" Mar 12 15:02:44 crc kubenswrapper[4778]: I0312 15:02:44.802998 4778 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8e947a7-cded-4a65-9b13-96116f14554a-catalog-content\") pod \"redhat-marketplace-wfdzw\" (UID: \"d8e947a7-cded-4a65-9b13-96116f14554a\") " pod="openshift-marketplace/redhat-marketplace-wfdzw" Mar 12 15:02:44 crc kubenswrapper[4778]: I0312 15:02:44.826850 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnxq6\" (UniqueName: \"kubernetes.io/projected/d8e947a7-cded-4a65-9b13-96116f14554a-kube-api-access-mnxq6\") pod \"redhat-marketplace-wfdzw\" (UID: \"d8e947a7-cded-4a65-9b13-96116f14554a\") " pod="openshift-marketplace/redhat-marketplace-wfdzw" Mar 12 15:02:44 crc kubenswrapper[4778]: I0312 15:02:44.893831 4778 generic.go:334] "Generic (PLEG): container finished" podID="9c3f77a6-b73d-4572-9e3c-57622161ebab" containerID="e805e212c2a8d910e4812597491e17968ae4969f60f7e1630b50cf7c475b216f" exitCode=0 Mar 12 15:02:44 crc kubenswrapper[4778]: I0312 15:02:44.893869 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h8jzw" event={"ID":"9c3f77a6-b73d-4572-9e3c-57622161ebab","Type":"ContainerDied","Data":"e805e212c2a8d910e4812597491e17968ae4969f60f7e1630b50cf7c475b216f"} Mar 12 15:02:44 crc kubenswrapper[4778]: I0312 15:02:44.983755 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wfdzw" Mar 12 15:02:45 crc kubenswrapper[4778]: I0312 15:02:45.432914 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wfdzw"] Mar 12 15:02:45 crc kubenswrapper[4778]: I0312 15:02:45.905546 4778 generic.go:334] "Generic (PLEG): container finished" podID="d8e947a7-cded-4a65-9b13-96116f14554a" containerID="ca2b00af1cbd99c0d40d91469371bc813d95cbd3211d2c77ddb4bffd816ae517" exitCode=0 Mar 12 15:02:45 crc kubenswrapper[4778]: I0312 15:02:45.905703 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wfdzw" event={"ID":"d8e947a7-cded-4a65-9b13-96116f14554a","Type":"ContainerDied","Data":"ca2b00af1cbd99c0d40d91469371bc813d95cbd3211d2c77ddb4bffd816ae517"} Mar 12 15:02:45 crc kubenswrapper[4778]: I0312 15:02:45.906088 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wfdzw" event={"ID":"d8e947a7-cded-4a65-9b13-96116f14554a","Type":"ContainerStarted","Data":"fc06856e4d46053cfe775ea619396f83fa56c6a09a352c79ce3aaf57d3b1e242"} Mar 12 15:02:45 crc kubenswrapper[4778]: I0312 15:02:45.912255 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h8jzw" event={"ID":"9c3f77a6-b73d-4572-9e3c-57622161ebab","Type":"ContainerStarted","Data":"8299d570ee7a131ad1e48e458fb394fad06615e92119ee1cd3fcb7838dce6a91"} Mar 12 15:02:46 crc kubenswrapper[4778]: E0312 15:02:46.801629 4778 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c3f77a6_b73d_4572_9e3c_57622161ebab.slice/crio-conmon-8299d570ee7a131ad1e48e458fb394fad06615e92119ee1cd3fcb7838dce6a91.scope\": RecentStats: unable to find data in memory cache]" Mar 12 15:02:46 crc kubenswrapper[4778]: I0312 15:02:46.923350 4778 generic.go:334] "Generic (PLEG): 
container finished" podID="9c3f77a6-b73d-4572-9e3c-57622161ebab" containerID="8299d570ee7a131ad1e48e458fb394fad06615e92119ee1cd3fcb7838dce6a91" exitCode=0 Mar 12 15:02:46 crc kubenswrapper[4778]: I0312 15:02:46.923477 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h8jzw" event={"ID":"9c3f77a6-b73d-4572-9e3c-57622161ebab","Type":"ContainerDied","Data":"8299d570ee7a131ad1e48e458fb394fad06615e92119ee1cd3fcb7838dce6a91"} Mar 12 15:02:47 crc kubenswrapper[4778]: I0312 15:02:47.935239 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wfdzw" event={"ID":"d8e947a7-cded-4a65-9b13-96116f14554a","Type":"ContainerStarted","Data":"825a3ee224fa4e45b5da52433ce6b8e182cae07f2f9fc2009e00554dfb8ffd2b"} Mar 12 15:02:47 crc kubenswrapper[4778]: I0312 15:02:47.938754 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h8jzw" event={"ID":"9c3f77a6-b73d-4572-9e3c-57622161ebab","Type":"ContainerStarted","Data":"e2eaa7378d99f8573e564b6e7f6cad396b4f6de5d0a6f8d2464a5ad678335822"} Mar 12 15:02:47 crc kubenswrapper[4778]: I0312 15:02:47.990396 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-h8jzw" podStartSLOduration=3.453537838 podStartE2EDuration="5.990371287s" podCreationTimestamp="2026-03-12 15:02:42 +0000 UTC" firstStartedPulling="2026-03-12 15:02:44.895915289 +0000 UTC m=+6783.344610685" lastFinishedPulling="2026-03-12 15:02:47.432748738 +0000 UTC m=+6785.881444134" observedRunningTime="2026-03-12 15:02:47.989200033 +0000 UTC m=+6786.437895429" watchObservedRunningTime="2026-03-12 15:02:47.990371287 +0000 UTC m=+6786.439066683" Mar 12 15:02:48 crc kubenswrapper[4778]: I0312 15:02:48.947927 4778 generic.go:334] "Generic (PLEG): container finished" podID="d8e947a7-cded-4a65-9b13-96116f14554a" 
containerID="825a3ee224fa4e45b5da52433ce6b8e182cae07f2f9fc2009e00554dfb8ffd2b" exitCode=0 Mar 12 15:02:48 crc kubenswrapper[4778]: I0312 15:02:48.949361 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wfdzw" event={"ID":"d8e947a7-cded-4a65-9b13-96116f14554a","Type":"ContainerDied","Data":"825a3ee224fa4e45b5da52433ce6b8e182cae07f2f9fc2009e00554dfb8ffd2b"} Mar 12 15:02:49 crc kubenswrapper[4778]: I0312 15:02:49.958953 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wfdzw" event={"ID":"d8e947a7-cded-4a65-9b13-96116f14554a","Type":"ContainerStarted","Data":"d8872ce63bbd32f8c586ac1d8dd82e4953cef5369d5780daadde079262b016fc"} Mar 12 15:02:49 crc kubenswrapper[4778]: I0312 15:02:49.986656 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wfdzw" podStartSLOduration=2.4527912179999998 podStartE2EDuration="5.98663979s" podCreationTimestamp="2026-03-12 15:02:44 +0000 UTC" firstStartedPulling="2026-03-12 15:02:45.907547591 +0000 UTC m=+6784.356242987" lastFinishedPulling="2026-03-12 15:02:49.441396163 +0000 UTC m=+6787.890091559" observedRunningTime="2026-03-12 15:02:49.97714872 +0000 UTC m=+6788.425844116" watchObservedRunningTime="2026-03-12 15:02:49.98663979 +0000 UTC m=+6788.435335186" Mar 12 15:02:53 crc kubenswrapper[4778]: I0312 15:02:53.176599 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-h8jzw" Mar 12 15:02:53 crc kubenswrapper[4778]: I0312 15:02:53.177147 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-h8jzw" Mar 12 15:02:54 crc kubenswrapper[4778]: I0312 15:02:54.236673 4778 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-h8jzw" podUID="9c3f77a6-b73d-4572-9e3c-57622161ebab" containerName="registry-server" 
probeResult="failure" output=< Mar 12 15:02:54 crc kubenswrapper[4778]: timeout: failed to connect service ":50051" within 1s Mar 12 15:02:54 crc kubenswrapper[4778]: > Mar 12 15:02:54 crc kubenswrapper[4778]: I0312 15:02:54.984538 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wfdzw" Mar 12 15:02:54 crc kubenswrapper[4778]: I0312 15:02:54.984608 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wfdzw" Mar 12 15:02:56 crc kubenswrapper[4778]: I0312 15:02:56.038779 4778 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-wfdzw" podUID="d8e947a7-cded-4a65-9b13-96116f14554a" containerName="registry-server" probeResult="failure" output=< Mar 12 15:02:56 crc kubenswrapper[4778]: timeout: failed to connect service ":50051" within 1s Mar 12 15:02:56 crc kubenswrapper[4778]: > Mar 12 15:03:03 crc kubenswrapper[4778]: I0312 15:03:03.225897 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-h8jzw" Mar 12 15:03:03 crc kubenswrapper[4778]: I0312 15:03:03.293496 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-h8jzw" Mar 12 15:03:03 crc kubenswrapper[4778]: I0312 15:03:03.462443 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h8jzw"] Mar 12 15:03:05 crc kubenswrapper[4778]: I0312 15:03:05.039526 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wfdzw" Mar 12 15:03:05 crc kubenswrapper[4778]: I0312 15:03:05.083703 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wfdzw" Mar 12 15:03:05 crc kubenswrapper[4778]: I0312 15:03:05.093467 4778 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-marketplace/community-operators-h8jzw" podUID="9c3f77a6-b73d-4572-9e3c-57622161ebab" containerName="registry-server" containerID="cri-o://e2eaa7378d99f8573e564b6e7f6cad396b4f6de5d0a6f8d2464a5ad678335822" gracePeriod=2 Mar 12 15:03:05 crc kubenswrapper[4778]: I0312 15:03:05.616506 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h8jzw" Mar 12 15:03:05 crc kubenswrapper[4778]: I0312 15:03:05.737410 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c3f77a6-b73d-4572-9e3c-57622161ebab-catalog-content\") pod \"9c3f77a6-b73d-4572-9e3c-57622161ebab\" (UID: \"9c3f77a6-b73d-4572-9e3c-57622161ebab\") " Mar 12 15:03:05 crc kubenswrapper[4778]: I0312 15:03:05.737571 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24lfq\" (UniqueName: \"kubernetes.io/projected/9c3f77a6-b73d-4572-9e3c-57622161ebab-kube-api-access-24lfq\") pod \"9c3f77a6-b73d-4572-9e3c-57622161ebab\" (UID: \"9c3f77a6-b73d-4572-9e3c-57622161ebab\") " Mar 12 15:03:05 crc kubenswrapper[4778]: I0312 15:03:05.737703 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c3f77a6-b73d-4572-9e3c-57622161ebab-utilities\") pod \"9c3f77a6-b73d-4572-9e3c-57622161ebab\" (UID: \"9c3f77a6-b73d-4572-9e3c-57622161ebab\") " Mar 12 15:03:05 crc kubenswrapper[4778]: I0312 15:03:05.738456 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c3f77a6-b73d-4572-9e3c-57622161ebab-utilities" (OuterVolumeSpecName: "utilities") pod "9c3f77a6-b73d-4572-9e3c-57622161ebab" (UID: "9c3f77a6-b73d-4572-9e3c-57622161ebab"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:03:05 crc kubenswrapper[4778]: I0312 15:03:05.744679 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c3f77a6-b73d-4572-9e3c-57622161ebab-kube-api-access-24lfq" (OuterVolumeSpecName: "kube-api-access-24lfq") pod "9c3f77a6-b73d-4572-9e3c-57622161ebab" (UID: "9c3f77a6-b73d-4572-9e3c-57622161ebab"). InnerVolumeSpecName "kube-api-access-24lfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:03:05 crc kubenswrapper[4778]: I0312 15:03:05.797911 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c3f77a6-b73d-4572-9e3c-57622161ebab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9c3f77a6-b73d-4572-9e3c-57622161ebab" (UID: "9c3f77a6-b73d-4572-9e3c-57622161ebab"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:03:05 crc kubenswrapper[4778]: I0312 15:03:05.840399 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24lfq\" (UniqueName: \"kubernetes.io/projected/9c3f77a6-b73d-4572-9e3c-57622161ebab-kube-api-access-24lfq\") on node \"crc\" DevicePath \"\"" Mar 12 15:03:05 crc kubenswrapper[4778]: I0312 15:03:05.840436 4778 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c3f77a6-b73d-4572-9e3c-57622161ebab-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 15:03:05 crc kubenswrapper[4778]: I0312 15:03:05.840447 4778 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c3f77a6-b73d-4572-9e3c-57622161ebab-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 15:03:06 crc kubenswrapper[4778]: I0312 15:03:06.060991 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wfdzw"] Mar 12 15:03:06 crc kubenswrapper[4778]: I0312 
15:03:06.102106 4778 generic.go:334] "Generic (PLEG): container finished" podID="9c3f77a6-b73d-4572-9e3c-57622161ebab" containerID="e2eaa7378d99f8573e564b6e7f6cad396b4f6de5d0a6f8d2464a5ad678335822" exitCode=0 Mar 12 15:03:06 crc kubenswrapper[4778]: I0312 15:03:06.102290 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wfdzw" podUID="d8e947a7-cded-4a65-9b13-96116f14554a" containerName="registry-server" containerID="cri-o://d8872ce63bbd32f8c586ac1d8dd82e4953cef5369d5780daadde079262b016fc" gracePeriod=2 Mar 12 15:03:06 crc kubenswrapper[4778]: I0312 15:03:06.102587 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h8jzw" Mar 12 15:03:06 crc kubenswrapper[4778]: I0312 15:03:06.107258 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h8jzw" event={"ID":"9c3f77a6-b73d-4572-9e3c-57622161ebab","Type":"ContainerDied","Data":"e2eaa7378d99f8573e564b6e7f6cad396b4f6de5d0a6f8d2464a5ad678335822"} Mar 12 15:03:06 crc kubenswrapper[4778]: I0312 15:03:06.107304 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h8jzw" event={"ID":"9c3f77a6-b73d-4572-9e3c-57622161ebab","Type":"ContainerDied","Data":"bc58286ff5647c69df9b0e6066c0da0456949211defbad08ed82bfddbfdcb9d5"} Mar 12 15:03:06 crc kubenswrapper[4778]: I0312 15:03:06.107326 4778 scope.go:117] "RemoveContainer" containerID="e2eaa7378d99f8573e564b6e7f6cad396b4f6de5d0a6f8d2464a5ad678335822" Mar 12 15:03:06 crc kubenswrapper[4778]: I0312 15:03:06.143646 4778 scope.go:117] "RemoveContainer" containerID="8299d570ee7a131ad1e48e458fb394fad06615e92119ee1cd3fcb7838dce6a91" Mar 12 15:03:06 crc kubenswrapper[4778]: I0312 15:03:06.159062 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h8jzw"] Mar 12 15:03:06 crc kubenswrapper[4778]: I0312 
15:03:06.167423 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-h8jzw"] Mar 12 15:03:06 crc kubenswrapper[4778]: I0312 15:03:06.171437 4778 scope.go:117] "RemoveContainer" containerID="e805e212c2a8d910e4812597491e17968ae4969f60f7e1630b50cf7c475b216f" Mar 12 15:03:06 crc kubenswrapper[4778]: I0312 15:03:06.268470 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c3f77a6-b73d-4572-9e3c-57622161ebab" path="/var/lib/kubelet/pods/9c3f77a6-b73d-4572-9e3c-57622161ebab/volumes" Mar 12 15:03:06 crc kubenswrapper[4778]: I0312 15:03:06.326853 4778 scope.go:117] "RemoveContainer" containerID="e2eaa7378d99f8573e564b6e7f6cad396b4f6de5d0a6f8d2464a5ad678335822" Mar 12 15:03:06 crc kubenswrapper[4778]: E0312 15:03:06.327568 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2eaa7378d99f8573e564b6e7f6cad396b4f6de5d0a6f8d2464a5ad678335822\": container with ID starting with e2eaa7378d99f8573e564b6e7f6cad396b4f6de5d0a6f8d2464a5ad678335822 not found: ID does not exist" containerID="e2eaa7378d99f8573e564b6e7f6cad396b4f6de5d0a6f8d2464a5ad678335822" Mar 12 15:03:06 crc kubenswrapper[4778]: I0312 15:03:06.327598 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2eaa7378d99f8573e564b6e7f6cad396b4f6de5d0a6f8d2464a5ad678335822"} err="failed to get container status \"e2eaa7378d99f8573e564b6e7f6cad396b4f6de5d0a6f8d2464a5ad678335822\": rpc error: code = NotFound desc = could not find container \"e2eaa7378d99f8573e564b6e7f6cad396b4f6de5d0a6f8d2464a5ad678335822\": container with ID starting with e2eaa7378d99f8573e564b6e7f6cad396b4f6de5d0a6f8d2464a5ad678335822 not found: ID does not exist" Mar 12 15:03:06 crc kubenswrapper[4778]: I0312 15:03:06.327618 4778 scope.go:117] "RemoveContainer" containerID="8299d570ee7a131ad1e48e458fb394fad06615e92119ee1cd3fcb7838dce6a91" Mar 12 15:03:06 crc 
kubenswrapper[4778]: E0312 15:03:06.328588 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8299d570ee7a131ad1e48e458fb394fad06615e92119ee1cd3fcb7838dce6a91\": container with ID starting with 8299d570ee7a131ad1e48e458fb394fad06615e92119ee1cd3fcb7838dce6a91 not found: ID does not exist" containerID="8299d570ee7a131ad1e48e458fb394fad06615e92119ee1cd3fcb7838dce6a91" Mar 12 15:03:06 crc kubenswrapper[4778]: I0312 15:03:06.328641 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8299d570ee7a131ad1e48e458fb394fad06615e92119ee1cd3fcb7838dce6a91"} err="failed to get container status \"8299d570ee7a131ad1e48e458fb394fad06615e92119ee1cd3fcb7838dce6a91\": rpc error: code = NotFound desc = could not find container \"8299d570ee7a131ad1e48e458fb394fad06615e92119ee1cd3fcb7838dce6a91\": container with ID starting with 8299d570ee7a131ad1e48e458fb394fad06615e92119ee1cd3fcb7838dce6a91 not found: ID does not exist" Mar 12 15:03:06 crc kubenswrapper[4778]: I0312 15:03:06.328676 4778 scope.go:117] "RemoveContainer" containerID="e805e212c2a8d910e4812597491e17968ae4969f60f7e1630b50cf7c475b216f" Mar 12 15:03:06 crc kubenswrapper[4778]: E0312 15:03:06.328924 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e805e212c2a8d910e4812597491e17968ae4969f60f7e1630b50cf7c475b216f\": container with ID starting with e805e212c2a8d910e4812597491e17968ae4969f60f7e1630b50cf7c475b216f not found: ID does not exist" containerID="e805e212c2a8d910e4812597491e17968ae4969f60f7e1630b50cf7c475b216f" Mar 12 15:03:06 crc kubenswrapper[4778]: I0312 15:03:06.328953 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e805e212c2a8d910e4812597491e17968ae4969f60f7e1630b50cf7c475b216f"} err="failed to get container status 
\"e805e212c2a8d910e4812597491e17968ae4969f60f7e1630b50cf7c475b216f\": rpc error: code = NotFound desc = could not find container \"e805e212c2a8d910e4812597491e17968ae4969f60f7e1630b50cf7c475b216f\": container with ID starting with e805e212c2a8d910e4812597491e17968ae4969f60f7e1630b50cf7c475b216f not found: ID does not exist" Mar 12 15:03:06 crc kubenswrapper[4778]: I0312 15:03:06.543834 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wfdzw" Mar 12 15:03:06 crc kubenswrapper[4778]: I0312 15:03:06.658957 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8e947a7-cded-4a65-9b13-96116f14554a-catalog-content\") pod \"d8e947a7-cded-4a65-9b13-96116f14554a\" (UID: \"d8e947a7-cded-4a65-9b13-96116f14554a\") " Mar 12 15:03:06 crc kubenswrapper[4778]: I0312 15:03:06.659237 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8e947a7-cded-4a65-9b13-96116f14554a-utilities\") pod \"d8e947a7-cded-4a65-9b13-96116f14554a\" (UID: \"d8e947a7-cded-4a65-9b13-96116f14554a\") " Mar 12 15:03:06 crc kubenswrapper[4778]: I0312 15:03:06.659290 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnxq6\" (UniqueName: \"kubernetes.io/projected/d8e947a7-cded-4a65-9b13-96116f14554a-kube-api-access-mnxq6\") pod \"d8e947a7-cded-4a65-9b13-96116f14554a\" (UID: \"d8e947a7-cded-4a65-9b13-96116f14554a\") " Mar 12 15:03:06 crc kubenswrapper[4778]: I0312 15:03:06.661920 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8e947a7-cded-4a65-9b13-96116f14554a-utilities" (OuterVolumeSpecName: "utilities") pod "d8e947a7-cded-4a65-9b13-96116f14554a" (UID: "d8e947a7-cded-4a65-9b13-96116f14554a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:03:06 crc kubenswrapper[4778]: I0312 15:03:06.665229 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8e947a7-cded-4a65-9b13-96116f14554a-kube-api-access-mnxq6" (OuterVolumeSpecName: "kube-api-access-mnxq6") pod "d8e947a7-cded-4a65-9b13-96116f14554a" (UID: "d8e947a7-cded-4a65-9b13-96116f14554a"). InnerVolumeSpecName "kube-api-access-mnxq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:03:06 crc kubenswrapper[4778]: I0312 15:03:06.694329 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8e947a7-cded-4a65-9b13-96116f14554a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d8e947a7-cded-4a65-9b13-96116f14554a" (UID: "d8e947a7-cded-4a65-9b13-96116f14554a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:03:06 crc kubenswrapper[4778]: I0312 15:03:06.761084 4778 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8e947a7-cded-4a65-9b13-96116f14554a-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 15:03:06 crc kubenswrapper[4778]: I0312 15:03:06.761124 4778 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8e947a7-cded-4a65-9b13-96116f14554a-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 15:03:06 crc kubenswrapper[4778]: I0312 15:03:06.761135 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnxq6\" (UniqueName: \"kubernetes.io/projected/d8e947a7-cded-4a65-9b13-96116f14554a-kube-api-access-mnxq6\") on node \"crc\" DevicePath \"\"" Mar 12 15:03:07 crc kubenswrapper[4778]: I0312 15:03:07.114965 4778 generic.go:334] "Generic (PLEG): container finished" podID="d8e947a7-cded-4a65-9b13-96116f14554a" 
containerID="d8872ce63bbd32f8c586ac1d8dd82e4953cef5369d5780daadde079262b016fc" exitCode=0 Mar 12 15:03:07 crc kubenswrapper[4778]: I0312 15:03:07.115136 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wfdzw" event={"ID":"d8e947a7-cded-4a65-9b13-96116f14554a","Type":"ContainerDied","Data":"d8872ce63bbd32f8c586ac1d8dd82e4953cef5369d5780daadde079262b016fc"} Mar 12 15:03:07 crc kubenswrapper[4778]: I0312 15:03:07.115198 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wfdzw" event={"ID":"d8e947a7-cded-4a65-9b13-96116f14554a","Type":"ContainerDied","Data":"fc06856e4d46053cfe775ea619396f83fa56c6a09a352c79ce3aaf57d3b1e242"} Mar 12 15:03:07 crc kubenswrapper[4778]: I0312 15:03:07.115216 4778 scope.go:117] "RemoveContainer" containerID="d8872ce63bbd32f8c586ac1d8dd82e4953cef5369d5780daadde079262b016fc" Mar 12 15:03:07 crc kubenswrapper[4778]: I0312 15:03:07.116809 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wfdzw" Mar 12 15:03:07 crc kubenswrapper[4778]: I0312 15:03:07.146345 4778 scope.go:117] "RemoveContainer" containerID="825a3ee224fa4e45b5da52433ce6b8e182cae07f2f9fc2009e00554dfb8ffd2b" Mar 12 15:03:07 crc kubenswrapper[4778]: I0312 15:03:07.172531 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wfdzw"] Mar 12 15:03:07 crc kubenswrapper[4778]: I0312 15:03:07.182922 4778 scope.go:117] "RemoveContainer" containerID="ca2b00af1cbd99c0d40d91469371bc813d95cbd3211d2c77ddb4bffd816ae517" Mar 12 15:03:07 crc kubenswrapper[4778]: I0312 15:03:07.183423 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wfdzw"] Mar 12 15:03:07 crc kubenswrapper[4778]: I0312 15:03:07.211753 4778 scope.go:117] "RemoveContainer" containerID="d8872ce63bbd32f8c586ac1d8dd82e4953cef5369d5780daadde079262b016fc" Mar 12 15:03:07 crc kubenswrapper[4778]: E0312 15:03:07.212365 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8872ce63bbd32f8c586ac1d8dd82e4953cef5369d5780daadde079262b016fc\": container with ID starting with d8872ce63bbd32f8c586ac1d8dd82e4953cef5369d5780daadde079262b016fc not found: ID does not exist" containerID="d8872ce63bbd32f8c586ac1d8dd82e4953cef5369d5780daadde079262b016fc" Mar 12 15:03:07 crc kubenswrapper[4778]: I0312 15:03:07.212423 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8872ce63bbd32f8c586ac1d8dd82e4953cef5369d5780daadde079262b016fc"} err="failed to get container status \"d8872ce63bbd32f8c586ac1d8dd82e4953cef5369d5780daadde079262b016fc\": rpc error: code = NotFound desc = could not find container \"d8872ce63bbd32f8c586ac1d8dd82e4953cef5369d5780daadde079262b016fc\": container with ID starting with d8872ce63bbd32f8c586ac1d8dd82e4953cef5369d5780daadde079262b016fc not found: 
ID does not exist" Mar 12 15:03:07 crc kubenswrapper[4778]: I0312 15:03:07.212455 4778 scope.go:117] "RemoveContainer" containerID="825a3ee224fa4e45b5da52433ce6b8e182cae07f2f9fc2009e00554dfb8ffd2b" Mar 12 15:03:07 crc kubenswrapper[4778]: E0312 15:03:07.215068 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"825a3ee224fa4e45b5da52433ce6b8e182cae07f2f9fc2009e00554dfb8ffd2b\": container with ID starting with 825a3ee224fa4e45b5da52433ce6b8e182cae07f2f9fc2009e00554dfb8ffd2b not found: ID does not exist" containerID="825a3ee224fa4e45b5da52433ce6b8e182cae07f2f9fc2009e00554dfb8ffd2b" Mar 12 15:03:07 crc kubenswrapper[4778]: I0312 15:03:07.215130 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"825a3ee224fa4e45b5da52433ce6b8e182cae07f2f9fc2009e00554dfb8ffd2b"} err="failed to get container status \"825a3ee224fa4e45b5da52433ce6b8e182cae07f2f9fc2009e00554dfb8ffd2b\": rpc error: code = NotFound desc = could not find container \"825a3ee224fa4e45b5da52433ce6b8e182cae07f2f9fc2009e00554dfb8ffd2b\": container with ID starting with 825a3ee224fa4e45b5da52433ce6b8e182cae07f2f9fc2009e00554dfb8ffd2b not found: ID does not exist" Mar 12 15:03:07 crc kubenswrapper[4778]: I0312 15:03:07.215164 4778 scope.go:117] "RemoveContainer" containerID="ca2b00af1cbd99c0d40d91469371bc813d95cbd3211d2c77ddb4bffd816ae517" Mar 12 15:03:07 crc kubenswrapper[4778]: E0312 15:03:07.215696 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca2b00af1cbd99c0d40d91469371bc813d95cbd3211d2c77ddb4bffd816ae517\": container with ID starting with ca2b00af1cbd99c0d40d91469371bc813d95cbd3211d2c77ddb4bffd816ae517 not found: ID does not exist" containerID="ca2b00af1cbd99c0d40d91469371bc813d95cbd3211d2c77ddb4bffd816ae517" Mar 12 15:03:07 crc kubenswrapper[4778]: I0312 15:03:07.215731 4778 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca2b00af1cbd99c0d40d91469371bc813d95cbd3211d2c77ddb4bffd816ae517"} err="failed to get container status \"ca2b00af1cbd99c0d40d91469371bc813d95cbd3211d2c77ddb4bffd816ae517\": rpc error: code = NotFound desc = could not find container \"ca2b00af1cbd99c0d40d91469371bc813d95cbd3211d2c77ddb4bffd816ae517\": container with ID starting with ca2b00af1cbd99c0d40d91469371bc813d95cbd3211d2c77ddb4bffd816ae517 not found: ID does not exist" Mar 12 15:03:07 crc kubenswrapper[4778]: E0312 15:03:07.344811 4778 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8e947a7_cded_4a65_9b13_96116f14554a.slice/crio-fc06856e4d46053cfe775ea619396f83fa56c6a09a352c79ce3aaf57d3b1e242\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8e947a7_cded_4a65_9b13_96116f14554a.slice\": RecentStats: unable to find data in memory cache]" Mar 12 15:03:08 crc kubenswrapper[4778]: I0312 15:03:08.267677 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8e947a7-cded-4a65-9b13-96116f14554a" path="/var/lib/kubelet/pods/d8e947a7-cded-4a65-9b13-96116f14554a/volumes" Mar 12 15:03:11 crc kubenswrapper[4778]: I0312 15:03:11.160341 4778 generic.go:334] "Generic (PLEG): container finished" podID="25b8a8bd-e1c2-44d9-8a16-a1efc4aaf657" containerID="32e69f17a15da926e453ec4388e2482d274516709eb37f6124496feae6a6509f" exitCode=0 Mar 12 15:03:11 crc kubenswrapper[4778]: I0312 15:03:11.160434 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rkpvq/crc-debug-2n5vv" event={"ID":"25b8a8bd-e1c2-44d9-8a16-a1efc4aaf657","Type":"ContainerDied","Data":"32e69f17a15da926e453ec4388e2482d274516709eb37f6124496feae6a6509f"} Mar 12 15:03:12 crc kubenswrapper[4778]: I0312 15:03:12.285705 4778 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-must-gather-rkpvq/crc-debug-2n5vv" Mar 12 15:03:12 crc kubenswrapper[4778]: I0312 15:03:12.317092 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rkpvq/crc-debug-2n5vv"] Mar 12 15:03:12 crc kubenswrapper[4778]: I0312 15:03:12.329011 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rkpvq/crc-debug-2n5vv"] Mar 12 15:03:12 crc kubenswrapper[4778]: I0312 15:03:12.478778 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/25b8a8bd-e1c2-44d9-8a16-a1efc4aaf657-host\") pod \"25b8a8bd-e1c2-44d9-8a16-a1efc4aaf657\" (UID: \"25b8a8bd-e1c2-44d9-8a16-a1efc4aaf657\") " Mar 12 15:03:12 crc kubenswrapper[4778]: I0312 15:03:12.478855 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9m8w\" (UniqueName: \"kubernetes.io/projected/25b8a8bd-e1c2-44d9-8a16-a1efc4aaf657-kube-api-access-d9m8w\") pod \"25b8a8bd-e1c2-44d9-8a16-a1efc4aaf657\" (UID: \"25b8a8bd-e1c2-44d9-8a16-a1efc4aaf657\") " Mar 12 15:03:12 crc kubenswrapper[4778]: I0312 15:03:12.479337 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25b8a8bd-e1c2-44d9-8a16-a1efc4aaf657-host" (OuterVolumeSpecName: "host") pod "25b8a8bd-e1c2-44d9-8a16-a1efc4aaf657" (UID: "25b8a8bd-e1c2-44d9-8a16-a1efc4aaf657"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 15:03:12 crc kubenswrapper[4778]: I0312 15:03:12.479702 4778 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/25b8a8bd-e1c2-44d9-8a16-a1efc4aaf657-host\") on node \"crc\" DevicePath \"\"" Mar 12 15:03:12 crc kubenswrapper[4778]: I0312 15:03:12.485978 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25b8a8bd-e1c2-44d9-8a16-a1efc4aaf657-kube-api-access-d9m8w" (OuterVolumeSpecName: "kube-api-access-d9m8w") pod "25b8a8bd-e1c2-44d9-8a16-a1efc4aaf657" (UID: "25b8a8bd-e1c2-44d9-8a16-a1efc4aaf657"). InnerVolumeSpecName "kube-api-access-d9m8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:03:12 crc kubenswrapper[4778]: I0312 15:03:12.581995 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9m8w\" (UniqueName: \"kubernetes.io/projected/25b8a8bd-e1c2-44d9-8a16-a1efc4aaf657-kube-api-access-d9m8w\") on node \"crc\" DevicePath \"\"" Mar 12 15:03:13 crc kubenswrapper[4778]: I0312 15:03:13.178668 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94f570da92f9fd8977fa9f3cbc8d1dfd2eab1360e49ff3877e319f0a0fea4cfa" Mar 12 15:03:13 crc kubenswrapper[4778]: I0312 15:03:13.178730 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rkpvq/crc-debug-2n5vv" Mar 12 15:03:13 crc kubenswrapper[4778]: I0312 15:03:13.506783 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rkpvq/crc-debug-skmxt"] Mar 12 15:03:13 crc kubenswrapper[4778]: E0312 15:03:13.507289 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c3f77a6-b73d-4572-9e3c-57622161ebab" containerName="registry-server" Mar 12 15:03:13 crc kubenswrapper[4778]: I0312 15:03:13.507306 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c3f77a6-b73d-4572-9e3c-57622161ebab" containerName="registry-server" Mar 12 15:03:13 crc kubenswrapper[4778]: E0312 15:03:13.507325 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8e947a7-cded-4a65-9b13-96116f14554a" containerName="extract-utilities" Mar 12 15:03:13 crc kubenswrapper[4778]: I0312 15:03:13.507333 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8e947a7-cded-4a65-9b13-96116f14554a" containerName="extract-utilities" Mar 12 15:03:13 crc kubenswrapper[4778]: E0312 15:03:13.507398 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c3f77a6-b73d-4572-9e3c-57622161ebab" containerName="extract-utilities" Mar 12 15:03:13 crc kubenswrapper[4778]: I0312 15:03:13.507409 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c3f77a6-b73d-4572-9e3c-57622161ebab" containerName="extract-utilities" Mar 12 15:03:13 crc kubenswrapper[4778]: E0312 15:03:13.507450 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8e947a7-cded-4a65-9b13-96116f14554a" containerName="registry-server" Mar 12 15:03:13 crc kubenswrapper[4778]: I0312 15:03:13.507459 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8e947a7-cded-4a65-9b13-96116f14554a" containerName="registry-server" Mar 12 15:03:13 crc kubenswrapper[4778]: E0312 15:03:13.507475 4778 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d8e947a7-cded-4a65-9b13-96116f14554a" containerName="extract-content" Mar 12 15:03:13 crc kubenswrapper[4778]: I0312 15:03:13.507482 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8e947a7-cded-4a65-9b13-96116f14554a" containerName="extract-content" Mar 12 15:03:13 crc kubenswrapper[4778]: E0312 15:03:13.507501 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25b8a8bd-e1c2-44d9-8a16-a1efc4aaf657" containerName="container-00" Mar 12 15:03:13 crc kubenswrapper[4778]: I0312 15:03:13.507509 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="25b8a8bd-e1c2-44d9-8a16-a1efc4aaf657" containerName="container-00" Mar 12 15:03:13 crc kubenswrapper[4778]: E0312 15:03:13.507520 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c3f77a6-b73d-4572-9e3c-57622161ebab" containerName="extract-content" Mar 12 15:03:13 crc kubenswrapper[4778]: I0312 15:03:13.507528 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c3f77a6-b73d-4572-9e3c-57622161ebab" containerName="extract-content" Mar 12 15:03:13 crc kubenswrapper[4778]: I0312 15:03:13.507770 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c3f77a6-b73d-4572-9e3c-57622161ebab" containerName="registry-server" Mar 12 15:03:13 crc kubenswrapper[4778]: I0312 15:03:13.507790 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="25b8a8bd-e1c2-44d9-8a16-a1efc4aaf657" containerName="container-00" Mar 12 15:03:13 crc kubenswrapper[4778]: I0312 15:03:13.507809 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8e947a7-cded-4a65-9b13-96116f14554a" containerName="registry-server" Mar 12 15:03:13 crc kubenswrapper[4778]: I0312 15:03:13.508517 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rkpvq/crc-debug-skmxt" Mar 12 15:03:13 crc kubenswrapper[4778]: I0312 15:03:13.700904 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5djj\" (UniqueName: \"kubernetes.io/projected/434eb649-8d17-4d84-977f-a1907290d0f4-kube-api-access-x5djj\") pod \"crc-debug-skmxt\" (UID: \"434eb649-8d17-4d84-977f-a1907290d0f4\") " pod="openshift-must-gather-rkpvq/crc-debug-skmxt" Mar 12 15:03:13 crc kubenswrapper[4778]: I0312 15:03:13.700968 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/434eb649-8d17-4d84-977f-a1907290d0f4-host\") pod \"crc-debug-skmxt\" (UID: \"434eb649-8d17-4d84-977f-a1907290d0f4\") " pod="openshift-must-gather-rkpvq/crc-debug-skmxt" Mar 12 15:03:13 crc kubenswrapper[4778]: I0312 15:03:13.803631 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5djj\" (UniqueName: \"kubernetes.io/projected/434eb649-8d17-4d84-977f-a1907290d0f4-kube-api-access-x5djj\") pod \"crc-debug-skmxt\" (UID: \"434eb649-8d17-4d84-977f-a1907290d0f4\") " pod="openshift-must-gather-rkpvq/crc-debug-skmxt" Mar 12 15:03:13 crc kubenswrapper[4778]: I0312 15:03:13.803724 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/434eb649-8d17-4d84-977f-a1907290d0f4-host\") pod \"crc-debug-skmxt\" (UID: \"434eb649-8d17-4d84-977f-a1907290d0f4\") " pod="openshift-must-gather-rkpvq/crc-debug-skmxt" Mar 12 15:03:13 crc kubenswrapper[4778]: I0312 15:03:13.803825 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/434eb649-8d17-4d84-977f-a1907290d0f4-host\") pod \"crc-debug-skmxt\" (UID: \"434eb649-8d17-4d84-977f-a1907290d0f4\") " pod="openshift-must-gather-rkpvq/crc-debug-skmxt" Mar 12 15:03:13 crc 
kubenswrapper[4778]: I0312 15:03:13.823519 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5djj\" (UniqueName: \"kubernetes.io/projected/434eb649-8d17-4d84-977f-a1907290d0f4-kube-api-access-x5djj\") pod \"crc-debug-skmxt\" (UID: \"434eb649-8d17-4d84-977f-a1907290d0f4\") " pod="openshift-must-gather-rkpvq/crc-debug-skmxt" Mar 12 15:03:13 crc kubenswrapper[4778]: I0312 15:03:13.828671 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rkpvq/crc-debug-skmxt" Mar 12 15:03:14 crc kubenswrapper[4778]: I0312 15:03:14.188508 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rkpvq/crc-debug-skmxt" event={"ID":"434eb649-8d17-4d84-977f-a1907290d0f4","Type":"ContainerStarted","Data":"e0b26a87b52c43c4e608ab6f59fc6362851c87a5e28dc029fccea83b1c3e5e7d"} Mar 12 15:03:14 crc kubenswrapper[4778]: I0312 15:03:14.189040 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rkpvq/crc-debug-skmxt" event={"ID":"434eb649-8d17-4d84-977f-a1907290d0f4","Type":"ContainerStarted","Data":"00e4b5ab7c733c109c6095f1277f619b9996f581fc3689633c33016f34739ddf"} Mar 12 15:03:14 crc kubenswrapper[4778]: I0312 15:03:14.216468 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-rkpvq/crc-debug-skmxt" podStartSLOduration=1.216445107 podStartE2EDuration="1.216445107s" podCreationTimestamp="2026-03-12 15:03:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:03:14.203278922 +0000 UTC m=+6812.651974328" watchObservedRunningTime="2026-03-12 15:03:14.216445107 +0000 UTC m=+6812.665140503" Mar 12 15:03:14 crc kubenswrapper[4778]: I0312 15:03:14.276098 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25b8a8bd-e1c2-44d9-8a16-a1efc4aaf657" 
path="/var/lib/kubelet/pods/25b8a8bd-e1c2-44d9-8a16-a1efc4aaf657/volumes" Mar 12 15:03:15 crc kubenswrapper[4778]: I0312 15:03:15.199291 4778 generic.go:334] "Generic (PLEG): container finished" podID="434eb649-8d17-4d84-977f-a1907290d0f4" containerID="e0b26a87b52c43c4e608ab6f59fc6362851c87a5e28dc029fccea83b1c3e5e7d" exitCode=0 Mar 12 15:03:15 crc kubenswrapper[4778]: I0312 15:03:15.199580 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rkpvq/crc-debug-skmxt" event={"ID":"434eb649-8d17-4d84-977f-a1907290d0f4","Type":"ContainerDied","Data":"e0b26a87b52c43c4e608ab6f59fc6362851c87a5e28dc029fccea83b1c3e5e7d"} Mar 12 15:03:16 crc kubenswrapper[4778]: I0312 15:03:16.307434 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rkpvq/crc-debug-skmxt" Mar 12 15:03:16 crc kubenswrapper[4778]: I0312 15:03:16.356852 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rkpvq/crc-debug-skmxt"] Mar 12 15:03:16 crc kubenswrapper[4778]: I0312 15:03:16.364558 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rkpvq/crc-debug-skmxt"] Mar 12 15:03:16 crc kubenswrapper[4778]: I0312 15:03:16.451424 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/434eb649-8d17-4d84-977f-a1907290d0f4-host\") pod \"434eb649-8d17-4d84-977f-a1907290d0f4\" (UID: \"434eb649-8d17-4d84-977f-a1907290d0f4\") " Mar 12 15:03:16 crc kubenswrapper[4778]: I0312 15:03:16.451566 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5djj\" (UniqueName: \"kubernetes.io/projected/434eb649-8d17-4d84-977f-a1907290d0f4-kube-api-access-x5djj\") pod \"434eb649-8d17-4d84-977f-a1907290d0f4\" (UID: \"434eb649-8d17-4d84-977f-a1907290d0f4\") " Mar 12 15:03:16 crc kubenswrapper[4778]: I0312 15:03:16.452523 4778 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/434eb649-8d17-4d84-977f-a1907290d0f4-host" (OuterVolumeSpecName: "host") pod "434eb649-8d17-4d84-977f-a1907290d0f4" (UID: "434eb649-8d17-4d84-977f-a1907290d0f4"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 15:03:16 crc kubenswrapper[4778]: I0312 15:03:16.461063 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/434eb649-8d17-4d84-977f-a1907290d0f4-kube-api-access-x5djj" (OuterVolumeSpecName: "kube-api-access-x5djj") pod "434eb649-8d17-4d84-977f-a1907290d0f4" (UID: "434eb649-8d17-4d84-977f-a1907290d0f4"). InnerVolumeSpecName "kube-api-access-x5djj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:03:16 crc kubenswrapper[4778]: I0312 15:03:16.554012 4778 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/434eb649-8d17-4d84-977f-a1907290d0f4-host\") on node \"crc\" DevicePath \"\"" Mar 12 15:03:16 crc kubenswrapper[4778]: I0312 15:03:16.554353 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5djj\" (UniqueName: \"kubernetes.io/projected/434eb649-8d17-4d84-977f-a1907290d0f4-kube-api-access-x5djj\") on node \"crc\" DevicePath \"\"" Mar 12 15:03:17 crc kubenswrapper[4778]: I0312 15:03:17.216743 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00e4b5ab7c733c109c6095f1277f619b9996f581fc3689633c33016f34739ddf" Mar 12 15:03:17 crc kubenswrapper[4778]: I0312 15:03:17.216805 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rkpvq/crc-debug-skmxt" Mar 12 15:03:17 crc kubenswrapper[4778]: I0312 15:03:17.636279 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rkpvq/crc-debug-njj2l"] Mar 12 15:03:17 crc kubenswrapper[4778]: E0312 15:03:17.636862 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="434eb649-8d17-4d84-977f-a1907290d0f4" containerName="container-00" Mar 12 15:03:17 crc kubenswrapper[4778]: I0312 15:03:17.636875 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="434eb649-8d17-4d84-977f-a1907290d0f4" containerName="container-00" Mar 12 15:03:17 crc kubenswrapper[4778]: I0312 15:03:17.637066 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="434eb649-8d17-4d84-977f-a1907290d0f4" containerName="container-00" Mar 12 15:03:17 crc kubenswrapper[4778]: I0312 15:03:17.637708 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rkpvq/crc-debug-njj2l" Mar 12 15:03:17 crc kubenswrapper[4778]: I0312 15:03:17.675981 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4b5de73b-7d89-4665-bd2a-c0efc03f4e1d-host\") pod \"crc-debug-njj2l\" (UID: \"4b5de73b-7d89-4665-bd2a-c0efc03f4e1d\") " pod="openshift-must-gather-rkpvq/crc-debug-njj2l" Mar 12 15:03:17 crc kubenswrapper[4778]: I0312 15:03:17.676046 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-576qw\" (UniqueName: \"kubernetes.io/projected/4b5de73b-7d89-4665-bd2a-c0efc03f4e1d-kube-api-access-576qw\") pod \"crc-debug-njj2l\" (UID: \"4b5de73b-7d89-4665-bd2a-c0efc03f4e1d\") " pod="openshift-must-gather-rkpvq/crc-debug-njj2l" Mar 12 15:03:17 crc kubenswrapper[4778]: I0312 15:03:17.778231 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/4b5de73b-7d89-4665-bd2a-c0efc03f4e1d-host\") pod \"crc-debug-njj2l\" (UID: \"4b5de73b-7d89-4665-bd2a-c0efc03f4e1d\") " pod="openshift-must-gather-rkpvq/crc-debug-njj2l" Mar 12 15:03:17 crc kubenswrapper[4778]: I0312 15:03:17.778295 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-576qw\" (UniqueName: \"kubernetes.io/projected/4b5de73b-7d89-4665-bd2a-c0efc03f4e1d-kube-api-access-576qw\") pod \"crc-debug-njj2l\" (UID: \"4b5de73b-7d89-4665-bd2a-c0efc03f4e1d\") " pod="openshift-must-gather-rkpvq/crc-debug-njj2l" Mar 12 15:03:17 crc kubenswrapper[4778]: I0312 15:03:17.778845 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4b5de73b-7d89-4665-bd2a-c0efc03f4e1d-host\") pod \"crc-debug-njj2l\" (UID: \"4b5de73b-7d89-4665-bd2a-c0efc03f4e1d\") " pod="openshift-must-gather-rkpvq/crc-debug-njj2l" Mar 12 15:03:17 crc kubenswrapper[4778]: I0312 15:03:17.795702 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-576qw\" (UniqueName: \"kubernetes.io/projected/4b5de73b-7d89-4665-bd2a-c0efc03f4e1d-kube-api-access-576qw\") pod \"crc-debug-njj2l\" (UID: \"4b5de73b-7d89-4665-bd2a-c0efc03f4e1d\") " pod="openshift-must-gather-rkpvq/crc-debug-njj2l" Mar 12 15:03:17 crc kubenswrapper[4778]: I0312 15:03:17.958720 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rkpvq/crc-debug-njj2l" Mar 12 15:03:17 crc kubenswrapper[4778]: W0312 15:03:17.998316 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b5de73b_7d89_4665_bd2a_c0efc03f4e1d.slice/crio-4784c68754a85c758318ccd5a6b608473f5e1fd2b1db1f7dffff85a1bff83cbd WatchSource:0}: Error finding container 4784c68754a85c758318ccd5a6b608473f5e1fd2b1db1f7dffff85a1bff83cbd: Status 404 returned error can't find the container with id 4784c68754a85c758318ccd5a6b608473f5e1fd2b1db1f7dffff85a1bff83cbd Mar 12 15:03:18 crc kubenswrapper[4778]: I0312 15:03:18.225265 4778 generic.go:334] "Generic (PLEG): container finished" podID="4b5de73b-7d89-4665-bd2a-c0efc03f4e1d" containerID="74340f2038c644c2a2c001699df4f77fd8e1cf73ce4885bad06a1749c4f74a6f" exitCode=0 Mar 12 15:03:18 crc kubenswrapper[4778]: I0312 15:03:18.225306 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rkpvq/crc-debug-njj2l" event={"ID":"4b5de73b-7d89-4665-bd2a-c0efc03f4e1d","Type":"ContainerDied","Data":"74340f2038c644c2a2c001699df4f77fd8e1cf73ce4885bad06a1749c4f74a6f"} Mar 12 15:03:18 crc kubenswrapper[4778]: I0312 15:03:18.225330 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rkpvq/crc-debug-njj2l" event={"ID":"4b5de73b-7d89-4665-bd2a-c0efc03f4e1d","Type":"ContainerStarted","Data":"4784c68754a85c758318ccd5a6b608473f5e1fd2b1db1f7dffff85a1bff83cbd"} Mar 12 15:03:18 crc kubenswrapper[4778]: I0312 15:03:18.269407 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="434eb649-8d17-4d84-977f-a1907290d0f4" path="/var/lib/kubelet/pods/434eb649-8d17-4d84-977f-a1907290d0f4/volumes" Mar 12 15:03:18 crc kubenswrapper[4778]: I0312 15:03:18.270059 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rkpvq/crc-debug-njj2l"] Mar 12 15:03:18 crc kubenswrapper[4778]: I0312 15:03:18.271649 4778 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rkpvq/crc-debug-njj2l"] Mar 12 15:03:19 crc kubenswrapper[4778]: I0312 15:03:19.334940 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rkpvq/crc-debug-njj2l" Mar 12 15:03:19 crc kubenswrapper[4778]: I0312 15:03:19.508996 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-576qw\" (UniqueName: \"kubernetes.io/projected/4b5de73b-7d89-4665-bd2a-c0efc03f4e1d-kube-api-access-576qw\") pod \"4b5de73b-7d89-4665-bd2a-c0efc03f4e1d\" (UID: \"4b5de73b-7d89-4665-bd2a-c0efc03f4e1d\") " Mar 12 15:03:19 crc kubenswrapper[4778]: I0312 15:03:19.509334 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4b5de73b-7d89-4665-bd2a-c0efc03f4e1d-host\") pod \"4b5de73b-7d89-4665-bd2a-c0efc03f4e1d\" (UID: \"4b5de73b-7d89-4665-bd2a-c0efc03f4e1d\") " Mar 12 15:03:19 crc kubenswrapper[4778]: I0312 15:03:19.509563 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b5de73b-7d89-4665-bd2a-c0efc03f4e1d-host" (OuterVolumeSpecName: "host") pod "4b5de73b-7d89-4665-bd2a-c0efc03f4e1d" (UID: "4b5de73b-7d89-4665-bd2a-c0efc03f4e1d"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 15:03:19 crc kubenswrapper[4778]: I0312 15:03:19.509908 4778 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4b5de73b-7d89-4665-bd2a-c0efc03f4e1d-host\") on node \"crc\" DevicePath \"\"" Mar 12 15:03:19 crc kubenswrapper[4778]: I0312 15:03:19.514117 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b5de73b-7d89-4665-bd2a-c0efc03f4e1d-kube-api-access-576qw" (OuterVolumeSpecName: "kube-api-access-576qw") pod "4b5de73b-7d89-4665-bd2a-c0efc03f4e1d" (UID: "4b5de73b-7d89-4665-bd2a-c0efc03f4e1d"). InnerVolumeSpecName "kube-api-access-576qw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:03:19 crc kubenswrapper[4778]: I0312 15:03:19.611799 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-576qw\" (UniqueName: \"kubernetes.io/projected/4b5de73b-7d89-4665-bd2a-c0efc03f4e1d-kube-api-access-576qw\") on node \"crc\" DevicePath \"\"" Mar 12 15:03:20 crc kubenswrapper[4778]: I0312 15:03:20.252957 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4784c68754a85c758318ccd5a6b608473f5e1fd2b1db1f7dffff85a1bff83cbd" Mar 12 15:03:20 crc kubenswrapper[4778]: I0312 15:03:20.253022 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rkpvq/crc-debug-njj2l" Mar 12 15:03:20 crc kubenswrapper[4778]: I0312 15:03:20.271621 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b5de73b-7d89-4665-bd2a-c0efc03f4e1d" path="/var/lib/kubelet/pods/4b5de73b-7d89-4665-bd2a-c0efc03f4e1d/volumes" Mar 12 15:03:40 crc kubenswrapper[4778]: I0312 15:03:40.956086 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lvp8p"] Mar 12 15:03:40 crc kubenswrapper[4778]: E0312 15:03:40.956939 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b5de73b-7d89-4665-bd2a-c0efc03f4e1d" containerName="container-00" Mar 12 15:03:40 crc kubenswrapper[4778]: I0312 15:03:40.956952 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b5de73b-7d89-4665-bd2a-c0efc03f4e1d" containerName="container-00" Mar 12 15:03:40 crc kubenswrapper[4778]: I0312 15:03:40.957160 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b5de73b-7d89-4665-bd2a-c0efc03f4e1d" containerName="container-00" Mar 12 15:03:40 crc kubenswrapper[4778]: I0312 15:03:40.962386 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lvp8p" Mar 12 15:03:40 crc kubenswrapper[4778]: I0312 15:03:40.973317 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lvp8p"] Mar 12 15:03:41 crc kubenswrapper[4778]: I0312 15:03:41.071788 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca67e14c-855d-473a-99b0-fe9dabb57916-catalog-content\") pod \"redhat-operators-lvp8p\" (UID: \"ca67e14c-855d-473a-99b0-fe9dabb57916\") " pod="openshift-marketplace/redhat-operators-lvp8p" Mar 12 15:03:41 crc kubenswrapper[4778]: I0312 15:03:41.071919 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca67e14c-855d-473a-99b0-fe9dabb57916-utilities\") pod \"redhat-operators-lvp8p\" (UID: \"ca67e14c-855d-473a-99b0-fe9dabb57916\") " pod="openshift-marketplace/redhat-operators-lvp8p" Mar 12 15:03:41 crc kubenswrapper[4778]: I0312 15:03:41.071962 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb7xc\" (UniqueName: \"kubernetes.io/projected/ca67e14c-855d-473a-99b0-fe9dabb57916-kube-api-access-sb7xc\") pod \"redhat-operators-lvp8p\" (UID: \"ca67e14c-855d-473a-99b0-fe9dabb57916\") " pod="openshift-marketplace/redhat-operators-lvp8p" Mar 12 15:03:41 crc kubenswrapper[4778]: I0312 15:03:41.173220 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca67e14c-855d-473a-99b0-fe9dabb57916-utilities\") pod \"redhat-operators-lvp8p\" (UID: \"ca67e14c-855d-473a-99b0-fe9dabb57916\") " pod="openshift-marketplace/redhat-operators-lvp8p" Mar 12 15:03:41 crc kubenswrapper[4778]: I0312 15:03:41.173525 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-sb7xc\" (UniqueName: \"kubernetes.io/projected/ca67e14c-855d-473a-99b0-fe9dabb57916-kube-api-access-sb7xc\") pod \"redhat-operators-lvp8p\" (UID: \"ca67e14c-855d-473a-99b0-fe9dabb57916\") " pod="openshift-marketplace/redhat-operators-lvp8p" Mar 12 15:03:41 crc kubenswrapper[4778]: I0312 15:03:41.173694 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca67e14c-855d-473a-99b0-fe9dabb57916-catalog-content\") pod \"redhat-operators-lvp8p\" (UID: \"ca67e14c-855d-473a-99b0-fe9dabb57916\") " pod="openshift-marketplace/redhat-operators-lvp8p" Mar 12 15:03:41 crc kubenswrapper[4778]: I0312 15:03:41.173836 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca67e14c-855d-473a-99b0-fe9dabb57916-utilities\") pod \"redhat-operators-lvp8p\" (UID: \"ca67e14c-855d-473a-99b0-fe9dabb57916\") " pod="openshift-marketplace/redhat-operators-lvp8p" Mar 12 15:03:41 crc kubenswrapper[4778]: I0312 15:03:41.174166 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca67e14c-855d-473a-99b0-fe9dabb57916-catalog-content\") pod \"redhat-operators-lvp8p\" (UID: \"ca67e14c-855d-473a-99b0-fe9dabb57916\") " pod="openshift-marketplace/redhat-operators-lvp8p" Mar 12 15:03:41 crc kubenswrapper[4778]: I0312 15:03:41.200123 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb7xc\" (UniqueName: \"kubernetes.io/projected/ca67e14c-855d-473a-99b0-fe9dabb57916-kube-api-access-sb7xc\") pod \"redhat-operators-lvp8p\" (UID: \"ca67e14c-855d-473a-99b0-fe9dabb57916\") " pod="openshift-marketplace/redhat-operators-lvp8p" Mar 12 15:03:41 crc kubenswrapper[4778]: I0312 15:03:41.341365 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lvp8p" Mar 12 15:03:41 crc kubenswrapper[4778]: I0312 15:03:41.845298 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lvp8p"] Mar 12 15:03:42 crc kubenswrapper[4778]: I0312 15:03:42.484406 4778 generic.go:334] "Generic (PLEG): container finished" podID="ca67e14c-855d-473a-99b0-fe9dabb57916" containerID="f5cf71f2c30496fc349cd115c9d22054161bef05f8a6c2dac0d3f20c006fccc5" exitCode=0 Mar 12 15:03:42 crc kubenswrapper[4778]: I0312 15:03:42.484487 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lvp8p" event={"ID":"ca67e14c-855d-473a-99b0-fe9dabb57916","Type":"ContainerDied","Data":"f5cf71f2c30496fc349cd115c9d22054161bef05f8a6c2dac0d3f20c006fccc5"} Mar 12 15:03:42 crc kubenswrapper[4778]: I0312 15:03:42.484748 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lvp8p" event={"ID":"ca67e14c-855d-473a-99b0-fe9dabb57916","Type":"ContainerStarted","Data":"5d37b81f2935c5aba929232694de9ae2f8c860dc2b2b539291da26409c61717c"} Mar 12 15:03:42 crc kubenswrapper[4778]: I0312 15:03:42.487799 4778 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 15:03:48 crc kubenswrapper[4778]: I0312 15:03:48.801823 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-86cb765474-5pq5z_6bd172c5-383f-4273-98a5-2c92223dc765/barbican-api/0.log" Mar 12 15:03:48 crc kubenswrapper[4778]: I0312 15:03:48.953664 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-86cb765474-5pq5z_6bd172c5-383f-4273-98a5-2c92223dc765/barbican-api-log/0.log" Mar 12 15:03:49 crc kubenswrapper[4778]: I0312 15:03:49.065506 4778 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-keystone-listener-65c9994dfd-xznqh_8ee1f546-8428-4b23-93e4-b8370fd4224b/barbican-keystone-listener/0.log" Mar 12 15:03:49 crc kubenswrapper[4778]: I0312 15:03:49.188637 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7dcf9787-ngc87_d505bb59-3c9e-4cfa-891c-c8e0068e2567/barbican-worker/0.log" Mar 12 15:03:49 crc kubenswrapper[4778]: I0312 15:03:49.336503 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-65c9994dfd-xznqh_8ee1f546-8428-4b23-93e4-b8370fd4224b/barbican-keystone-listener-log/0.log" Mar 12 15:03:49 crc kubenswrapper[4778]: I0312 15:03:49.345167 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7dcf9787-ngc87_d505bb59-3c9e-4cfa-891c-c8e0068e2567/barbican-worker-log/0.log" Mar 12 15:03:49 crc kubenswrapper[4778]: I0312 15:03:49.476689 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-ntpnx_b99627a8-43d8-4f7d-90f7-530eda3c2213/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:03:49 crc kubenswrapper[4778]: I0312 15:03:49.577414 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_9f1d0355-a73a-4a93-94fb-b439436cf1b1/ceilometer-central-agent/0.log" Mar 12 15:03:49 crc kubenswrapper[4778]: I0312 15:03:49.714064 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_9f1d0355-a73a-4a93-94fb-b439436cf1b1/proxy-httpd/0.log" Mar 12 15:03:49 crc kubenswrapper[4778]: I0312 15:03:49.762218 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_9f1d0355-a73a-4a93-94fb-b439436cf1b1/sg-core/0.log" Mar 12 15:03:49 crc kubenswrapper[4778]: I0312 15:03:49.805794 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_9f1d0355-a73a-4a93-94fb-b439436cf1b1/ceilometer-notification-agent/0.log" Mar 12 
15:03:49 crc kubenswrapper[4778]: I0312 15:03:49.970952 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_99f72014-50e8-4dd4-9764-1b2c7d546b30/cinder-api/0.log" Mar 12 15:03:49 crc kubenswrapper[4778]: I0312 15:03:49.995240 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_39ee2404-53a8-4598-8c4b-c3a34fbf3480/cinder-scheduler/0.log" Mar 12 15:03:50 crc kubenswrapper[4778]: I0312 15:03:50.026674 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_99f72014-50e8-4dd4-9764-1b2c7d546b30/cinder-api-log/0.log" Mar 12 15:03:50 crc kubenswrapper[4778]: I0312 15:03:50.204548 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_39ee2404-53a8-4598-8c4b-c3a34fbf3480/probe/0.log" Mar 12 15:03:50 crc kubenswrapper[4778]: I0312 15:03:50.222774 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-4szjl_5c5541f3-fb44-476b-91c2-b07dffe50894/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:03:50 crc kubenswrapper[4778]: I0312 15:03:50.437685 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-jg9z6_36bb4acd-fab3-4998-a8cd-a6ebcc800fc8/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:03:50 crc kubenswrapper[4778]: I0312 15:03:50.501103 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f89cfcd7f-vk6h4_46f34397-57fe-425d-b69d-040f4384ac69/init/0.log" Mar 12 15:03:50 crc kubenswrapper[4778]: I0312 15:03:50.668451 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f89cfcd7f-vk6h4_46f34397-57fe-425d-b69d-040f4384ac69/init/0.log" Mar 12 15:03:50 crc kubenswrapper[4778]: I0312 15:03:50.884589 4778 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-2xksx_96ba9a1b-ae5f-4b42-b8eb-1f0e3656ae61/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:03:51 crc kubenswrapper[4778]: I0312 15:03:51.091422 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_81c1a05c-5642-43d4-8a7b-229330168332/glance-httpd/0.log" Mar 12 15:03:51 crc kubenswrapper[4778]: I0312 15:03:51.121847 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_81c1a05c-5642-43d4-8a7b-229330168332/glance-log/0.log" Mar 12 15:03:51 crc kubenswrapper[4778]: I0312 15:03:51.207067 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f89cfcd7f-vk6h4_46f34397-57fe-425d-b69d-040f4384ac69/dnsmasq-dns/0.log" Mar 12 15:03:51 crc kubenswrapper[4778]: I0312 15:03:51.361341 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_7fa757af-1c91-4b93-8916-5bbd99b8522e/glance-httpd/0.log" Mar 12 15:03:51 crc kubenswrapper[4778]: I0312 15:03:51.374050 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_7fa757af-1c91-4b93-8916-5bbd99b8522e/glance-log/0.log" Mar 12 15:03:51 crc kubenswrapper[4778]: I0312 15:03:51.493923 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-bngcx_f69e6cfe-f7c2-4127-b4df-710725c52227/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:03:51 crc kubenswrapper[4778]: I0312 15:03:51.571566 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-g252n_29f8609b-4a3b-42ba-9450-a2b633bb4c2c/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:03:52 crc kubenswrapper[4778]: I0312 15:03:52.098984 4778 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_keystone-cron-29555401-vjgkl_e4df6927-3452-4b36-b59a-a1fdcd4272a4/keystone-cron/0.log" Mar 12 15:03:52 crc kubenswrapper[4778]: I0312 15:03:52.341790 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29555461-lmqk9_ebdf3274-70cb-4083-bf12-5d1038a9b7ba/keystone-cron/0.log" Mar 12 15:03:52 crc kubenswrapper[4778]: I0312 15:03:52.605654 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_51f24fcd-aff5-4785-abf7-4936180cee78/kube-state-metrics/0.log" Mar 12 15:03:52 crc kubenswrapper[4778]: I0312 15:03:52.931171 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-4m9w8_8713b951-b516-42bd-9286-4343e5bcc955/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:03:53 crc kubenswrapper[4778]: I0312 15:03:53.247154 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-69b6dc4885-z4h9m_16dea17b-eaa4-4bbf-8895-c077b3e28d66/keystone-api/0.log" Mar 12 15:03:53 crc kubenswrapper[4778]: I0312 15:03:53.289166 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-69b6dc4885-6lrlq_a56bb599-f10d-4564-b6bf-48128dc2c7f1/keystone-api/0.log" Mar 12 15:03:54 crc kubenswrapper[4778]: I0312 15:03:54.028759 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-566c4d5fc-dggmh_7596a69e-33c9-4a2b-89fc-e4c41252b3fd/neutron-httpd/0.log" Mar 12 15:03:54 crc kubenswrapper[4778]: I0312 15:03:54.352310 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-566c4d5fc-zx97x_8a67d4b7-d8eb-40f4-b51d-62e92c6042c1/neutron-httpd/0.log" Mar 12 15:03:54 crc kubenswrapper[4778]: I0312 15:03:54.478509 4778 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-metadata-custom-edpm-deployment-openstack-edpm-ipawlfsg_5cc410de-5b42-44d1-8b29-37161475730e/neutron-metadata-custom-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:03:57 crc kubenswrapper[4778]: I0312 15:03:57.836449 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_13b8e1df-5a8c-44de-b8e8-6c7efdb8bad4/nova-api-log/0.log" Mar 12 15:03:59 crc kubenswrapper[4778]: I0312 15:03:59.318409 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-566c4d5fc-zx97x_8a67d4b7-d8eb-40f4-b51d-62e92c6042c1/neutron-api/0.log" Mar 12 15:04:00 crc kubenswrapper[4778]: I0312 15:04:00.168088 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555464-rt6fz"] Mar 12 15:04:00 crc kubenswrapper[4778]: I0312 15:04:00.170421 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555464-rt6fz" Mar 12 15:04:00 crc kubenswrapper[4778]: I0312 15:04:00.172661 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 15:04:00 crc kubenswrapper[4778]: I0312 15:04:00.172855 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 15:04:00 crc kubenswrapper[4778]: I0312 15:04:00.174143 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 15:04:00 crc kubenswrapper[4778]: I0312 15:04:00.192623 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555464-rt6fz"] Mar 12 15:04:00 crc kubenswrapper[4778]: I0312 15:04:00.269813 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pv9d\" (UniqueName: \"kubernetes.io/projected/bdc35b0a-1b16-4db8-adef-8a6afd6ae934-kube-api-access-6pv9d\") pod 
\"auto-csr-approver-29555464-rt6fz\" (UID: \"bdc35b0a-1b16-4db8-adef-8a6afd6ae934\") " pod="openshift-infra/auto-csr-approver-29555464-rt6fz" Mar 12 15:04:00 crc kubenswrapper[4778]: I0312 15:04:00.371772 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pv9d\" (UniqueName: \"kubernetes.io/projected/bdc35b0a-1b16-4db8-adef-8a6afd6ae934-kube-api-access-6pv9d\") pod \"auto-csr-approver-29555464-rt6fz\" (UID: \"bdc35b0a-1b16-4db8-adef-8a6afd6ae934\") " pod="openshift-infra/auto-csr-approver-29555464-rt6fz" Mar 12 15:04:00 crc kubenswrapper[4778]: I0312 15:04:00.404947 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pv9d\" (UniqueName: \"kubernetes.io/projected/bdc35b0a-1b16-4db8-adef-8a6afd6ae934-kube-api-access-6pv9d\") pod \"auto-csr-approver-29555464-rt6fz\" (UID: \"bdc35b0a-1b16-4db8-adef-8a6afd6ae934\") " pod="openshift-infra/auto-csr-approver-29555464-rt6fz" Mar 12 15:04:00 crc kubenswrapper[4778]: I0312 15:04:00.426157 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_13b8e1df-5a8c-44de-b8e8-6c7efdb8bad4/nova-api-api/0.log" Mar 12 15:04:00 crc kubenswrapper[4778]: I0312 15:04:00.487884 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555464-rt6fz" Mar 12 15:04:00 crc kubenswrapper[4778]: I0312 15:04:00.661130 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lvp8p" event={"ID":"ca67e14c-855d-473a-99b0-fe9dabb57916","Type":"ContainerStarted","Data":"91ee2a929eebacad27622c766c9b3f9578ff1372b845836bc89b764f83c342a3"} Mar 12 15:04:01 crc kubenswrapper[4778]: I0312 15:04:01.267841 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555464-rt6fz"] Mar 12 15:04:01 crc kubenswrapper[4778]: I0312 15:04:01.429469 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-566c4d5fc-dggmh_7596a69e-33c9-4a2b-89fc-e4c41252b3fd/neutron-api/0.log" Mar 12 15:04:01 crc kubenswrapper[4778]: I0312 15:04:01.682710 4778 generic.go:334] "Generic (PLEG): container finished" podID="ca67e14c-855d-473a-99b0-fe9dabb57916" containerID="91ee2a929eebacad27622c766c9b3f9578ff1372b845836bc89b764f83c342a3" exitCode=0 Mar 12 15:04:01 crc kubenswrapper[4778]: I0312 15:04:01.682795 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lvp8p" event={"ID":"ca67e14c-855d-473a-99b0-fe9dabb57916","Type":"ContainerDied","Data":"91ee2a929eebacad27622c766c9b3f9578ff1372b845836bc89b764f83c342a3"} Mar 12 15:04:01 crc kubenswrapper[4778]: I0312 15:04:01.684639 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555464-rt6fz" event={"ID":"bdc35b0a-1b16-4db8-adef-8a6afd6ae934","Type":"ContainerStarted","Data":"1d4f2fccef9895e998cbb789ad29a604968e2a751b694f840b5450a45509483f"} Mar 12 15:04:01 crc kubenswrapper[4778]: I0312 15:04:01.896557 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_929bb450-949d-4f4f-9c21-de6c3fe32927/nova-cell0-conductor-conductor/0.log" Mar 12 15:04:02 crc kubenswrapper[4778]: I0312 15:04:02.113503 4778 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_1466aea3-fa10-49a6-a254-a96a52091aca/nova-cell1-conductor-conductor/0.log" Mar 12 15:04:02 crc kubenswrapper[4778]: I0312 15:04:02.384111 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-1_f0341d80-4327-4c9e-bc11-0cddbc6eab66/nova-api-log/0.log" Mar 12 15:04:02 crc kubenswrapper[4778]: I0312 15:04:02.669783 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-metadata-0_c289a520-78eb-433f-b7a4-0c03be917c18/nova-cell1-metadata-log/0.log" Mar 12 15:04:02 crc kubenswrapper[4778]: I0312 15:04:02.983697 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-1_f0341d80-4327-4c9e-bc11-0cddbc6eab66/nova-api-api/0.log" Mar 12 15:04:03 crc kubenswrapper[4778]: I0312 15:04:03.139247 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_2b43a8b1-b8bc-4ab5-af66-674fa7ff47d7/nova-cell1-novncproxy-novncproxy/0.log" Mar 12 15:04:03 crc kubenswrapper[4778]: I0312 15:04:03.391031 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-5tw6s_6ed77f87-e6b2-4c7a-8b0e-003106200dc8/nova-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:04:03 crc kubenswrapper[4778]: I0312 15:04:03.656612 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-metadata-0_c289a520-78eb-433f-b7a4-0c03be917c18/nova-cell1-metadata-metadata/0.log" Mar 12 15:04:03 crc kubenswrapper[4778]: I0312 15:04:03.672582 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_fe52f8ba-9053-4733-b2e3-8f1becf437c8/mysql-bootstrap/0.log" Mar 12 15:04:03 crc kubenswrapper[4778]: I0312 15:04:03.701327 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lvp8p" 
event={"ID":"ca67e14c-855d-473a-99b0-fe9dabb57916","Type":"ContainerStarted","Data":"3b4aa015c90f98a02842f429fad0b0b116679bfae043c37529385708f67f2ddd"} Mar 12 15:04:03 crc kubenswrapper[4778]: I0312 15:04:03.702978 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555464-rt6fz" event={"ID":"bdc35b0a-1b16-4db8-adef-8a6afd6ae934","Type":"ContainerStarted","Data":"13a5daaa89f4db1da5ea953e47f7efe223f79130e65c2950cad2f908b81e7d1b"} Mar 12 15:04:03 crc kubenswrapper[4778]: I0312 15:04:03.733628 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lvp8p" podStartSLOduration=2.893848084 podStartE2EDuration="23.733602018s" podCreationTimestamp="2026-03-12 15:03:40 +0000 UTC" firstStartedPulling="2026-03-12 15:03:42.487481667 +0000 UTC m=+6840.936177073" lastFinishedPulling="2026-03-12 15:04:03.327235611 +0000 UTC m=+6861.775931007" observedRunningTime="2026-03-12 15:04:03.717698836 +0000 UTC m=+6862.166394232" watchObservedRunningTime="2026-03-12 15:04:03.733602018 +0000 UTC m=+6862.182297414" Mar 12 15:04:03 crc kubenswrapper[4778]: I0312 15:04:03.735084 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555464-rt6fz" podStartSLOduration=1.637060443 podStartE2EDuration="3.73507896s" podCreationTimestamp="2026-03-12 15:04:00 +0000 UTC" firstStartedPulling="2026-03-12 15:04:01.27234767 +0000 UTC m=+6859.721043066" lastFinishedPulling="2026-03-12 15:04:03.370366187 +0000 UTC m=+6861.819061583" observedRunningTime="2026-03-12 15:04:03.733023992 +0000 UTC m=+6862.181719398" watchObservedRunningTime="2026-03-12 15:04:03.73507896 +0000 UTC m=+6862.183774346" Mar 12 15:04:03 crc kubenswrapper[4778]: I0312 15:04:03.900275 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_fe52f8ba-9053-4733-b2e3-8f1becf437c8/mysql-bootstrap/0.log" Mar 12 15:04:03 crc 
kubenswrapper[4778]: I0312 15:04:03.954873 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_fe52f8ba-9053-4733-b2e3-8f1becf437c8/galera/0.log" Mar 12 15:04:04 crc kubenswrapper[4778]: I0312 15:04:04.142069 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_663feb48-0ed1-4947-97c3-e0bac206fdb2/mysql-bootstrap/0.log" Mar 12 15:04:04 crc kubenswrapper[4778]: I0312 15:04:04.377692 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_663feb48-0ed1-4947-97c3-e0bac206fdb2/mysql-bootstrap/0.log" Mar 12 15:04:04 crc kubenswrapper[4778]: I0312 15:04:04.475148 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_663feb48-0ed1-4947-97c3-e0bac206fdb2/galera/0.log" Mar 12 15:04:04 crc kubenswrapper[4778]: I0312 15:04:04.532390 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_f613745b-fe33-4918-9e0a-da2a59c55e33/nova-scheduler-scheduler/0.log" Mar 12 15:04:04 crc kubenswrapper[4778]: I0312 15:04:04.630964 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_856cd6d1-db21-4503-94d7-cbf27ca96cc2/openstackclient/0.log" Mar 12 15:04:04 crc kubenswrapper[4778]: I0312 15:04:04.713815 4778 generic.go:334] "Generic (PLEG): container finished" podID="bdc35b0a-1b16-4db8-adef-8a6afd6ae934" containerID="13a5daaa89f4db1da5ea953e47f7efe223f79130e65c2950cad2f908b81e7d1b" exitCode=0 Mar 12 15:04:04 crc kubenswrapper[4778]: I0312 15:04:04.714439 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555464-rt6fz" event={"ID":"bdc35b0a-1b16-4db8-adef-8a6afd6ae934","Type":"ContainerDied","Data":"13a5daaa89f4db1da5ea953e47f7efe223f79130e65c2950cad2f908b81e7d1b"} Mar 12 15:04:04 crc kubenswrapper[4778]: I0312 15:04:04.922370 4778 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-metrics-vtt4z_a8484e5d-6f77-407c-81db-0d9b2a6b37fd/openstack-network-exporter/0.log" Mar 12 15:04:04 crc kubenswrapper[4778]: I0312 15:04:04.971828 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-4wct6_3b8efd1e-884d-4963-b69f-04ede0a92267/ovn-controller/0.log" Mar 12 15:04:05 crc kubenswrapper[4778]: I0312 15:04:05.211828 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-p67vh_bd159b65-0c66-4809-949e-0f1babbaa8e6/ovsdb-server-init/0.log" Mar 12 15:04:05 crc kubenswrapper[4778]: I0312 15:04:05.372875 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-p67vh_bd159b65-0c66-4809-949e-0f1babbaa8e6/ovs-vswitchd/0.log" Mar 12 15:04:05 crc kubenswrapper[4778]: I0312 15:04:05.428782 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-p67vh_bd159b65-0c66-4809-949e-0f1babbaa8e6/ovsdb-server-init/0.log" Mar 12 15:04:05 crc kubenswrapper[4778]: I0312 15:04:05.431045 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-p67vh_bd159b65-0c66-4809-949e-0f1babbaa8e6/ovsdb-server/0.log" Mar 12 15:04:05 crc kubenswrapper[4778]: I0312 15:04:05.648138 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-9lbdq_3c0a2200-506d-4ac3-b08c-9b3156c9e573/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:04:05 crc kubenswrapper[4778]: I0312 15:04:05.713926 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_1b25f9c9-784a-4a52-9bb3-02c6c4592702/openstack-network-exporter/0.log" Mar 12 15:04:05 crc kubenswrapper[4778]: I0312 15:04:05.752302 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_1b25f9c9-784a-4a52-9bb3-02c6c4592702/ovn-northd/0.log" Mar 12 15:04:06 crc kubenswrapper[4778]: I0312 15:04:06.054697 4778 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_7321e15e-673c-4e0d-80f8-6ac644c1940f/ovsdbserver-nb/0.log" Mar 12 15:04:06 crc kubenswrapper[4778]: I0312 15:04:06.098013 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_7321e15e-673c-4e0d-80f8-6ac644c1940f/openstack-network-exporter/0.log" Mar 12 15:04:06 crc kubenswrapper[4778]: I0312 15:04:06.174708 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_7c951c6f-06fd-4793-a95b-26b5c1400d73/openstack-network-exporter/0.log" Mar 12 15:04:06 crc kubenswrapper[4778]: I0312 15:04:06.206786 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555464-rt6fz" Mar 12 15:04:06 crc kubenswrapper[4778]: I0312 15:04:06.296922 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_7c951c6f-06fd-4793-a95b-26b5c1400d73/ovsdbserver-sb/0.log" Mar 12 15:04:06 crc kubenswrapper[4778]: I0312 15:04:06.308622 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pv9d\" (UniqueName: \"kubernetes.io/projected/bdc35b0a-1b16-4db8-adef-8a6afd6ae934-kube-api-access-6pv9d\") pod \"bdc35b0a-1b16-4db8-adef-8a6afd6ae934\" (UID: \"bdc35b0a-1b16-4db8-adef-8a6afd6ae934\") " Mar 12 15:04:06 crc kubenswrapper[4778]: I0312 15:04:06.321350 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdc35b0a-1b16-4db8-adef-8a6afd6ae934-kube-api-access-6pv9d" (OuterVolumeSpecName: "kube-api-access-6pv9d") pod "bdc35b0a-1b16-4db8-adef-8a6afd6ae934" (UID: "bdc35b0a-1b16-4db8-adef-8a6afd6ae934"). InnerVolumeSpecName "kube-api-access-6pv9d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:04:06 crc kubenswrapper[4778]: I0312 15:04:06.410523 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pv9d\" (UniqueName: \"kubernetes.io/projected/bdc35b0a-1b16-4db8-adef-8a6afd6ae934-kube-api-access-6pv9d\") on node \"crc\" DevicePath \"\"" Mar 12 15:04:06 crc kubenswrapper[4778]: I0312 15:04:06.683029 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03/setup-container/0.log" Mar 12 15:04:06 crc kubenswrapper[4778]: I0312 15:04:06.732162 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555464-rt6fz" event={"ID":"bdc35b0a-1b16-4db8-adef-8a6afd6ae934","Type":"ContainerDied","Data":"1d4f2fccef9895e998cbb789ad29a604968e2a751b694f840b5450a45509483f"} Mar 12 15:04:06 crc kubenswrapper[4778]: I0312 15:04:06.732205 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555464-rt6fz" Mar 12 15:04:06 crc kubenswrapper[4778]: I0312 15:04:06.732210 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d4f2fccef9895e998cbb789ad29a604968e2a751b694f840b5450a45509483f" Mar 12 15:04:06 crc kubenswrapper[4778]: I0312 15:04:06.812152 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-d4d765698-l7bjx_267e7df2-d35c-45c4-af65-e8af31f8f6cf/placement-api/0.log" Mar 12 15:04:06 crc kubenswrapper[4778]: I0312 15:04:06.843687 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03/setup-container/0.log" Mar 12 15:04:07 crc kubenswrapper[4778]: I0312 15:04:07.044734 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03/rabbitmq/0.log" Mar 12 15:04:07 crc kubenswrapper[4778]: 
I0312 15:04:07.079619 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_1e89dfcc-2ac3-444c-91e8-56991eae096b/setup-container/0.log" Mar 12 15:04:07 crc kubenswrapper[4778]: I0312 15:04:07.141384 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-d4d765698-l7bjx_267e7df2-d35c-45c4-af65-e8af31f8f6cf/placement-log/0.log" Mar 12 15:04:07 crc kubenswrapper[4778]: I0312 15:04:07.277719 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555458-2kqth"] Mar 12 15:04:07 crc kubenswrapper[4778]: I0312 15:04:07.287635 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555458-2kqth"] Mar 12 15:04:07 crc kubenswrapper[4778]: I0312 15:04:07.320335 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_1e89dfcc-2ac3-444c-91e8-56991eae096b/setup-container/0.log" Mar 12 15:04:07 crc kubenswrapper[4778]: I0312 15:04:07.447559 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_1e89dfcc-2ac3-444c-91e8-56991eae096b/rabbitmq/0.log" Mar 12 15:04:07 crc kubenswrapper[4778]: I0312 15:04:07.452818 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-wcdkc_43a3ffe4-8b64-4e26-b63a-5254a986e4a4/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:04:07 crc kubenswrapper[4778]: I0312 15:04:07.690833 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-6nfzc_bd7ac6b4-5600-45ce-b0ea-199dd4baefcb/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:04:07 crc kubenswrapper[4778]: I0312 15:04:07.721113 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-gt58t_b0bb06df-44bb-4939-9492-a6ad3d6b5368/run-os-edpm-deployment-openstack-edpm-ipam/0.log" 
Mar 12 15:04:08 crc kubenswrapper[4778]: I0312 15:04:08.010479 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-8mmjm_c993b33e-6c36-4524-864a-65da461a8e0c/ssh-known-hosts-edpm-deployment/0.log" Mar 12 15:04:08 crc kubenswrapper[4778]: I0312 15:04:08.232399 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-77f887c49f-fw2qd_bbd76cb8-462f-4e60-b755-ef3170e70d11/proxy-server/0.log" Mar 12 15:04:08 crc kubenswrapper[4778]: I0312 15:04:08.265269 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fa9dd73-2656-43b3-a6cb-634d312a166e" path="/var/lib/kubelet/pods/0fa9dd73-2656-43b3-a6cb-634d312a166e/volumes" Mar 12 15:04:08 crc kubenswrapper[4778]: I0312 15:04:08.282096 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-5knbg_2edc2c90-f91e-402d-809c-514e9d8a5e04/swift-ring-rebalance/0.log" Mar 12 15:04:08 crc kubenswrapper[4778]: I0312 15:04:08.373327 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-77f887c49f-fw2qd_bbd76cb8-462f-4e60-b755-ef3170e70d11/proxy-httpd/0.log" Mar 12 15:04:08 crc kubenswrapper[4778]: I0312 15:04:08.552804 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c01f943c-e09c-4727-8cf7-eec58a56b363/account-auditor/0.log" Mar 12 15:04:08 crc kubenswrapper[4778]: I0312 15:04:08.561545 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c01f943c-e09c-4727-8cf7-eec58a56b363/account-reaper/0.log" Mar 12 15:04:08 crc kubenswrapper[4778]: I0312 15:04:08.690137 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c01f943c-e09c-4727-8cf7-eec58a56b363/account-replicator/0.log" Mar 12 15:04:08 crc kubenswrapper[4778]: I0312 15:04:08.765881 4778 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_c01f943c-e09c-4727-8cf7-eec58a56b363/account-server/0.log" Mar 12 15:04:08 crc kubenswrapper[4778]: I0312 15:04:08.802141 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c01f943c-e09c-4727-8cf7-eec58a56b363/container-auditor/0.log" Mar 12 15:04:08 crc kubenswrapper[4778]: I0312 15:04:08.870173 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c01f943c-e09c-4727-8cf7-eec58a56b363/container-replicator/0.log" Mar 12 15:04:08 crc kubenswrapper[4778]: I0312 15:04:08.965846 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c01f943c-e09c-4727-8cf7-eec58a56b363/container-updater/0.log" Mar 12 15:04:08 crc kubenswrapper[4778]: I0312 15:04:08.966934 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c01f943c-e09c-4727-8cf7-eec58a56b363/container-server/0.log" Mar 12 15:04:09 crc kubenswrapper[4778]: I0312 15:04:09.060595 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c01f943c-e09c-4727-8cf7-eec58a56b363/object-auditor/0.log" Mar 12 15:04:09 crc kubenswrapper[4778]: I0312 15:04:09.151603 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c01f943c-e09c-4727-8cf7-eec58a56b363/object-expirer/0.log" Mar 12 15:04:09 crc kubenswrapper[4778]: I0312 15:04:09.217435 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c01f943c-e09c-4727-8cf7-eec58a56b363/object-replicator/0.log" Mar 12 15:04:09 crc kubenswrapper[4778]: I0312 15:04:09.221302 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c01f943c-e09c-4727-8cf7-eec58a56b363/object-server/0.log" Mar 12 15:04:09 crc kubenswrapper[4778]: I0312 15:04:09.346905 4778 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_c01f943c-e09c-4727-8cf7-eec58a56b363/rsync/0.log" Mar 12 15:04:09 crc kubenswrapper[4778]: I0312 15:04:09.351159 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c01f943c-e09c-4727-8cf7-eec58a56b363/object-updater/0.log" Mar 12 15:04:09 crc kubenswrapper[4778]: I0312 15:04:09.459538 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c01f943c-e09c-4727-8cf7-eec58a56b363/swift-recon-cron/0.log" Mar 12 15:04:09 crc kubenswrapper[4778]: I0312 15:04:09.692163 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-qrk5s_2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:04:09 crc kubenswrapper[4778]: I0312 15:04:09.730873 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_74897d0a-ca7b-4589-bd4c-75910c2d491c/tempest-tests-tempest-tests-runner/0.log" Mar 12 15:04:09 crc kubenswrapper[4778]: I0312 15:04:09.797260 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_82246f69-2112-44e9-a783-a4a5926188b4/test-operator-logs-container/0.log" Mar 12 15:04:09 crc kubenswrapper[4778]: I0312 15:04:09.992713 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-9glvr_41583476-38cd-4c0d-a05a-96ddc5b330ca/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:04:11 crc kubenswrapper[4778]: I0312 15:04:11.341799 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lvp8p" Mar 12 15:04:11 crc kubenswrapper[4778]: I0312 15:04:11.342031 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lvp8p" Mar 12 15:04:11 crc 
kubenswrapper[4778]: I0312 15:04:11.395582 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lvp8p" Mar 12 15:04:11 crc kubenswrapper[4778]: I0312 15:04:11.827556 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lvp8p" Mar 12 15:04:11 crc kubenswrapper[4778]: I0312 15:04:11.981282 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lvp8p"] Mar 12 15:04:12 crc kubenswrapper[4778]: I0312 15:04:12.166746 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r99nz"] Mar 12 15:04:12 crc kubenswrapper[4778]: I0312 15:04:12.166997 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-r99nz" podUID="89b39891-5207-4289-807f-57d00acb2937" containerName="registry-server" containerID="cri-o://712082342de67a11034de3ce859863eaaf1f71a829333a77a36e1df98eea2e31" gracePeriod=2 Mar 12 15:04:12 crc kubenswrapper[4778]: I0312 15:04:12.695227 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r99nz" Mar 12 15:04:12 crc kubenswrapper[4778]: I0312 15:04:12.789527 4778 generic.go:334] "Generic (PLEG): container finished" podID="89b39891-5207-4289-807f-57d00acb2937" containerID="712082342de67a11034de3ce859863eaaf1f71a829333a77a36e1df98eea2e31" exitCode=0 Mar 12 15:04:12 crc kubenswrapper[4778]: I0312 15:04:12.791531 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-r99nz" Mar 12 15:04:12 crc kubenswrapper[4778]: I0312 15:04:12.792006 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r99nz" event={"ID":"89b39891-5207-4289-807f-57d00acb2937","Type":"ContainerDied","Data":"712082342de67a11034de3ce859863eaaf1f71a829333a77a36e1df98eea2e31"} Mar 12 15:04:12 crc kubenswrapper[4778]: I0312 15:04:12.792041 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r99nz" event={"ID":"89b39891-5207-4289-807f-57d00acb2937","Type":"ContainerDied","Data":"d44d2bee1e2b4ddf99f45277d9dc014b3b21712ebf48e47cb48538e60ac5ff80"} Mar 12 15:04:12 crc kubenswrapper[4778]: I0312 15:04:12.792067 4778 scope.go:117] "RemoveContainer" containerID="712082342de67a11034de3ce859863eaaf1f71a829333a77a36e1df98eea2e31" Mar 12 15:04:12 crc kubenswrapper[4778]: I0312 15:04:12.820649 4778 scope.go:117] "RemoveContainer" containerID="805951d35b64df6e3a5d2f522d8ca4fce31a3962c15f8b2c7f8cc07a84f8dc1e" Mar 12 15:04:12 crc kubenswrapper[4778]: I0312 15:04:12.824731 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mccx\" (UniqueName: \"kubernetes.io/projected/89b39891-5207-4289-807f-57d00acb2937-kube-api-access-7mccx\") pod \"89b39891-5207-4289-807f-57d00acb2937\" (UID: \"89b39891-5207-4289-807f-57d00acb2937\") " Mar 12 15:04:12 crc kubenswrapper[4778]: I0312 15:04:12.824933 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89b39891-5207-4289-807f-57d00acb2937-utilities\") pod \"89b39891-5207-4289-807f-57d00acb2937\" (UID: \"89b39891-5207-4289-807f-57d00acb2937\") " Mar 12 15:04:12 crc kubenswrapper[4778]: I0312 15:04:12.825012 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/89b39891-5207-4289-807f-57d00acb2937-catalog-content\") pod \"89b39891-5207-4289-807f-57d00acb2937\" (UID: \"89b39891-5207-4289-807f-57d00acb2937\") " Mar 12 15:04:12 crc kubenswrapper[4778]: I0312 15:04:12.830734 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89b39891-5207-4289-807f-57d00acb2937-utilities" (OuterVolumeSpecName: "utilities") pod "89b39891-5207-4289-807f-57d00acb2937" (UID: "89b39891-5207-4289-807f-57d00acb2937"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:04:12 crc kubenswrapper[4778]: I0312 15:04:12.835363 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89b39891-5207-4289-807f-57d00acb2937-kube-api-access-7mccx" (OuterVolumeSpecName: "kube-api-access-7mccx") pod "89b39891-5207-4289-807f-57d00acb2937" (UID: "89b39891-5207-4289-807f-57d00acb2937"). InnerVolumeSpecName "kube-api-access-7mccx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:04:12 crc kubenswrapper[4778]: I0312 15:04:12.853275 4778 scope.go:117] "RemoveContainer" containerID="984fb3456eea71c9cd7483dfcdb8376d81e856bd79aef84a885b305c1615885b" Mar 12 15:04:12 crc kubenswrapper[4778]: I0312 15:04:12.927049 4778 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89b39891-5207-4289-807f-57d00acb2937-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 15:04:12 crc kubenswrapper[4778]: I0312 15:04:12.927079 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mccx\" (UniqueName: \"kubernetes.io/projected/89b39891-5207-4289-807f-57d00acb2937-kube-api-access-7mccx\") on node \"crc\" DevicePath \"\"" Mar 12 15:04:12 crc kubenswrapper[4778]: I0312 15:04:12.929411 4778 scope.go:117] "RemoveContainer" containerID="712082342de67a11034de3ce859863eaaf1f71a829333a77a36e1df98eea2e31" Mar 12 15:04:12 crc kubenswrapper[4778]: E0312 15:04:12.930541 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"712082342de67a11034de3ce859863eaaf1f71a829333a77a36e1df98eea2e31\": container with ID starting with 712082342de67a11034de3ce859863eaaf1f71a829333a77a36e1df98eea2e31 not found: ID does not exist" containerID="712082342de67a11034de3ce859863eaaf1f71a829333a77a36e1df98eea2e31" Mar 12 15:04:12 crc kubenswrapper[4778]: I0312 15:04:12.930572 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"712082342de67a11034de3ce859863eaaf1f71a829333a77a36e1df98eea2e31"} err="failed to get container status \"712082342de67a11034de3ce859863eaaf1f71a829333a77a36e1df98eea2e31\": rpc error: code = NotFound desc = could not find container \"712082342de67a11034de3ce859863eaaf1f71a829333a77a36e1df98eea2e31\": container with ID starting with 712082342de67a11034de3ce859863eaaf1f71a829333a77a36e1df98eea2e31 not found: ID 
does not exist" Mar 12 15:04:12 crc kubenswrapper[4778]: I0312 15:04:12.930592 4778 scope.go:117] "RemoveContainer" containerID="805951d35b64df6e3a5d2f522d8ca4fce31a3962c15f8b2c7f8cc07a84f8dc1e" Mar 12 15:04:12 crc kubenswrapper[4778]: E0312 15:04:12.930780 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"805951d35b64df6e3a5d2f522d8ca4fce31a3962c15f8b2c7f8cc07a84f8dc1e\": container with ID starting with 805951d35b64df6e3a5d2f522d8ca4fce31a3962c15f8b2c7f8cc07a84f8dc1e not found: ID does not exist" containerID="805951d35b64df6e3a5d2f522d8ca4fce31a3962c15f8b2c7f8cc07a84f8dc1e" Mar 12 15:04:12 crc kubenswrapper[4778]: I0312 15:04:12.930799 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"805951d35b64df6e3a5d2f522d8ca4fce31a3962c15f8b2c7f8cc07a84f8dc1e"} err="failed to get container status \"805951d35b64df6e3a5d2f522d8ca4fce31a3962c15f8b2c7f8cc07a84f8dc1e\": rpc error: code = NotFound desc = could not find container \"805951d35b64df6e3a5d2f522d8ca4fce31a3962c15f8b2c7f8cc07a84f8dc1e\": container with ID starting with 805951d35b64df6e3a5d2f522d8ca4fce31a3962c15f8b2c7f8cc07a84f8dc1e not found: ID does not exist" Mar 12 15:04:12 crc kubenswrapper[4778]: I0312 15:04:12.930810 4778 scope.go:117] "RemoveContainer" containerID="984fb3456eea71c9cd7483dfcdb8376d81e856bd79aef84a885b305c1615885b" Mar 12 15:04:12 crc kubenswrapper[4778]: E0312 15:04:12.931066 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"984fb3456eea71c9cd7483dfcdb8376d81e856bd79aef84a885b305c1615885b\": container with ID starting with 984fb3456eea71c9cd7483dfcdb8376d81e856bd79aef84a885b305c1615885b not found: ID does not exist" containerID="984fb3456eea71c9cd7483dfcdb8376d81e856bd79aef84a885b305c1615885b" Mar 12 15:04:12 crc kubenswrapper[4778]: I0312 15:04:12.931086 4778 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"984fb3456eea71c9cd7483dfcdb8376d81e856bd79aef84a885b305c1615885b"} err="failed to get container status \"984fb3456eea71c9cd7483dfcdb8376d81e856bd79aef84a885b305c1615885b\": rpc error: code = NotFound desc = could not find container \"984fb3456eea71c9cd7483dfcdb8376d81e856bd79aef84a885b305c1615885b\": container with ID starting with 984fb3456eea71c9cd7483dfcdb8376d81e856bd79aef84a885b305c1615885b not found: ID does not exist" Mar 12 15:04:12 crc kubenswrapper[4778]: I0312 15:04:12.989557 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89b39891-5207-4289-807f-57d00acb2937-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "89b39891-5207-4289-807f-57d00acb2937" (UID: "89b39891-5207-4289-807f-57d00acb2937"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:04:13 crc kubenswrapper[4778]: I0312 15:04:13.028508 4778 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89b39891-5207-4289-807f-57d00acb2937-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 15:04:13 crc kubenswrapper[4778]: I0312 15:04:13.129396 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r99nz"] Mar 12 15:04:13 crc kubenswrapper[4778]: I0312 15:04:13.144978 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-r99nz"] Mar 12 15:04:14 crc kubenswrapper[4778]: I0312 15:04:14.263730 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89b39891-5207-4289-807f-57d00acb2937" path="/var/lib/kubelet/pods/89b39891-5207-4289-807f-57d00acb2937/volumes" Mar 12 15:04:23 crc kubenswrapper[4778]: I0312 15:04:23.870762 4778 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_memcached-0_ec63cc68-6fde-419b-973c-91fc982e6a49/memcached/0.log" Mar 12 15:04:25 crc kubenswrapper[4778]: I0312 15:04:25.689286 4778 scope.go:117] "RemoveContainer" containerID="a5663c78d0886a072205a20f2510ea67c65b15026159b43c8bf3ff0037ce7434" Mar 12 15:04:28 crc kubenswrapper[4778]: I0312 15:04:28.558239 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 15:04:28 crc kubenswrapper[4778]: I0312 15:04:28.558660 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 15:04:38 crc kubenswrapper[4778]: I0312 15:04:38.556315 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4d52f25c614d14ea9d555eaa2e62114c0c7d01977d30b495569716fcde5fmfr_e1d0ffee-229e-4da3-ac89-02bf6f6a439f/util/0.log" Mar 12 15:04:38 crc kubenswrapper[4778]: I0312 15:04:38.665541 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4d52f25c614d14ea9d555eaa2e62114c0c7d01977d30b495569716fcde5fmfr_e1d0ffee-229e-4da3-ac89-02bf6f6a439f/util/0.log" Mar 12 15:04:38 crc kubenswrapper[4778]: I0312 15:04:38.720690 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4d52f25c614d14ea9d555eaa2e62114c0c7d01977d30b495569716fcde5fmfr_e1d0ffee-229e-4da3-ac89-02bf6f6a439f/pull/0.log" Mar 12 15:04:38 crc kubenswrapper[4778]: I0312 15:04:38.761289 4778 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_4d52f25c614d14ea9d555eaa2e62114c0c7d01977d30b495569716fcde5fmfr_e1d0ffee-229e-4da3-ac89-02bf6f6a439f/pull/0.log" Mar 12 15:04:38 crc kubenswrapper[4778]: I0312 15:04:38.932782 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4d52f25c614d14ea9d555eaa2e62114c0c7d01977d30b495569716fcde5fmfr_e1d0ffee-229e-4da3-ac89-02bf6f6a439f/util/0.log" Mar 12 15:04:38 crc kubenswrapper[4778]: I0312 15:04:38.938676 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4d52f25c614d14ea9d555eaa2e62114c0c7d01977d30b495569716fcde5fmfr_e1d0ffee-229e-4da3-ac89-02bf6f6a439f/pull/0.log" Mar 12 15:04:38 crc kubenswrapper[4778]: I0312 15:04:38.983893 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4d52f25c614d14ea9d555eaa2e62114c0c7d01977d30b495569716fcde5fmfr_e1d0ffee-229e-4da3-ac89-02bf6f6a439f/extract/0.log" Mar 12 15:04:39 crc kubenswrapper[4778]: I0312 15:04:39.362306 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-66d56f6ff4-9n6jv_ad531191-d7c5-4ef6-9929-3a5869751d98/manager/0.log" Mar 12 15:04:39 crc kubenswrapper[4778]: I0312 15:04:39.686642 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5964f64c48-gknp2_db7f6b97-2903-44bf-803f-c00c337400b9/manager/0.log" Mar 12 15:04:39 crc kubenswrapper[4778]: I0312 15:04:39.923913 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-77b6666d85-b7tkm_e290c1ea-a39d-451e-a24b-17a2b61ff6f0/manager/0.log" Mar 12 15:04:40 crc kubenswrapper[4778]: I0312 15:04:40.135739 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-6d9d6b584d-4jgt8_4c2bf703-ecc1-4bb1-aa03-a64e55dfdb71/manager/0.log" Mar 12 15:04:40 crc kubenswrapper[4778]: I0312 
15:04:40.773975 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-5995f4446f-5d6qz_02bc06ca-f4e6-4fde-bd5d-882714d9652c/manager/0.log" Mar 12 15:04:40 crc kubenswrapper[4778]: I0312 15:04:40.853097 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6bbb499bbc-qb8s8_98a4cfbd-3037-48b5-9047-5d574dcc0aca/manager/0.log" Mar 12 15:04:41 crc kubenswrapper[4778]: I0312 15:04:41.223651 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-684f77d66d-7dxdh_7e02c37f-b9af-46c9-a743-03ead9b060db/manager/0.log" Mar 12 15:04:41 crc kubenswrapper[4778]: I0312 15:04:41.712023 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-68f45f9d9f-pn8tk_5e38a4fd-95f8-437b-923b-eca33b1387e6/manager/0.log" Mar 12 15:04:41 crc kubenswrapper[4778]: I0312 15:04:41.733266 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-984cd4dcf-xm4cc_c8818ac0-af8b-42c9-a923-425fe79ed203/manager/0.log" Mar 12 15:04:41 crc kubenswrapper[4778]: I0312 15:04:41.931861 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-658d4cdd5-jlbft_2d577800-0ee1-4fe5-a7fb-8794fb8c4c6f/manager/0.log" Mar 12 15:04:42 crc kubenswrapper[4778]: I0312 15:04:42.302303 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-776c5696bf-dd2ft_076835c9-352b-4e40-80c4-3bce3bb80594/manager/0.log" Mar 12 15:04:42 crc kubenswrapper[4778]: I0312 15:04:42.504215 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-686d5f9fbd-vv9rc_d7288cc6-4247-4d03-bd37-9862243bf613/manager/0.log" Mar 12 15:04:42 crc 
kubenswrapper[4778]: I0312 15:04:42.582055 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4f55cb5c-cdgg9_1a01d06c-be6f-45de-a22d-c8f1058a3a84/manager/0.log" Mar 12 15:04:42 crc kubenswrapper[4778]: I0312 15:04:42.782502 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-557ccf57b7qq9w6_4f7d316e-6896-4f84-8423-6f79778c1c6b/manager/0.log" Mar 12 15:04:43 crc kubenswrapper[4778]: I0312 15:04:43.106202 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5bc4df7446-x9bsl_34bbdc16-4518-4ee5-9a70-3cedcc5f0159/operator/0.log" Mar 12 15:04:43 crc kubenswrapper[4778]: I0312 15:04:43.253263 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-b2fsv_748546a6-1355-470f-b8d0-de395cf3f681/registry-server/0.log" Mar 12 15:04:43 crc kubenswrapper[4778]: I0312 15:04:43.418301 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-bbc5b68f9-bbgmb_8d38fd7e-6fa1-4b0c-9c82-9c57290c7837/manager/0.log" Mar 12 15:04:43 crc kubenswrapper[4778]: I0312 15:04:43.544491 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-574d45c66c-wvpf8_52524252-25bd-49e5-822e-3d4668aff2f9/manager/0.log" Mar 12 15:04:43 crc kubenswrapper[4778]: I0312 15:04:43.756444 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-shf7b_034f39d8-a33e-4e37-bcde-51fb22debdd1/operator/0.log" Mar 12 15:04:43 crc kubenswrapper[4778]: I0312 15:04:43.860798 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-677c674df7-84mps_64a36384-f2e6-4077-b2ca-de2a6ce6ea06/manager/0.log" Mar 12 
15:04:44 crc kubenswrapper[4778]: I0312 15:04:44.108810 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6cd66dbd4b-gfv5z_6ad9bf9f-7214-44bc-a65d-1dcbf385fc2c/manager/0.log" Mar 12 15:04:44 crc kubenswrapper[4778]: I0312 15:04:44.388434 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5c5cb9c4d7-pcfrz_ed9b9271-4ae9-440a-9411-15d46267106e/manager/0.log" Mar 12 15:04:44 crc kubenswrapper[4778]: I0312 15:04:44.549836 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6dd88c6f67-2tjsk_8c02ecb8-0e15-4672-823a-c4437ca5bf8c/manager/0.log" Mar 12 15:04:44 crc kubenswrapper[4778]: I0312 15:04:44.729511 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5785b7957-7vdgw_d0784623-5f08-4109-9c7e-0a329210ce07/manager/0.log" Mar 12 15:04:49 crc kubenswrapper[4778]: I0312 15:04:49.515662 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-677bd678f7-6h2c2_ffb8a1f4-4533-4368-a900-95d37fe1d3ad/manager/0.log" Mar 12 15:04:58 crc kubenswrapper[4778]: I0312 15:04:58.558286 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 15:04:58 crc kubenswrapper[4778]: I0312 15:04:58.559046 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Mar 12 15:05:06 crc kubenswrapper[4778]: I0312 15:05:06.925945 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-zkrqr_f799c7e9-1c31-40bc-9ece-06a086683a98/control-plane-machine-set-operator/0.log" Mar 12 15:05:07 crc kubenswrapper[4778]: I0312 15:05:07.114814 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-242cb_e2967620-e2ce-4763-8a6c-e5a37f3a1f98/machine-api-operator/0.log" Mar 12 15:05:07 crc kubenswrapper[4778]: I0312 15:05:07.160383 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-242cb_e2967620-e2ce-4763-8a6c-e5a37f3a1f98/kube-rbac-proxy/0.log" Mar 12 15:05:20 crc kubenswrapper[4778]: I0312 15:05:20.973009 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-2774s_92b29110-f478-42b5-9a5f-c9330a3973b2/cert-manager-controller/0.log" Mar 12 15:05:21 crc kubenswrapper[4778]: I0312 15:05:21.258526 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-jxs4g_804d0b09-6fab-4277-936a-5e0324d76b3e/cert-manager-cainjector/0.log" Mar 12 15:05:21 crc kubenswrapper[4778]: I0312 15:05:21.272000 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-ffh2x_45da07c5-bccb-4433-aa38-d9d2894f1b09/cert-manager-webhook/0.log" Mar 12 15:05:28 crc kubenswrapper[4778]: I0312 15:05:28.557605 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 15:05:28 crc kubenswrapper[4778]: I0312 15:05:28.558089 4778 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 15:05:28 crc kubenswrapper[4778]: I0312 15:05:28.558144 4778 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" Mar 12 15:05:28 crc kubenswrapper[4778]: I0312 15:05:28.558888 4778 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"009e612c3693545ba4a1988aa00993d05612427ec6eb485b08b455b35968f1ab"} pod="openshift-machine-config-operator/machine-config-daemon-2qx88" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 15:05:28 crc kubenswrapper[4778]: I0312 15:05:28.558956 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" containerID="cri-o://009e612c3693545ba4a1988aa00993d05612427ec6eb485b08b455b35968f1ab" gracePeriod=600 Mar 12 15:05:29 crc kubenswrapper[4778]: I0312 15:05:29.508568 4778 generic.go:334] "Generic (PLEG): container finished" podID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerID="009e612c3693545ba4a1988aa00993d05612427ec6eb485b08b455b35968f1ab" exitCode=0 Mar 12 15:05:29 crc kubenswrapper[4778]: I0312 15:05:29.508648 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" event={"ID":"24438fc6-dab0-4a9e-8b97-2532da76d9cd","Type":"ContainerDied","Data":"009e612c3693545ba4a1988aa00993d05612427ec6eb485b08b455b35968f1ab"} Mar 12 15:05:29 crc kubenswrapper[4778]: I0312 15:05:29.509119 4778 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" event={"ID":"24438fc6-dab0-4a9e-8b97-2532da76d9cd","Type":"ContainerStarted","Data":"0d1560644663063f80ae67feb786777f7400aa5bf8ea2f2418887c809789d930"} Mar 12 15:05:29 crc kubenswrapper[4778]: I0312 15:05:29.509146 4778 scope.go:117] "RemoveContainer" containerID="505b7ca3387092da837254cfad64e23448af9dbba84199bbb89de928d39d31e3" Mar 12 15:05:36 crc kubenswrapper[4778]: I0312 15:05:36.267462 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-86f58fcf4-jbxx4_af2d568b-9719-4da9-b0e8-e28d314ed860/nmstate-console-plugin/0.log" Mar 12 15:05:36 crc kubenswrapper[4778]: I0312 15:05:36.437926 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-rbsjl_d8309ffe-a26c-44a8-84e2-7b7ec10982a8/nmstate-handler/0.log" Mar 12 15:05:36 crc kubenswrapper[4778]: I0312 15:05:36.504605 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-b2s5h_7855d7b1-c7cf-4b63-9313-051a391fcf43/nmstate-metrics/0.log" Mar 12 15:05:36 crc kubenswrapper[4778]: I0312 15:05:36.509127 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-b2s5h_7855d7b1-c7cf-4b63-9313-051a391fcf43/kube-rbac-proxy/0.log" Mar 12 15:05:36 crc kubenswrapper[4778]: I0312 15:05:36.638436 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-796d4cfff4-hxzd6_fb85eef5-01f9-4fa6-b9d8-9606d04b8cd3/nmstate-operator/0.log" Mar 12 15:05:36 crc kubenswrapper[4778]: I0312 15:05:36.920957 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f558f5558-94rbc_ef796a94-b10d-4d18-ae88-f64bc3a6b87d/nmstate-webhook/0.log" Mar 12 15:06:00 crc kubenswrapper[4778]: I0312 15:06:00.150136 4778 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29555466-7wlhj"] Mar 12 15:06:00 crc kubenswrapper[4778]: E0312 15:06:00.151332 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdc35b0a-1b16-4db8-adef-8a6afd6ae934" containerName="oc" Mar 12 15:06:00 crc kubenswrapper[4778]: I0312 15:06:00.151352 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdc35b0a-1b16-4db8-adef-8a6afd6ae934" containerName="oc" Mar 12 15:06:00 crc kubenswrapper[4778]: E0312 15:06:00.151381 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89b39891-5207-4289-807f-57d00acb2937" containerName="registry-server" Mar 12 15:06:00 crc kubenswrapper[4778]: I0312 15:06:00.151390 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="89b39891-5207-4289-807f-57d00acb2937" containerName="registry-server" Mar 12 15:06:00 crc kubenswrapper[4778]: E0312 15:06:00.151410 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89b39891-5207-4289-807f-57d00acb2937" containerName="extract-utilities" Mar 12 15:06:00 crc kubenswrapper[4778]: I0312 15:06:00.151419 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="89b39891-5207-4289-807f-57d00acb2937" containerName="extract-utilities" Mar 12 15:06:00 crc kubenswrapper[4778]: E0312 15:06:00.151450 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89b39891-5207-4289-807f-57d00acb2937" containerName="extract-content" Mar 12 15:06:00 crc kubenswrapper[4778]: I0312 15:06:00.151458 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="89b39891-5207-4289-807f-57d00acb2937" containerName="extract-content" Mar 12 15:06:00 crc kubenswrapper[4778]: I0312 15:06:00.151689 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="89b39891-5207-4289-807f-57d00acb2937" containerName="registry-server" Mar 12 15:06:00 crc kubenswrapper[4778]: I0312 15:06:00.151715 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdc35b0a-1b16-4db8-adef-8a6afd6ae934" 
containerName="oc" Mar 12 15:06:00 crc kubenswrapper[4778]: I0312 15:06:00.152697 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555466-7wlhj" Mar 12 15:06:00 crc kubenswrapper[4778]: I0312 15:06:00.157855 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 15:06:00 crc kubenswrapper[4778]: I0312 15:06:00.158307 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 15:06:00 crc kubenswrapper[4778]: I0312 15:06:00.158519 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 15:06:00 crc kubenswrapper[4778]: I0312 15:06:00.163891 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555466-7wlhj"] Mar 12 15:06:00 crc kubenswrapper[4778]: I0312 15:06:00.266312 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxxck\" (UniqueName: \"kubernetes.io/projected/c58e0f99-4ece-49ec-9c47-b82055df7d48-kube-api-access-bxxck\") pod \"auto-csr-approver-29555466-7wlhj\" (UID: \"c58e0f99-4ece-49ec-9c47-b82055df7d48\") " pod="openshift-infra/auto-csr-approver-29555466-7wlhj" Mar 12 15:06:00 crc kubenswrapper[4778]: I0312 15:06:00.368757 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxxck\" (UniqueName: \"kubernetes.io/projected/c58e0f99-4ece-49ec-9c47-b82055df7d48-kube-api-access-bxxck\") pod \"auto-csr-approver-29555466-7wlhj\" (UID: \"c58e0f99-4ece-49ec-9c47-b82055df7d48\") " pod="openshift-infra/auto-csr-approver-29555466-7wlhj" Mar 12 15:06:00 crc kubenswrapper[4778]: I0312 15:06:00.387335 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxxck\" (UniqueName: 
\"kubernetes.io/projected/c58e0f99-4ece-49ec-9c47-b82055df7d48-kube-api-access-bxxck\") pod \"auto-csr-approver-29555466-7wlhj\" (UID: \"c58e0f99-4ece-49ec-9c47-b82055df7d48\") " pod="openshift-infra/auto-csr-approver-29555466-7wlhj" Mar 12 15:06:00 crc kubenswrapper[4778]: I0312 15:06:00.475695 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555466-7wlhj" Mar 12 15:06:00 crc kubenswrapper[4778]: I0312 15:06:00.920829 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555466-7wlhj"] Mar 12 15:06:01 crc kubenswrapper[4778]: I0312 15:06:01.836836 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555466-7wlhj" event={"ID":"c58e0f99-4ece-49ec-9c47-b82055df7d48","Type":"ContainerStarted","Data":"65f40878aba2160868688af649b2ce7d73b76d4df721a61bcd9c29b8d55924bf"} Mar 12 15:06:02 crc kubenswrapper[4778]: I0312 15:06:02.852460 4778 generic.go:334] "Generic (PLEG): container finished" podID="c58e0f99-4ece-49ec-9c47-b82055df7d48" containerID="8984f879d02ecea61666d68e6857174dd681c26238a5c27d5a617cc7dccda3db" exitCode=0 Mar 12 15:06:02 crc kubenswrapper[4778]: I0312 15:06:02.852949 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555466-7wlhj" event={"ID":"c58e0f99-4ece-49ec-9c47-b82055df7d48","Type":"ContainerDied","Data":"8984f879d02ecea61666d68e6857174dd681c26238a5c27d5a617cc7dccda3db"} Mar 12 15:06:04 crc kubenswrapper[4778]: I0312 15:06:04.266899 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555466-7wlhj" Mar 12 15:06:04 crc kubenswrapper[4778]: I0312 15:06:04.355467 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxxck\" (UniqueName: \"kubernetes.io/projected/c58e0f99-4ece-49ec-9c47-b82055df7d48-kube-api-access-bxxck\") pod \"c58e0f99-4ece-49ec-9c47-b82055df7d48\" (UID: \"c58e0f99-4ece-49ec-9c47-b82055df7d48\") " Mar 12 15:06:04 crc kubenswrapper[4778]: I0312 15:06:04.361011 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c58e0f99-4ece-49ec-9c47-b82055df7d48-kube-api-access-bxxck" (OuterVolumeSpecName: "kube-api-access-bxxck") pod "c58e0f99-4ece-49ec-9c47-b82055df7d48" (UID: "c58e0f99-4ece-49ec-9c47-b82055df7d48"). InnerVolumeSpecName "kube-api-access-bxxck". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:06:04 crc kubenswrapper[4778]: I0312 15:06:04.458215 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxxck\" (UniqueName: \"kubernetes.io/projected/c58e0f99-4ece-49ec-9c47-b82055df7d48-kube-api-access-bxxck\") on node \"crc\" DevicePath \"\"" Mar 12 15:06:04 crc kubenswrapper[4778]: I0312 15:06:04.873799 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555466-7wlhj" event={"ID":"c58e0f99-4ece-49ec-9c47-b82055df7d48","Type":"ContainerDied","Data":"65f40878aba2160868688af649b2ce7d73b76d4df721a61bcd9c29b8d55924bf"} Mar 12 15:06:04 crc kubenswrapper[4778]: I0312 15:06:04.874076 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65f40878aba2160868688af649b2ce7d73b76d4df721a61bcd9c29b8d55924bf" Mar 12 15:06:04 crc kubenswrapper[4778]: I0312 15:06:04.873835 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555466-7wlhj" Mar 12 15:06:05 crc kubenswrapper[4778]: I0312 15:06:05.346943 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555460-6bwr2"] Mar 12 15:06:05 crc kubenswrapper[4778]: I0312 15:06:05.356747 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555460-6bwr2"] Mar 12 15:06:06 crc kubenswrapper[4778]: I0312 15:06:06.264176 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7baca351-722e-4d7e-972e-04513fae6e0b" path="/var/lib/kubelet/pods/7baca351-722e-4d7e-972e-04513fae6e0b/volumes" Mar 12 15:06:07 crc kubenswrapper[4778]: I0312 15:06:07.313974 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-mnjql_14351deb-3286-4464-8eac-6bb116a9ebce/kube-rbac-proxy/0.log" Mar 12 15:06:07 crc kubenswrapper[4778]: I0312 15:06:07.402503 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-mnjql_14351deb-3286-4464-8eac-6bb116a9ebce/controller/0.log" Mar 12 15:06:07 crc kubenswrapper[4778]: I0312 15:06:07.489330 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-bcc4b6f68-x2n7f_2f214887-d638-42fa-aa86-1518cfae600d/frr-k8s-webhook-server/0.log" Mar 12 15:06:07 crc kubenswrapper[4778]: I0312 15:06:07.569820 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zxv5p_b5f035ed-2e64-4000-908f-6d0ecab1fe8d/cp-frr-files/0.log" Mar 12 15:06:07 crc kubenswrapper[4778]: I0312 15:06:07.761910 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zxv5p_b5f035ed-2e64-4000-908f-6d0ecab1fe8d/cp-reloader/0.log" Mar 12 15:06:07 crc kubenswrapper[4778]: I0312 15:06:07.803886 4778 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-zxv5p_b5f035ed-2e64-4000-908f-6d0ecab1fe8d/cp-frr-files/0.log" Mar 12 15:06:07 crc kubenswrapper[4778]: I0312 15:06:07.804567 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zxv5p_b5f035ed-2e64-4000-908f-6d0ecab1fe8d/cp-reloader/0.log" Mar 12 15:06:07 crc kubenswrapper[4778]: I0312 15:06:07.808262 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zxv5p_b5f035ed-2e64-4000-908f-6d0ecab1fe8d/cp-metrics/0.log" Mar 12 15:06:07 crc kubenswrapper[4778]: I0312 15:06:07.983580 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zxv5p_b5f035ed-2e64-4000-908f-6d0ecab1fe8d/cp-frr-files/0.log" Mar 12 15:06:08 crc kubenswrapper[4778]: I0312 15:06:08.031079 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zxv5p_b5f035ed-2e64-4000-908f-6d0ecab1fe8d/cp-reloader/0.log" Mar 12 15:06:08 crc kubenswrapper[4778]: I0312 15:06:08.031445 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zxv5p_b5f035ed-2e64-4000-908f-6d0ecab1fe8d/cp-metrics/0.log" Mar 12 15:06:08 crc kubenswrapper[4778]: I0312 15:06:08.081079 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zxv5p_b5f035ed-2e64-4000-908f-6d0ecab1fe8d/cp-metrics/0.log" Mar 12 15:06:08 crc kubenswrapper[4778]: I0312 15:06:08.292944 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zxv5p_b5f035ed-2e64-4000-908f-6d0ecab1fe8d/cp-reloader/0.log" Mar 12 15:06:08 crc kubenswrapper[4778]: I0312 15:06:08.296527 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zxv5p_b5f035ed-2e64-4000-908f-6d0ecab1fe8d/controller/0.log" Mar 12 15:06:08 crc kubenswrapper[4778]: I0312 15:06:08.325079 4778 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-zxv5p_b5f035ed-2e64-4000-908f-6d0ecab1fe8d/cp-frr-files/0.log" Mar 12 15:06:08 crc kubenswrapper[4778]: I0312 15:06:08.327211 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zxv5p_b5f035ed-2e64-4000-908f-6d0ecab1fe8d/cp-metrics/0.log" Mar 12 15:06:08 crc kubenswrapper[4778]: I0312 15:06:08.490302 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zxv5p_b5f035ed-2e64-4000-908f-6d0ecab1fe8d/kube-rbac-proxy-frr/0.log" Mar 12 15:06:08 crc kubenswrapper[4778]: I0312 15:06:08.505001 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zxv5p_b5f035ed-2e64-4000-908f-6d0ecab1fe8d/kube-rbac-proxy/0.log" Mar 12 15:06:08 crc kubenswrapper[4778]: I0312 15:06:08.534786 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zxv5p_b5f035ed-2e64-4000-908f-6d0ecab1fe8d/frr-metrics/0.log" Mar 12 15:06:08 crc kubenswrapper[4778]: I0312 15:06:08.736759 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zxv5p_b5f035ed-2e64-4000-908f-6d0ecab1fe8d/reloader/0.log" Mar 12 15:06:08 crc kubenswrapper[4778]: I0312 15:06:08.772304 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-54d5c4b6c7-gh4lx_a5a6d344-0a75-422d-acd9-fe8887b03110/manager/0.log" Mar 12 15:06:08 crc kubenswrapper[4778]: I0312 15:06:08.924178 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-68f5db54d6-zstmq_6ac207b6-1710-47af-8fe9-b0c3adbce0ab/webhook-server/0.log" Mar 12 15:06:09 crc kubenswrapper[4778]: I0312 15:06:09.111606 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-k7nvk_f2e1d11e-8f27-498d-8d45-ac0e14a796fe/kube-rbac-proxy/0.log" Mar 12 15:06:09 crc kubenswrapper[4778]: I0312 15:06:09.680663 4778 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_speaker-k7nvk_f2e1d11e-8f27-498d-8d45-ac0e14a796fe/speaker/0.log" Mar 12 15:06:10 crc kubenswrapper[4778]: I0312 15:06:10.709840 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zxv5p_b5f035ed-2e64-4000-908f-6d0ecab1fe8d/frr/0.log" Mar 12 15:06:23 crc kubenswrapper[4778]: I0312 15:06:23.260665 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wxjhd_cb93062b-8387-4eb4-8662-ecaf93146d85/util/0.log" Mar 12 15:06:23 crc kubenswrapper[4778]: I0312 15:06:23.469811 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wxjhd_cb93062b-8387-4eb4-8662-ecaf93146d85/util/0.log" Mar 12 15:06:23 crc kubenswrapper[4778]: I0312 15:06:23.486990 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wxjhd_cb93062b-8387-4eb4-8662-ecaf93146d85/pull/0.log" Mar 12 15:06:23 crc kubenswrapper[4778]: I0312 15:06:23.553979 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wxjhd_cb93062b-8387-4eb4-8662-ecaf93146d85/pull/0.log" Mar 12 15:06:23 crc kubenswrapper[4778]: I0312 15:06:23.770481 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wxjhd_cb93062b-8387-4eb4-8662-ecaf93146d85/pull/0.log" Mar 12 15:06:23 crc kubenswrapper[4778]: I0312 15:06:23.775100 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wxjhd_cb93062b-8387-4eb4-8662-ecaf93146d85/util/0.log" Mar 12 15:06:23 crc kubenswrapper[4778]: I0312 15:06:23.802922 4778 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wxjhd_cb93062b-8387-4eb4-8662-ecaf93146d85/extract/0.log" Mar 12 15:06:23 crc kubenswrapper[4778]: I0312 15:06:23.957649 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rdvw6_9090029d-2f37-457b-8425-3690da177434/util/0.log" Mar 12 15:06:24 crc kubenswrapper[4778]: I0312 15:06:24.140388 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rdvw6_9090029d-2f37-457b-8425-3690da177434/pull/0.log" Mar 12 15:06:24 crc kubenswrapper[4778]: I0312 15:06:24.156467 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rdvw6_9090029d-2f37-457b-8425-3690da177434/util/0.log" Mar 12 15:06:24 crc kubenswrapper[4778]: I0312 15:06:24.175375 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rdvw6_9090029d-2f37-457b-8425-3690da177434/pull/0.log" Mar 12 15:06:24 crc kubenswrapper[4778]: I0312 15:06:24.397088 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rdvw6_9090029d-2f37-457b-8425-3690da177434/pull/0.log" Mar 12 15:06:24 crc kubenswrapper[4778]: I0312 15:06:24.404213 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rdvw6_9090029d-2f37-457b-8425-3690da177434/extract/0.log" Mar 12 15:06:24 crc kubenswrapper[4778]: I0312 15:06:24.408632 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rdvw6_9090029d-2f37-457b-8425-3690da177434/util/0.log" Mar 
12 15:06:24 crc kubenswrapper[4778]: I0312 15:06:24.613624 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fhcbf_b5b1dff9-c32b-4a91-863c-10b5ea4bc4ef/extract-utilities/0.log" Mar 12 15:06:24 crc kubenswrapper[4778]: I0312 15:06:24.777360 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fhcbf_b5b1dff9-c32b-4a91-863c-10b5ea4bc4ef/extract-utilities/0.log" Mar 12 15:06:24 crc kubenswrapper[4778]: I0312 15:06:24.786146 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fhcbf_b5b1dff9-c32b-4a91-863c-10b5ea4bc4ef/extract-content/0.log" Mar 12 15:06:24 crc kubenswrapper[4778]: I0312 15:06:24.788319 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fhcbf_b5b1dff9-c32b-4a91-863c-10b5ea4bc4ef/extract-content/0.log" Mar 12 15:06:25 crc kubenswrapper[4778]: I0312 15:06:25.007179 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fhcbf_b5b1dff9-c32b-4a91-863c-10b5ea4bc4ef/extract-utilities/0.log" Mar 12 15:06:25 crc kubenswrapper[4778]: I0312 15:06:25.021887 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fhcbf_b5b1dff9-c32b-4a91-863c-10b5ea4bc4ef/extract-content/0.log" Mar 12 15:06:25 crc kubenswrapper[4778]: I0312 15:06:25.211183 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bthl5_9098edbc-6c4b-444b-8214-5848756ec94b/extract-utilities/0.log" Mar 12 15:06:25 crc kubenswrapper[4778]: I0312 15:06:25.447473 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bthl5_9098edbc-6c4b-444b-8214-5848756ec94b/extract-utilities/0.log" Mar 12 15:06:25 crc kubenswrapper[4778]: I0312 15:06:25.454714 4778 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-bthl5_9098edbc-6c4b-444b-8214-5848756ec94b/extract-content/0.log" Mar 12 15:06:25 crc kubenswrapper[4778]: I0312 15:06:25.526116 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bthl5_9098edbc-6c4b-444b-8214-5848756ec94b/extract-content/0.log" Mar 12 15:06:25 crc kubenswrapper[4778]: I0312 15:06:25.734640 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bthl5_9098edbc-6c4b-444b-8214-5848756ec94b/extract-content/0.log" Mar 12 15:06:25 crc kubenswrapper[4778]: I0312 15:06:25.765702 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bthl5_9098edbc-6c4b-444b-8214-5848756ec94b/extract-utilities/0.log" Mar 12 15:06:25 crc kubenswrapper[4778]: I0312 15:06:25.807917 4778 scope.go:117] "RemoveContainer" containerID="64150eeb0f1f171e7d11ada7712192a8c533967a0e598d41c325a6422f027d7a" Mar 12 15:06:26 crc kubenswrapper[4778]: I0312 15:06:26.006474 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-hvmk8_3b062c23-5acd-430d-aa6c-24b48a725594/marketplace-operator/0.log" Mar 12 15:06:26 crc kubenswrapper[4778]: I0312 15:06:26.223376 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fhcbf_b5b1dff9-c32b-4a91-863c-10b5ea4bc4ef/registry-server/0.log" Mar 12 15:06:26 crc kubenswrapper[4778]: I0312 15:06:26.269154 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-k57lm_1d67fa18-822d-4685-a7a1-5b8b8c39c96a/extract-utilities/0.log" Mar 12 15:06:26 crc kubenswrapper[4778]: I0312 15:06:26.342025 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bthl5_9098edbc-6c4b-444b-8214-5848756ec94b/registry-server/0.log" Mar 12 15:06:26 crc 
kubenswrapper[4778]: I0312 15:06:26.439733 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-k57lm_1d67fa18-822d-4685-a7a1-5b8b8c39c96a/extract-content/0.log" Mar 12 15:06:26 crc kubenswrapper[4778]: I0312 15:06:26.479463 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-k57lm_1d67fa18-822d-4685-a7a1-5b8b8c39c96a/extract-utilities/0.log" Mar 12 15:06:26 crc kubenswrapper[4778]: I0312 15:06:26.489383 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-k57lm_1d67fa18-822d-4685-a7a1-5b8b8c39c96a/extract-content/0.log" Mar 12 15:06:26 crc kubenswrapper[4778]: I0312 15:06:26.652984 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-k57lm_1d67fa18-822d-4685-a7a1-5b8b8c39c96a/extract-utilities/0.log" Mar 12 15:06:26 crc kubenswrapper[4778]: I0312 15:06:26.680415 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-k57lm_1d67fa18-822d-4685-a7a1-5b8b8c39c96a/extract-content/0.log" Mar 12 15:06:26 crc kubenswrapper[4778]: I0312 15:06:26.890004 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-k57lm_1d67fa18-822d-4685-a7a1-5b8b8c39c96a/registry-server/0.log" Mar 12 15:06:26 crc kubenswrapper[4778]: I0312 15:06:26.910405 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lvp8p_ca67e14c-855d-473a-99b0-fe9dabb57916/extract-utilities/0.log" Mar 12 15:06:27 crc kubenswrapper[4778]: I0312 15:06:27.145789 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lvp8p_ca67e14c-855d-473a-99b0-fe9dabb57916/extract-utilities/0.log" Mar 12 15:06:27 crc kubenswrapper[4778]: I0312 15:06:27.145807 4778 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-lvp8p_ca67e14c-855d-473a-99b0-fe9dabb57916/extract-content/0.log" Mar 12 15:06:27 crc kubenswrapper[4778]: I0312 15:06:27.146855 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lvp8p_ca67e14c-855d-473a-99b0-fe9dabb57916/extract-content/0.log" Mar 12 15:06:27 crc kubenswrapper[4778]: I0312 15:06:27.336107 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lvp8p_ca67e14c-855d-473a-99b0-fe9dabb57916/extract-content/0.log" Mar 12 15:06:27 crc kubenswrapper[4778]: I0312 15:06:27.380165 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lvp8p_ca67e14c-855d-473a-99b0-fe9dabb57916/extract-utilities/0.log" Mar 12 15:06:27 crc kubenswrapper[4778]: I0312 15:06:27.453480 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lvp8p_ca67e14c-855d-473a-99b0-fe9dabb57916/registry-server/0.log" Mar 12 15:06:45 crc kubenswrapper[4778]: E0312 15:06:45.498228 4778 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.32:54840->38.129.56.32:35979: write tcp 38.129.56.32:54840->38.129.56.32:35979: write: broken pipe Mar 12 15:07:28 crc kubenswrapper[4778]: I0312 15:07:28.558393 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 15:07:28 crc kubenswrapper[4778]: I0312 15:07:28.559100 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 15:07:58 crc kubenswrapper[4778]: I0312 15:07:58.557799 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 15:07:58 crc kubenswrapper[4778]: I0312 15:07:58.558376 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 15:08:00 crc kubenswrapper[4778]: I0312 15:08:00.166920 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555468-tpk68"] Mar 12 15:08:00 crc kubenswrapper[4778]: E0312 15:08:00.167930 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c58e0f99-4ece-49ec-9c47-b82055df7d48" containerName="oc" Mar 12 15:08:00 crc kubenswrapper[4778]: I0312 15:08:00.167962 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="c58e0f99-4ece-49ec-9c47-b82055df7d48" containerName="oc" Mar 12 15:08:00 crc kubenswrapper[4778]: I0312 15:08:00.168392 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="c58e0f99-4ece-49ec-9c47-b82055df7d48" containerName="oc" Mar 12 15:08:00 crc kubenswrapper[4778]: I0312 15:08:00.169539 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555468-tpk68" Mar 12 15:08:00 crc kubenswrapper[4778]: I0312 15:08:00.172244 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 15:08:00 crc kubenswrapper[4778]: I0312 15:08:00.172674 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 15:08:00 crc kubenswrapper[4778]: I0312 15:08:00.174063 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 15:08:00 crc kubenswrapper[4778]: I0312 15:08:00.186856 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555468-tpk68"] Mar 12 15:08:00 crc kubenswrapper[4778]: I0312 15:08:00.267213 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj55d\" (UniqueName: \"kubernetes.io/projected/1760b72d-ab0b-489f-b263-7279ce51dc5f-kube-api-access-zj55d\") pod \"auto-csr-approver-29555468-tpk68\" (UID: \"1760b72d-ab0b-489f-b263-7279ce51dc5f\") " pod="openshift-infra/auto-csr-approver-29555468-tpk68" Mar 12 15:08:00 crc kubenswrapper[4778]: I0312 15:08:00.368682 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zj55d\" (UniqueName: \"kubernetes.io/projected/1760b72d-ab0b-489f-b263-7279ce51dc5f-kube-api-access-zj55d\") pod \"auto-csr-approver-29555468-tpk68\" (UID: \"1760b72d-ab0b-489f-b263-7279ce51dc5f\") " pod="openshift-infra/auto-csr-approver-29555468-tpk68" Mar 12 15:08:00 crc kubenswrapper[4778]: I0312 15:08:00.394780 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zj55d\" (UniqueName: \"kubernetes.io/projected/1760b72d-ab0b-489f-b263-7279ce51dc5f-kube-api-access-zj55d\") pod \"auto-csr-approver-29555468-tpk68\" (UID: \"1760b72d-ab0b-489f-b263-7279ce51dc5f\") " 
pod="openshift-infra/auto-csr-approver-29555468-tpk68" Mar 12 15:08:00 crc kubenswrapper[4778]: I0312 15:08:00.492389 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555468-tpk68" Mar 12 15:08:01 crc kubenswrapper[4778]: I0312 15:08:01.011353 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555468-tpk68"] Mar 12 15:08:02 crc kubenswrapper[4778]: I0312 15:08:02.041233 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555468-tpk68" event={"ID":"1760b72d-ab0b-489f-b263-7279ce51dc5f","Type":"ContainerStarted","Data":"c2f2c23d12c7f8a6128ee9c89c036a933a8cc16f980be4d5025a97fae92ade2a"} Mar 12 15:08:03 crc kubenswrapper[4778]: I0312 15:08:03.053710 4778 generic.go:334] "Generic (PLEG): container finished" podID="1760b72d-ab0b-489f-b263-7279ce51dc5f" containerID="98229a921540253e11a1f90e715794eabbd7a1c19952afa6969c95cd62f6a069" exitCode=0 Mar 12 15:08:03 crc kubenswrapper[4778]: I0312 15:08:03.054152 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555468-tpk68" event={"ID":"1760b72d-ab0b-489f-b263-7279ce51dc5f","Type":"ContainerDied","Data":"98229a921540253e11a1f90e715794eabbd7a1c19952afa6969c95cd62f6a069"} Mar 12 15:08:04 crc kubenswrapper[4778]: I0312 15:08:04.384518 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555468-tpk68" Mar 12 15:08:04 crc kubenswrapper[4778]: I0312 15:08:04.556379 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zj55d\" (UniqueName: \"kubernetes.io/projected/1760b72d-ab0b-489f-b263-7279ce51dc5f-kube-api-access-zj55d\") pod \"1760b72d-ab0b-489f-b263-7279ce51dc5f\" (UID: \"1760b72d-ab0b-489f-b263-7279ce51dc5f\") " Mar 12 15:08:04 crc kubenswrapper[4778]: I0312 15:08:04.564540 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1760b72d-ab0b-489f-b263-7279ce51dc5f-kube-api-access-zj55d" (OuterVolumeSpecName: "kube-api-access-zj55d") pod "1760b72d-ab0b-489f-b263-7279ce51dc5f" (UID: "1760b72d-ab0b-489f-b263-7279ce51dc5f"). InnerVolumeSpecName "kube-api-access-zj55d". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:08:04 crc kubenswrapper[4778]: I0312 15:08:04.658372 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zj55d\" (UniqueName: \"kubernetes.io/projected/1760b72d-ab0b-489f-b263-7279ce51dc5f-kube-api-access-zj55d\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:05 crc kubenswrapper[4778]: I0312 15:08:05.082960 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555468-tpk68" event={"ID":"1760b72d-ab0b-489f-b263-7279ce51dc5f","Type":"ContainerDied","Data":"c2f2c23d12c7f8a6128ee9c89c036a933a8cc16f980be4d5025a97fae92ade2a"} Mar 12 15:08:05 crc kubenswrapper[4778]: I0312 15:08:05.083393 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2f2c23d12c7f8a6128ee9c89c036a933a8cc16f980be4d5025a97fae92ade2a" Mar 12 15:08:05 crc kubenswrapper[4778]: I0312 15:08:05.083052 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555468-tpk68" Mar 12 15:08:05 crc kubenswrapper[4778]: I0312 15:08:05.455275 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555462-dvtsm"] Mar 12 15:08:05 crc kubenswrapper[4778]: I0312 15:08:05.464562 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555462-dvtsm"] Mar 12 15:08:06 crc kubenswrapper[4778]: I0312 15:08:06.268330 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbb23378-6e3e-4c63-919d-47ce1d17dd7b" path="/var/lib/kubelet/pods/cbb23378-6e3e-4c63-919d-47ce1d17dd7b/volumes" Mar 12 15:08:07 crc kubenswrapper[4778]: I0312 15:08:07.738801 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qcswd"] Mar 12 15:08:07 crc kubenswrapper[4778]: E0312 15:08:07.739628 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1760b72d-ab0b-489f-b263-7279ce51dc5f" containerName="oc" Mar 12 15:08:07 crc kubenswrapper[4778]: I0312 15:08:07.739652 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="1760b72d-ab0b-489f-b263-7279ce51dc5f" containerName="oc" Mar 12 15:08:07 crc kubenswrapper[4778]: I0312 15:08:07.739975 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="1760b72d-ab0b-489f-b263-7279ce51dc5f" containerName="oc" Mar 12 15:08:07 crc kubenswrapper[4778]: I0312 15:08:07.742500 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qcswd" Mar 12 15:08:07 crc kubenswrapper[4778]: I0312 15:08:07.773472 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qcswd"] Mar 12 15:08:07 crc kubenswrapper[4778]: I0312 15:08:07.838924 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6cfr\" (UniqueName: \"kubernetes.io/projected/193a1938-83bf-48dc-abd9-ecc2f202db8d-kube-api-access-g6cfr\") pod \"certified-operators-qcswd\" (UID: \"193a1938-83bf-48dc-abd9-ecc2f202db8d\") " pod="openshift-marketplace/certified-operators-qcswd" Mar 12 15:08:07 crc kubenswrapper[4778]: I0312 15:08:07.839012 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/193a1938-83bf-48dc-abd9-ecc2f202db8d-catalog-content\") pod \"certified-operators-qcswd\" (UID: \"193a1938-83bf-48dc-abd9-ecc2f202db8d\") " pod="openshift-marketplace/certified-operators-qcswd" Mar 12 15:08:07 crc kubenswrapper[4778]: I0312 15:08:07.839143 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/193a1938-83bf-48dc-abd9-ecc2f202db8d-utilities\") pod \"certified-operators-qcswd\" (UID: \"193a1938-83bf-48dc-abd9-ecc2f202db8d\") " pod="openshift-marketplace/certified-operators-qcswd" Mar 12 15:08:07 crc kubenswrapper[4778]: I0312 15:08:07.941042 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/193a1938-83bf-48dc-abd9-ecc2f202db8d-utilities\") pod \"certified-operators-qcswd\" (UID: \"193a1938-83bf-48dc-abd9-ecc2f202db8d\") " pod="openshift-marketplace/certified-operators-qcswd" Mar 12 15:08:07 crc kubenswrapper[4778]: I0312 15:08:07.941151 4778 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-g6cfr\" (UniqueName: \"kubernetes.io/projected/193a1938-83bf-48dc-abd9-ecc2f202db8d-kube-api-access-g6cfr\") pod \"certified-operators-qcswd\" (UID: \"193a1938-83bf-48dc-abd9-ecc2f202db8d\") " pod="openshift-marketplace/certified-operators-qcswd" Mar 12 15:08:07 crc kubenswrapper[4778]: I0312 15:08:07.941231 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/193a1938-83bf-48dc-abd9-ecc2f202db8d-catalog-content\") pod \"certified-operators-qcswd\" (UID: \"193a1938-83bf-48dc-abd9-ecc2f202db8d\") " pod="openshift-marketplace/certified-operators-qcswd" Mar 12 15:08:07 crc kubenswrapper[4778]: I0312 15:08:07.941632 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/193a1938-83bf-48dc-abd9-ecc2f202db8d-utilities\") pod \"certified-operators-qcswd\" (UID: \"193a1938-83bf-48dc-abd9-ecc2f202db8d\") " pod="openshift-marketplace/certified-operators-qcswd" Mar 12 15:08:07 crc kubenswrapper[4778]: I0312 15:08:07.941722 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/193a1938-83bf-48dc-abd9-ecc2f202db8d-catalog-content\") pod \"certified-operators-qcswd\" (UID: \"193a1938-83bf-48dc-abd9-ecc2f202db8d\") " pod="openshift-marketplace/certified-operators-qcswd" Mar 12 15:08:07 crc kubenswrapper[4778]: I0312 15:08:07.960489 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6cfr\" (UniqueName: \"kubernetes.io/projected/193a1938-83bf-48dc-abd9-ecc2f202db8d-kube-api-access-g6cfr\") pod \"certified-operators-qcswd\" (UID: \"193a1938-83bf-48dc-abd9-ecc2f202db8d\") " pod="openshift-marketplace/certified-operators-qcswd" Mar 12 15:08:08 crc kubenswrapper[4778]: I0312 15:08:08.083428 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qcswd" Mar 12 15:08:08 crc kubenswrapper[4778]: I0312 15:08:08.371530 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qcswd"] Mar 12 15:08:08 crc kubenswrapper[4778]: W0312 15:08:08.374542 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod193a1938_83bf_48dc_abd9_ecc2f202db8d.slice/crio-f6806f156fc16e1a064496e9b8083fb2d3b7f16ce78f083b0ab2e5de6a315557 WatchSource:0}: Error finding container f6806f156fc16e1a064496e9b8083fb2d3b7f16ce78f083b0ab2e5de6a315557: Status 404 returned error can't find the container with id f6806f156fc16e1a064496e9b8083fb2d3b7f16ce78f083b0ab2e5de6a315557 Mar 12 15:08:09 crc kubenswrapper[4778]: I0312 15:08:09.130393 4778 generic.go:334] "Generic (PLEG): container finished" podID="193a1938-83bf-48dc-abd9-ecc2f202db8d" containerID="33ad6411c9f8c1d5d5bb41623664b1079e456cd6577aad63ed94d81ea4af1da2" exitCode=0 Mar 12 15:08:09 crc kubenswrapper[4778]: I0312 15:08:09.130712 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qcswd" event={"ID":"193a1938-83bf-48dc-abd9-ecc2f202db8d","Type":"ContainerDied","Data":"33ad6411c9f8c1d5d5bb41623664b1079e456cd6577aad63ed94d81ea4af1da2"} Mar 12 15:08:09 crc kubenswrapper[4778]: I0312 15:08:09.130817 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qcswd" event={"ID":"193a1938-83bf-48dc-abd9-ecc2f202db8d","Type":"ContainerStarted","Data":"f6806f156fc16e1a064496e9b8083fb2d3b7f16ce78f083b0ab2e5de6a315557"} Mar 12 15:08:11 crc kubenswrapper[4778]: I0312 15:08:11.158036 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qcswd" 
event={"ID":"193a1938-83bf-48dc-abd9-ecc2f202db8d","Type":"ContainerStarted","Data":"c5a282a10d6fde57de226d764c03613a263de2ccb99a3386ebf22eb1e1993b4a"} Mar 12 15:08:12 crc kubenswrapper[4778]: I0312 15:08:12.169671 4778 generic.go:334] "Generic (PLEG): container finished" podID="193a1938-83bf-48dc-abd9-ecc2f202db8d" containerID="c5a282a10d6fde57de226d764c03613a263de2ccb99a3386ebf22eb1e1993b4a" exitCode=0 Mar 12 15:08:12 crc kubenswrapper[4778]: I0312 15:08:12.169716 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qcswd" event={"ID":"193a1938-83bf-48dc-abd9-ecc2f202db8d","Type":"ContainerDied","Data":"c5a282a10d6fde57de226d764c03613a263de2ccb99a3386ebf22eb1e1993b4a"} Mar 12 15:08:13 crc kubenswrapper[4778]: I0312 15:08:13.180408 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qcswd" event={"ID":"193a1938-83bf-48dc-abd9-ecc2f202db8d","Type":"ContainerStarted","Data":"5f7899fb9193c2e9377a242c9cfedeeacd5e89ca82449d566c5a464428b75dc6"} Mar 12 15:08:13 crc kubenswrapper[4778]: I0312 15:08:13.211069 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qcswd" podStartSLOduration=2.666458982 podStartE2EDuration="6.211049135s" podCreationTimestamp="2026-03-12 15:08:07 +0000 UTC" firstStartedPulling="2026-03-12 15:08:09.132985257 +0000 UTC m=+7107.581680653" lastFinishedPulling="2026-03-12 15:08:12.67757538 +0000 UTC m=+7111.126270806" observedRunningTime="2026-03-12 15:08:13.206875716 +0000 UTC m=+7111.655571112" watchObservedRunningTime="2026-03-12 15:08:13.211049135 +0000 UTC m=+7111.659744541" Mar 12 15:08:18 crc kubenswrapper[4778]: I0312 15:08:18.084530 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qcswd" Mar 12 15:08:18 crc kubenswrapper[4778]: I0312 15:08:18.086436 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-qcswd" Mar 12 15:08:18 crc kubenswrapper[4778]: I0312 15:08:18.148813 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qcswd" Mar 12 15:08:18 crc kubenswrapper[4778]: I0312 15:08:18.282960 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qcswd" Mar 12 15:08:18 crc kubenswrapper[4778]: I0312 15:08:18.396459 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qcswd"] Mar 12 15:08:20 crc kubenswrapper[4778]: I0312 15:08:20.255102 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qcswd" podUID="193a1938-83bf-48dc-abd9-ecc2f202db8d" containerName="registry-server" containerID="cri-o://5f7899fb9193c2e9377a242c9cfedeeacd5e89ca82449d566c5a464428b75dc6" gracePeriod=2 Mar 12 15:08:20 crc kubenswrapper[4778]: I0312 15:08:20.770272 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qcswd" Mar 12 15:08:20 crc kubenswrapper[4778]: I0312 15:08:20.917938 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6cfr\" (UniqueName: \"kubernetes.io/projected/193a1938-83bf-48dc-abd9-ecc2f202db8d-kube-api-access-g6cfr\") pod \"193a1938-83bf-48dc-abd9-ecc2f202db8d\" (UID: \"193a1938-83bf-48dc-abd9-ecc2f202db8d\") " Mar 12 15:08:20 crc kubenswrapper[4778]: I0312 15:08:20.918280 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/193a1938-83bf-48dc-abd9-ecc2f202db8d-utilities\") pod \"193a1938-83bf-48dc-abd9-ecc2f202db8d\" (UID: \"193a1938-83bf-48dc-abd9-ecc2f202db8d\") " Mar 12 15:08:20 crc kubenswrapper[4778]: I0312 15:08:20.918439 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/193a1938-83bf-48dc-abd9-ecc2f202db8d-catalog-content\") pod \"193a1938-83bf-48dc-abd9-ecc2f202db8d\" (UID: \"193a1938-83bf-48dc-abd9-ecc2f202db8d\") " Mar 12 15:08:20 crc kubenswrapper[4778]: I0312 15:08:20.922402 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/193a1938-83bf-48dc-abd9-ecc2f202db8d-utilities" (OuterVolumeSpecName: "utilities") pod "193a1938-83bf-48dc-abd9-ecc2f202db8d" (UID: "193a1938-83bf-48dc-abd9-ecc2f202db8d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:08:20 crc kubenswrapper[4778]: I0312 15:08:20.926710 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/193a1938-83bf-48dc-abd9-ecc2f202db8d-kube-api-access-g6cfr" (OuterVolumeSpecName: "kube-api-access-g6cfr") pod "193a1938-83bf-48dc-abd9-ecc2f202db8d" (UID: "193a1938-83bf-48dc-abd9-ecc2f202db8d"). InnerVolumeSpecName "kube-api-access-g6cfr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:08:21 crc kubenswrapper[4778]: I0312 15:08:21.021954 4778 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/193a1938-83bf-48dc-abd9-ecc2f202db8d-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:21 crc kubenswrapper[4778]: I0312 15:08:21.022318 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6cfr\" (UniqueName: \"kubernetes.io/projected/193a1938-83bf-48dc-abd9-ecc2f202db8d-kube-api-access-g6cfr\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:21 crc kubenswrapper[4778]: I0312 15:08:21.267206 4778 generic.go:334] "Generic (PLEG): container finished" podID="193a1938-83bf-48dc-abd9-ecc2f202db8d" containerID="5f7899fb9193c2e9377a242c9cfedeeacd5e89ca82449d566c5a464428b75dc6" exitCode=0 Mar 12 15:08:21 crc kubenswrapper[4778]: I0312 15:08:21.267269 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qcswd" Mar 12 15:08:21 crc kubenswrapper[4778]: I0312 15:08:21.267292 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qcswd" event={"ID":"193a1938-83bf-48dc-abd9-ecc2f202db8d","Type":"ContainerDied","Data":"5f7899fb9193c2e9377a242c9cfedeeacd5e89ca82449d566c5a464428b75dc6"} Mar 12 15:08:21 crc kubenswrapper[4778]: I0312 15:08:21.268597 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qcswd" event={"ID":"193a1938-83bf-48dc-abd9-ecc2f202db8d","Type":"ContainerDied","Data":"f6806f156fc16e1a064496e9b8083fb2d3b7f16ce78f083b0ab2e5de6a315557"} Mar 12 15:08:21 crc kubenswrapper[4778]: I0312 15:08:21.268619 4778 scope.go:117] "RemoveContainer" containerID="5f7899fb9193c2e9377a242c9cfedeeacd5e89ca82449d566c5a464428b75dc6" Mar 12 15:08:21 crc kubenswrapper[4778]: I0312 15:08:21.320092 4778 scope.go:117] "RemoveContainer" 
containerID="c5a282a10d6fde57de226d764c03613a263de2ccb99a3386ebf22eb1e1993b4a" Mar 12 15:08:21 crc kubenswrapper[4778]: I0312 15:08:21.349001 4778 scope.go:117] "RemoveContainer" containerID="33ad6411c9f8c1d5d5bb41623664b1079e456cd6577aad63ed94d81ea4af1da2" Mar 12 15:08:21 crc kubenswrapper[4778]: I0312 15:08:21.413876 4778 scope.go:117] "RemoveContainer" containerID="5f7899fb9193c2e9377a242c9cfedeeacd5e89ca82449d566c5a464428b75dc6" Mar 12 15:08:21 crc kubenswrapper[4778]: E0312 15:08:21.414212 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f7899fb9193c2e9377a242c9cfedeeacd5e89ca82449d566c5a464428b75dc6\": container with ID starting with 5f7899fb9193c2e9377a242c9cfedeeacd5e89ca82449d566c5a464428b75dc6 not found: ID does not exist" containerID="5f7899fb9193c2e9377a242c9cfedeeacd5e89ca82449d566c5a464428b75dc6" Mar 12 15:08:21 crc kubenswrapper[4778]: I0312 15:08:21.414245 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f7899fb9193c2e9377a242c9cfedeeacd5e89ca82449d566c5a464428b75dc6"} err="failed to get container status \"5f7899fb9193c2e9377a242c9cfedeeacd5e89ca82449d566c5a464428b75dc6\": rpc error: code = NotFound desc = could not find container \"5f7899fb9193c2e9377a242c9cfedeeacd5e89ca82449d566c5a464428b75dc6\": container with ID starting with 5f7899fb9193c2e9377a242c9cfedeeacd5e89ca82449d566c5a464428b75dc6 not found: ID does not exist" Mar 12 15:08:21 crc kubenswrapper[4778]: I0312 15:08:21.414269 4778 scope.go:117] "RemoveContainer" containerID="c5a282a10d6fde57de226d764c03613a263de2ccb99a3386ebf22eb1e1993b4a" Mar 12 15:08:21 crc kubenswrapper[4778]: E0312 15:08:21.414679 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5a282a10d6fde57de226d764c03613a263de2ccb99a3386ebf22eb1e1993b4a\": container with ID starting with 
c5a282a10d6fde57de226d764c03613a263de2ccb99a3386ebf22eb1e1993b4a not found: ID does not exist" containerID="c5a282a10d6fde57de226d764c03613a263de2ccb99a3386ebf22eb1e1993b4a" Mar 12 15:08:21 crc kubenswrapper[4778]: I0312 15:08:21.414699 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5a282a10d6fde57de226d764c03613a263de2ccb99a3386ebf22eb1e1993b4a"} err="failed to get container status \"c5a282a10d6fde57de226d764c03613a263de2ccb99a3386ebf22eb1e1993b4a\": rpc error: code = NotFound desc = could not find container \"c5a282a10d6fde57de226d764c03613a263de2ccb99a3386ebf22eb1e1993b4a\": container with ID starting with c5a282a10d6fde57de226d764c03613a263de2ccb99a3386ebf22eb1e1993b4a not found: ID does not exist" Mar 12 15:08:21 crc kubenswrapper[4778]: I0312 15:08:21.414712 4778 scope.go:117] "RemoveContainer" containerID="33ad6411c9f8c1d5d5bb41623664b1079e456cd6577aad63ed94d81ea4af1da2" Mar 12 15:08:21 crc kubenswrapper[4778]: E0312 15:08:21.415167 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33ad6411c9f8c1d5d5bb41623664b1079e456cd6577aad63ed94d81ea4af1da2\": container with ID starting with 33ad6411c9f8c1d5d5bb41623664b1079e456cd6577aad63ed94d81ea4af1da2 not found: ID does not exist" containerID="33ad6411c9f8c1d5d5bb41623664b1079e456cd6577aad63ed94d81ea4af1da2" Mar 12 15:08:21 crc kubenswrapper[4778]: I0312 15:08:21.415209 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33ad6411c9f8c1d5d5bb41623664b1079e456cd6577aad63ed94d81ea4af1da2"} err="failed to get container status \"33ad6411c9f8c1d5d5bb41623664b1079e456cd6577aad63ed94d81ea4af1da2\": rpc error: code = NotFound desc = could not find container \"33ad6411c9f8c1d5d5bb41623664b1079e456cd6577aad63ed94d81ea4af1da2\": container with ID starting with 33ad6411c9f8c1d5d5bb41623664b1079e456cd6577aad63ed94d81ea4af1da2 not found: ID does not 
exist" Mar 12 15:08:22 crc kubenswrapper[4778]: I0312 15:08:22.494591 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/193a1938-83bf-48dc-abd9-ecc2f202db8d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "193a1938-83bf-48dc-abd9-ecc2f202db8d" (UID: "193a1938-83bf-48dc-abd9-ecc2f202db8d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:08:22 crc kubenswrapper[4778]: I0312 15:08:22.557683 4778 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/193a1938-83bf-48dc-abd9-ecc2f202db8d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 15:08:22 crc kubenswrapper[4778]: I0312 15:08:22.808524 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qcswd"] Mar 12 15:08:22 crc kubenswrapper[4778]: I0312 15:08:22.818282 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qcswd"] Mar 12 15:08:24 crc kubenswrapper[4778]: I0312 15:08:24.275877 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="193a1938-83bf-48dc-abd9-ecc2f202db8d" path="/var/lib/kubelet/pods/193a1938-83bf-48dc-abd9-ecc2f202db8d/volumes" Mar 12 15:08:25 crc kubenswrapper[4778]: I0312 15:08:25.939899 4778 scope.go:117] "RemoveContainer" containerID="bc1c69d732ac8380ce4ad84b76897a91373ec3edde2343f57d27f4105f4594eb" Mar 12 15:08:26 crc kubenswrapper[4778]: I0312 15:08:26.005434 4778 scope.go:117] "RemoveContainer" containerID="32e69f17a15da926e453ec4388e2482d274516709eb37f6124496feae6a6509f" Mar 12 15:08:28 crc kubenswrapper[4778]: I0312 15:08:28.557793 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Mar 12 15:08:28 crc kubenswrapper[4778]: I0312 15:08:28.558386 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 15:08:28 crc kubenswrapper[4778]: I0312 15:08:28.558440 4778 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" Mar 12 15:08:28 crc kubenswrapper[4778]: I0312 15:08:28.559240 4778 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0d1560644663063f80ae67feb786777f7400aa5bf8ea2f2418887c809789d930"} pod="openshift-machine-config-operator/machine-config-daemon-2qx88" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 15:08:28 crc kubenswrapper[4778]: I0312 15:08:28.559298 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" containerID="cri-o://0d1560644663063f80ae67feb786777f7400aa5bf8ea2f2418887c809789d930" gracePeriod=600 Mar 12 15:08:28 crc kubenswrapper[4778]: E0312 15:08:28.695040 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 15:08:29 crc kubenswrapper[4778]: 
I0312 15:08:29.353265 4778 generic.go:334] "Generic (PLEG): container finished" podID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerID="0d1560644663063f80ae67feb786777f7400aa5bf8ea2f2418887c809789d930" exitCode=0 Mar 12 15:08:29 crc kubenswrapper[4778]: I0312 15:08:29.353327 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" event={"ID":"24438fc6-dab0-4a9e-8b97-2532da76d9cd","Type":"ContainerDied","Data":"0d1560644663063f80ae67feb786777f7400aa5bf8ea2f2418887c809789d930"} Mar 12 15:08:29 crc kubenswrapper[4778]: I0312 15:08:29.353366 4778 scope.go:117] "RemoveContainer" containerID="009e612c3693545ba4a1988aa00993d05612427ec6eb485b08b455b35968f1ab" Mar 12 15:08:29 crc kubenswrapper[4778]: I0312 15:08:29.354135 4778 scope.go:117] "RemoveContainer" containerID="0d1560644663063f80ae67feb786777f7400aa5bf8ea2f2418887c809789d930" Mar 12 15:08:29 crc kubenswrapper[4778]: E0312 15:08:29.354569 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 15:08:41 crc kubenswrapper[4778]: I0312 15:08:41.253895 4778 scope.go:117] "RemoveContainer" containerID="0d1560644663063f80ae67feb786777f7400aa5bf8ea2f2418887c809789d930" Mar 12 15:08:41 crc kubenswrapper[4778]: E0312 15:08:41.254601 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 15:08:53 crc kubenswrapper[4778]: I0312 15:08:53.254221 4778 scope.go:117] "RemoveContainer" containerID="0d1560644663063f80ae67feb786777f7400aa5bf8ea2f2418887c809789d930" Mar 12 15:08:53 crc kubenswrapper[4778]: E0312 15:08:53.254832 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 15:09:05 crc kubenswrapper[4778]: I0312 15:09:05.254480 4778 scope.go:117] "RemoveContainer" containerID="0d1560644663063f80ae67feb786777f7400aa5bf8ea2f2418887c809789d930" Mar 12 15:09:05 crc kubenswrapper[4778]: E0312 15:09:05.255175 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 15:09:12 crc kubenswrapper[4778]: I0312 15:09:12.799107 4778 generic.go:334] "Generic (PLEG): container finished" podID="dd2baa0b-6680-41af-8231-e30368cb0090" containerID="099ea78abdd9e205689d8ecdedd4eb5e53feb9c31e850ed5f759eb9dcba848eb" exitCode=0 Mar 12 15:09:12 crc kubenswrapper[4778]: I0312 15:09:12.799174 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rkpvq/must-gather-6d9ls" 
event={"ID":"dd2baa0b-6680-41af-8231-e30368cb0090","Type":"ContainerDied","Data":"099ea78abdd9e205689d8ecdedd4eb5e53feb9c31e850ed5f759eb9dcba848eb"} Mar 12 15:09:12 crc kubenswrapper[4778]: I0312 15:09:12.800326 4778 scope.go:117] "RemoveContainer" containerID="099ea78abdd9e205689d8ecdedd4eb5e53feb9c31e850ed5f759eb9dcba848eb" Mar 12 15:09:13 crc kubenswrapper[4778]: I0312 15:09:13.751703 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rkpvq_must-gather-6d9ls_dd2baa0b-6680-41af-8231-e30368cb0090/gather/0.log" Mar 12 15:09:20 crc kubenswrapper[4778]: I0312 15:09:20.254734 4778 scope.go:117] "RemoveContainer" containerID="0d1560644663063f80ae67feb786777f7400aa5bf8ea2f2418887c809789d930" Mar 12 15:09:20 crc kubenswrapper[4778]: E0312 15:09:20.255626 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 15:09:23 crc kubenswrapper[4778]: I0312 15:09:23.239094 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rkpvq/must-gather-6d9ls"] Mar 12 15:09:23 crc kubenswrapper[4778]: I0312 15:09:23.239675 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-rkpvq/must-gather-6d9ls" podUID="dd2baa0b-6680-41af-8231-e30368cb0090" containerName="copy" containerID="cri-o://d75c39511d2814b29f7a8f3d56db17a77d40e26925d76443f57f610bafcb652b" gracePeriod=2 Mar 12 15:09:23 crc kubenswrapper[4778]: I0312 15:09:23.257640 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rkpvq/must-gather-6d9ls"] Mar 12 15:09:23 crc kubenswrapper[4778]: I0312 15:09:23.715857 4778 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rkpvq_must-gather-6d9ls_dd2baa0b-6680-41af-8231-e30368cb0090/copy/0.log" Mar 12 15:09:23 crc kubenswrapper[4778]: I0312 15:09:23.716704 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rkpvq/must-gather-6d9ls" Mar 12 15:09:23 crc kubenswrapper[4778]: I0312 15:09:23.806681 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/dd2baa0b-6680-41af-8231-e30368cb0090-must-gather-output\") pod \"dd2baa0b-6680-41af-8231-e30368cb0090\" (UID: \"dd2baa0b-6680-41af-8231-e30368cb0090\") " Mar 12 15:09:23 crc kubenswrapper[4778]: I0312 15:09:23.806761 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twzlp\" (UniqueName: \"kubernetes.io/projected/dd2baa0b-6680-41af-8231-e30368cb0090-kube-api-access-twzlp\") pod \"dd2baa0b-6680-41af-8231-e30368cb0090\" (UID: \"dd2baa0b-6680-41af-8231-e30368cb0090\") " Mar 12 15:09:23 crc kubenswrapper[4778]: I0312 15:09:23.813132 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd2baa0b-6680-41af-8231-e30368cb0090-kube-api-access-twzlp" (OuterVolumeSpecName: "kube-api-access-twzlp") pod "dd2baa0b-6680-41af-8231-e30368cb0090" (UID: "dd2baa0b-6680-41af-8231-e30368cb0090"). InnerVolumeSpecName "kube-api-access-twzlp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:09:23 crc kubenswrapper[4778]: I0312 15:09:23.909665 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twzlp\" (UniqueName: \"kubernetes.io/projected/dd2baa0b-6680-41af-8231-e30368cb0090-kube-api-access-twzlp\") on node \"crc\" DevicePath \"\"" Mar 12 15:09:23 crc kubenswrapper[4778]: I0312 15:09:23.916844 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rkpvq_must-gather-6d9ls_dd2baa0b-6680-41af-8231-e30368cb0090/copy/0.log" Mar 12 15:09:23 crc kubenswrapper[4778]: I0312 15:09:23.917161 4778 generic.go:334] "Generic (PLEG): container finished" podID="dd2baa0b-6680-41af-8231-e30368cb0090" containerID="d75c39511d2814b29f7a8f3d56db17a77d40e26925d76443f57f610bafcb652b" exitCode=143 Mar 12 15:09:23 crc kubenswrapper[4778]: I0312 15:09:23.917233 4778 scope.go:117] "RemoveContainer" containerID="d75c39511d2814b29f7a8f3d56db17a77d40e26925d76443f57f610bafcb652b" Mar 12 15:09:23 crc kubenswrapper[4778]: I0312 15:09:23.917370 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rkpvq/must-gather-6d9ls" Mar 12 15:09:24 crc kubenswrapper[4778]: I0312 15:09:24.052960 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd2baa0b-6680-41af-8231-e30368cb0090-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "dd2baa0b-6680-41af-8231-e30368cb0090" (UID: "dd2baa0b-6680-41af-8231-e30368cb0090"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:09:24 crc kubenswrapper[4778]: I0312 15:09:24.471370 4778 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/dd2baa0b-6680-41af-8231-e30368cb0090-must-gather-output\") on node \"crc\" DevicePath \"\"" Mar 12 15:09:24 crc kubenswrapper[4778]: I0312 15:09:24.488829 4778 scope.go:117] "RemoveContainer" containerID="099ea78abdd9e205689d8ecdedd4eb5e53feb9c31e850ed5f759eb9dcba848eb" Mar 12 15:09:24 crc kubenswrapper[4778]: I0312 15:09:24.491576 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd2baa0b-6680-41af-8231-e30368cb0090" path="/var/lib/kubelet/pods/dd2baa0b-6680-41af-8231-e30368cb0090/volumes" Mar 12 15:09:24 crc kubenswrapper[4778]: I0312 15:09:24.579797 4778 scope.go:117] "RemoveContainer" containerID="d75c39511d2814b29f7a8f3d56db17a77d40e26925d76443f57f610bafcb652b" Mar 12 15:09:24 crc kubenswrapper[4778]: E0312 15:09:24.581646 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d75c39511d2814b29f7a8f3d56db17a77d40e26925d76443f57f610bafcb652b\": container with ID starting with d75c39511d2814b29f7a8f3d56db17a77d40e26925d76443f57f610bafcb652b not found: ID does not exist" containerID="d75c39511d2814b29f7a8f3d56db17a77d40e26925d76443f57f610bafcb652b" Mar 12 15:09:24 crc kubenswrapper[4778]: I0312 15:09:24.581706 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d75c39511d2814b29f7a8f3d56db17a77d40e26925d76443f57f610bafcb652b"} err="failed to get container status \"d75c39511d2814b29f7a8f3d56db17a77d40e26925d76443f57f610bafcb652b\": rpc error: code = NotFound desc = could not find container \"d75c39511d2814b29f7a8f3d56db17a77d40e26925d76443f57f610bafcb652b\": container with ID starting with d75c39511d2814b29f7a8f3d56db17a77d40e26925d76443f57f610bafcb652b not found: ID does not exist" 
Mar 12 15:09:24 crc kubenswrapper[4778]: I0312 15:09:24.581734 4778 scope.go:117] "RemoveContainer" containerID="099ea78abdd9e205689d8ecdedd4eb5e53feb9c31e850ed5f759eb9dcba848eb" Mar 12 15:09:24 crc kubenswrapper[4778]: E0312 15:09:24.582083 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"099ea78abdd9e205689d8ecdedd4eb5e53feb9c31e850ed5f759eb9dcba848eb\": container with ID starting with 099ea78abdd9e205689d8ecdedd4eb5e53feb9c31e850ed5f759eb9dcba848eb not found: ID does not exist" containerID="099ea78abdd9e205689d8ecdedd4eb5e53feb9c31e850ed5f759eb9dcba848eb" Mar 12 15:09:24 crc kubenswrapper[4778]: I0312 15:09:24.582109 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"099ea78abdd9e205689d8ecdedd4eb5e53feb9c31e850ed5f759eb9dcba848eb"} err="failed to get container status \"099ea78abdd9e205689d8ecdedd4eb5e53feb9c31e850ed5f759eb9dcba848eb\": rpc error: code = NotFound desc = could not find container \"099ea78abdd9e205689d8ecdedd4eb5e53feb9c31e850ed5f759eb9dcba848eb\": container with ID starting with 099ea78abdd9e205689d8ecdedd4eb5e53feb9c31e850ed5f759eb9dcba848eb not found: ID does not exist" Mar 12 15:09:26 crc kubenswrapper[4778]: I0312 15:09:26.131126 4778 scope.go:117] "RemoveContainer" containerID="74340f2038c644c2a2c001699df4f77fd8e1cf73ce4885bad06a1749c4f74a6f" Mar 12 15:09:26 crc kubenswrapper[4778]: I0312 15:09:26.151466 4778 scope.go:117] "RemoveContainer" containerID="e0b26a87b52c43c4e608ab6f59fc6362851c87a5e28dc029fccea83b1c3e5e7d" Mar 12 15:09:33 crc kubenswrapper[4778]: I0312 15:09:33.254446 4778 scope.go:117] "RemoveContainer" containerID="0d1560644663063f80ae67feb786777f7400aa5bf8ea2f2418887c809789d930" Mar 12 15:09:33 crc kubenswrapper[4778]: E0312 15:09:33.255543 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 15:09:45 crc kubenswrapper[4778]: I0312 15:09:45.254536 4778 scope.go:117] "RemoveContainer" containerID="0d1560644663063f80ae67feb786777f7400aa5bf8ea2f2418887c809789d930" Mar 12 15:09:45 crc kubenswrapper[4778]: E0312 15:09:45.257214 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 15:09:59 crc kubenswrapper[4778]: I0312 15:09:59.254655 4778 scope.go:117] "RemoveContainer" containerID="0d1560644663063f80ae67feb786777f7400aa5bf8ea2f2418887c809789d930" Mar 12 15:09:59 crc kubenswrapper[4778]: E0312 15:09:59.255410 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 15:10:00 crc kubenswrapper[4778]: I0312 15:10:00.155326 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555470-65l68"] Mar 12 15:10:00 crc kubenswrapper[4778]: E0312 15:10:00.156081 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd2baa0b-6680-41af-8231-e30368cb0090" containerName="gather" Mar 12 15:10:00 
crc kubenswrapper[4778]: I0312 15:10:00.156103 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd2baa0b-6680-41af-8231-e30368cb0090" containerName="gather" Mar 12 15:10:00 crc kubenswrapper[4778]: E0312 15:10:00.156128 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd2baa0b-6680-41af-8231-e30368cb0090" containerName="copy" Mar 12 15:10:00 crc kubenswrapper[4778]: I0312 15:10:00.156134 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd2baa0b-6680-41af-8231-e30368cb0090" containerName="copy" Mar 12 15:10:00 crc kubenswrapper[4778]: E0312 15:10:00.156147 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="193a1938-83bf-48dc-abd9-ecc2f202db8d" containerName="extract-content" Mar 12 15:10:00 crc kubenswrapper[4778]: I0312 15:10:00.156154 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="193a1938-83bf-48dc-abd9-ecc2f202db8d" containerName="extract-content" Mar 12 15:10:00 crc kubenswrapper[4778]: E0312 15:10:00.156164 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="193a1938-83bf-48dc-abd9-ecc2f202db8d" containerName="extract-utilities" Mar 12 15:10:00 crc kubenswrapper[4778]: I0312 15:10:00.156171 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="193a1938-83bf-48dc-abd9-ecc2f202db8d" containerName="extract-utilities" Mar 12 15:10:00 crc kubenswrapper[4778]: E0312 15:10:00.156197 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="193a1938-83bf-48dc-abd9-ecc2f202db8d" containerName="registry-server" Mar 12 15:10:00 crc kubenswrapper[4778]: I0312 15:10:00.156203 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="193a1938-83bf-48dc-abd9-ecc2f202db8d" containerName="registry-server" Mar 12 15:10:00 crc kubenswrapper[4778]: I0312 15:10:00.156406 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd2baa0b-6680-41af-8231-e30368cb0090" containerName="gather" Mar 12 15:10:00 crc kubenswrapper[4778]: I0312 15:10:00.156417 4778 
memory_manager.go:354] "RemoveStaleState removing state" podUID="dd2baa0b-6680-41af-8231-e30368cb0090" containerName="copy" Mar 12 15:10:00 crc kubenswrapper[4778]: I0312 15:10:00.156432 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="193a1938-83bf-48dc-abd9-ecc2f202db8d" containerName="registry-server" Mar 12 15:10:00 crc kubenswrapper[4778]: I0312 15:10:00.157042 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555470-65l68" Mar 12 15:10:00 crc kubenswrapper[4778]: I0312 15:10:00.160525 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 15:10:00 crc kubenswrapper[4778]: I0312 15:10:00.160880 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 15:10:00 crc kubenswrapper[4778]: I0312 15:10:00.161450 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 15:10:00 crc kubenswrapper[4778]: I0312 15:10:00.171487 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555470-65l68"] Mar 12 15:10:00 crc kubenswrapper[4778]: I0312 15:10:00.270253 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4lng\" (UniqueName: \"kubernetes.io/projected/f05c65b6-74a0-49ef-8f84-3b4453313dc7-kube-api-access-b4lng\") pod \"auto-csr-approver-29555470-65l68\" (UID: \"f05c65b6-74a0-49ef-8f84-3b4453313dc7\") " pod="openshift-infra/auto-csr-approver-29555470-65l68" Mar 12 15:10:00 crc kubenswrapper[4778]: I0312 15:10:00.372483 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4lng\" (UniqueName: \"kubernetes.io/projected/f05c65b6-74a0-49ef-8f84-3b4453313dc7-kube-api-access-b4lng\") pod \"auto-csr-approver-29555470-65l68\" (UID: 
\"f05c65b6-74a0-49ef-8f84-3b4453313dc7\") " pod="openshift-infra/auto-csr-approver-29555470-65l68" Mar 12 15:10:00 crc kubenswrapper[4778]: I0312 15:10:00.391784 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4lng\" (UniqueName: \"kubernetes.io/projected/f05c65b6-74a0-49ef-8f84-3b4453313dc7-kube-api-access-b4lng\") pod \"auto-csr-approver-29555470-65l68\" (UID: \"f05c65b6-74a0-49ef-8f84-3b4453313dc7\") " pod="openshift-infra/auto-csr-approver-29555470-65l68" Mar 12 15:10:00 crc kubenswrapper[4778]: I0312 15:10:00.487585 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555470-65l68" Mar 12 15:10:00 crc kubenswrapper[4778]: I0312 15:10:00.936714 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555470-65l68"] Mar 12 15:10:00 crc kubenswrapper[4778]: I0312 15:10:00.940381 4778 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 15:10:01 crc kubenswrapper[4778]: I0312 15:10:01.283647 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555470-65l68" event={"ID":"f05c65b6-74a0-49ef-8f84-3b4453313dc7","Type":"ContainerStarted","Data":"6da4a4d0c09739dff2ab3fa082e65fb673da228e6d94f3fd22b17604e2105a4a"} Mar 12 15:10:03 crc kubenswrapper[4778]: I0312 15:10:03.306954 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555470-65l68" event={"ID":"f05c65b6-74a0-49ef-8f84-3b4453313dc7","Type":"ContainerStarted","Data":"66d173277dbb8cde37f4f992e677953055661368f74064cf032011267c61214c"} Mar 12 15:10:03 crc kubenswrapper[4778]: I0312 15:10:03.326375 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555470-65l68" podStartSLOduration=1.6866532969999999 podStartE2EDuration="3.326353807s" podCreationTimestamp="2026-03-12 15:10:00 
+0000 UTC" firstStartedPulling="2026-03-12 15:10:00.940100812 +0000 UTC m=+7219.388796208" lastFinishedPulling="2026-03-12 15:10:02.579801312 +0000 UTC m=+7221.028496718" observedRunningTime="2026-03-12 15:10:03.320698936 +0000 UTC m=+7221.769394342" watchObservedRunningTime="2026-03-12 15:10:03.326353807 +0000 UTC m=+7221.775049203" Mar 12 15:10:04 crc kubenswrapper[4778]: I0312 15:10:04.316405 4778 generic.go:334] "Generic (PLEG): container finished" podID="f05c65b6-74a0-49ef-8f84-3b4453313dc7" containerID="66d173277dbb8cde37f4f992e677953055661368f74064cf032011267c61214c" exitCode=0 Mar 12 15:10:04 crc kubenswrapper[4778]: I0312 15:10:04.316716 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555470-65l68" event={"ID":"f05c65b6-74a0-49ef-8f84-3b4453313dc7","Type":"ContainerDied","Data":"66d173277dbb8cde37f4f992e677953055661368f74064cf032011267c61214c"} Mar 12 15:10:05 crc kubenswrapper[4778]: I0312 15:10:05.705894 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555470-65l68" Mar 12 15:10:05 crc kubenswrapper[4778]: I0312 15:10:05.774569 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4lng\" (UniqueName: \"kubernetes.io/projected/f05c65b6-74a0-49ef-8f84-3b4453313dc7-kube-api-access-b4lng\") pod \"f05c65b6-74a0-49ef-8f84-3b4453313dc7\" (UID: \"f05c65b6-74a0-49ef-8f84-3b4453313dc7\") " Mar 12 15:10:05 crc kubenswrapper[4778]: I0312 15:10:05.782531 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f05c65b6-74a0-49ef-8f84-3b4453313dc7-kube-api-access-b4lng" (OuterVolumeSpecName: "kube-api-access-b4lng") pod "f05c65b6-74a0-49ef-8f84-3b4453313dc7" (UID: "f05c65b6-74a0-49ef-8f84-3b4453313dc7"). InnerVolumeSpecName "kube-api-access-b4lng". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:10:05 crc kubenswrapper[4778]: I0312 15:10:05.877672 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4lng\" (UniqueName: \"kubernetes.io/projected/f05c65b6-74a0-49ef-8f84-3b4453313dc7-kube-api-access-b4lng\") on node \"crc\" DevicePath \"\"" Mar 12 15:10:06 crc kubenswrapper[4778]: I0312 15:10:06.338535 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555470-65l68" event={"ID":"f05c65b6-74a0-49ef-8f84-3b4453313dc7","Type":"ContainerDied","Data":"6da4a4d0c09739dff2ab3fa082e65fb673da228e6d94f3fd22b17604e2105a4a"} Mar 12 15:10:06 crc kubenswrapper[4778]: I0312 15:10:06.338950 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6da4a4d0c09739dff2ab3fa082e65fb673da228e6d94f3fd22b17604e2105a4a" Mar 12 15:10:06 crc kubenswrapper[4778]: I0312 15:10:06.338781 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555470-65l68" Mar 12 15:10:06 crc kubenswrapper[4778]: I0312 15:10:06.409729 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555464-rt6fz"] Mar 12 15:10:06 crc kubenswrapper[4778]: I0312 15:10:06.417672 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555464-rt6fz"] Mar 12 15:10:08 crc kubenswrapper[4778]: I0312 15:10:08.264362 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdc35b0a-1b16-4db8-adef-8a6afd6ae934" path="/var/lib/kubelet/pods/bdc35b0a-1b16-4db8-adef-8a6afd6ae934/volumes" Mar 12 15:10:11 crc kubenswrapper[4778]: I0312 15:10:11.254116 4778 scope.go:117] "RemoveContainer" containerID="0d1560644663063f80ae67feb786777f7400aa5bf8ea2f2418887c809789d930" Mar 12 15:10:11 crc kubenswrapper[4778]: E0312 15:10:11.255902 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 15:10:25 crc kubenswrapper[4778]: I0312 15:10:25.258627 4778 scope.go:117] "RemoveContainer" containerID="0d1560644663063f80ae67feb786777f7400aa5bf8ea2f2418887c809789d930" Mar 12 15:10:25 crc kubenswrapper[4778]: E0312 15:10:25.259871 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 15:10:26 crc kubenswrapper[4778]: I0312 15:10:26.270121 4778 scope.go:117] "RemoveContainer" containerID="13a5daaa89f4db1da5ea953e47f7efe223f79130e65c2950cad2f908b81e7d1b" Mar 12 15:10:37 crc kubenswrapper[4778]: I0312 15:10:37.254162 4778 scope.go:117] "RemoveContainer" containerID="0d1560644663063f80ae67feb786777f7400aa5bf8ea2f2418887c809789d930" Mar 12 15:10:37 crc kubenswrapper[4778]: E0312 15:10:37.255099 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 15:10:48 crc kubenswrapper[4778]: I0312 15:10:48.254305 4778 scope.go:117] "RemoveContainer" 
containerID="0d1560644663063f80ae67feb786777f7400aa5bf8ea2f2418887c809789d930" Mar 12 15:10:48 crc kubenswrapper[4778]: E0312 15:10:48.255126 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 15:11:01 crc kubenswrapper[4778]: I0312 15:11:01.255092 4778 scope.go:117] "RemoveContainer" containerID="0d1560644663063f80ae67feb786777f7400aa5bf8ea2f2418887c809789d930" Mar 12 15:11:01 crc kubenswrapper[4778]: E0312 15:11:01.256149 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 15:11:16 crc kubenswrapper[4778]: I0312 15:11:16.254342 4778 scope.go:117] "RemoveContainer" containerID="0d1560644663063f80ae67feb786777f7400aa5bf8ea2f2418887c809789d930" Mar 12 15:11:16 crc kubenswrapper[4778]: E0312 15:11:16.255245 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 15:11:31 crc kubenswrapper[4778]: I0312 15:11:31.254888 4778 scope.go:117] 
"RemoveContainer" containerID="0d1560644663063f80ae67feb786777f7400aa5bf8ea2f2418887c809789d930" Mar 12 15:11:31 crc kubenswrapper[4778]: E0312 15:11:31.256170 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 15:11:46 crc kubenswrapper[4778]: I0312 15:11:46.253461 4778 scope.go:117] "RemoveContainer" containerID="0d1560644663063f80ae67feb786777f7400aa5bf8ea2f2418887c809789d930" Mar 12 15:11:46 crc kubenswrapper[4778]: E0312 15:11:46.254269 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 15:11:58 crc kubenswrapper[4778]: I0312 15:11:58.253988 4778 scope.go:117] "RemoveContainer" containerID="0d1560644663063f80ae67feb786777f7400aa5bf8ea2f2418887c809789d930" Mar 12 15:11:58 crc kubenswrapper[4778]: E0312 15:11:58.254821 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 15:12:00 crc kubenswrapper[4778]: I0312 15:12:00.169251 
4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555472-k4f9h"] Mar 12 15:12:00 crc kubenswrapper[4778]: E0312 15:12:00.170117 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f05c65b6-74a0-49ef-8f84-3b4453313dc7" containerName="oc" Mar 12 15:12:00 crc kubenswrapper[4778]: I0312 15:12:00.170132 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="f05c65b6-74a0-49ef-8f84-3b4453313dc7" containerName="oc" Mar 12 15:12:00 crc kubenswrapper[4778]: I0312 15:12:00.170396 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="f05c65b6-74a0-49ef-8f84-3b4453313dc7" containerName="oc" Mar 12 15:12:00 crc kubenswrapper[4778]: I0312 15:12:00.171261 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555472-k4f9h" Mar 12 15:12:00 crc kubenswrapper[4778]: I0312 15:12:00.174117 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 15:12:00 crc kubenswrapper[4778]: I0312 15:12:00.174561 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 15:12:00 crc kubenswrapper[4778]: I0312 15:12:00.175140 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 15:12:00 crc kubenswrapper[4778]: I0312 15:12:00.189341 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555472-k4f9h"] Mar 12 15:12:00 crc kubenswrapper[4778]: I0312 15:12:00.344049 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5mxz\" (UniqueName: \"kubernetes.io/projected/30c4e913-d163-4764-8738-ac336cd93df9-kube-api-access-x5mxz\") pod \"auto-csr-approver-29555472-k4f9h\" (UID: \"30c4e913-d163-4764-8738-ac336cd93df9\") " 
pod="openshift-infra/auto-csr-approver-29555472-k4f9h" Mar 12 15:12:00 crc kubenswrapper[4778]: I0312 15:12:00.446161 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5mxz\" (UniqueName: \"kubernetes.io/projected/30c4e913-d163-4764-8738-ac336cd93df9-kube-api-access-x5mxz\") pod \"auto-csr-approver-29555472-k4f9h\" (UID: \"30c4e913-d163-4764-8738-ac336cd93df9\") " pod="openshift-infra/auto-csr-approver-29555472-k4f9h" Mar 12 15:12:00 crc kubenswrapper[4778]: I0312 15:12:00.465498 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5mxz\" (UniqueName: \"kubernetes.io/projected/30c4e913-d163-4764-8738-ac336cd93df9-kube-api-access-x5mxz\") pod \"auto-csr-approver-29555472-k4f9h\" (UID: \"30c4e913-d163-4764-8738-ac336cd93df9\") " pod="openshift-infra/auto-csr-approver-29555472-k4f9h" Mar 12 15:12:00 crc kubenswrapper[4778]: I0312 15:12:00.502895 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555472-k4f9h" Mar 12 15:12:00 crc kubenswrapper[4778]: I0312 15:12:00.965958 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555472-k4f9h"] Mar 12 15:12:00 crc kubenswrapper[4778]: W0312 15:12:00.966899 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30c4e913_d163_4764_8738_ac336cd93df9.slice/crio-24ce58de7517e21c1f2011e5b11909f2e95fc03f93ab0a4c71d72ac7864c9440 WatchSource:0}: Error finding container 24ce58de7517e21c1f2011e5b11909f2e95fc03f93ab0a4c71d72ac7864c9440: Status 404 returned error can't find the container with id 24ce58de7517e21c1f2011e5b11909f2e95fc03f93ab0a4c71d72ac7864c9440 Mar 12 15:12:01 crc kubenswrapper[4778]: I0312 15:12:01.595318 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555472-k4f9h" 
event={"ID":"30c4e913-d163-4764-8738-ac336cd93df9","Type":"ContainerStarted","Data":"24ce58de7517e21c1f2011e5b11909f2e95fc03f93ab0a4c71d72ac7864c9440"} Mar 12 15:12:02 crc kubenswrapper[4778]: I0312 15:12:02.603425 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555472-k4f9h" event={"ID":"30c4e913-d163-4764-8738-ac336cd93df9","Type":"ContainerStarted","Data":"971c448e63690dd43ac1d65335a70f73b2547d4337b42531c9336354c82b33f3"} Mar 12 15:12:02 crc kubenswrapper[4778]: I0312 15:12:02.622195 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29555472-k4f9h" podStartSLOduration=1.560282964 podStartE2EDuration="2.622156819s" podCreationTimestamp="2026-03-12 15:12:00 +0000 UTC" firstStartedPulling="2026-03-12 15:12:00.969292154 +0000 UTC m=+7339.417987570" lastFinishedPulling="2026-03-12 15:12:02.031166029 +0000 UTC m=+7340.479861425" observedRunningTime="2026-03-12 15:12:02.614687857 +0000 UTC m=+7341.063383253" watchObservedRunningTime="2026-03-12 15:12:02.622156819 +0000 UTC m=+7341.070852205" Mar 12 15:12:03 crc kubenswrapper[4778]: I0312 15:12:03.614619 4778 generic.go:334] "Generic (PLEG): container finished" podID="30c4e913-d163-4764-8738-ac336cd93df9" containerID="971c448e63690dd43ac1d65335a70f73b2547d4337b42531c9336354c82b33f3" exitCode=0 Mar 12 15:12:03 crc kubenswrapper[4778]: I0312 15:12:03.614659 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555472-k4f9h" event={"ID":"30c4e913-d163-4764-8738-ac336cd93df9","Type":"ContainerDied","Data":"971c448e63690dd43ac1d65335a70f73b2547d4337b42531c9336354c82b33f3"} Mar 12 15:12:04 crc kubenswrapper[4778]: I0312 15:12:04.981630 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555472-k4f9h" Mar 12 15:12:05 crc kubenswrapper[4778]: I0312 15:12:05.065045 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5mxz\" (UniqueName: \"kubernetes.io/projected/30c4e913-d163-4764-8738-ac336cd93df9-kube-api-access-x5mxz\") pod \"30c4e913-d163-4764-8738-ac336cd93df9\" (UID: \"30c4e913-d163-4764-8738-ac336cd93df9\") " Mar 12 15:12:05 crc kubenswrapper[4778]: I0312 15:12:05.071596 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30c4e913-d163-4764-8738-ac336cd93df9-kube-api-access-x5mxz" (OuterVolumeSpecName: "kube-api-access-x5mxz") pod "30c4e913-d163-4764-8738-ac336cd93df9" (UID: "30c4e913-d163-4764-8738-ac336cd93df9"). InnerVolumeSpecName "kube-api-access-x5mxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:12:05 crc kubenswrapper[4778]: I0312 15:12:05.167003 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5mxz\" (UniqueName: \"kubernetes.io/projected/30c4e913-d163-4764-8738-ac336cd93df9-kube-api-access-x5mxz\") on node \"crc\" DevicePath \"\"" Mar 12 15:12:05 crc kubenswrapper[4778]: I0312 15:12:05.369002 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555466-7wlhj"] Mar 12 15:12:05 crc kubenswrapper[4778]: I0312 15:12:05.376929 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555466-7wlhj"] Mar 12 15:12:05 crc kubenswrapper[4778]: I0312 15:12:05.645667 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555472-k4f9h" event={"ID":"30c4e913-d163-4764-8738-ac336cd93df9","Type":"ContainerDied","Data":"24ce58de7517e21c1f2011e5b11909f2e95fc03f93ab0a4c71d72ac7864c9440"} Mar 12 15:12:05 crc kubenswrapper[4778]: I0312 15:12:05.645726 4778 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="24ce58de7517e21c1f2011e5b11909f2e95fc03f93ab0a4c71d72ac7864c9440" Mar 12 15:12:05 crc kubenswrapper[4778]: I0312 15:12:05.645743 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555472-k4f9h" Mar 12 15:12:06 crc kubenswrapper[4778]: I0312 15:12:06.268495 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c58e0f99-4ece-49ec-9c47-b82055df7d48" path="/var/lib/kubelet/pods/c58e0f99-4ece-49ec-9c47-b82055df7d48/volumes" Mar 12 15:12:12 crc kubenswrapper[4778]: I0312 15:12:12.261281 4778 scope.go:117] "RemoveContainer" containerID="0d1560644663063f80ae67feb786777f7400aa5bf8ea2f2418887c809789d930" Mar 12 15:12:12 crc kubenswrapper[4778]: E0312 15:12:12.262090 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 15:12:24 crc kubenswrapper[4778]: I0312 15:12:24.611059 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-86cb765474-5pq5z_6bd172c5-383f-4273-98a5-2c92223dc765/barbican-api-log/0.log" Mar 12 15:12:25 crc kubenswrapper[4778]: I0312 15:12:25.448114 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-65c9994dfd-xznqh_8ee1f546-8428-4b23-93e4-b8370fd4224b/barbican-keystone-listener-log/0.log" Mar 12 15:12:25 crc kubenswrapper[4778]: I0312 15:12:25.952898 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7dcf9787-ngc87_d505bb59-3c9e-4cfa-891c-c8e0068e2567/barbican-worker-log/0.log" Mar 12 15:12:26 crc kubenswrapper[4778]: I0312 15:12:26.365449 4778 
scope.go:117] "RemoveContainer" containerID="8984f879d02ecea61666d68e6857174dd681c26238a5c27d5a617cc7dccda3db" Mar 12 15:12:26 crc kubenswrapper[4778]: I0312 15:12:26.527899 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-ntpnx_b99627a8-43d8-4f7d-90f7-530eda3c2213/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:12:27 crc kubenswrapper[4778]: I0312 15:12:27.077328 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_9f1d0355-a73a-4a93-94fb-b439436cf1b1/ceilometer-central-agent/0.log" Mar 12 15:12:27 crc kubenswrapper[4778]: I0312 15:12:27.254456 4778 scope.go:117] "RemoveContainer" containerID="0d1560644663063f80ae67feb786777f7400aa5bf8ea2f2418887c809789d930" Mar 12 15:12:27 crc kubenswrapper[4778]: E0312 15:12:27.254681 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 15:12:27 crc kubenswrapper[4778]: I0312 15:12:27.598703 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_99f72014-50e8-4dd4-9764-1b2c7d546b30/cinder-api-log/0.log" Mar 12 15:12:28 crc kubenswrapper[4778]: I0312 15:12:28.116448 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_39ee2404-53a8-4598-8c4b-c3a34fbf3480/cinder-scheduler/0.log" Mar 12 15:12:28 crc kubenswrapper[4778]: I0312 15:12:28.613323 4778 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-4szjl_5c5541f3-fb44-476b-91c2-b07dffe50894/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:12:29 crc kubenswrapper[4778]: I0312 15:12:29.108687 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-jg9z6_36bb4acd-fab3-4998-a8cd-a6ebcc800fc8/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:12:30 crc kubenswrapper[4778]: I0312 15:12:30.212543 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f89cfcd7f-vk6h4_46f34397-57fe-425d-b69d-040f4384ac69/dnsmasq-dns/0.log" Mar 12 15:12:30 crc kubenswrapper[4778]: I0312 15:12:30.767178 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-2xksx_96ba9a1b-ae5f-4b42-b8eb-1f0e3656ae61/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:12:31 crc kubenswrapper[4778]: I0312 15:12:31.275339 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_81c1a05c-5642-43d4-8a7b-229330168332/glance-log/0.log" Mar 12 15:12:31 crc kubenswrapper[4778]: I0312 15:12:31.792408 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_7fa757af-1c91-4b93-8916-5bbd99b8522e/glance-log/0.log" Mar 12 15:12:32 crc kubenswrapper[4778]: I0312 15:12:32.332406 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-bngcx_f69e6cfe-f7c2-4127-b4df-710725c52227/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:12:32 crc kubenswrapper[4778]: I0312 15:12:32.808649 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-g252n_29f8609b-4a3b-42ba-9450-a2b633bb4c2c/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 
12 15:12:34 crc kubenswrapper[4778]: I0312 15:12:34.625971 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-69b6dc4885-6lrlq_a56bb599-f10d-4564-b6bf-48128dc2c7f1/keystone-api/0.log" Mar 12 15:12:36 crc kubenswrapper[4778]: I0312 15:12:36.502206 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-69b6dc4885-z4h9m_16dea17b-eaa4-4bbf-8895-c077b3e28d66/keystone-api/0.log" Mar 12 15:12:36 crc kubenswrapper[4778]: I0312 15:12:36.961492 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29555401-vjgkl_e4df6927-3452-4b36-b59a-a1fdcd4272a4/keystone-cron/0.log" Mar 12 15:12:37 crc kubenswrapper[4778]: I0312 15:12:37.402972 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29555461-lmqk9_ebdf3274-70cb-4083-bf12-5d1038a9b7ba/keystone-cron/0.log" Mar 12 15:12:37 crc kubenswrapper[4778]: I0312 15:12:37.899456 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_51f24fcd-aff5-4785-abf7-4936180cee78/kube-state-metrics/0.log" Mar 12 15:12:38 crc kubenswrapper[4778]: I0312 15:12:38.501424 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-4m9w8_8713b951-b516-42bd-9286-4343e5bcc955/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:12:39 crc kubenswrapper[4778]: I0312 15:12:39.253865 4778 scope.go:117] "RemoveContainer" containerID="0d1560644663063f80ae67feb786777f7400aa5bf8ea2f2418887c809789d930" Mar 12 15:12:39 crc kubenswrapper[4778]: E0312 15:12:39.254476 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 15:12:39 crc kubenswrapper[4778]: I0312 15:12:39.863764 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_ec63cc68-6fde-419b-973c-91fc982e6a49/memcached/0.log" Mar 12 15:12:44 crc kubenswrapper[4778]: I0312 15:12:44.227049 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-566c4d5fc-dggmh_7596a69e-33c9-4a2b-89fc-e4c41252b3fd/neutron-api/0.log" Mar 12 15:12:47 crc kubenswrapper[4778]: I0312 15:12:47.955498 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-566c4d5fc-zx97x_8a67d4b7-d8eb-40f4-b51d-62e92c6042c1/neutron-api/0.log" Mar 12 15:12:48 crc kubenswrapper[4778]: I0312 15:12:48.431361 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-custom-edpm-deployment-openstack-edpm-ipawlfsg_5cc410de-5b42-44d1-8b29-37161475730e/neutron-metadata-custom-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:12:50 crc kubenswrapper[4778]: I0312 15:12:50.434000 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_13b8e1df-5a8c-44de-b8e8-6c7efdb8bad4/nova-api-log/0.log" Mar 12 15:12:52 crc kubenswrapper[4778]: I0312 15:12:52.256018 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-1_f0341d80-4327-4c9e-bc11-0cddbc6eab66/nova-api-log/0.log" Mar 12 15:12:53 crc kubenswrapper[4778]: I0312 15:12:53.262681 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_929bb450-949d-4f4f-9c21-de6c3fe32927/nova-cell0-conductor-conductor/0.log" Mar 12 15:12:53 crc kubenswrapper[4778]: I0312 15:12:53.898890 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_1466aea3-fa10-49a6-a254-a96a52091aca/nova-cell1-conductor-conductor/0.log" Mar 12 15:12:54 crc kubenswrapper[4778]: I0312 15:12:54.255108 
4778 scope.go:117] "RemoveContainer" containerID="0d1560644663063f80ae67feb786777f7400aa5bf8ea2f2418887c809789d930" Mar 12 15:12:54 crc kubenswrapper[4778]: E0312 15:12:54.255736 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 15:12:54 crc kubenswrapper[4778]: I0312 15:12:54.500967 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-metadata-0_c289a520-78eb-433f-b7a4-0c03be917c18/nova-cell1-metadata-log/0.log" Mar 12 15:12:55 crc kubenswrapper[4778]: I0312 15:12:55.083228 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_2b43a8b1-b8bc-4ab5-af66-674fa7ff47d7/nova-cell1-novncproxy-novncproxy/0.log" Mar 12 15:12:55 crc kubenswrapper[4778]: I0312 15:12:55.683730 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-5tw6s_6ed77f87-e6b2-4c7a-8b0e-003106200dc8/nova-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:12:57 crc kubenswrapper[4778]: I0312 15:12:57.229437 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_f613745b-fe33-4918-9e0a-da2a59c55e33/nova-scheduler-scheduler/0.log" Mar 12 15:12:57 crc kubenswrapper[4778]: I0312 15:12:57.747139 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_fe52f8ba-9053-4733-b2e3-8f1becf437c8/galera/0.log" Mar 12 15:12:58 crc kubenswrapper[4778]: I0312 15:12:58.216427 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_663feb48-0ed1-4947-97c3-e0bac206fdb2/galera/0.log" Mar 12 
15:12:58 crc kubenswrapper[4778]: I0312 15:12:58.670368 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_856cd6d1-db21-4503-94d7-cbf27ca96cc2/openstackclient/0.log" Mar 12 15:12:59 crc kubenswrapper[4778]: I0312 15:12:59.155449 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-4wct6_3b8efd1e-884d-4963-b69f-04ede0a92267/ovn-controller/0.log" Mar 12 15:12:59 crc kubenswrapper[4778]: I0312 15:12:59.563865 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-vtt4z_a8484e5d-6f77-407c-81db-0d9b2a6b37fd/openstack-network-exporter/0.log" Mar 12 15:13:00 crc kubenswrapper[4778]: I0312 15:13:00.041281 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-p67vh_bd159b65-0c66-4809-949e-0f1babbaa8e6/ovsdb-server/0.log" Mar 12 15:13:00 crc kubenswrapper[4778]: I0312 15:13:00.601032 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-9lbdq_3c0a2200-506d-4ac3-b08c-9b3156c9e573/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:13:01 crc kubenswrapper[4778]: I0312 15:13:01.548222 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_1b25f9c9-784a-4a52-9bb3-02c6c4592702/ovn-northd/0.log" Mar 12 15:13:02 crc kubenswrapper[4778]: I0312 15:13:02.077586 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_7321e15e-673c-4e0d-80f8-6ac644c1940f/ovsdbserver-nb/0.log" Mar 12 15:13:02 crc kubenswrapper[4778]: I0312 15:13:02.591575 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_7c951c6f-06fd-4793-a95b-26b5c1400d73/ovsdbserver-sb/0.log" Mar 12 15:13:03 crc kubenswrapper[4778]: I0312 15:13:03.613588 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-d4d765698-l7bjx_267e7df2-d35c-45c4-af65-e8af31f8f6cf/placement-log/0.log" 
Mar 12 15:13:04 crc kubenswrapper[4778]: I0312 15:13:04.270686 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03/rabbitmq/0.log" Mar 12 15:13:04 crc kubenswrapper[4778]: I0312 15:13:04.774920 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7cktf"] Mar 12 15:13:04 crc kubenswrapper[4778]: E0312 15:13:04.775339 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30c4e913-d163-4764-8738-ac336cd93df9" containerName="oc" Mar 12 15:13:04 crc kubenswrapper[4778]: I0312 15:13:04.775351 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="30c4e913-d163-4764-8738-ac336cd93df9" containerName="oc" Mar 12 15:13:04 crc kubenswrapper[4778]: I0312 15:13:04.775561 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="30c4e913-d163-4764-8738-ac336cd93df9" containerName="oc" Mar 12 15:13:04 crc kubenswrapper[4778]: I0312 15:13:04.777154 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7cktf" Mar 12 15:13:04 crc kubenswrapper[4778]: I0312 15:13:04.869945 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7cktf"] Mar 12 15:13:04 crc kubenswrapper[4778]: I0312 15:13:04.889342 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/690208cb-cbdb-488c-9998-70cf01f1cc05-catalog-content\") pod \"community-operators-7cktf\" (UID: \"690208cb-cbdb-488c-9998-70cf01f1cc05\") " pod="openshift-marketplace/community-operators-7cktf" Mar 12 15:13:04 crc kubenswrapper[4778]: I0312 15:13:04.889433 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzsf2\" (UniqueName: \"kubernetes.io/projected/690208cb-cbdb-488c-9998-70cf01f1cc05-kube-api-access-dzsf2\") pod \"community-operators-7cktf\" (UID: \"690208cb-cbdb-488c-9998-70cf01f1cc05\") " pod="openshift-marketplace/community-operators-7cktf" Mar 12 15:13:04 crc kubenswrapper[4778]: I0312 15:13:04.889469 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/690208cb-cbdb-488c-9998-70cf01f1cc05-utilities\") pod \"community-operators-7cktf\" (UID: \"690208cb-cbdb-488c-9998-70cf01f1cc05\") " pod="openshift-marketplace/community-operators-7cktf" Mar 12 15:13:04 crc kubenswrapper[4778]: I0312 15:13:04.911737 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_1e89dfcc-2ac3-444c-91e8-56991eae096b/rabbitmq/0.log" Mar 12 15:13:04 crc kubenswrapper[4778]: I0312 15:13:04.991239 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/690208cb-cbdb-488c-9998-70cf01f1cc05-catalog-content\") pod 
\"community-operators-7cktf\" (UID: \"690208cb-cbdb-488c-9998-70cf01f1cc05\") " pod="openshift-marketplace/community-operators-7cktf" Mar 12 15:13:04 crc kubenswrapper[4778]: I0312 15:13:04.991296 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzsf2\" (UniqueName: \"kubernetes.io/projected/690208cb-cbdb-488c-9998-70cf01f1cc05-kube-api-access-dzsf2\") pod \"community-operators-7cktf\" (UID: \"690208cb-cbdb-488c-9998-70cf01f1cc05\") " pod="openshift-marketplace/community-operators-7cktf" Mar 12 15:13:04 crc kubenswrapper[4778]: I0312 15:13:04.991317 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/690208cb-cbdb-488c-9998-70cf01f1cc05-utilities\") pod \"community-operators-7cktf\" (UID: \"690208cb-cbdb-488c-9998-70cf01f1cc05\") " pod="openshift-marketplace/community-operators-7cktf" Mar 12 15:13:04 crc kubenswrapper[4778]: I0312 15:13:04.991776 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/690208cb-cbdb-488c-9998-70cf01f1cc05-utilities\") pod \"community-operators-7cktf\" (UID: \"690208cb-cbdb-488c-9998-70cf01f1cc05\") " pod="openshift-marketplace/community-operators-7cktf" Mar 12 15:13:04 crc kubenswrapper[4778]: I0312 15:13:04.992018 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/690208cb-cbdb-488c-9998-70cf01f1cc05-catalog-content\") pod \"community-operators-7cktf\" (UID: \"690208cb-cbdb-488c-9998-70cf01f1cc05\") " pod="openshift-marketplace/community-operators-7cktf" Mar 12 15:13:05 crc kubenswrapper[4778]: I0312 15:13:05.011803 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzsf2\" (UniqueName: \"kubernetes.io/projected/690208cb-cbdb-488c-9998-70cf01f1cc05-kube-api-access-dzsf2\") pod \"community-operators-7cktf\" (UID: 
\"690208cb-cbdb-488c-9998-70cf01f1cc05\") " pod="openshift-marketplace/community-operators-7cktf" Mar 12 15:13:05 crc kubenswrapper[4778]: I0312 15:13:05.159656 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7cktf" Mar 12 15:13:05 crc kubenswrapper[4778]: I0312 15:13:05.449406 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-wcdkc_43a3ffe4-8b64-4e26-b63a-5254a986e4a4/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:13:05 crc kubenswrapper[4778]: I0312 15:13:05.702748 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7cktf"] Mar 12 15:13:05 crc kubenswrapper[4778]: W0312 15:13:05.716543 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod690208cb_cbdb_488c_9998_70cf01f1cc05.slice/crio-b4c4d370a37ff718b518066aa455a41a074b334ef4b1257119931ced115773bd WatchSource:0}: Error finding container b4c4d370a37ff718b518066aa455a41a074b334ef4b1257119931ced115773bd: Status 404 returned error can't find the container with id b4c4d370a37ff718b518066aa455a41a074b334ef4b1257119931ced115773bd Mar 12 15:13:05 crc kubenswrapper[4778]: I0312 15:13:05.886212 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-6nfzc_bd7ac6b4-5600-45ce-b0ea-199dd4baefcb/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:13:06 crc kubenswrapper[4778]: I0312 15:13:06.253811 4778 scope.go:117] "RemoveContainer" containerID="0d1560644663063f80ae67feb786777f7400aa5bf8ea2f2418887c809789d930" Mar 12 15:13:06 crc kubenswrapper[4778]: E0312 15:13:06.254414 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 15:13:06 crc kubenswrapper[4778]: I0312 15:13:06.272903 4778 generic.go:334] "Generic (PLEG): container finished" podID="690208cb-cbdb-488c-9998-70cf01f1cc05" containerID="e77193c08ece5fcdfb900c49597028aa2273d0551c0b231ebb63709f63e11ad7" exitCode=0 Mar 12 15:13:06 crc kubenswrapper[4778]: I0312 15:13:06.272952 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7cktf" event={"ID":"690208cb-cbdb-488c-9998-70cf01f1cc05","Type":"ContainerDied","Data":"e77193c08ece5fcdfb900c49597028aa2273d0551c0b231ebb63709f63e11ad7"} Mar 12 15:13:06 crc kubenswrapper[4778]: I0312 15:13:06.272982 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7cktf" event={"ID":"690208cb-cbdb-488c-9998-70cf01f1cc05","Type":"ContainerStarted","Data":"b4c4d370a37ff718b518066aa455a41a074b334ef4b1257119931ced115773bd"} Mar 12 15:13:06 crc kubenswrapper[4778]: I0312 15:13:06.333314 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-gt58t_b0bb06df-44bb-4939-9492-a6ad3d6b5368/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:13:06 crc kubenswrapper[4778]: I0312 15:13:06.905130 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-8mmjm_c993b33e-6c36-4524-864a-65da461a8e0c/ssh-known-hosts-edpm-deployment/0.log" Mar 12 15:13:07 crc kubenswrapper[4778]: I0312 15:13:07.284837 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7cktf" 
event={"ID":"690208cb-cbdb-488c-9998-70cf01f1cc05","Type":"ContainerStarted","Data":"07e287f5f2855697f874a35c1d2ba69a376cb020469f9688118410de38eee4ca"} Mar 12 15:13:07 crc kubenswrapper[4778]: I0312 15:13:07.744403 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-77f887c49f-fw2qd_bbd76cb8-462f-4e60-b755-ef3170e70d11/proxy-httpd/0.log" Mar 12 15:13:08 crc kubenswrapper[4778]: I0312 15:13:08.283959 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-5knbg_2edc2c90-f91e-402d-809c-514e9d8a5e04/swift-ring-rebalance/0.log" Mar 12 15:13:08 crc kubenswrapper[4778]: I0312 15:13:08.297456 4778 generic.go:334] "Generic (PLEG): container finished" podID="690208cb-cbdb-488c-9998-70cf01f1cc05" containerID="07e287f5f2855697f874a35c1d2ba69a376cb020469f9688118410de38eee4ca" exitCode=0 Mar 12 15:13:08 crc kubenswrapper[4778]: I0312 15:13:08.297519 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7cktf" event={"ID":"690208cb-cbdb-488c-9998-70cf01f1cc05","Type":"ContainerDied","Data":"07e287f5f2855697f874a35c1d2ba69a376cb020469f9688118410de38eee4ca"} Mar 12 15:13:08 crc kubenswrapper[4778]: I0312 15:13:08.861294 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c01f943c-e09c-4727-8cf7-eec58a56b363/account-server/0.log" Mar 12 15:13:09 crc kubenswrapper[4778]: I0312 15:13:09.308746 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7cktf" event={"ID":"690208cb-cbdb-488c-9998-70cf01f1cc05","Type":"ContainerStarted","Data":"cf84feed15251158f0773744cf0837c36cc843b6cf86c92e072869bcc5f3a48f"} Mar 12 15:13:09 crc kubenswrapper[4778]: I0312 15:13:09.338871 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7cktf" podStartSLOduration=2.849708025 podStartE2EDuration="5.338849147s" 
podCreationTimestamp="2026-03-12 15:13:04 +0000 UTC" firstStartedPulling="2026-03-12 15:13:06.274510024 +0000 UTC m=+7404.723205420" lastFinishedPulling="2026-03-12 15:13:08.763651146 +0000 UTC m=+7407.212346542" observedRunningTime="2026-03-12 15:13:09.326806405 +0000 UTC m=+7407.775501801" watchObservedRunningTime="2026-03-12 15:13:09.338849147 +0000 UTC m=+7407.787544543" Mar 12 15:13:09 crc kubenswrapper[4778]: I0312 15:13:09.404756 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-qrk5s_2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:13:09 crc kubenswrapper[4778]: I0312 15:13:09.892545 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_74897d0a-ca7b-4589-bd4c-75910c2d491c/tempest-tests-tempest-tests-runner/0.log" Mar 12 15:13:10 crc kubenswrapper[4778]: I0312 15:13:10.322510 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_82246f69-2112-44e9-a783-a4a5926188b4/test-operator-logs-container/0.log" Mar 12 15:13:10 crc kubenswrapper[4778]: I0312 15:13:10.801378 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-9glvr_41583476-38cd-4c0d-a05a-96ddc5b330ca/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:13:12 crc kubenswrapper[4778]: I0312 15:13:12.154147 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fdrpx"] Mar 12 15:13:12 crc kubenswrapper[4778]: I0312 15:13:12.157208 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fdrpx" Mar 12 15:13:12 crc kubenswrapper[4778]: I0312 15:13:12.194125 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fdrpx"] Mar 12 15:13:12 crc kubenswrapper[4778]: I0312 15:13:12.235947 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrzgn\" (UniqueName: \"kubernetes.io/projected/8549fb56-bc75-4c66-8900-ba62a687ce0e-kube-api-access-nrzgn\") pod \"redhat-marketplace-fdrpx\" (UID: \"8549fb56-bc75-4c66-8900-ba62a687ce0e\") " pod="openshift-marketplace/redhat-marketplace-fdrpx" Mar 12 15:13:12 crc kubenswrapper[4778]: I0312 15:13:12.236003 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8549fb56-bc75-4c66-8900-ba62a687ce0e-catalog-content\") pod \"redhat-marketplace-fdrpx\" (UID: \"8549fb56-bc75-4c66-8900-ba62a687ce0e\") " pod="openshift-marketplace/redhat-marketplace-fdrpx" Mar 12 15:13:12 crc kubenswrapper[4778]: I0312 15:13:12.236349 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8549fb56-bc75-4c66-8900-ba62a687ce0e-utilities\") pod \"redhat-marketplace-fdrpx\" (UID: \"8549fb56-bc75-4c66-8900-ba62a687ce0e\") " pod="openshift-marketplace/redhat-marketplace-fdrpx" Mar 12 15:13:12 crc kubenswrapper[4778]: I0312 15:13:12.338715 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrzgn\" (UniqueName: \"kubernetes.io/projected/8549fb56-bc75-4c66-8900-ba62a687ce0e-kube-api-access-nrzgn\") pod \"redhat-marketplace-fdrpx\" (UID: \"8549fb56-bc75-4c66-8900-ba62a687ce0e\") " pod="openshift-marketplace/redhat-marketplace-fdrpx" Mar 12 15:13:12 crc kubenswrapper[4778]: I0312 15:13:12.339132 4778 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8549fb56-bc75-4c66-8900-ba62a687ce0e-catalog-content\") pod \"redhat-marketplace-fdrpx\" (UID: \"8549fb56-bc75-4c66-8900-ba62a687ce0e\") " pod="openshift-marketplace/redhat-marketplace-fdrpx" Mar 12 15:13:12 crc kubenswrapper[4778]: I0312 15:13:12.339555 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8549fb56-bc75-4c66-8900-ba62a687ce0e-utilities\") pod \"redhat-marketplace-fdrpx\" (UID: \"8549fb56-bc75-4c66-8900-ba62a687ce0e\") " pod="openshift-marketplace/redhat-marketplace-fdrpx" Mar 12 15:13:12 crc kubenswrapper[4778]: I0312 15:13:12.339874 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8549fb56-bc75-4c66-8900-ba62a687ce0e-catalog-content\") pod \"redhat-marketplace-fdrpx\" (UID: \"8549fb56-bc75-4c66-8900-ba62a687ce0e\") " pod="openshift-marketplace/redhat-marketplace-fdrpx" Mar 12 15:13:12 crc kubenswrapper[4778]: I0312 15:13:12.340010 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8549fb56-bc75-4c66-8900-ba62a687ce0e-utilities\") pod \"redhat-marketplace-fdrpx\" (UID: \"8549fb56-bc75-4c66-8900-ba62a687ce0e\") " pod="openshift-marketplace/redhat-marketplace-fdrpx" Mar 12 15:13:12 crc kubenswrapper[4778]: I0312 15:13:12.369389 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrzgn\" (UniqueName: \"kubernetes.io/projected/8549fb56-bc75-4c66-8900-ba62a687ce0e-kube-api-access-nrzgn\") pod \"redhat-marketplace-fdrpx\" (UID: \"8549fb56-bc75-4c66-8900-ba62a687ce0e\") " pod="openshift-marketplace/redhat-marketplace-fdrpx" Mar 12 15:13:12 crc kubenswrapper[4778]: I0312 15:13:12.483122 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fdrpx" Mar 12 15:13:13 crc kubenswrapper[4778]: I0312 15:13:13.059716 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fdrpx"] Mar 12 15:13:13 crc kubenswrapper[4778]: I0312 15:13:13.346888 4778 generic.go:334] "Generic (PLEG): container finished" podID="8549fb56-bc75-4c66-8900-ba62a687ce0e" containerID="9eba7d7cfb68560047065d50b50e472ab61f98cf3b77044f3ee6ed1c9751b8d8" exitCode=0 Mar 12 15:13:13 crc kubenswrapper[4778]: I0312 15:13:13.346953 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fdrpx" event={"ID":"8549fb56-bc75-4c66-8900-ba62a687ce0e","Type":"ContainerDied","Data":"9eba7d7cfb68560047065d50b50e472ab61f98cf3b77044f3ee6ed1c9751b8d8"} Mar 12 15:13:13 crc kubenswrapper[4778]: I0312 15:13:13.347016 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fdrpx" event={"ID":"8549fb56-bc75-4c66-8900-ba62a687ce0e","Type":"ContainerStarted","Data":"74af807183905ac63573678f5f2eb3366ccf037a22980bf197b990c23186ec82"} Mar 12 15:13:14 crc kubenswrapper[4778]: I0312 15:13:14.360368 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fdrpx" event={"ID":"8549fb56-bc75-4c66-8900-ba62a687ce0e","Type":"ContainerStarted","Data":"808d468b4609825642a3f2c61ee5a962c7a86572772b7af2e05f7c2ee27fe762"} Mar 12 15:13:15 crc kubenswrapper[4778]: I0312 15:13:15.160816 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7cktf" Mar 12 15:13:15 crc kubenswrapper[4778]: I0312 15:13:15.161255 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7cktf" Mar 12 15:13:15 crc kubenswrapper[4778]: I0312 15:13:15.206128 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-7cktf" Mar 12 15:13:15 crc kubenswrapper[4778]: I0312 15:13:15.374111 4778 generic.go:334] "Generic (PLEG): container finished" podID="8549fb56-bc75-4c66-8900-ba62a687ce0e" containerID="808d468b4609825642a3f2c61ee5a962c7a86572772b7af2e05f7c2ee27fe762" exitCode=0 Mar 12 15:13:15 crc kubenswrapper[4778]: I0312 15:13:15.374219 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fdrpx" event={"ID":"8549fb56-bc75-4c66-8900-ba62a687ce0e","Type":"ContainerDied","Data":"808d468b4609825642a3f2c61ee5a962c7a86572772b7af2e05f7c2ee27fe762"} Mar 12 15:13:15 crc kubenswrapper[4778]: I0312 15:13:15.424158 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7cktf" Mar 12 15:13:16 crc kubenswrapper[4778]: I0312 15:13:16.383963 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fdrpx" event={"ID":"8549fb56-bc75-4c66-8900-ba62a687ce0e","Type":"ContainerStarted","Data":"fc619cbf16d89a1d02455527225255d2e2cdb8ec4ab739a1921cf7f4ffd16ae3"} Mar 12 15:13:16 crc kubenswrapper[4778]: I0312 15:13:16.407301 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fdrpx" podStartSLOduration=1.712659197 podStartE2EDuration="4.407282373s" podCreationTimestamp="2026-03-12 15:13:12 +0000 UTC" firstStartedPulling="2026-03-12 15:13:13.349355223 +0000 UTC m=+7411.798050619" lastFinishedPulling="2026-03-12 15:13:16.043978389 +0000 UTC m=+7414.492673795" observedRunningTime="2026-03-12 15:13:16.401916111 +0000 UTC m=+7414.850611517" watchObservedRunningTime="2026-03-12 15:13:16.407282373 +0000 UTC m=+7414.855977769" Mar 12 15:13:17 crc kubenswrapper[4778]: I0312 15:13:17.538055 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7cktf"] Mar 12 15:13:17 crc kubenswrapper[4778]: I0312 
15:13:17.539123 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7cktf" podUID="690208cb-cbdb-488c-9998-70cf01f1cc05" containerName="registry-server" containerID="cri-o://cf84feed15251158f0773744cf0837c36cc843b6cf86c92e072869bcc5f3a48f" gracePeriod=2 Mar 12 15:13:18 crc kubenswrapper[4778]: I0312 15:13:18.034686 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7cktf" Mar 12 15:13:18 crc kubenswrapper[4778]: I0312 15:13:18.096213 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzsf2\" (UniqueName: \"kubernetes.io/projected/690208cb-cbdb-488c-9998-70cf01f1cc05-kube-api-access-dzsf2\") pod \"690208cb-cbdb-488c-9998-70cf01f1cc05\" (UID: \"690208cb-cbdb-488c-9998-70cf01f1cc05\") " Mar 12 15:13:18 crc kubenswrapper[4778]: I0312 15:13:18.096301 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/690208cb-cbdb-488c-9998-70cf01f1cc05-utilities\") pod \"690208cb-cbdb-488c-9998-70cf01f1cc05\" (UID: \"690208cb-cbdb-488c-9998-70cf01f1cc05\") " Mar 12 15:13:18 crc kubenswrapper[4778]: I0312 15:13:18.096510 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/690208cb-cbdb-488c-9998-70cf01f1cc05-catalog-content\") pod \"690208cb-cbdb-488c-9998-70cf01f1cc05\" (UID: \"690208cb-cbdb-488c-9998-70cf01f1cc05\") " Mar 12 15:13:18 crc kubenswrapper[4778]: I0312 15:13:18.098010 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/690208cb-cbdb-488c-9998-70cf01f1cc05-utilities" (OuterVolumeSpecName: "utilities") pod "690208cb-cbdb-488c-9998-70cf01f1cc05" (UID: "690208cb-cbdb-488c-9998-70cf01f1cc05"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:13:18 crc kubenswrapper[4778]: I0312 15:13:18.119356 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/690208cb-cbdb-488c-9998-70cf01f1cc05-kube-api-access-dzsf2" (OuterVolumeSpecName: "kube-api-access-dzsf2") pod "690208cb-cbdb-488c-9998-70cf01f1cc05" (UID: "690208cb-cbdb-488c-9998-70cf01f1cc05"). InnerVolumeSpecName "kube-api-access-dzsf2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:13:18 crc kubenswrapper[4778]: I0312 15:13:18.171245 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/690208cb-cbdb-488c-9998-70cf01f1cc05-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "690208cb-cbdb-488c-9998-70cf01f1cc05" (UID: "690208cb-cbdb-488c-9998-70cf01f1cc05"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:13:18 crc kubenswrapper[4778]: I0312 15:13:18.199485 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzsf2\" (UniqueName: \"kubernetes.io/projected/690208cb-cbdb-488c-9998-70cf01f1cc05-kube-api-access-dzsf2\") on node \"crc\" DevicePath \"\"" Mar 12 15:13:18 crc kubenswrapper[4778]: I0312 15:13:18.199531 4778 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/690208cb-cbdb-488c-9998-70cf01f1cc05-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 15:13:18 crc kubenswrapper[4778]: I0312 15:13:18.199545 4778 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/690208cb-cbdb-488c-9998-70cf01f1cc05-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 15:13:18 crc kubenswrapper[4778]: I0312 15:13:18.403656 4778 generic.go:334] "Generic (PLEG): container finished" podID="690208cb-cbdb-488c-9998-70cf01f1cc05" 
containerID="cf84feed15251158f0773744cf0837c36cc843b6cf86c92e072869bcc5f3a48f" exitCode=0 Mar 12 15:13:18 crc kubenswrapper[4778]: I0312 15:13:18.403702 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7cktf" Mar 12 15:13:18 crc kubenswrapper[4778]: I0312 15:13:18.403709 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7cktf" event={"ID":"690208cb-cbdb-488c-9998-70cf01f1cc05","Type":"ContainerDied","Data":"cf84feed15251158f0773744cf0837c36cc843b6cf86c92e072869bcc5f3a48f"} Mar 12 15:13:18 crc kubenswrapper[4778]: I0312 15:13:18.403741 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7cktf" event={"ID":"690208cb-cbdb-488c-9998-70cf01f1cc05","Type":"ContainerDied","Data":"b4c4d370a37ff718b518066aa455a41a074b334ef4b1257119931ced115773bd"} Mar 12 15:13:18 crc kubenswrapper[4778]: I0312 15:13:18.403762 4778 scope.go:117] "RemoveContainer" containerID="cf84feed15251158f0773744cf0837c36cc843b6cf86c92e072869bcc5f3a48f" Mar 12 15:13:18 crc kubenswrapper[4778]: I0312 15:13:18.431606 4778 scope.go:117] "RemoveContainer" containerID="07e287f5f2855697f874a35c1d2ba69a376cb020469f9688118410de38eee4ca" Mar 12 15:13:18 crc kubenswrapper[4778]: I0312 15:13:18.434131 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7cktf"] Mar 12 15:13:18 crc kubenswrapper[4778]: I0312 15:13:18.445917 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7cktf"] Mar 12 15:13:18 crc kubenswrapper[4778]: I0312 15:13:18.459054 4778 scope.go:117] "RemoveContainer" containerID="e77193c08ece5fcdfb900c49597028aa2273d0551c0b231ebb63709f63e11ad7" Mar 12 15:13:18 crc kubenswrapper[4778]: I0312 15:13:18.521099 4778 scope.go:117] "RemoveContainer" containerID="cf84feed15251158f0773744cf0837c36cc843b6cf86c92e072869bcc5f3a48f" Mar 12 
15:13:18 crc kubenswrapper[4778]: E0312 15:13:18.522118 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf84feed15251158f0773744cf0837c36cc843b6cf86c92e072869bcc5f3a48f\": container with ID starting with cf84feed15251158f0773744cf0837c36cc843b6cf86c92e072869bcc5f3a48f not found: ID does not exist" containerID="cf84feed15251158f0773744cf0837c36cc843b6cf86c92e072869bcc5f3a48f" Mar 12 15:13:18 crc kubenswrapper[4778]: I0312 15:13:18.522168 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf84feed15251158f0773744cf0837c36cc843b6cf86c92e072869bcc5f3a48f"} err="failed to get container status \"cf84feed15251158f0773744cf0837c36cc843b6cf86c92e072869bcc5f3a48f\": rpc error: code = NotFound desc = could not find container \"cf84feed15251158f0773744cf0837c36cc843b6cf86c92e072869bcc5f3a48f\": container with ID starting with cf84feed15251158f0773744cf0837c36cc843b6cf86c92e072869bcc5f3a48f not found: ID does not exist" Mar 12 15:13:18 crc kubenswrapper[4778]: I0312 15:13:18.522220 4778 scope.go:117] "RemoveContainer" containerID="07e287f5f2855697f874a35c1d2ba69a376cb020469f9688118410de38eee4ca" Mar 12 15:13:18 crc kubenswrapper[4778]: E0312 15:13:18.522626 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07e287f5f2855697f874a35c1d2ba69a376cb020469f9688118410de38eee4ca\": container with ID starting with 07e287f5f2855697f874a35c1d2ba69a376cb020469f9688118410de38eee4ca not found: ID does not exist" containerID="07e287f5f2855697f874a35c1d2ba69a376cb020469f9688118410de38eee4ca" Mar 12 15:13:18 crc kubenswrapper[4778]: I0312 15:13:18.522664 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07e287f5f2855697f874a35c1d2ba69a376cb020469f9688118410de38eee4ca"} err="failed to get container status 
\"07e287f5f2855697f874a35c1d2ba69a376cb020469f9688118410de38eee4ca\": rpc error: code = NotFound desc = could not find container \"07e287f5f2855697f874a35c1d2ba69a376cb020469f9688118410de38eee4ca\": container with ID starting with 07e287f5f2855697f874a35c1d2ba69a376cb020469f9688118410de38eee4ca not found: ID does not exist" Mar 12 15:13:18 crc kubenswrapper[4778]: I0312 15:13:18.522688 4778 scope.go:117] "RemoveContainer" containerID="e77193c08ece5fcdfb900c49597028aa2273d0551c0b231ebb63709f63e11ad7" Mar 12 15:13:18 crc kubenswrapper[4778]: E0312 15:13:18.523057 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e77193c08ece5fcdfb900c49597028aa2273d0551c0b231ebb63709f63e11ad7\": container with ID starting with e77193c08ece5fcdfb900c49597028aa2273d0551c0b231ebb63709f63e11ad7 not found: ID does not exist" containerID="e77193c08ece5fcdfb900c49597028aa2273d0551c0b231ebb63709f63e11ad7" Mar 12 15:13:18 crc kubenswrapper[4778]: I0312 15:13:18.523085 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e77193c08ece5fcdfb900c49597028aa2273d0551c0b231ebb63709f63e11ad7"} err="failed to get container status \"e77193c08ece5fcdfb900c49597028aa2273d0551c0b231ebb63709f63e11ad7\": rpc error: code = NotFound desc = could not find container \"e77193c08ece5fcdfb900c49597028aa2273d0551c0b231ebb63709f63e11ad7\": container with ID starting with e77193c08ece5fcdfb900c49597028aa2273d0551c0b231ebb63709f63e11ad7 not found: ID does not exist" Mar 12 15:13:20 crc kubenswrapper[4778]: I0312 15:13:20.270147 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="690208cb-cbdb-488c-9998-70cf01f1cc05" path="/var/lib/kubelet/pods/690208cb-cbdb-488c-9998-70cf01f1cc05/volumes" Mar 12 15:13:21 crc kubenswrapper[4778]: I0312 15:13:21.254964 4778 scope.go:117] "RemoveContainer" containerID="0d1560644663063f80ae67feb786777f7400aa5bf8ea2f2418887c809789d930" Mar 12 
15:13:21 crc kubenswrapper[4778]: E0312 15:13:21.255497 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 15:13:22 crc kubenswrapper[4778]: I0312 15:13:22.484017 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fdrpx" Mar 12 15:13:22 crc kubenswrapper[4778]: I0312 15:13:22.484497 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fdrpx" Mar 12 15:13:22 crc kubenswrapper[4778]: I0312 15:13:22.544596 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fdrpx" Mar 12 15:13:23 crc kubenswrapper[4778]: I0312 15:13:23.541893 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fdrpx" Mar 12 15:13:23 crc kubenswrapper[4778]: I0312 15:13:23.594497 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fdrpx"] Mar 12 15:13:25 crc kubenswrapper[4778]: I0312 15:13:25.514275 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fdrpx" podUID="8549fb56-bc75-4c66-8900-ba62a687ce0e" containerName="registry-server" containerID="cri-o://fc619cbf16d89a1d02455527225255d2e2cdb8ec4ab739a1921cf7f4ffd16ae3" gracePeriod=2 Mar 12 15:13:25 crc kubenswrapper[4778]: I0312 15:13:25.984171 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fdrpx" Mar 12 15:13:26 crc kubenswrapper[4778]: I0312 15:13:26.174353 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrzgn\" (UniqueName: \"kubernetes.io/projected/8549fb56-bc75-4c66-8900-ba62a687ce0e-kube-api-access-nrzgn\") pod \"8549fb56-bc75-4c66-8900-ba62a687ce0e\" (UID: \"8549fb56-bc75-4c66-8900-ba62a687ce0e\") " Mar 12 15:13:26 crc kubenswrapper[4778]: I0312 15:13:26.174594 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8549fb56-bc75-4c66-8900-ba62a687ce0e-utilities\") pod \"8549fb56-bc75-4c66-8900-ba62a687ce0e\" (UID: \"8549fb56-bc75-4c66-8900-ba62a687ce0e\") " Mar 12 15:13:26 crc kubenswrapper[4778]: I0312 15:13:26.174886 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8549fb56-bc75-4c66-8900-ba62a687ce0e-catalog-content\") pod \"8549fb56-bc75-4c66-8900-ba62a687ce0e\" (UID: \"8549fb56-bc75-4c66-8900-ba62a687ce0e\") " Mar 12 15:13:26 crc kubenswrapper[4778]: I0312 15:13:26.176019 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8549fb56-bc75-4c66-8900-ba62a687ce0e-utilities" (OuterVolumeSpecName: "utilities") pod "8549fb56-bc75-4c66-8900-ba62a687ce0e" (UID: "8549fb56-bc75-4c66-8900-ba62a687ce0e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:13:26 crc kubenswrapper[4778]: I0312 15:13:26.190005 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8549fb56-bc75-4c66-8900-ba62a687ce0e-kube-api-access-nrzgn" (OuterVolumeSpecName: "kube-api-access-nrzgn") pod "8549fb56-bc75-4c66-8900-ba62a687ce0e" (UID: "8549fb56-bc75-4c66-8900-ba62a687ce0e"). InnerVolumeSpecName "kube-api-access-nrzgn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:13:26 crc kubenswrapper[4778]: I0312 15:13:26.233589 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8549fb56-bc75-4c66-8900-ba62a687ce0e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8549fb56-bc75-4c66-8900-ba62a687ce0e" (UID: "8549fb56-bc75-4c66-8900-ba62a687ce0e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:13:26 crc kubenswrapper[4778]: I0312 15:13:26.277919 4778 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8549fb56-bc75-4c66-8900-ba62a687ce0e-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 15:13:26 crc kubenswrapper[4778]: I0312 15:13:26.277975 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrzgn\" (UniqueName: \"kubernetes.io/projected/8549fb56-bc75-4c66-8900-ba62a687ce0e-kube-api-access-nrzgn\") on node \"crc\" DevicePath \"\"" Mar 12 15:13:26 crc kubenswrapper[4778]: I0312 15:13:26.277996 4778 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8549fb56-bc75-4c66-8900-ba62a687ce0e-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 15:13:26 crc kubenswrapper[4778]: I0312 15:13:26.527601 4778 generic.go:334] "Generic (PLEG): container finished" podID="8549fb56-bc75-4c66-8900-ba62a687ce0e" containerID="fc619cbf16d89a1d02455527225255d2e2cdb8ec4ab739a1921cf7f4ffd16ae3" exitCode=0 Mar 12 15:13:26 crc kubenswrapper[4778]: I0312 15:13:26.527654 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fdrpx" event={"ID":"8549fb56-bc75-4c66-8900-ba62a687ce0e","Type":"ContainerDied","Data":"fc619cbf16d89a1d02455527225255d2e2cdb8ec4ab739a1921cf7f4ffd16ae3"} Mar 12 15:13:26 crc kubenswrapper[4778]: I0312 15:13:26.527669 4778 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fdrpx" Mar 12 15:13:26 crc kubenswrapper[4778]: I0312 15:13:26.527698 4778 scope.go:117] "RemoveContainer" containerID="fc619cbf16d89a1d02455527225255d2e2cdb8ec4ab739a1921cf7f4ffd16ae3" Mar 12 15:13:26 crc kubenswrapper[4778]: I0312 15:13:26.527685 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fdrpx" event={"ID":"8549fb56-bc75-4c66-8900-ba62a687ce0e","Type":"ContainerDied","Data":"74af807183905ac63573678f5f2eb3366ccf037a22980bf197b990c23186ec82"} Mar 12 15:13:26 crc kubenswrapper[4778]: I0312 15:13:26.561073 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fdrpx"] Mar 12 15:13:26 crc kubenswrapper[4778]: I0312 15:13:26.569991 4778 scope.go:117] "RemoveContainer" containerID="808d468b4609825642a3f2c61ee5a962c7a86572772b7af2e05f7c2ee27fe762" Mar 12 15:13:26 crc kubenswrapper[4778]: I0312 15:13:26.576204 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fdrpx"] Mar 12 15:13:26 crc kubenswrapper[4778]: I0312 15:13:26.599469 4778 scope.go:117] "RemoveContainer" containerID="9eba7d7cfb68560047065d50b50e472ab61f98cf3b77044f3ee6ed1c9751b8d8" Mar 12 15:13:26 crc kubenswrapper[4778]: I0312 15:13:26.665879 4778 scope.go:117] "RemoveContainer" containerID="fc619cbf16d89a1d02455527225255d2e2cdb8ec4ab739a1921cf7f4ffd16ae3" Mar 12 15:13:26 crc kubenswrapper[4778]: E0312 15:13:26.666571 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc619cbf16d89a1d02455527225255d2e2cdb8ec4ab739a1921cf7f4ffd16ae3\": container with ID starting with fc619cbf16d89a1d02455527225255d2e2cdb8ec4ab739a1921cf7f4ffd16ae3 not found: ID does not exist" containerID="fc619cbf16d89a1d02455527225255d2e2cdb8ec4ab739a1921cf7f4ffd16ae3" Mar 12 15:13:26 crc kubenswrapper[4778]: I0312 15:13:26.666609 4778 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc619cbf16d89a1d02455527225255d2e2cdb8ec4ab739a1921cf7f4ffd16ae3"} err="failed to get container status \"fc619cbf16d89a1d02455527225255d2e2cdb8ec4ab739a1921cf7f4ffd16ae3\": rpc error: code = NotFound desc = could not find container \"fc619cbf16d89a1d02455527225255d2e2cdb8ec4ab739a1921cf7f4ffd16ae3\": container with ID starting with fc619cbf16d89a1d02455527225255d2e2cdb8ec4ab739a1921cf7f4ffd16ae3 not found: ID does not exist" Mar 12 15:13:26 crc kubenswrapper[4778]: I0312 15:13:26.666634 4778 scope.go:117] "RemoveContainer" containerID="808d468b4609825642a3f2c61ee5a962c7a86572772b7af2e05f7c2ee27fe762" Mar 12 15:13:26 crc kubenswrapper[4778]: E0312 15:13:26.667100 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"808d468b4609825642a3f2c61ee5a962c7a86572772b7af2e05f7c2ee27fe762\": container with ID starting with 808d468b4609825642a3f2c61ee5a962c7a86572772b7af2e05f7c2ee27fe762 not found: ID does not exist" containerID="808d468b4609825642a3f2c61ee5a962c7a86572772b7af2e05f7c2ee27fe762" Mar 12 15:13:26 crc kubenswrapper[4778]: I0312 15:13:26.667132 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"808d468b4609825642a3f2c61ee5a962c7a86572772b7af2e05f7c2ee27fe762"} err="failed to get container status \"808d468b4609825642a3f2c61ee5a962c7a86572772b7af2e05f7c2ee27fe762\": rpc error: code = NotFound desc = could not find container \"808d468b4609825642a3f2c61ee5a962c7a86572772b7af2e05f7c2ee27fe762\": container with ID starting with 808d468b4609825642a3f2c61ee5a962c7a86572772b7af2e05f7c2ee27fe762 not found: ID does not exist" Mar 12 15:13:26 crc kubenswrapper[4778]: I0312 15:13:26.667149 4778 scope.go:117] "RemoveContainer" containerID="9eba7d7cfb68560047065d50b50e472ab61f98cf3b77044f3ee6ed1c9751b8d8" Mar 12 15:13:26 crc kubenswrapper[4778]: E0312 
15:13:26.667633 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9eba7d7cfb68560047065d50b50e472ab61f98cf3b77044f3ee6ed1c9751b8d8\": container with ID starting with 9eba7d7cfb68560047065d50b50e472ab61f98cf3b77044f3ee6ed1c9751b8d8 not found: ID does not exist" containerID="9eba7d7cfb68560047065d50b50e472ab61f98cf3b77044f3ee6ed1c9751b8d8" Mar 12 15:13:26 crc kubenswrapper[4778]: I0312 15:13:26.667660 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9eba7d7cfb68560047065d50b50e472ab61f98cf3b77044f3ee6ed1c9751b8d8"} err="failed to get container status \"9eba7d7cfb68560047065d50b50e472ab61f98cf3b77044f3ee6ed1c9751b8d8\": rpc error: code = NotFound desc = could not find container \"9eba7d7cfb68560047065d50b50e472ab61f98cf3b77044f3ee6ed1c9751b8d8\": container with ID starting with 9eba7d7cfb68560047065d50b50e472ab61f98cf3b77044f3ee6ed1c9751b8d8 not found: ID does not exist" Mar 12 15:13:28 crc kubenswrapper[4778]: I0312 15:13:28.266294 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8549fb56-bc75-4c66-8900-ba62a687ce0e" path="/var/lib/kubelet/pods/8549fb56-bc75-4c66-8900-ba62a687ce0e/volumes" Mar 12 15:13:35 crc kubenswrapper[4778]: I0312 15:13:35.254376 4778 scope.go:117] "RemoveContainer" containerID="0d1560644663063f80ae67feb786777f7400aa5bf8ea2f2418887c809789d930" Mar 12 15:13:35 crc kubenswrapper[4778]: I0312 15:13:35.624119 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" event={"ID":"24438fc6-dab0-4a9e-8b97-2532da76d9cd","Type":"ContainerStarted","Data":"9c0ffa691d48b1023164222bd8c69a88e4e7a89d268ba03833dc6ae4ab4b44b3"} Mar 12 15:13:47 crc kubenswrapper[4778]: I0312 15:13:47.675564 4778 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_4d52f25c614d14ea9d555eaa2e62114c0c7d01977d30b495569716fcde5fmfr_e1d0ffee-229e-4da3-ac89-02bf6f6a439f/extract/0.log" Mar 12 15:13:58 crc kubenswrapper[4778]: I0312 15:13:58.975960 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-677bd678f7-6h2c2_ffb8a1f4-4533-4368-a900-95d37fe1d3ad/manager/0.log" Mar 12 15:14:00 crc kubenswrapper[4778]: I0312 15:14:00.150001 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555474-s5qjz"] Mar 12 15:14:00 crc kubenswrapper[4778]: E0312 15:14:00.151120 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8549fb56-bc75-4c66-8900-ba62a687ce0e" containerName="extract-content" Mar 12 15:14:00 crc kubenswrapper[4778]: I0312 15:14:00.151137 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="8549fb56-bc75-4c66-8900-ba62a687ce0e" containerName="extract-content" Mar 12 15:14:00 crc kubenswrapper[4778]: E0312 15:14:00.151168 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="690208cb-cbdb-488c-9998-70cf01f1cc05" containerName="extract-utilities" Mar 12 15:14:00 crc kubenswrapper[4778]: I0312 15:14:00.151214 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="690208cb-cbdb-488c-9998-70cf01f1cc05" containerName="extract-utilities" Mar 12 15:14:00 crc kubenswrapper[4778]: E0312 15:14:00.151234 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8549fb56-bc75-4c66-8900-ba62a687ce0e" containerName="registry-server" Mar 12 15:14:00 crc kubenswrapper[4778]: I0312 15:14:00.151242 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="8549fb56-bc75-4c66-8900-ba62a687ce0e" containerName="registry-server" Mar 12 15:14:00 crc kubenswrapper[4778]: E0312 15:14:00.151257 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="690208cb-cbdb-488c-9998-70cf01f1cc05" containerName="extract-content" Mar 12 15:14:00 crc 
kubenswrapper[4778]: I0312 15:14:00.151289 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="690208cb-cbdb-488c-9998-70cf01f1cc05" containerName="extract-content" Mar 12 15:14:00 crc kubenswrapper[4778]: E0312 15:14:00.151319 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8549fb56-bc75-4c66-8900-ba62a687ce0e" containerName="extract-utilities" Mar 12 15:14:00 crc kubenswrapper[4778]: I0312 15:14:00.151327 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="8549fb56-bc75-4c66-8900-ba62a687ce0e" containerName="extract-utilities" Mar 12 15:14:00 crc kubenswrapper[4778]: E0312 15:14:00.151377 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="690208cb-cbdb-488c-9998-70cf01f1cc05" containerName="registry-server" Mar 12 15:14:00 crc kubenswrapper[4778]: I0312 15:14:00.151387 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="690208cb-cbdb-488c-9998-70cf01f1cc05" containerName="registry-server" Mar 12 15:14:00 crc kubenswrapper[4778]: I0312 15:14:00.151722 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="8549fb56-bc75-4c66-8900-ba62a687ce0e" containerName="registry-server" Mar 12 15:14:00 crc kubenswrapper[4778]: I0312 15:14:00.151785 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="690208cb-cbdb-488c-9998-70cf01f1cc05" containerName="registry-server" Mar 12 15:14:00 crc kubenswrapper[4778]: I0312 15:14:00.152938 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555474-s5qjz" Mar 12 15:14:00 crc kubenswrapper[4778]: I0312 15:14:00.155934 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 15:14:00 crc kubenswrapper[4778]: I0312 15:14:00.156330 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 15:14:00 crc kubenswrapper[4778]: I0312 15:14:00.156473 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 15:14:00 crc kubenswrapper[4778]: I0312 15:14:00.158159 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555474-s5qjz"] Mar 12 15:14:00 crc kubenswrapper[4778]: I0312 15:14:00.272221 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2jq6\" (UniqueName: \"kubernetes.io/projected/c0be289e-351f-4101-acbd-0127a4b295dc-kube-api-access-v2jq6\") pod \"auto-csr-approver-29555474-s5qjz\" (UID: \"c0be289e-351f-4101-acbd-0127a4b295dc\") " pod="openshift-infra/auto-csr-approver-29555474-s5qjz" Mar 12 15:14:00 crc kubenswrapper[4778]: I0312 15:14:00.373751 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2jq6\" (UniqueName: \"kubernetes.io/projected/c0be289e-351f-4101-acbd-0127a4b295dc-kube-api-access-v2jq6\") pod \"auto-csr-approver-29555474-s5qjz\" (UID: \"c0be289e-351f-4101-acbd-0127a4b295dc\") " pod="openshift-infra/auto-csr-approver-29555474-s5qjz" Mar 12 15:14:00 crc kubenswrapper[4778]: I0312 15:14:00.397892 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2jq6\" (UniqueName: \"kubernetes.io/projected/c0be289e-351f-4101-acbd-0127a4b295dc-kube-api-access-v2jq6\") pod \"auto-csr-approver-29555474-s5qjz\" (UID: \"c0be289e-351f-4101-acbd-0127a4b295dc\") " 
pod="openshift-infra/auto-csr-approver-29555474-s5qjz" Mar 12 15:14:00 crc kubenswrapper[4778]: I0312 15:14:00.476833 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555474-s5qjz" Mar 12 15:14:00 crc kubenswrapper[4778]: I0312 15:14:00.957673 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555474-s5qjz"] Mar 12 15:14:01 crc kubenswrapper[4778]: I0312 15:14:01.282673 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-984cd4dcf-xm4cc_c8818ac0-af8b-42c9-a923-425fe79ed203/manager/0.log" Mar 12 15:14:01 crc kubenswrapper[4778]: I0312 15:14:01.715336 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-66d56f6ff4-9n6jv_ad531191-d7c5-4ef6-9929-3a5869751d98/manager/0.log" Mar 12 15:14:01 crc kubenswrapper[4778]: I0312 15:14:01.889617 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555474-s5qjz" event={"ID":"c0be289e-351f-4101-acbd-0127a4b295dc","Type":"ContainerStarted","Data":"80ccd19c20c4529fd068dccf500b635d95b0d63bbb39a3c84b2d03bf76b7d944"} Mar 12 15:14:02 crc kubenswrapper[4778]: I0312 15:14:02.159290 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5964f64c48-gknp2_db7f6b97-2903-44bf-803f-c00c337400b9/manager/0.log" Mar 12 15:14:02 crc kubenswrapper[4778]: I0312 15:14:02.559223 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-77b6666d85-b7tkm_e290c1ea-a39d-451e-a24b-17a2b61ff6f0/manager/0.log" Mar 12 15:14:02 crc kubenswrapper[4778]: I0312 15:14:02.901628 4778 generic.go:334] "Generic (PLEG): container finished" podID="c0be289e-351f-4101-acbd-0127a4b295dc" containerID="deb89f96ad2640fa0674d82f73344504fdcc846f9e4815ae8eef2ce9a216dca5" 
exitCode=0 Mar 12 15:14:02 crc kubenswrapper[4778]: I0312 15:14:02.901707 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555474-s5qjz" event={"ID":"c0be289e-351f-4101-acbd-0127a4b295dc","Type":"ContainerDied","Data":"deb89f96ad2640fa0674d82f73344504fdcc846f9e4815ae8eef2ce9a216dca5"} Mar 12 15:14:02 crc kubenswrapper[4778]: I0312 15:14:02.938941 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-6d9d6b584d-4jgt8_4c2bf703-ecc1-4bb1-aa03-a64e55dfdb71/manager/0.log" Mar 12 15:14:03 crc kubenswrapper[4778]: I0312 15:14:03.697480 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-5995f4446f-5d6qz_02bc06ca-f4e6-4fde-bd5d-882714d9652c/manager/0.log" Mar 12 15:14:04 crc kubenswrapper[4778]: I0312 15:14:04.121717 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6bbb499bbc-qb8s8_98a4cfbd-3037-48b5-9047-5d574dcc0aca/manager/0.log" Mar 12 15:14:04 crc kubenswrapper[4778]: I0312 15:14:04.278208 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555474-s5qjz" Mar 12 15:14:04 crc kubenswrapper[4778]: I0312 15:14:04.377813 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2jq6\" (UniqueName: \"kubernetes.io/projected/c0be289e-351f-4101-acbd-0127a4b295dc-kube-api-access-v2jq6\") pod \"c0be289e-351f-4101-acbd-0127a4b295dc\" (UID: \"c0be289e-351f-4101-acbd-0127a4b295dc\") " Mar 12 15:14:04 crc kubenswrapper[4778]: I0312 15:14:04.383456 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0be289e-351f-4101-acbd-0127a4b295dc-kube-api-access-v2jq6" (OuterVolumeSpecName: "kube-api-access-v2jq6") pod "c0be289e-351f-4101-acbd-0127a4b295dc" (UID: "c0be289e-351f-4101-acbd-0127a4b295dc"). InnerVolumeSpecName "kube-api-access-v2jq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:14:04 crc kubenswrapper[4778]: I0312 15:14:04.479591 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2jq6\" (UniqueName: \"kubernetes.io/projected/c0be289e-351f-4101-acbd-0127a4b295dc-kube-api-access-v2jq6\") on node \"crc\" DevicePath \"\"" Mar 12 15:14:04 crc kubenswrapper[4778]: I0312 15:14:04.614600 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-684f77d66d-7dxdh_7e02c37f-b9af-46c9-a743-03ead9b060db/manager/0.log" Mar 12 15:14:04 crc kubenswrapper[4778]: I0312 15:14:04.925961 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555474-s5qjz" event={"ID":"c0be289e-351f-4101-acbd-0127a4b295dc","Type":"ContainerDied","Data":"80ccd19c20c4529fd068dccf500b635d95b0d63bbb39a3c84b2d03bf76b7d944"} Mar 12 15:14:04 crc kubenswrapper[4778]: I0312 15:14:04.926300 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80ccd19c20c4529fd068dccf500b635d95b0d63bbb39a3c84b2d03bf76b7d944" Mar 12 
15:14:04 crc kubenswrapper[4778]: I0312 15:14:04.926040 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555474-s5qjz" Mar 12 15:14:05 crc kubenswrapper[4778]: I0312 15:14:05.031685 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-68f45f9d9f-pn8tk_5e38a4fd-95f8-437b-923b-eca33b1387e6/manager/0.log" Mar 12 15:14:05 crc kubenswrapper[4778]: I0312 15:14:05.356545 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555468-tpk68"] Mar 12 15:14:05 crc kubenswrapper[4778]: I0312 15:14:05.366108 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555468-tpk68"] Mar 12 15:14:05 crc kubenswrapper[4778]: I0312 15:14:05.445306 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-658d4cdd5-jlbft_2d577800-0ee1-4fe5-a7fb-8794fb8c4c6f/manager/0.log" Mar 12 15:14:05 crc kubenswrapper[4778]: I0312 15:14:05.887978 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-776c5696bf-dd2ft_076835c9-352b-4e40-80c4-3bce3bb80594/manager/0.log" Mar 12 15:14:06 crc kubenswrapper[4778]: I0312 15:14:06.271667 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1760b72d-ab0b-489f-b263-7279ce51dc5f" path="/var/lib/kubelet/pods/1760b72d-ab0b-489f-b263-7279ce51dc5f/volumes" Mar 12 15:14:06 crc kubenswrapper[4778]: I0312 15:14:06.423472 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-686d5f9fbd-vv9rc_d7288cc6-4247-4d03-bd37-9862243bf613/manager/0.log" Mar 12 15:14:06 crc kubenswrapper[4778]: I0312 15:14:06.879429 4778 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4f55cb5c-cdgg9_1a01d06c-be6f-45de-a22d-c8f1058a3a84/manager/0.log" Mar 12 15:14:07 crc kubenswrapper[4778]: I0312 15:14:07.287371 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-557ccf57b7qq9w6_4f7d316e-6896-4f84-8423-6f79778c1c6b/manager/0.log" Mar 12 15:14:07 crc kubenswrapper[4778]: I0312 15:14:07.935276 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5bc4df7446-x9bsl_34bbdc16-4518-4ee5-9a70-3cedcc5f0159/operator/0.log" Mar 12 15:14:09 crc kubenswrapper[4778]: I0312 15:14:09.100486 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ptp4v"] Mar 12 15:14:09 crc kubenswrapper[4778]: E0312 15:14:09.101449 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0be289e-351f-4101-acbd-0127a4b295dc" containerName="oc" Mar 12 15:14:09 crc kubenswrapper[4778]: I0312 15:14:09.101473 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0be289e-351f-4101-acbd-0127a4b295dc" containerName="oc" Mar 12 15:14:09 crc kubenswrapper[4778]: I0312 15:14:09.103787 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0be289e-351f-4101-acbd-0127a4b295dc" containerName="oc" Mar 12 15:14:09 crc kubenswrapper[4778]: I0312 15:14:09.108486 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ptp4v" Mar 12 15:14:09 crc kubenswrapper[4778]: I0312 15:14:09.119713 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ptp4v"] Mar 12 15:14:09 crc kubenswrapper[4778]: I0312 15:14:09.269412 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m98t\" (UniqueName: \"kubernetes.io/projected/0b6c3f2d-9c9f-406c-86ba-1eee5ef9228d-kube-api-access-6m98t\") pod \"redhat-operators-ptp4v\" (UID: \"0b6c3f2d-9c9f-406c-86ba-1eee5ef9228d\") " pod="openshift-marketplace/redhat-operators-ptp4v" Mar 12 15:14:09 crc kubenswrapper[4778]: I0312 15:14:09.269579 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b6c3f2d-9c9f-406c-86ba-1eee5ef9228d-utilities\") pod \"redhat-operators-ptp4v\" (UID: \"0b6c3f2d-9c9f-406c-86ba-1eee5ef9228d\") " pod="openshift-marketplace/redhat-operators-ptp4v" Mar 12 15:14:09 crc kubenswrapper[4778]: I0312 15:14:09.269697 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b6c3f2d-9c9f-406c-86ba-1eee5ef9228d-catalog-content\") pod \"redhat-operators-ptp4v\" (UID: \"0b6c3f2d-9c9f-406c-86ba-1eee5ef9228d\") " pod="openshift-marketplace/redhat-operators-ptp4v" Mar 12 15:14:09 crc kubenswrapper[4778]: I0312 15:14:09.370979 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b6c3f2d-9c9f-406c-86ba-1eee5ef9228d-utilities\") pod \"redhat-operators-ptp4v\" (UID: \"0b6c3f2d-9c9f-406c-86ba-1eee5ef9228d\") " pod="openshift-marketplace/redhat-operators-ptp4v" Mar 12 15:14:09 crc kubenswrapper[4778]: I0312 15:14:09.371107 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b6c3f2d-9c9f-406c-86ba-1eee5ef9228d-catalog-content\") pod \"redhat-operators-ptp4v\" (UID: \"0b6c3f2d-9c9f-406c-86ba-1eee5ef9228d\") " pod="openshift-marketplace/redhat-operators-ptp4v" Mar 12 15:14:09 crc kubenswrapper[4778]: I0312 15:14:09.371303 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6m98t\" (UniqueName: \"kubernetes.io/projected/0b6c3f2d-9c9f-406c-86ba-1eee5ef9228d-kube-api-access-6m98t\") pod \"redhat-operators-ptp4v\" (UID: \"0b6c3f2d-9c9f-406c-86ba-1eee5ef9228d\") " pod="openshift-marketplace/redhat-operators-ptp4v" Mar 12 15:14:09 crc kubenswrapper[4778]: I0312 15:14:09.371616 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b6c3f2d-9c9f-406c-86ba-1eee5ef9228d-catalog-content\") pod \"redhat-operators-ptp4v\" (UID: \"0b6c3f2d-9c9f-406c-86ba-1eee5ef9228d\") " pod="openshift-marketplace/redhat-operators-ptp4v" Mar 12 15:14:09 crc kubenswrapper[4778]: I0312 15:14:09.372253 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b6c3f2d-9c9f-406c-86ba-1eee5ef9228d-utilities\") pod \"redhat-operators-ptp4v\" (UID: \"0b6c3f2d-9c9f-406c-86ba-1eee5ef9228d\") " pod="openshift-marketplace/redhat-operators-ptp4v" Mar 12 15:14:09 crc kubenswrapper[4778]: I0312 15:14:09.390537 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6m98t\" (UniqueName: \"kubernetes.io/projected/0b6c3f2d-9c9f-406c-86ba-1eee5ef9228d-kube-api-access-6m98t\") pod \"redhat-operators-ptp4v\" (UID: \"0b6c3f2d-9c9f-406c-86ba-1eee5ef9228d\") " pod="openshift-marketplace/redhat-operators-ptp4v" Mar 12 15:14:09 crc kubenswrapper[4778]: I0312 15:14:09.434856 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ptp4v" Mar 12 15:14:09 crc kubenswrapper[4778]: I0312 15:14:09.588024 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5785b7957-7vdgw_d0784623-5f08-4109-9c7e-0a329210ce07/manager/0.log" Mar 12 15:14:09 crc kubenswrapper[4778]: I0312 15:14:09.931775 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ptp4v"] Mar 12 15:14:09 crc kubenswrapper[4778]: I0312 15:14:09.970571 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ptp4v" event={"ID":"0b6c3f2d-9c9f-406c-86ba-1eee5ef9228d","Type":"ContainerStarted","Data":"04acc9cfa9e3d6a97a649b3f446aec567265c61efd313e0db14406ccbd638aff"} Mar 12 15:14:10 crc kubenswrapper[4778]: I0312 15:14:10.044309 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-b2fsv_748546a6-1355-470f-b8d0-de395cf3f681/registry-server/0.log" Mar 12 15:14:10 crc kubenswrapper[4778]: I0312 15:14:10.456243 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-bbc5b68f9-bbgmb_8d38fd7e-6fa1-4b0c-9c82-9c57290c7837/manager/0.log" Mar 12 15:14:10 crc kubenswrapper[4778]: I0312 15:14:10.902235 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-574d45c66c-wvpf8_52524252-25bd-49e5-822e-3d4668aff2f9/manager/0.log" Mar 12 15:14:10 crc kubenswrapper[4778]: I0312 15:14:10.979795 4778 generic.go:334] "Generic (PLEG): container finished" podID="0b6c3f2d-9c9f-406c-86ba-1eee5ef9228d" containerID="217a23ca1a727d114e2cf1fc5f98b8b425bc25bc2bf661e3774449d8591c7a7e" exitCode=0 Mar 12 15:14:10 crc kubenswrapper[4778]: I0312 15:14:10.979832 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ptp4v" 
event={"ID":"0b6c3f2d-9c9f-406c-86ba-1eee5ef9228d","Type":"ContainerDied","Data":"217a23ca1a727d114e2cf1fc5f98b8b425bc25bc2bf661e3774449d8591c7a7e"} Mar 12 15:14:11 crc kubenswrapper[4778]: I0312 15:14:11.316613 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-shf7b_034f39d8-a33e-4e37-bcde-51fb22debdd1/operator/0.log" Mar 12 15:14:11 crc kubenswrapper[4778]: I0312 15:14:11.754843 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-677c674df7-84mps_64a36384-f2e6-4077-b2ca-de2a6ce6ea06/manager/0.log" Mar 12 15:14:11 crc kubenswrapper[4778]: I0312 15:14:11.993098 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ptp4v" event={"ID":"0b6c3f2d-9c9f-406c-86ba-1eee5ef9228d","Type":"ContainerStarted","Data":"dc5279aa56d851d1367ade7841dcd4e8d413b3295acaa8381ae6f7bf32d030c1"} Mar 12 15:14:12 crc kubenswrapper[4778]: I0312 15:14:12.237150 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6cd66dbd4b-gfv5z_6ad9bf9f-7214-44bc-a65d-1dcbf385fc2c/manager/0.log" Mar 12 15:14:12 crc kubenswrapper[4778]: I0312 15:14:12.687440 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5c5cb9c4d7-pcfrz_ed9b9271-4ae9-440a-9411-15d46267106e/manager/0.log" Mar 12 15:14:13 crc kubenswrapper[4778]: I0312 15:14:13.143687 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6dd88c6f67-2tjsk_8c02ecb8-0e15-4672-823a-c4437ca5bf8c/manager/0.log" Mar 12 15:14:15 crc kubenswrapper[4778]: I0312 15:14:15.030625 4778 generic.go:334] "Generic (PLEG): container finished" podID="0b6c3f2d-9c9f-406c-86ba-1eee5ef9228d" containerID="dc5279aa56d851d1367ade7841dcd4e8d413b3295acaa8381ae6f7bf32d030c1" exitCode=0 Mar 12 15:14:15 
crc kubenswrapper[4778]: I0312 15:14:15.030705 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ptp4v" event={"ID":"0b6c3f2d-9c9f-406c-86ba-1eee5ef9228d","Type":"ContainerDied","Data":"dc5279aa56d851d1367ade7841dcd4e8d413b3295acaa8381ae6f7bf32d030c1"} Mar 12 15:14:16 crc kubenswrapper[4778]: I0312 15:14:16.040949 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ptp4v" event={"ID":"0b6c3f2d-9c9f-406c-86ba-1eee5ef9228d","Type":"ContainerStarted","Data":"d16e27ae8470c7352f41095add77058c0bdaf9645d86b1b180d532da10b9aee2"} Mar 12 15:14:16 crc kubenswrapper[4778]: I0312 15:14:16.060820 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ptp4v" podStartSLOduration=2.479271506 podStartE2EDuration="7.060801443s" podCreationTimestamp="2026-03-12 15:14:09 +0000 UTC" firstStartedPulling="2026-03-12 15:14:10.982500545 +0000 UTC m=+7469.431195941" lastFinishedPulling="2026-03-12 15:14:15.564030472 +0000 UTC m=+7474.012725878" observedRunningTime="2026-03-12 15:14:16.060632608 +0000 UTC m=+7474.509328004" watchObservedRunningTime="2026-03-12 15:14:16.060801443 +0000 UTC m=+7474.509496839" Mar 12 15:14:18 crc kubenswrapper[4778]: I0312 15:14:18.081756 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-86cb765474-5pq5z_6bd172c5-383f-4273-98a5-2c92223dc765/barbican-api-log/0.log" Mar 12 15:14:18 crc kubenswrapper[4778]: I0312 15:14:18.910611 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-65c9994dfd-xznqh_8ee1f546-8428-4b23-93e4-b8370fd4224b/barbican-keystone-listener-log/0.log" Mar 12 15:14:19 crc kubenswrapper[4778]: I0312 15:14:19.435482 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ptp4v" Mar 12 15:14:19 crc kubenswrapper[4778]: I0312 15:14:19.435535 4778 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ptp4v" Mar 12 15:14:19 crc kubenswrapper[4778]: I0312 15:14:19.463797 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7dcf9787-ngc87_d505bb59-3c9e-4cfa-891c-c8e0068e2567/barbican-worker-log/0.log" Mar 12 15:14:20 crc kubenswrapper[4778]: I0312 15:14:20.040216 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-ntpnx_b99627a8-43d8-4f7d-90f7-530eda3c2213/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:14:20 crc kubenswrapper[4778]: I0312 15:14:20.501223 4778 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ptp4v" podUID="0b6c3f2d-9c9f-406c-86ba-1eee5ef9228d" containerName="registry-server" probeResult="failure" output=< Mar 12 15:14:20 crc kubenswrapper[4778]: timeout: failed to connect service ":50051" within 1s Mar 12 15:14:20 crc kubenswrapper[4778]: > Mar 12 15:14:20 crc kubenswrapper[4778]: I0312 15:14:20.663057 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_9f1d0355-a73a-4a93-94fb-b439436cf1b1/ceilometer-central-agent/0.log" Mar 12 15:14:21 crc kubenswrapper[4778]: I0312 15:14:21.160743 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_99f72014-50e8-4dd4-9764-1b2c7d546b30/cinder-api-log/0.log" Mar 12 15:14:21 crc kubenswrapper[4778]: I0312 15:14:21.693820 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_39ee2404-53a8-4598-8c4b-c3a34fbf3480/cinder-scheduler/0.log" Mar 12 15:14:22 crc kubenswrapper[4778]: I0312 15:14:22.222812 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-4szjl_5c5541f3-fb44-476b-91c2-b07dffe50894/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:14:22 
crc kubenswrapper[4778]: I0312 15:14:22.750021 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-jg9z6_36bb4acd-fab3-4998-a8cd-a6ebcc800fc8/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:14:24 crc kubenswrapper[4778]: I0312 15:14:24.221130 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f89cfcd7f-vk6h4_46f34397-57fe-425d-b69d-040f4384ac69/dnsmasq-dns/0.log" Mar 12 15:14:24 crc kubenswrapper[4778]: I0312 15:14:24.731314 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-2xksx_96ba9a1b-ae5f-4b42-b8eb-1f0e3656ae61/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:14:25 crc kubenswrapper[4778]: I0312 15:14:25.171090 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_81c1a05c-5642-43d4-8a7b-229330168332/glance-log/0.log" Mar 12 15:14:25 crc kubenswrapper[4778]: I0312 15:14:25.654587 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_7fa757af-1c91-4b93-8916-5bbd99b8522e/glance-log/0.log" Mar 12 15:14:26 crc kubenswrapper[4778]: I0312 15:14:26.119839 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-bngcx_f69e6cfe-f7c2-4127-b4df-710725c52227/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:14:26 crc kubenswrapper[4778]: I0312 15:14:26.504345 4778 scope.go:117] "RemoveContainer" containerID="98229a921540253e11a1f90e715794eabbd7a1c19952afa6969c95cd62f6a069" Mar 12 15:14:26 crc kubenswrapper[4778]: I0312 15:14:26.755838 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-g252n_29f8609b-4a3b-42ba-9450-a2b633bb4c2c/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:14:28 crc 
kubenswrapper[4778]: I0312 15:14:28.358608 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-69b6dc4885-6lrlq_a56bb599-f10d-4564-b6bf-48128dc2c7f1/keystone-api/0.log" Mar 12 15:14:29 crc kubenswrapper[4778]: I0312 15:14:29.486619 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ptp4v" Mar 12 15:14:29 crc kubenswrapper[4778]: I0312 15:14:29.542158 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ptp4v" Mar 12 15:14:29 crc kubenswrapper[4778]: I0312 15:14:29.720636 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ptp4v"] Mar 12 15:14:29 crc kubenswrapper[4778]: I0312 15:14:29.887281 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-69b6dc4885-z4h9m_16dea17b-eaa4-4bbf-8895-c077b3e28d66/keystone-api/0.log" Mar 12 15:14:30 crc kubenswrapper[4778]: I0312 15:14:30.353541 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29555401-vjgkl_e4df6927-3452-4b36-b59a-a1fdcd4272a4/keystone-cron/0.log" Mar 12 15:14:30 crc kubenswrapper[4778]: I0312 15:14:30.821534 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29555461-lmqk9_ebdf3274-70cb-4083-bf12-5d1038a9b7ba/keystone-cron/0.log" Mar 12 15:14:31 crc kubenswrapper[4778]: I0312 15:14:31.188156 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ptp4v" podUID="0b6c3f2d-9c9f-406c-86ba-1eee5ef9228d" containerName="registry-server" containerID="cri-o://d16e27ae8470c7352f41095add77058c0bdaf9645d86b1b180d532da10b9aee2" gracePeriod=2 Mar 12 15:14:31 crc kubenswrapper[4778]: I0312 15:14:31.266031 4778 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_kube-state-metrics-0_51f24fcd-aff5-4785-abf7-4936180cee78/kube-state-metrics/0.log" Mar 12 15:14:31 crc kubenswrapper[4778]: I0312 15:14:31.729835 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-4m9w8_8713b951-b516-42bd-9286-4343e5bcc955/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:14:31 crc kubenswrapper[4778]: I0312 15:14:31.739654 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ptp4v" Mar 12 15:14:31 crc kubenswrapper[4778]: I0312 15:14:31.917711 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b6c3f2d-9c9f-406c-86ba-1eee5ef9228d-catalog-content\") pod \"0b6c3f2d-9c9f-406c-86ba-1eee5ef9228d\" (UID: \"0b6c3f2d-9c9f-406c-86ba-1eee5ef9228d\") " Mar 12 15:14:31 crc kubenswrapper[4778]: I0312 15:14:31.917904 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b6c3f2d-9c9f-406c-86ba-1eee5ef9228d-utilities\") pod \"0b6c3f2d-9c9f-406c-86ba-1eee5ef9228d\" (UID: \"0b6c3f2d-9c9f-406c-86ba-1eee5ef9228d\") " Mar 12 15:14:31 crc kubenswrapper[4778]: I0312 15:14:31.917966 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6m98t\" (UniqueName: \"kubernetes.io/projected/0b6c3f2d-9c9f-406c-86ba-1eee5ef9228d-kube-api-access-6m98t\") pod \"0b6c3f2d-9c9f-406c-86ba-1eee5ef9228d\" (UID: \"0b6c3f2d-9c9f-406c-86ba-1eee5ef9228d\") " Mar 12 15:14:31 crc kubenswrapper[4778]: I0312 15:14:31.918971 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b6c3f2d-9c9f-406c-86ba-1eee5ef9228d-utilities" (OuterVolumeSpecName: "utilities") pod "0b6c3f2d-9c9f-406c-86ba-1eee5ef9228d" (UID: "0b6c3f2d-9c9f-406c-86ba-1eee5ef9228d"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:14:31 crc kubenswrapper[4778]: I0312 15:14:31.924389 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b6c3f2d-9c9f-406c-86ba-1eee5ef9228d-kube-api-access-6m98t" (OuterVolumeSpecName: "kube-api-access-6m98t") pod "0b6c3f2d-9c9f-406c-86ba-1eee5ef9228d" (UID: "0b6c3f2d-9c9f-406c-86ba-1eee5ef9228d"). InnerVolumeSpecName "kube-api-access-6m98t". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:14:32 crc kubenswrapper[4778]: I0312 15:14:32.020731 4778 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b6c3f2d-9c9f-406c-86ba-1eee5ef9228d-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 15:14:32 crc kubenswrapper[4778]: I0312 15:14:32.020777 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6m98t\" (UniqueName: \"kubernetes.io/projected/0b6c3f2d-9c9f-406c-86ba-1eee5ef9228d-kube-api-access-6m98t\") on node \"crc\" DevicePath \"\"" Mar 12 15:14:32 crc kubenswrapper[4778]: I0312 15:14:32.083993 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b6c3f2d-9c9f-406c-86ba-1eee5ef9228d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0b6c3f2d-9c9f-406c-86ba-1eee5ef9228d" (UID: "0b6c3f2d-9c9f-406c-86ba-1eee5ef9228d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:14:32 crc kubenswrapper[4778]: I0312 15:14:32.122198 4778 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b6c3f2d-9c9f-406c-86ba-1eee5ef9228d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 15:14:32 crc kubenswrapper[4778]: I0312 15:14:32.205683 4778 generic.go:334] "Generic (PLEG): container finished" podID="0b6c3f2d-9c9f-406c-86ba-1eee5ef9228d" containerID="d16e27ae8470c7352f41095add77058c0bdaf9645d86b1b180d532da10b9aee2" exitCode=0 Mar 12 15:14:32 crc kubenswrapper[4778]: I0312 15:14:32.205737 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ptp4v" event={"ID":"0b6c3f2d-9c9f-406c-86ba-1eee5ef9228d","Type":"ContainerDied","Data":"d16e27ae8470c7352f41095add77058c0bdaf9645d86b1b180d532da10b9aee2"} Mar 12 15:14:32 crc kubenswrapper[4778]: I0312 15:14:32.205753 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ptp4v" Mar 12 15:14:32 crc kubenswrapper[4778]: I0312 15:14:32.205775 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ptp4v" event={"ID":"0b6c3f2d-9c9f-406c-86ba-1eee5ef9228d","Type":"ContainerDied","Data":"04acc9cfa9e3d6a97a649b3f446aec567265c61efd313e0db14406ccbd638aff"} Mar 12 15:14:32 crc kubenswrapper[4778]: I0312 15:14:32.205802 4778 scope.go:117] "RemoveContainer" containerID="d16e27ae8470c7352f41095add77058c0bdaf9645d86b1b180d532da10b9aee2" Mar 12 15:14:32 crc kubenswrapper[4778]: I0312 15:14:32.257482 4778 scope.go:117] "RemoveContainer" containerID="dc5279aa56d851d1367ade7841dcd4e8d413b3295acaa8381ae6f7bf32d030c1" Mar 12 15:14:32 crc kubenswrapper[4778]: I0312 15:14:32.280744 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ptp4v"] Mar 12 15:14:32 crc kubenswrapper[4778]: I0312 15:14:32.286435 4778 scope.go:117] "RemoveContainer" containerID="217a23ca1a727d114e2cf1fc5f98b8b425bc25bc2bf661e3774449d8591c7a7e" Mar 12 15:14:32 crc kubenswrapper[4778]: I0312 15:14:32.287664 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ptp4v"] Mar 12 15:14:32 crc kubenswrapper[4778]: I0312 15:14:32.337465 4778 scope.go:117] "RemoveContainer" containerID="d16e27ae8470c7352f41095add77058c0bdaf9645d86b1b180d532da10b9aee2" Mar 12 15:14:32 crc kubenswrapper[4778]: E0312 15:14:32.338044 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d16e27ae8470c7352f41095add77058c0bdaf9645d86b1b180d532da10b9aee2\": container with ID starting with d16e27ae8470c7352f41095add77058c0bdaf9645d86b1b180d532da10b9aee2 not found: ID does not exist" containerID="d16e27ae8470c7352f41095add77058c0bdaf9645d86b1b180d532da10b9aee2" Mar 12 15:14:32 crc kubenswrapper[4778]: I0312 15:14:32.338101 4778 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d16e27ae8470c7352f41095add77058c0bdaf9645d86b1b180d532da10b9aee2"} err="failed to get container status \"d16e27ae8470c7352f41095add77058c0bdaf9645d86b1b180d532da10b9aee2\": rpc error: code = NotFound desc = could not find container \"d16e27ae8470c7352f41095add77058c0bdaf9645d86b1b180d532da10b9aee2\": container with ID starting with d16e27ae8470c7352f41095add77058c0bdaf9645d86b1b180d532da10b9aee2 not found: ID does not exist" Mar 12 15:14:32 crc kubenswrapper[4778]: I0312 15:14:32.338138 4778 scope.go:117] "RemoveContainer" containerID="dc5279aa56d851d1367ade7841dcd4e8d413b3295acaa8381ae6f7bf32d030c1" Mar 12 15:14:32 crc kubenswrapper[4778]: E0312 15:14:32.338585 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc5279aa56d851d1367ade7841dcd4e8d413b3295acaa8381ae6f7bf32d030c1\": container with ID starting with dc5279aa56d851d1367ade7841dcd4e8d413b3295acaa8381ae6f7bf32d030c1 not found: ID does not exist" containerID="dc5279aa56d851d1367ade7841dcd4e8d413b3295acaa8381ae6f7bf32d030c1" Mar 12 15:14:32 crc kubenswrapper[4778]: I0312 15:14:32.338649 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc5279aa56d851d1367ade7841dcd4e8d413b3295acaa8381ae6f7bf32d030c1"} err="failed to get container status \"dc5279aa56d851d1367ade7841dcd4e8d413b3295acaa8381ae6f7bf32d030c1\": rpc error: code = NotFound desc = could not find container \"dc5279aa56d851d1367ade7841dcd4e8d413b3295acaa8381ae6f7bf32d030c1\": container with ID starting with dc5279aa56d851d1367ade7841dcd4e8d413b3295acaa8381ae6f7bf32d030c1 not found: ID does not exist" Mar 12 15:14:32 crc kubenswrapper[4778]: I0312 15:14:32.338679 4778 scope.go:117] "RemoveContainer" containerID="217a23ca1a727d114e2cf1fc5f98b8b425bc25bc2bf661e3774449d8591c7a7e" Mar 12 15:14:32 crc kubenswrapper[4778]: E0312 
15:14:32.339127 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"217a23ca1a727d114e2cf1fc5f98b8b425bc25bc2bf661e3774449d8591c7a7e\": container with ID starting with 217a23ca1a727d114e2cf1fc5f98b8b425bc25bc2bf661e3774449d8591c7a7e not found: ID does not exist" containerID="217a23ca1a727d114e2cf1fc5f98b8b425bc25bc2bf661e3774449d8591c7a7e" Mar 12 15:14:32 crc kubenswrapper[4778]: I0312 15:14:32.339216 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"217a23ca1a727d114e2cf1fc5f98b8b425bc25bc2bf661e3774449d8591c7a7e"} err="failed to get container status \"217a23ca1a727d114e2cf1fc5f98b8b425bc25bc2bf661e3774449d8591c7a7e\": rpc error: code = NotFound desc = could not find container \"217a23ca1a727d114e2cf1fc5f98b8b425bc25bc2bf661e3774449d8591c7a7e\": container with ID starting with 217a23ca1a727d114e2cf1fc5f98b8b425bc25bc2bf661e3774449d8591c7a7e not found: ID does not exist" Mar 12 15:14:33 crc kubenswrapper[4778]: I0312 15:14:33.115439 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_ec63cc68-6fde-419b-973c-91fc982e6a49/memcached/0.log" Mar 12 15:14:34 crc kubenswrapper[4778]: I0312 15:14:34.272143 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b6c3f2d-9c9f-406c-86ba-1eee5ef9228d" path="/var/lib/kubelet/pods/0b6c3f2d-9c9f-406c-86ba-1eee5ef9228d/volumes" Mar 12 15:14:37 crc kubenswrapper[4778]: I0312 15:14:37.321749 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-566c4d5fc-dggmh_7596a69e-33c9-4a2b-89fc-e4c41252b3fd/neutron-api/0.log" Mar 12 15:14:41 crc kubenswrapper[4778]: I0312 15:14:41.483256 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-566c4d5fc-zx97x_8a67d4b7-d8eb-40f4-b51d-62e92c6042c1/neutron-api/0.log" Mar 12 15:14:41 crc kubenswrapper[4778]: I0312 15:14:41.987659 4778 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_neutron-metadata-custom-edpm-deployment-openstack-edpm-ipawlfsg_5cc410de-5b42-44d1-8b29-37161475730e/neutron-metadata-custom-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:14:44 crc kubenswrapper[4778]: I0312 15:14:44.021790 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_13b8e1df-5a8c-44de-b8e8-6c7efdb8bad4/nova-api-log/0.log" Mar 12 15:14:45 crc kubenswrapper[4778]: I0312 15:14:45.753926 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-1_f0341d80-4327-4c9e-bc11-0cddbc6eab66/nova-api-log/0.log" Mar 12 15:14:46 crc kubenswrapper[4778]: I0312 15:14:46.792287 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_929bb450-949d-4f4f-9c21-de6c3fe32927/nova-cell0-conductor-conductor/0.log" Mar 12 15:14:47 crc kubenswrapper[4778]: I0312 15:14:47.516844 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_1466aea3-fa10-49a6-a254-a96a52091aca/nova-cell1-conductor-conductor/0.log" Mar 12 15:14:48 crc kubenswrapper[4778]: I0312 15:14:48.098103 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-metadata-0_c289a520-78eb-433f-b7a4-0c03be917c18/nova-cell1-metadata-log/0.log" Mar 12 15:14:48 crc kubenswrapper[4778]: I0312 15:14:48.686627 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_2b43a8b1-b8bc-4ab5-af66-674fa7ff47d7/nova-cell1-novncproxy-novncproxy/0.log" Mar 12 15:14:49 crc kubenswrapper[4778]: I0312 15:14:49.285527 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-5tw6s_6ed77f87-e6b2-4c7a-8b0e-003106200dc8/nova-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:14:50 crc kubenswrapper[4778]: I0312 15:14:50.686525 4778 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-scheduler-0_f613745b-fe33-4918-9e0a-da2a59c55e33/nova-scheduler-scheduler/0.log" Mar 12 15:14:51 crc kubenswrapper[4778]: I0312 15:14:51.157221 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_fe52f8ba-9053-4733-b2e3-8f1becf437c8/galera/0.log" Mar 12 15:14:51 crc kubenswrapper[4778]: I0312 15:14:51.639156 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_663feb48-0ed1-4947-97c3-e0bac206fdb2/galera/0.log" Mar 12 15:14:52 crc kubenswrapper[4778]: I0312 15:14:52.076311 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_856cd6d1-db21-4503-94d7-cbf27ca96cc2/openstackclient/0.log" Mar 12 15:14:52 crc kubenswrapper[4778]: I0312 15:14:52.576565 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-4wct6_3b8efd1e-884d-4963-b69f-04ede0a92267/ovn-controller/0.log" Mar 12 15:14:53 crc kubenswrapper[4778]: I0312 15:14:53.052343 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-vtt4z_a8484e5d-6f77-407c-81db-0d9b2a6b37fd/openstack-network-exporter/0.log" Mar 12 15:14:53 crc kubenswrapper[4778]: I0312 15:14:53.516625 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-p67vh_bd159b65-0c66-4809-949e-0f1babbaa8e6/ovsdb-server/0.log" Mar 12 15:14:53 crc kubenswrapper[4778]: I0312 15:14:53.990447 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-9lbdq_3c0a2200-506d-4ac3-b08c-9b3156c9e573/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:14:54 crc kubenswrapper[4778]: I0312 15:14:54.417021 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_1b25f9c9-784a-4a52-9bb3-02c6c4592702/ovn-northd/0.log" Mar 12 15:14:54 crc kubenswrapper[4778]: I0312 15:14:54.833176 4778 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_ovsdbserver-nb-0_7321e15e-673c-4e0d-80f8-6ac644c1940f/ovsdbserver-nb/0.log" Mar 12 15:14:55 crc kubenswrapper[4778]: I0312 15:14:55.293024 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_7c951c6f-06fd-4793-a95b-26b5c1400d73/ovsdbserver-sb/0.log" Mar 12 15:14:56 crc kubenswrapper[4778]: I0312 15:14:56.245385 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-d4d765698-l7bjx_267e7df2-d35c-45c4-af65-e8af31f8f6cf/placement-log/0.log" Mar 12 15:14:56 crc kubenswrapper[4778]: I0312 15:14:56.860976 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03/rabbitmq/0.log" Mar 12 15:14:57 crc kubenswrapper[4778]: I0312 15:14:57.416625 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_1e89dfcc-2ac3-444c-91e8-56991eae096b/rabbitmq/0.log" Mar 12 15:14:57 crc kubenswrapper[4778]: I0312 15:14:57.922709 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-wcdkc_43a3ffe4-8b64-4e26-b63a-5254a986e4a4/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:14:58 crc kubenswrapper[4778]: I0312 15:14:58.366997 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-6nfzc_bd7ac6b4-5600-45ce-b0ea-199dd4baefcb/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:14:58 crc kubenswrapper[4778]: I0312 15:14:58.803691 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-gt58t_b0bb06df-44bb-4939-9492-a6ad3d6b5368/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:14:59 crc kubenswrapper[4778]: I0312 15:14:59.238754 4778 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-8mmjm_c993b33e-6c36-4524-864a-65da461a8e0c/ssh-known-hosts-edpm-deployment/0.log" Mar 12 15:15:00 crc kubenswrapper[4778]: I0312 15:15:00.006947 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-77f887c49f-fw2qd_bbd76cb8-462f-4e60-b755-ef3170e70d11/proxy-httpd/0.log" Mar 12 15:15:00 crc kubenswrapper[4778]: I0312 15:15:00.147525 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555475-ll84k"] Mar 12 15:15:00 crc kubenswrapper[4778]: E0312 15:15:00.148176 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b6c3f2d-9c9f-406c-86ba-1eee5ef9228d" containerName="extract-utilities" Mar 12 15:15:00 crc kubenswrapper[4778]: I0312 15:15:00.148271 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b6c3f2d-9c9f-406c-86ba-1eee5ef9228d" containerName="extract-utilities" Mar 12 15:15:00 crc kubenswrapper[4778]: E0312 15:15:00.148350 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b6c3f2d-9c9f-406c-86ba-1eee5ef9228d" containerName="registry-server" Mar 12 15:15:00 crc kubenswrapper[4778]: I0312 15:15:00.148407 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b6c3f2d-9c9f-406c-86ba-1eee5ef9228d" containerName="registry-server" Mar 12 15:15:00 crc kubenswrapper[4778]: E0312 15:15:00.148505 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b6c3f2d-9c9f-406c-86ba-1eee5ef9228d" containerName="extract-content" Mar 12 15:15:00 crc kubenswrapper[4778]: I0312 15:15:00.148821 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b6c3f2d-9c9f-406c-86ba-1eee5ef9228d" containerName="extract-content" Mar 12 15:15:00 crc kubenswrapper[4778]: I0312 15:15:00.149066 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b6c3f2d-9c9f-406c-86ba-1eee5ef9228d" containerName="registry-server" Mar 12 15:15:00 crc kubenswrapper[4778]: 
I0312 15:15:00.149819 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555475-ll84k" Mar 12 15:15:00 crc kubenswrapper[4778]: I0312 15:15:00.152602 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 12 15:15:00 crc kubenswrapper[4778]: I0312 15:15:00.152866 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 12 15:15:00 crc kubenswrapper[4778]: I0312 15:15:00.159396 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555475-ll84k"] Mar 12 15:15:00 crc kubenswrapper[4778]: I0312 15:15:00.292536 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f92778df-5bc8-42ec-b5c7-6f938cefef60-config-volume\") pod \"collect-profiles-29555475-ll84k\" (UID: \"f92778df-5bc8-42ec-b5c7-6f938cefef60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555475-ll84k" Mar 12 15:15:00 crc kubenswrapper[4778]: I0312 15:15:00.292629 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f92778df-5bc8-42ec-b5c7-6f938cefef60-secret-volume\") pod \"collect-profiles-29555475-ll84k\" (UID: \"f92778df-5bc8-42ec-b5c7-6f938cefef60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555475-ll84k" Mar 12 15:15:00 crc kubenswrapper[4778]: I0312 15:15:00.292785 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsgwl\" (UniqueName: \"kubernetes.io/projected/f92778df-5bc8-42ec-b5c7-6f938cefef60-kube-api-access-jsgwl\") pod \"collect-profiles-29555475-ll84k\" (UID: 
\"f92778df-5bc8-42ec-b5c7-6f938cefef60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555475-ll84k" Mar 12 15:15:00 crc kubenswrapper[4778]: I0312 15:15:00.394135 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsgwl\" (UniqueName: \"kubernetes.io/projected/f92778df-5bc8-42ec-b5c7-6f938cefef60-kube-api-access-jsgwl\") pod \"collect-profiles-29555475-ll84k\" (UID: \"f92778df-5bc8-42ec-b5c7-6f938cefef60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555475-ll84k" Mar 12 15:15:00 crc kubenswrapper[4778]: I0312 15:15:00.394223 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f92778df-5bc8-42ec-b5c7-6f938cefef60-config-volume\") pod \"collect-profiles-29555475-ll84k\" (UID: \"f92778df-5bc8-42ec-b5c7-6f938cefef60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555475-ll84k" Mar 12 15:15:00 crc kubenswrapper[4778]: I0312 15:15:00.394302 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f92778df-5bc8-42ec-b5c7-6f938cefef60-secret-volume\") pod \"collect-profiles-29555475-ll84k\" (UID: \"f92778df-5bc8-42ec-b5c7-6f938cefef60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555475-ll84k" Mar 12 15:15:00 crc kubenswrapper[4778]: I0312 15:15:00.398124 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f92778df-5bc8-42ec-b5c7-6f938cefef60-config-volume\") pod \"collect-profiles-29555475-ll84k\" (UID: \"f92778df-5bc8-42ec-b5c7-6f938cefef60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555475-ll84k" Mar 12 15:15:00 crc kubenswrapper[4778]: I0312 15:15:00.402024 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/f92778df-5bc8-42ec-b5c7-6f938cefef60-secret-volume\") pod \"collect-profiles-29555475-ll84k\" (UID: \"f92778df-5bc8-42ec-b5c7-6f938cefef60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555475-ll84k" Mar 12 15:15:00 crc kubenswrapper[4778]: I0312 15:15:00.416764 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsgwl\" (UniqueName: \"kubernetes.io/projected/f92778df-5bc8-42ec-b5c7-6f938cefef60-kube-api-access-jsgwl\") pod \"collect-profiles-29555475-ll84k\" (UID: \"f92778df-5bc8-42ec-b5c7-6f938cefef60\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29555475-ll84k" Mar 12 15:15:00 crc kubenswrapper[4778]: I0312 15:15:00.455605 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-5knbg_2edc2c90-f91e-402d-809c-514e9d8a5e04/swift-ring-rebalance/0.log" Mar 12 15:15:00 crc kubenswrapper[4778]: I0312 15:15:00.480009 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555475-ll84k" Mar 12 15:15:00 crc kubenswrapper[4778]: I0312 15:15:00.866202 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c01f943c-e09c-4727-8cf7-eec58a56b363/account-server/0.log" Mar 12 15:15:00 crc kubenswrapper[4778]: I0312 15:15:00.962379 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555475-ll84k"] Mar 12 15:15:01 crc kubenswrapper[4778]: I0312 15:15:01.342118 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-qrk5s_2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:15:01 crc kubenswrapper[4778]: I0312 15:15:01.533540 4778 generic.go:334] "Generic (PLEG): container finished" podID="f92778df-5bc8-42ec-b5c7-6f938cefef60" containerID="851bf80a56ed9718274664a37b6b3f8f6a2cbf8dbba58e45165c53b43e774224" exitCode=0 Mar 12 15:15:01 crc kubenswrapper[4778]: I0312 15:15:01.533587 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555475-ll84k" event={"ID":"f92778df-5bc8-42ec-b5c7-6f938cefef60","Type":"ContainerDied","Data":"851bf80a56ed9718274664a37b6b3f8f6a2cbf8dbba58e45165c53b43e774224"} Mar 12 15:15:01 crc kubenswrapper[4778]: I0312 15:15:01.533612 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555475-ll84k" event={"ID":"f92778df-5bc8-42ec-b5c7-6f938cefef60","Type":"ContainerStarted","Data":"2384e90392d7e66ed75d93be62b562a34f6b3f71f59ea646143b70c89b11cc75"} Mar 12 15:15:01 crc kubenswrapper[4778]: I0312 15:15:01.804970 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_74897d0a-ca7b-4589-bd4c-75910c2d491c/tempest-tests-tempest-tests-runner/0.log" Mar 12 15:15:02 crc 
kubenswrapper[4778]: I0312 15:15:02.261476 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_82246f69-2112-44e9-a783-a4a5926188b4/test-operator-logs-container/0.log" Mar 12 15:15:02 crc kubenswrapper[4778]: I0312 15:15:02.723374 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-9glvr_41583476-38cd-4c0d-a05a-96ddc5b330ca/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:15:02 crc kubenswrapper[4778]: I0312 15:15:02.825317 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555475-ll84k" Mar 12 15:15:02 crc kubenswrapper[4778]: I0312 15:15:02.945550 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f92778df-5bc8-42ec-b5c7-6f938cefef60-secret-volume\") pod \"f92778df-5bc8-42ec-b5c7-6f938cefef60\" (UID: \"f92778df-5bc8-42ec-b5c7-6f938cefef60\") " Mar 12 15:15:02 crc kubenswrapper[4778]: I0312 15:15:02.945610 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsgwl\" (UniqueName: \"kubernetes.io/projected/f92778df-5bc8-42ec-b5c7-6f938cefef60-kube-api-access-jsgwl\") pod \"f92778df-5bc8-42ec-b5c7-6f938cefef60\" (UID: \"f92778df-5bc8-42ec-b5c7-6f938cefef60\") " Mar 12 15:15:02 crc kubenswrapper[4778]: I0312 15:15:02.945750 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f92778df-5bc8-42ec-b5c7-6f938cefef60-config-volume\") pod \"f92778df-5bc8-42ec-b5c7-6f938cefef60\" (UID: \"f92778df-5bc8-42ec-b5c7-6f938cefef60\") " Mar 12 15:15:02 crc kubenswrapper[4778]: I0312 15:15:02.947058 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/f92778df-5bc8-42ec-b5c7-6f938cefef60-config-volume" (OuterVolumeSpecName: "config-volume") pod "f92778df-5bc8-42ec-b5c7-6f938cefef60" (UID: "f92778df-5bc8-42ec-b5c7-6f938cefef60"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 12 15:15:02 crc kubenswrapper[4778]: I0312 15:15:02.957591 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f92778df-5bc8-42ec-b5c7-6f938cefef60-kube-api-access-jsgwl" (OuterVolumeSpecName: "kube-api-access-jsgwl") pod "f92778df-5bc8-42ec-b5c7-6f938cefef60" (UID: "f92778df-5bc8-42ec-b5c7-6f938cefef60"). InnerVolumeSpecName "kube-api-access-jsgwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:15:02 crc kubenswrapper[4778]: I0312 15:15:02.964161 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f92778df-5bc8-42ec-b5c7-6f938cefef60-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f92778df-5bc8-42ec-b5c7-6f938cefef60" (UID: "f92778df-5bc8-42ec-b5c7-6f938cefef60"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 12 15:15:03 crc kubenswrapper[4778]: I0312 15:15:03.049162 4778 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f92778df-5bc8-42ec-b5c7-6f938cefef60-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 12 15:15:03 crc kubenswrapper[4778]: I0312 15:15:03.049311 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jsgwl\" (UniqueName: \"kubernetes.io/projected/f92778df-5bc8-42ec-b5c7-6f938cefef60-kube-api-access-jsgwl\") on node \"crc\" DevicePath \"\"" Mar 12 15:15:03 crc kubenswrapper[4778]: I0312 15:15:03.049326 4778 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f92778df-5bc8-42ec-b5c7-6f938cefef60-config-volume\") on node \"crc\" DevicePath \"\"" Mar 12 15:15:03 crc kubenswrapper[4778]: I0312 15:15:03.555236 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29555475-ll84k" event={"ID":"f92778df-5bc8-42ec-b5c7-6f938cefef60","Type":"ContainerDied","Data":"2384e90392d7e66ed75d93be62b562a34f6b3f71f59ea646143b70c89b11cc75"} Mar 12 15:15:03 crc kubenswrapper[4778]: I0312 15:15:03.555302 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2384e90392d7e66ed75d93be62b562a34f6b3f71f59ea646143b70c89b11cc75" Mar 12 15:15:03 crc kubenswrapper[4778]: I0312 15:15:03.555390 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29555475-ll84k" Mar 12 15:15:03 crc kubenswrapper[4778]: I0312 15:15:03.906308 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555430-zhqfl"] Mar 12 15:15:03 crc kubenswrapper[4778]: I0312 15:15:03.906589 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29555430-zhqfl"] Mar 12 15:15:04 crc kubenswrapper[4778]: I0312 15:15:04.264273 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db4d57b8-99e5-4955-a6fe-9b0c0a6e61df" path="/var/lib/kubelet/pods/db4d57b8-99e5-4955-a6fe-9b0c0a6e61df/volumes" Mar 12 15:15:26 crc kubenswrapper[4778]: I0312 15:15:26.625491 4778 scope.go:117] "RemoveContainer" containerID="57a58448ac2691d1255487422cd2ce72ba1abcb298bf6c4ed12464fdb32a532d" Mar 12 15:15:40 crc kubenswrapper[4778]: I0312 15:15:40.028890 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4d52f25c614d14ea9d555eaa2e62114c0c7d01977d30b495569716fcde5fmfr_e1d0ffee-229e-4da3-ac89-02bf6f6a439f/extract/0.log" Mar 12 15:15:51 crc kubenswrapper[4778]: I0312 15:15:51.116980 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-677bd678f7-6h2c2_ffb8a1f4-4533-4368-a900-95d37fe1d3ad/manager/0.log" Mar 12 15:15:53 crc kubenswrapper[4778]: I0312 15:15:53.400351 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-984cd4dcf-xm4cc_c8818ac0-af8b-42c9-a923-425fe79ed203/manager/0.log" Mar 12 15:15:53 crc kubenswrapper[4778]: I0312 15:15:53.807584 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-66d56f6ff4-9n6jv_ad531191-d7c5-4ef6-9929-3a5869751d98/manager/0.log" Mar 12 15:15:54 crc kubenswrapper[4778]: I0312 15:15:54.254010 
4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5964f64c48-gknp2_db7f6b97-2903-44bf-803f-c00c337400b9/manager/0.log" Mar 12 15:15:54 crc kubenswrapper[4778]: I0312 15:15:54.758555 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-77b6666d85-b7tkm_e290c1ea-a39d-451e-a24b-17a2b61ff6f0/manager/0.log" Mar 12 15:15:55 crc kubenswrapper[4778]: I0312 15:15:55.143966 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-6d9d6b584d-4jgt8_4c2bf703-ecc1-4bb1-aa03-a64e55dfdb71/manager/0.log" Mar 12 15:15:55 crc kubenswrapper[4778]: I0312 15:15:55.796459 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-5995f4446f-5d6qz_02bc06ca-f4e6-4fde-bd5d-882714d9652c/manager/0.log" Mar 12 15:15:56 crc kubenswrapper[4778]: I0312 15:15:56.229654 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6bbb499bbc-qb8s8_98a4cfbd-3037-48b5-9047-5d574dcc0aca/manager/0.log" Mar 12 15:15:56 crc kubenswrapper[4778]: I0312 15:15:56.676502 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-684f77d66d-7dxdh_7e02c37f-b9af-46c9-a743-03ead9b060db/manager/0.log" Mar 12 15:15:57 crc kubenswrapper[4778]: I0312 15:15:57.132574 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-68f45f9d9f-pn8tk_5e38a4fd-95f8-437b-923b-eca33b1387e6/manager/0.log" Mar 12 15:15:57 crc kubenswrapper[4778]: I0312 15:15:57.522624 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-658d4cdd5-jlbft_2d577800-0ee1-4fe5-a7fb-8794fb8c4c6f/manager/0.log" Mar 12 15:15:57 crc kubenswrapper[4778]: I0312 
15:15:57.990046 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-776c5696bf-dd2ft_076835c9-352b-4e40-80c4-3bce3bb80594/manager/0.log" Mar 12 15:15:58 crc kubenswrapper[4778]: I0312 15:15:58.484964 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-686d5f9fbd-vv9rc_d7288cc6-4247-4d03-bd37-9862243bf613/manager/0.log" Mar 12 15:15:58 crc kubenswrapper[4778]: I0312 15:15:58.557663 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 15:15:58 crc kubenswrapper[4778]: I0312 15:15:58.557725 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 15:15:58 crc kubenswrapper[4778]: I0312 15:15:58.879094 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4f55cb5c-cdgg9_1a01d06c-be6f-45de-a22d-c8f1058a3a84/manager/0.log" Mar 12 15:15:59 crc kubenswrapper[4778]: I0312 15:15:59.249577 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-557ccf57b7qq9w6_4f7d316e-6896-4f84-8423-6f79778c1c6b/manager/0.log" Mar 12 15:15:59 crc kubenswrapper[4778]: I0312 15:15:59.734359 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5bc4df7446-x9bsl_34bbdc16-4518-4ee5-9a70-3cedcc5f0159/operator/0.log" Mar 12 15:16:00 crc 
kubenswrapper[4778]: I0312 15:16:00.158799 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555476-5dkdj"] Mar 12 15:16:00 crc kubenswrapper[4778]: E0312 15:16:00.159866 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f92778df-5bc8-42ec-b5c7-6f938cefef60" containerName="collect-profiles" Mar 12 15:16:00 crc kubenswrapper[4778]: I0312 15:16:00.159961 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="f92778df-5bc8-42ec-b5c7-6f938cefef60" containerName="collect-profiles" Mar 12 15:16:00 crc kubenswrapper[4778]: I0312 15:16:00.160291 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="f92778df-5bc8-42ec-b5c7-6f938cefef60" containerName="collect-profiles" Mar 12 15:16:00 crc kubenswrapper[4778]: I0312 15:16:00.160996 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555476-5dkdj" Mar 12 15:16:00 crc kubenswrapper[4778]: I0312 15:16:00.163532 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 15:16:00 crc kubenswrapper[4778]: I0312 15:16:00.163842 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 15:16:00 crc kubenswrapper[4778]: I0312 15:16:00.163884 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 15:16:00 crc kubenswrapper[4778]: I0312 15:16:00.180058 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555476-5dkdj"] Mar 12 15:16:00 crc kubenswrapper[4778]: I0312 15:16:00.205453 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r2zc\" (UniqueName: \"kubernetes.io/projected/e42af641-c33e-4b80-899f-98e5d4e78dad-kube-api-access-4r2zc\") pod \"auto-csr-approver-29555476-5dkdj\" (UID: 
\"e42af641-c33e-4b80-899f-98e5d4e78dad\") " pod="openshift-infra/auto-csr-approver-29555476-5dkdj" Mar 12 15:16:00 crc kubenswrapper[4778]: I0312 15:16:00.307421 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4r2zc\" (UniqueName: \"kubernetes.io/projected/e42af641-c33e-4b80-899f-98e5d4e78dad-kube-api-access-4r2zc\") pod \"auto-csr-approver-29555476-5dkdj\" (UID: \"e42af641-c33e-4b80-899f-98e5d4e78dad\") " pod="openshift-infra/auto-csr-approver-29555476-5dkdj" Mar 12 15:16:00 crc kubenswrapper[4778]: I0312 15:16:00.328611 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4r2zc\" (UniqueName: \"kubernetes.io/projected/e42af641-c33e-4b80-899f-98e5d4e78dad-kube-api-access-4r2zc\") pod \"auto-csr-approver-29555476-5dkdj\" (UID: \"e42af641-c33e-4b80-899f-98e5d4e78dad\") " pod="openshift-infra/auto-csr-approver-29555476-5dkdj" Mar 12 15:16:00 crc kubenswrapper[4778]: I0312 15:16:00.484086 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555476-5dkdj" Mar 12 15:16:00 crc kubenswrapper[4778]: I0312 15:16:00.952235 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555476-5dkdj"] Mar 12 15:16:00 crc kubenswrapper[4778]: I0312 15:16:00.966343 4778 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 15:16:01 crc kubenswrapper[4778]: I0312 15:16:01.139890 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555476-5dkdj" event={"ID":"e42af641-c33e-4b80-899f-98e5d4e78dad","Type":"ContainerStarted","Data":"993bb09d3a882b5a327b311cfd7cd720701e36ad48212d0f63e2c927ffc5b5ff"} Mar 12 15:16:01 crc kubenswrapper[4778]: I0312 15:16:01.465565 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5785b7957-7vdgw_d0784623-5f08-4109-9c7e-0a329210ce07/manager/0.log" Mar 12 15:16:01 crc kubenswrapper[4778]: I0312 15:16:01.863901 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-b2fsv_748546a6-1355-470f-b8d0-de395cf3f681/registry-server/0.log" Mar 12 15:16:02 crc kubenswrapper[4778]: I0312 15:16:02.288899 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-bbc5b68f9-bbgmb_8d38fd7e-6fa1-4b0c-9c82-9c57290c7837/manager/0.log" Mar 12 15:16:02 crc kubenswrapper[4778]: I0312 15:16:02.722176 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-574d45c66c-wvpf8_52524252-25bd-49e5-822e-3d4668aff2f9/manager/0.log" Mar 12 15:16:03 crc kubenswrapper[4778]: I0312 15:16:03.146256 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-shf7b_034f39d8-a33e-4e37-bcde-51fb22debdd1/operator/0.log" Mar 12 
15:16:03 crc kubenswrapper[4778]: I0312 15:16:03.169960 4778 generic.go:334] "Generic (PLEG): container finished" podID="e42af641-c33e-4b80-899f-98e5d4e78dad" containerID="9e696920a26a473d829a12f6bf276893531dc1bd498cf28bdf23c8b663c144ee" exitCode=0 Mar 12 15:16:03 crc kubenswrapper[4778]: I0312 15:16:03.170008 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555476-5dkdj" event={"ID":"e42af641-c33e-4b80-899f-98e5d4e78dad","Type":"ContainerDied","Data":"9e696920a26a473d829a12f6bf276893531dc1bd498cf28bdf23c8b663c144ee"} Mar 12 15:16:03 crc kubenswrapper[4778]: I0312 15:16:03.520998 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-677c674df7-84mps_64a36384-f2e6-4077-b2ca-de2a6ce6ea06/manager/0.log" Mar 12 15:16:03 crc kubenswrapper[4778]: I0312 15:16:03.937091 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6cd66dbd4b-gfv5z_6ad9bf9f-7214-44bc-a65d-1dcbf385fc2c/manager/0.log" Mar 12 15:16:04 crc kubenswrapper[4778]: I0312 15:16:04.363165 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5c5cb9c4d7-pcfrz_ed9b9271-4ae9-440a-9411-15d46267106e/manager/0.log" Mar 12 15:16:04 crc kubenswrapper[4778]: I0312 15:16:04.516016 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555476-5dkdj" Mar 12 15:16:04 crc kubenswrapper[4778]: I0312 15:16:04.594791 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4r2zc\" (UniqueName: \"kubernetes.io/projected/e42af641-c33e-4b80-899f-98e5d4e78dad-kube-api-access-4r2zc\") pod \"e42af641-c33e-4b80-899f-98e5d4e78dad\" (UID: \"e42af641-c33e-4b80-899f-98e5d4e78dad\") " Mar 12 15:16:04 crc kubenswrapper[4778]: I0312 15:16:04.601522 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e42af641-c33e-4b80-899f-98e5d4e78dad-kube-api-access-4r2zc" (OuterVolumeSpecName: "kube-api-access-4r2zc") pod "e42af641-c33e-4b80-899f-98e5d4e78dad" (UID: "e42af641-c33e-4b80-899f-98e5d4e78dad"). InnerVolumeSpecName "kube-api-access-4r2zc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:16:04 crc kubenswrapper[4778]: I0312 15:16:04.697428 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4r2zc\" (UniqueName: \"kubernetes.io/projected/e42af641-c33e-4b80-899f-98e5d4e78dad-kube-api-access-4r2zc\") on node \"crc\" DevicePath \"\"" Mar 12 15:16:04 crc kubenswrapper[4778]: I0312 15:16:04.736597 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6dd88c6f67-2tjsk_8c02ecb8-0e15-4672-823a-c4437ca5bf8c/manager/0.log" Mar 12 15:16:05 crc kubenswrapper[4778]: I0312 15:16:05.189555 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555476-5dkdj" event={"ID":"e42af641-c33e-4b80-899f-98e5d4e78dad","Type":"ContainerDied","Data":"993bb09d3a882b5a327b311cfd7cd720701e36ad48212d0f63e2c927ffc5b5ff"} Mar 12 15:16:05 crc kubenswrapper[4778]: I0312 15:16:05.189935 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="993bb09d3a882b5a327b311cfd7cd720701e36ad48212d0f63e2c927ffc5b5ff" Mar 12 
15:16:05 crc kubenswrapper[4778]: I0312 15:16:05.189748 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555476-5dkdj" Mar 12 15:16:05 crc kubenswrapper[4778]: I0312 15:16:05.592948 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555470-65l68"] Mar 12 15:16:05 crc kubenswrapper[4778]: I0312 15:16:05.603677 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555470-65l68"] Mar 12 15:16:06 crc kubenswrapper[4778]: I0312 15:16:06.265092 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f05c65b6-74a0-49ef-8f84-3b4453313dc7" path="/var/lib/kubelet/pods/f05c65b6-74a0-49ef-8f84-3b4453313dc7/volumes" Mar 12 15:16:25 crc kubenswrapper[4778]: I0312 15:16:25.921959 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-t8fhv/must-gather-wpn7c"] Mar 12 15:16:25 crc kubenswrapper[4778]: E0312 15:16:25.926822 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e42af641-c33e-4b80-899f-98e5d4e78dad" containerName="oc" Mar 12 15:16:25 crc kubenswrapper[4778]: I0312 15:16:25.926847 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="e42af641-c33e-4b80-899f-98e5d4e78dad" containerName="oc" Mar 12 15:16:25 crc kubenswrapper[4778]: I0312 15:16:25.927119 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="e42af641-c33e-4b80-899f-98e5d4e78dad" containerName="oc" Mar 12 15:16:25 crc kubenswrapper[4778]: I0312 15:16:25.928484 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-t8fhv/must-gather-wpn7c" Mar 12 15:16:25 crc kubenswrapper[4778]: I0312 15:16:25.938047 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-t8fhv"/"openshift-service-ca.crt" Mar 12 15:16:25 crc kubenswrapper[4778]: I0312 15:16:25.938304 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-t8fhv"/"kube-root-ca.crt" Mar 12 15:16:25 crc kubenswrapper[4778]: I0312 15:16:25.962817 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bskd9\" (UniqueName: \"kubernetes.io/projected/050e068e-c05a-4115-8a20-381ecb7747c6-kube-api-access-bskd9\") pod \"must-gather-wpn7c\" (UID: \"050e068e-c05a-4115-8a20-381ecb7747c6\") " pod="openshift-must-gather-t8fhv/must-gather-wpn7c" Mar 12 15:16:25 crc kubenswrapper[4778]: I0312 15:16:25.963292 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/050e068e-c05a-4115-8a20-381ecb7747c6-must-gather-output\") pod \"must-gather-wpn7c\" (UID: \"050e068e-c05a-4115-8a20-381ecb7747c6\") " pod="openshift-must-gather-t8fhv/must-gather-wpn7c" Mar 12 15:16:25 crc kubenswrapper[4778]: I0312 15:16:25.965084 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-t8fhv/must-gather-wpn7c"] Mar 12 15:16:26 crc kubenswrapper[4778]: I0312 15:16:26.065392 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bskd9\" (UniqueName: \"kubernetes.io/projected/050e068e-c05a-4115-8a20-381ecb7747c6-kube-api-access-bskd9\") pod \"must-gather-wpn7c\" (UID: \"050e068e-c05a-4115-8a20-381ecb7747c6\") " pod="openshift-must-gather-t8fhv/must-gather-wpn7c" Mar 12 15:16:26 crc kubenswrapper[4778]: I0312 15:16:26.065444 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/050e068e-c05a-4115-8a20-381ecb7747c6-must-gather-output\") pod \"must-gather-wpn7c\" (UID: \"050e068e-c05a-4115-8a20-381ecb7747c6\") " pod="openshift-must-gather-t8fhv/must-gather-wpn7c" Mar 12 15:16:26 crc kubenswrapper[4778]: I0312 15:16:26.065991 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/050e068e-c05a-4115-8a20-381ecb7747c6-must-gather-output\") pod \"must-gather-wpn7c\" (UID: \"050e068e-c05a-4115-8a20-381ecb7747c6\") " pod="openshift-must-gather-t8fhv/must-gather-wpn7c" Mar 12 15:16:26 crc kubenswrapper[4778]: I0312 15:16:26.084345 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bskd9\" (UniqueName: \"kubernetes.io/projected/050e068e-c05a-4115-8a20-381ecb7747c6-kube-api-access-bskd9\") pod \"must-gather-wpn7c\" (UID: \"050e068e-c05a-4115-8a20-381ecb7747c6\") " pod="openshift-must-gather-t8fhv/must-gather-wpn7c" Mar 12 15:16:26 crc kubenswrapper[4778]: I0312 15:16:26.268530 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-t8fhv/must-gather-wpn7c" Mar 12 15:16:26 crc kubenswrapper[4778]: I0312 15:16:26.733071 4778 scope.go:117] "RemoveContainer" containerID="66d173277dbb8cde37f4f992e677953055661368f74064cf032011267c61214c" Mar 12 15:16:26 crc kubenswrapper[4778]: I0312 15:16:26.765736 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-t8fhv/must-gather-wpn7c"] Mar 12 15:16:27 crc kubenswrapper[4778]: I0312 15:16:27.406754 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t8fhv/must-gather-wpn7c" event={"ID":"050e068e-c05a-4115-8a20-381ecb7747c6","Type":"ContainerStarted","Data":"91a765bb1f0c8a38e71fffab266d01f82b17250fe0665f225840c64771ac6346"} Mar 12 15:16:27 crc kubenswrapper[4778]: I0312 15:16:27.406803 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t8fhv/must-gather-wpn7c" event={"ID":"050e068e-c05a-4115-8a20-381ecb7747c6","Type":"ContainerStarted","Data":"c264bbda6eab92a22196a6070c2ebd25ad32a4313adb11844aceb3d6d0e21de8"} Mar 12 15:16:28 crc kubenswrapper[4778]: I0312 15:16:28.421827 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t8fhv/must-gather-wpn7c" event={"ID":"050e068e-c05a-4115-8a20-381ecb7747c6","Type":"ContainerStarted","Data":"115cc12adee3c5d75407f3615123df70b34b4ee2bb750778748ea73d75b1e2c3"} Mar 12 15:16:28 crc kubenswrapper[4778]: I0312 15:16:28.456566 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-t8fhv/must-gather-wpn7c" podStartSLOduration=3.456533814 podStartE2EDuration="3.456533814s" podCreationTimestamp="2026-03-12 15:16:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:16:28.440823247 +0000 UTC m=+7606.889518703" watchObservedRunningTime="2026-03-12 15:16:28.456533814 +0000 UTC m=+7606.905229250" Mar 12 15:16:28 crc 
kubenswrapper[4778]: I0312 15:16:28.558213 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 15:16:28 crc kubenswrapper[4778]: I0312 15:16:28.558280 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 15:16:31 crc kubenswrapper[4778]: I0312 15:16:31.383018 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-t8fhv/crc-debug-vvl5p"] Mar 12 15:16:31 crc kubenswrapper[4778]: I0312 15:16:31.385992 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-t8fhv/crc-debug-vvl5p" Mar 12 15:16:31 crc kubenswrapper[4778]: I0312 15:16:31.388585 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-t8fhv"/"default-dockercfg-h8m7h" Mar 12 15:16:31 crc kubenswrapper[4778]: I0312 15:16:31.483095 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc4sj\" (UniqueName: \"kubernetes.io/projected/183372ad-b2f3-4b05-af1e-96ce5b752768-kube-api-access-fc4sj\") pod \"crc-debug-vvl5p\" (UID: \"183372ad-b2f3-4b05-af1e-96ce5b752768\") " pod="openshift-must-gather-t8fhv/crc-debug-vvl5p" Mar 12 15:16:31 crc kubenswrapper[4778]: I0312 15:16:31.484521 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/183372ad-b2f3-4b05-af1e-96ce5b752768-host\") pod \"crc-debug-vvl5p\" (UID: \"183372ad-b2f3-4b05-af1e-96ce5b752768\") " pod="openshift-must-gather-t8fhv/crc-debug-vvl5p" Mar 12 15:16:31 crc kubenswrapper[4778]: I0312 15:16:31.586748 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fc4sj\" (UniqueName: \"kubernetes.io/projected/183372ad-b2f3-4b05-af1e-96ce5b752768-kube-api-access-fc4sj\") pod \"crc-debug-vvl5p\" (UID: \"183372ad-b2f3-4b05-af1e-96ce5b752768\") " pod="openshift-must-gather-t8fhv/crc-debug-vvl5p" Mar 12 15:16:31 crc kubenswrapper[4778]: I0312 15:16:31.587102 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/183372ad-b2f3-4b05-af1e-96ce5b752768-host\") pod \"crc-debug-vvl5p\" (UID: \"183372ad-b2f3-4b05-af1e-96ce5b752768\") " pod="openshift-must-gather-t8fhv/crc-debug-vvl5p" Mar 12 15:16:31 crc kubenswrapper[4778]: I0312 15:16:31.587421 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/183372ad-b2f3-4b05-af1e-96ce5b752768-host\") pod \"crc-debug-vvl5p\" (UID: \"183372ad-b2f3-4b05-af1e-96ce5b752768\") " pod="openshift-must-gather-t8fhv/crc-debug-vvl5p" Mar 12 15:16:31 crc kubenswrapper[4778]: I0312 15:16:31.606805 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc4sj\" (UniqueName: \"kubernetes.io/projected/183372ad-b2f3-4b05-af1e-96ce5b752768-kube-api-access-fc4sj\") pod \"crc-debug-vvl5p\" (UID: \"183372ad-b2f3-4b05-af1e-96ce5b752768\") " pod="openshift-must-gather-t8fhv/crc-debug-vvl5p" Mar 12 15:16:31 crc kubenswrapper[4778]: I0312 15:16:31.706065 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-t8fhv/crc-debug-vvl5p" Mar 12 15:16:31 crc kubenswrapper[4778]: W0312 15:16:31.738108 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod183372ad_b2f3_4b05_af1e_96ce5b752768.slice/crio-9ef0dab72eff1169e4868a63381157ceb8e771a86c14fab3398ea56779a2caf3 WatchSource:0}: Error finding container 9ef0dab72eff1169e4868a63381157ceb8e771a86c14fab3398ea56779a2caf3: Status 404 returned error can't find the container with id 9ef0dab72eff1169e4868a63381157ceb8e771a86c14fab3398ea56779a2caf3 Mar 12 15:16:32 crc kubenswrapper[4778]: I0312 15:16:32.453927 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t8fhv/crc-debug-vvl5p" event={"ID":"183372ad-b2f3-4b05-af1e-96ce5b752768","Type":"ContainerStarted","Data":"88377826de2b8da64a9c4f2434a9dbd5ccc7cbe21ec7e843bd98b2b4d5469479"} Mar 12 15:16:32 crc kubenswrapper[4778]: I0312 15:16:32.454648 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t8fhv/crc-debug-vvl5p" event={"ID":"183372ad-b2f3-4b05-af1e-96ce5b752768","Type":"ContainerStarted","Data":"9ef0dab72eff1169e4868a63381157ceb8e771a86c14fab3398ea56779a2caf3"} Mar 12 15:16:32 crc kubenswrapper[4778]: I0312 
15:16:32.470319 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-t8fhv/crc-debug-vvl5p" podStartSLOduration=1.470300583 podStartE2EDuration="1.470300583s" podCreationTimestamp="2026-03-12 15:16:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 15:16:32.465911309 +0000 UTC m=+7610.914606705" watchObservedRunningTime="2026-03-12 15:16:32.470300583 +0000 UTC m=+7610.918995979" Mar 12 15:16:58 crc kubenswrapper[4778]: I0312 15:16:58.558344 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 15:16:58 crc kubenswrapper[4778]: I0312 15:16:58.558908 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 15:16:58 crc kubenswrapper[4778]: I0312 15:16:58.558965 4778 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" Mar 12 15:16:58 crc kubenswrapper[4778]: I0312 15:16:58.559802 4778 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9c0ffa691d48b1023164222bd8c69a88e4e7a89d268ba03833dc6ae4ab4b44b3"} pod="openshift-machine-config-operator/machine-config-daemon-2qx88" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 15:16:58 crc kubenswrapper[4778]: I0312 15:16:58.559857 4778 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" containerID="cri-o://9c0ffa691d48b1023164222bd8c69a88e4e7a89d268ba03833dc6ae4ab4b44b3" gracePeriod=600 Mar 12 15:16:59 crc kubenswrapper[4778]: I0312 15:16:59.691659 4778 generic.go:334] "Generic (PLEG): container finished" podID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerID="9c0ffa691d48b1023164222bd8c69a88e4e7a89d268ba03833dc6ae4ab4b44b3" exitCode=0 Mar 12 15:16:59 crc kubenswrapper[4778]: I0312 15:16:59.691739 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" event={"ID":"24438fc6-dab0-4a9e-8b97-2532da76d9cd","Type":"ContainerDied","Data":"9c0ffa691d48b1023164222bd8c69a88e4e7a89d268ba03833dc6ae4ab4b44b3"} Mar 12 15:16:59 crc kubenswrapper[4778]: I0312 15:16:59.692373 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" event={"ID":"24438fc6-dab0-4a9e-8b97-2532da76d9cd","Type":"ContainerStarted","Data":"28185bb0bf8713237bbead875f67f2cbfd250e5d39c0866c90d3e073957181fc"} Mar 12 15:16:59 crc kubenswrapper[4778]: I0312 15:16:59.692403 4778 scope.go:117] "RemoveContainer" containerID="0d1560644663063f80ae67feb786777f7400aa5bf8ea2f2418887c809789d930" Mar 12 15:17:14 crc kubenswrapper[4778]: I0312 15:17:14.846534 4778 generic.go:334] "Generic (PLEG): container finished" podID="183372ad-b2f3-4b05-af1e-96ce5b752768" containerID="88377826de2b8da64a9c4f2434a9dbd5ccc7cbe21ec7e843bd98b2b4d5469479" exitCode=0 Mar 12 15:17:14 crc kubenswrapper[4778]: I0312 15:17:14.846664 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t8fhv/crc-debug-vvl5p" 
event={"ID":"183372ad-b2f3-4b05-af1e-96ce5b752768","Type":"ContainerDied","Data":"88377826de2b8da64a9c4f2434a9dbd5ccc7cbe21ec7e843bd98b2b4d5469479"} Mar 12 15:17:15 crc kubenswrapper[4778]: I0312 15:17:15.974373 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-t8fhv/crc-debug-vvl5p" Mar 12 15:17:16 crc kubenswrapper[4778]: I0312 15:17:16.013845 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-t8fhv/crc-debug-vvl5p"] Mar 12 15:17:16 crc kubenswrapper[4778]: I0312 15:17:16.020765 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-t8fhv/crc-debug-vvl5p"] Mar 12 15:17:16 crc kubenswrapper[4778]: I0312 15:17:16.164103 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fc4sj\" (UniqueName: \"kubernetes.io/projected/183372ad-b2f3-4b05-af1e-96ce5b752768-kube-api-access-fc4sj\") pod \"183372ad-b2f3-4b05-af1e-96ce5b752768\" (UID: \"183372ad-b2f3-4b05-af1e-96ce5b752768\") " Mar 12 15:17:16 crc kubenswrapper[4778]: I0312 15:17:16.164243 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/183372ad-b2f3-4b05-af1e-96ce5b752768-host\") pod \"183372ad-b2f3-4b05-af1e-96ce5b752768\" (UID: \"183372ad-b2f3-4b05-af1e-96ce5b752768\") " Mar 12 15:17:16 crc kubenswrapper[4778]: I0312 15:17:16.164340 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/183372ad-b2f3-4b05-af1e-96ce5b752768-host" (OuterVolumeSpecName: "host") pod "183372ad-b2f3-4b05-af1e-96ce5b752768" (UID: "183372ad-b2f3-4b05-af1e-96ce5b752768"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 15:17:16 crc kubenswrapper[4778]: I0312 15:17:16.164713 4778 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/183372ad-b2f3-4b05-af1e-96ce5b752768-host\") on node \"crc\" DevicePath \"\"" Mar 12 15:17:16 crc kubenswrapper[4778]: I0312 15:17:16.169573 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/183372ad-b2f3-4b05-af1e-96ce5b752768-kube-api-access-fc4sj" (OuterVolumeSpecName: "kube-api-access-fc4sj") pod "183372ad-b2f3-4b05-af1e-96ce5b752768" (UID: "183372ad-b2f3-4b05-af1e-96ce5b752768"). InnerVolumeSpecName "kube-api-access-fc4sj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:17:16 crc kubenswrapper[4778]: I0312 15:17:16.264863 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="183372ad-b2f3-4b05-af1e-96ce5b752768" path="/var/lib/kubelet/pods/183372ad-b2f3-4b05-af1e-96ce5b752768/volumes" Mar 12 15:17:16 crc kubenswrapper[4778]: I0312 15:17:16.267935 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fc4sj\" (UniqueName: \"kubernetes.io/projected/183372ad-b2f3-4b05-af1e-96ce5b752768-kube-api-access-fc4sj\") on node \"crc\" DevicePath \"\"" Mar 12 15:17:16 crc kubenswrapper[4778]: I0312 15:17:16.869417 4778 scope.go:117] "RemoveContainer" containerID="88377826de2b8da64a9c4f2434a9dbd5ccc7cbe21ec7e843bd98b2b4d5469479" Mar 12 15:17:16 crc kubenswrapper[4778]: I0312 15:17:16.869655 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-t8fhv/crc-debug-vvl5p" Mar 12 15:17:17 crc kubenswrapper[4778]: I0312 15:17:17.316001 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-t8fhv/crc-debug-t6zbj"] Mar 12 15:17:17 crc kubenswrapper[4778]: E0312 15:17:17.316794 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="183372ad-b2f3-4b05-af1e-96ce5b752768" containerName="container-00" Mar 12 15:17:17 crc kubenswrapper[4778]: I0312 15:17:17.316813 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="183372ad-b2f3-4b05-af1e-96ce5b752768" containerName="container-00" Mar 12 15:17:17 crc kubenswrapper[4778]: I0312 15:17:17.317070 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="183372ad-b2f3-4b05-af1e-96ce5b752768" containerName="container-00" Mar 12 15:17:17 crc kubenswrapper[4778]: I0312 15:17:17.317908 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-t8fhv/crc-debug-t6zbj" Mar 12 15:17:17 crc kubenswrapper[4778]: I0312 15:17:17.321734 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-t8fhv"/"default-dockercfg-h8m7h" Mar 12 15:17:17 crc kubenswrapper[4778]: I0312 15:17:17.407021 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdk54\" (UniqueName: \"kubernetes.io/projected/709dd41c-fb43-4c5d-9741-f407d99cf786-kube-api-access-hdk54\") pod \"crc-debug-t6zbj\" (UID: \"709dd41c-fb43-4c5d-9741-f407d99cf786\") " pod="openshift-must-gather-t8fhv/crc-debug-t6zbj" Mar 12 15:17:17 crc kubenswrapper[4778]: I0312 15:17:17.407212 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/709dd41c-fb43-4c5d-9741-f407d99cf786-host\") pod \"crc-debug-t6zbj\" (UID: \"709dd41c-fb43-4c5d-9741-f407d99cf786\") " 
pod="openshift-must-gather-t8fhv/crc-debug-t6zbj" Mar 12 15:17:17 crc kubenswrapper[4778]: I0312 15:17:17.509440 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdk54\" (UniqueName: \"kubernetes.io/projected/709dd41c-fb43-4c5d-9741-f407d99cf786-kube-api-access-hdk54\") pod \"crc-debug-t6zbj\" (UID: \"709dd41c-fb43-4c5d-9741-f407d99cf786\") " pod="openshift-must-gather-t8fhv/crc-debug-t6zbj" Mar 12 15:17:17 crc kubenswrapper[4778]: I0312 15:17:17.509544 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/709dd41c-fb43-4c5d-9741-f407d99cf786-host\") pod \"crc-debug-t6zbj\" (UID: \"709dd41c-fb43-4c5d-9741-f407d99cf786\") " pod="openshift-must-gather-t8fhv/crc-debug-t6zbj" Mar 12 15:17:17 crc kubenswrapper[4778]: I0312 15:17:17.509651 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/709dd41c-fb43-4c5d-9741-f407d99cf786-host\") pod \"crc-debug-t6zbj\" (UID: \"709dd41c-fb43-4c5d-9741-f407d99cf786\") " pod="openshift-must-gather-t8fhv/crc-debug-t6zbj" Mar 12 15:17:17 crc kubenswrapper[4778]: I0312 15:17:17.530880 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdk54\" (UniqueName: \"kubernetes.io/projected/709dd41c-fb43-4c5d-9741-f407d99cf786-kube-api-access-hdk54\") pod \"crc-debug-t6zbj\" (UID: \"709dd41c-fb43-4c5d-9741-f407d99cf786\") " pod="openshift-must-gather-t8fhv/crc-debug-t6zbj" Mar 12 15:17:17 crc kubenswrapper[4778]: I0312 15:17:17.633232 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-t8fhv/crc-debug-t6zbj" Mar 12 15:17:17 crc kubenswrapper[4778]: W0312 15:17:17.665308 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod709dd41c_fb43_4c5d_9741_f407d99cf786.slice/crio-eeb34a6e12cb7caf13d724302b91bb072707483f0abcda7540aefe215fbdaa58 WatchSource:0}: Error finding container eeb34a6e12cb7caf13d724302b91bb072707483f0abcda7540aefe215fbdaa58: Status 404 returned error can't find the container with id eeb34a6e12cb7caf13d724302b91bb072707483f0abcda7540aefe215fbdaa58 Mar 12 15:17:17 crc kubenswrapper[4778]: I0312 15:17:17.881218 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t8fhv/crc-debug-t6zbj" event={"ID":"709dd41c-fb43-4c5d-9741-f407d99cf786","Type":"ContainerStarted","Data":"eeb34a6e12cb7caf13d724302b91bb072707483f0abcda7540aefe215fbdaa58"} Mar 12 15:17:18 crc kubenswrapper[4778]: I0312 15:17:18.894242 4778 generic.go:334] "Generic (PLEG): container finished" podID="709dd41c-fb43-4c5d-9741-f407d99cf786" containerID="bbbdcd9b6771dbb5cbdf7ae29f037ff7ca335ae97a393f2a8c10c715cd6d06ac" exitCode=0 Mar 12 15:17:18 crc kubenswrapper[4778]: I0312 15:17:18.894346 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t8fhv/crc-debug-t6zbj" event={"ID":"709dd41c-fb43-4c5d-9741-f407d99cf786","Type":"ContainerDied","Data":"bbbdcd9b6771dbb5cbdf7ae29f037ff7ca335ae97a393f2a8c10c715cd6d06ac"} Mar 12 15:17:20 crc kubenswrapper[4778]: I0312 15:17:20.023437 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-t8fhv/crc-debug-t6zbj" Mar 12 15:17:20 crc kubenswrapper[4778]: I0312 15:17:20.061927 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdk54\" (UniqueName: \"kubernetes.io/projected/709dd41c-fb43-4c5d-9741-f407d99cf786-kube-api-access-hdk54\") pod \"709dd41c-fb43-4c5d-9741-f407d99cf786\" (UID: \"709dd41c-fb43-4c5d-9741-f407d99cf786\") " Mar 12 15:17:20 crc kubenswrapper[4778]: I0312 15:17:20.062247 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/709dd41c-fb43-4c5d-9741-f407d99cf786-host\") pod \"709dd41c-fb43-4c5d-9741-f407d99cf786\" (UID: \"709dd41c-fb43-4c5d-9741-f407d99cf786\") " Mar 12 15:17:20 crc kubenswrapper[4778]: I0312 15:17:20.062622 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/709dd41c-fb43-4c5d-9741-f407d99cf786-host" (OuterVolumeSpecName: "host") pod "709dd41c-fb43-4c5d-9741-f407d99cf786" (UID: "709dd41c-fb43-4c5d-9741-f407d99cf786"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 15:17:20 crc kubenswrapper[4778]: I0312 15:17:20.077256 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/709dd41c-fb43-4c5d-9741-f407d99cf786-kube-api-access-hdk54" (OuterVolumeSpecName: "kube-api-access-hdk54") pod "709dd41c-fb43-4c5d-9741-f407d99cf786" (UID: "709dd41c-fb43-4c5d-9741-f407d99cf786"). InnerVolumeSpecName "kube-api-access-hdk54". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:17:20 crc kubenswrapper[4778]: I0312 15:17:20.164357 4778 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/709dd41c-fb43-4c5d-9741-f407d99cf786-host\") on node \"crc\" DevicePath \"\"" Mar 12 15:17:20 crc kubenswrapper[4778]: I0312 15:17:20.164382 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdk54\" (UniqueName: \"kubernetes.io/projected/709dd41c-fb43-4c5d-9741-f407d99cf786-kube-api-access-hdk54\") on node \"crc\" DevicePath \"\"" Mar 12 15:17:20 crc kubenswrapper[4778]: I0312 15:17:20.930301 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t8fhv/crc-debug-t6zbj" event={"ID":"709dd41c-fb43-4c5d-9741-f407d99cf786","Type":"ContainerDied","Data":"eeb34a6e12cb7caf13d724302b91bb072707483f0abcda7540aefe215fbdaa58"} Mar 12 15:17:20 crc kubenswrapper[4778]: I0312 15:17:20.930623 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eeb34a6e12cb7caf13d724302b91bb072707483f0abcda7540aefe215fbdaa58" Mar 12 15:17:20 crc kubenswrapper[4778]: I0312 15:17:20.930369 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-t8fhv/crc-debug-t6zbj" Mar 12 15:17:21 crc kubenswrapper[4778]: I0312 15:17:21.239070 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-t8fhv/crc-debug-t6zbj"] Mar 12 15:17:21 crc kubenswrapper[4778]: I0312 15:17:21.248329 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-t8fhv/crc-debug-t6zbj"] Mar 12 15:17:22 crc kubenswrapper[4778]: I0312 15:17:22.263931 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="709dd41c-fb43-4c5d-9741-f407d99cf786" path="/var/lib/kubelet/pods/709dd41c-fb43-4c5d-9741-f407d99cf786/volumes" Mar 12 15:17:22 crc kubenswrapper[4778]: I0312 15:17:22.465087 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-t8fhv/crc-debug-sf9lm"] Mar 12 15:17:22 crc kubenswrapper[4778]: E0312 15:17:22.465822 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="709dd41c-fb43-4c5d-9741-f407d99cf786" containerName="container-00" Mar 12 15:17:22 crc kubenswrapper[4778]: I0312 15:17:22.465843 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="709dd41c-fb43-4c5d-9741-f407d99cf786" containerName="container-00" Mar 12 15:17:22 crc kubenswrapper[4778]: I0312 15:17:22.466068 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="709dd41c-fb43-4c5d-9741-f407d99cf786" containerName="container-00" Mar 12 15:17:22 crc kubenswrapper[4778]: I0312 15:17:22.466832 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-t8fhv/crc-debug-sf9lm" Mar 12 15:17:22 crc kubenswrapper[4778]: I0312 15:17:22.469478 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-t8fhv"/"default-dockercfg-h8m7h" Mar 12 15:17:22 crc kubenswrapper[4778]: I0312 15:17:22.506849 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdgwn\" (UniqueName: \"kubernetes.io/projected/d48fd056-e0ab-4645-a4ec-af8315c4c789-kube-api-access-bdgwn\") pod \"crc-debug-sf9lm\" (UID: \"d48fd056-e0ab-4645-a4ec-af8315c4c789\") " pod="openshift-must-gather-t8fhv/crc-debug-sf9lm" Mar 12 15:17:22 crc kubenswrapper[4778]: I0312 15:17:22.506943 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d48fd056-e0ab-4645-a4ec-af8315c4c789-host\") pod \"crc-debug-sf9lm\" (UID: \"d48fd056-e0ab-4645-a4ec-af8315c4c789\") " pod="openshift-must-gather-t8fhv/crc-debug-sf9lm" Mar 12 15:17:22 crc kubenswrapper[4778]: I0312 15:17:22.608116 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdgwn\" (UniqueName: \"kubernetes.io/projected/d48fd056-e0ab-4645-a4ec-af8315c4c789-kube-api-access-bdgwn\") pod \"crc-debug-sf9lm\" (UID: \"d48fd056-e0ab-4645-a4ec-af8315c4c789\") " pod="openshift-must-gather-t8fhv/crc-debug-sf9lm" Mar 12 15:17:22 crc kubenswrapper[4778]: I0312 15:17:22.608174 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d48fd056-e0ab-4645-a4ec-af8315c4c789-host\") pod \"crc-debug-sf9lm\" (UID: \"d48fd056-e0ab-4645-a4ec-af8315c4c789\") " pod="openshift-must-gather-t8fhv/crc-debug-sf9lm" Mar 12 15:17:22 crc kubenswrapper[4778]: I0312 15:17:22.608337 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/d48fd056-e0ab-4645-a4ec-af8315c4c789-host\") pod \"crc-debug-sf9lm\" (UID: \"d48fd056-e0ab-4645-a4ec-af8315c4c789\") " pod="openshift-must-gather-t8fhv/crc-debug-sf9lm" Mar 12 15:17:22 crc kubenswrapper[4778]: I0312 15:17:22.625110 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdgwn\" (UniqueName: \"kubernetes.io/projected/d48fd056-e0ab-4645-a4ec-af8315c4c789-kube-api-access-bdgwn\") pod \"crc-debug-sf9lm\" (UID: \"d48fd056-e0ab-4645-a4ec-af8315c4c789\") " pod="openshift-must-gather-t8fhv/crc-debug-sf9lm" Mar 12 15:17:22 crc kubenswrapper[4778]: I0312 15:17:22.865789 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-t8fhv/crc-debug-sf9lm" Mar 12 15:17:22 crc kubenswrapper[4778]: I0312 15:17:22.955419 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t8fhv/crc-debug-sf9lm" event={"ID":"d48fd056-e0ab-4645-a4ec-af8315c4c789","Type":"ContainerStarted","Data":"de2759ba12e8a1c4c30b2c75fa5e913ba606536a3d7f9921e46bf2c30dd4dc93"} Mar 12 15:17:23 crc kubenswrapper[4778]: I0312 15:17:23.964452 4778 generic.go:334] "Generic (PLEG): container finished" podID="d48fd056-e0ab-4645-a4ec-af8315c4c789" containerID="978012010ad73303d32aa324be6cdc810360907cfeab7854465081f99f04817c" exitCode=0 Mar 12 15:17:23 crc kubenswrapper[4778]: I0312 15:17:23.964522 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t8fhv/crc-debug-sf9lm" event={"ID":"d48fd056-e0ab-4645-a4ec-af8315c4c789","Type":"ContainerDied","Data":"978012010ad73303d32aa324be6cdc810360907cfeab7854465081f99f04817c"} Mar 12 15:17:24 crc kubenswrapper[4778]: I0312 15:17:24.004465 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-t8fhv/crc-debug-sf9lm"] Mar 12 15:17:24 crc kubenswrapper[4778]: I0312 15:17:24.014813 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-must-gather-t8fhv/crc-debug-sf9lm"] Mar 12 15:17:25 crc kubenswrapper[4778]: I0312 15:17:25.072342 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-t8fhv/crc-debug-sf9lm" Mar 12 15:17:25 crc kubenswrapper[4778]: I0312 15:17:25.213928 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdgwn\" (UniqueName: \"kubernetes.io/projected/d48fd056-e0ab-4645-a4ec-af8315c4c789-kube-api-access-bdgwn\") pod \"d48fd056-e0ab-4645-a4ec-af8315c4c789\" (UID: \"d48fd056-e0ab-4645-a4ec-af8315c4c789\") " Mar 12 15:17:25 crc kubenswrapper[4778]: I0312 15:17:25.214064 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d48fd056-e0ab-4645-a4ec-af8315c4c789-host\") pod \"d48fd056-e0ab-4645-a4ec-af8315c4c789\" (UID: \"d48fd056-e0ab-4645-a4ec-af8315c4c789\") " Mar 12 15:17:25 crc kubenswrapper[4778]: I0312 15:17:25.214147 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d48fd056-e0ab-4645-a4ec-af8315c4c789-host" (OuterVolumeSpecName: "host") pod "d48fd056-e0ab-4645-a4ec-af8315c4c789" (UID: "d48fd056-e0ab-4645-a4ec-af8315c4c789"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 12 15:17:25 crc kubenswrapper[4778]: I0312 15:17:25.215003 4778 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d48fd056-e0ab-4645-a4ec-af8315c4c789-host\") on node \"crc\" DevicePath \"\"" Mar 12 15:17:25 crc kubenswrapper[4778]: I0312 15:17:25.221516 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d48fd056-e0ab-4645-a4ec-af8315c4c789-kube-api-access-bdgwn" (OuterVolumeSpecName: "kube-api-access-bdgwn") pod "d48fd056-e0ab-4645-a4ec-af8315c4c789" (UID: "d48fd056-e0ab-4645-a4ec-af8315c4c789"). 
InnerVolumeSpecName "kube-api-access-bdgwn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:17:25 crc kubenswrapper[4778]: I0312 15:17:25.318172 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdgwn\" (UniqueName: \"kubernetes.io/projected/d48fd056-e0ab-4645-a4ec-af8315c4c789-kube-api-access-bdgwn\") on node \"crc\" DevicePath \"\"" Mar 12 15:17:25 crc kubenswrapper[4778]: I0312 15:17:25.986161 4778 scope.go:117] "RemoveContainer" containerID="978012010ad73303d32aa324be6cdc810360907cfeab7854465081f99f04817c" Mar 12 15:17:25 crc kubenswrapper[4778]: I0312 15:17:25.986346 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-t8fhv/crc-debug-sf9lm" Mar 12 15:17:26 crc kubenswrapper[4778]: I0312 15:17:26.266277 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d48fd056-e0ab-4645-a4ec-af8315c4c789" path="/var/lib/kubelet/pods/d48fd056-e0ab-4645-a4ec-af8315c4c789/volumes" Mar 12 15:18:00 crc kubenswrapper[4778]: I0312 15:18:00.167889 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555478-fcjb2"] Mar 12 15:18:00 crc kubenswrapper[4778]: E0312 15:18:00.169093 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d48fd056-e0ab-4645-a4ec-af8315c4c789" containerName="container-00" Mar 12 15:18:00 crc kubenswrapper[4778]: I0312 15:18:00.169115 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="d48fd056-e0ab-4645-a4ec-af8315c4c789" containerName="container-00" Mar 12 15:18:00 crc kubenswrapper[4778]: I0312 15:18:00.169514 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="d48fd056-e0ab-4645-a4ec-af8315c4c789" containerName="container-00" Mar 12 15:18:00 crc kubenswrapper[4778]: I0312 15:18:00.170565 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555478-fcjb2" Mar 12 15:18:00 crc kubenswrapper[4778]: I0312 15:18:00.175352 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 15:18:00 crc kubenswrapper[4778]: I0312 15:18:00.175445 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 15:18:00 crc kubenswrapper[4778]: I0312 15:18:00.175603 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 15:18:00 crc kubenswrapper[4778]: I0312 15:18:00.186548 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555478-fcjb2"] Mar 12 15:18:00 crc kubenswrapper[4778]: I0312 15:18:00.243586 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ggbz\" (UniqueName: \"kubernetes.io/projected/341f8c65-027d-48d0-b0c2-b843867c2413-kube-api-access-2ggbz\") pod \"auto-csr-approver-29555478-fcjb2\" (UID: \"341f8c65-027d-48d0-b0c2-b843867c2413\") " pod="openshift-infra/auto-csr-approver-29555478-fcjb2" Mar 12 15:18:00 crc kubenswrapper[4778]: I0312 15:18:00.344990 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ggbz\" (UniqueName: \"kubernetes.io/projected/341f8c65-027d-48d0-b0c2-b843867c2413-kube-api-access-2ggbz\") pod \"auto-csr-approver-29555478-fcjb2\" (UID: \"341f8c65-027d-48d0-b0c2-b843867c2413\") " pod="openshift-infra/auto-csr-approver-29555478-fcjb2" Mar 12 15:18:00 crc kubenswrapper[4778]: I0312 15:18:00.372030 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ggbz\" (UniqueName: \"kubernetes.io/projected/341f8c65-027d-48d0-b0c2-b843867c2413-kube-api-access-2ggbz\") pod \"auto-csr-approver-29555478-fcjb2\" (UID: \"341f8c65-027d-48d0-b0c2-b843867c2413\") " 
pod="openshift-infra/auto-csr-approver-29555478-fcjb2" Mar 12 15:18:00 crc kubenswrapper[4778]: I0312 15:18:00.496268 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555478-fcjb2" Mar 12 15:18:00 crc kubenswrapper[4778]: I0312 15:18:00.959460 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555478-fcjb2"] Mar 12 15:18:01 crc kubenswrapper[4778]: I0312 15:18:01.306472 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555478-fcjb2" event={"ID":"341f8c65-027d-48d0-b0c2-b843867c2413","Type":"ContainerStarted","Data":"dac606d016de3543c499198eb7ee631d3101d78cca57ba65c393a13f9eceedb7"} Mar 12 15:18:01 crc kubenswrapper[4778]: I0312 15:18:01.727156 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-86cb765474-5pq5z_6bd172c5-383f-4273-98a5-2c92223dc765/barbican-api/0.log" Mar 12 15:18:01 crc kubenswrapper[4778]: I0312 15:18:01.867009 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-86cb765474-5pq5z_6bd172c5-383f-4273-98a5-2c92223dc765/barbican-api-log/0.log" Mar 12 15:18:01 crc kubenswrapper[4778]: I0312 15:18:01.917279 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-65c9994dfd-xznqh_8ee1f546-8428-4b23-93e4-b8370fd4224b/barbican-keystone-listener/0.log" Mar 12 15:18:02 crc kubenswrapper[4778]: I0312 15:18:02.207147 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-65c9994dfd-xznqh_8ee1f546-8428-4b23-93e4-b8370fd4224b/barbican-keystone-listener-log/0.log" Mar 12 15:18:02 crc kubenswrapper[4778]: I0312 15:18:02.222305 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7dcf9787-ngc87_d505bb59-3c9e-4cfa-891c-c8e0068e2567/barbican-worker/0.log" Mar 12 15:18:02 crc kubenswrapper[4778]: I0312 15:18:02.234758 
4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7dcf9787-ngc87_d505bb59-3c9e-4cfa-891c-c8e0068e2567/barbican-worker-log/0.log" Mar 12 15:18:02 crc kubenswrapper[4778]: I0312 15:18:02.534230 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-ntpnx_b99627a8-43d8-4f7d-90f7-530eda3c2213/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:18:02 crc kubenswrapper[4778]: I0312 15:18:02.549104 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_9f1d0355-a73a-4a93-94fb-b439436cf1b1/ceilometer-central-agent/0.log" Mar 12 15:18:02 crc kubenswrapper[4778]: I0312 15:18:02.715809 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_9f1d0355-a73a-4a93-94fb-b439436cf1b1/proxy-httpd/0.log" Mar 12 15:18:02 crc kubenswrapper[4778]: I0312 15:18:02.735503 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_9f1d0355-a73a-4a93-94fb-b439436cf1b1/sg-core/0.log" Mar 12 15:18:02 crc kubenswrapper[4778]: I0312 15:18:02.919898 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_9f1d0355-a73a-4a93-94fb-b439436cf1b1/ceilometer-notification-agent/0.log" Mar 12 15:18:02 crc kubenswrapper[4778]: I0312 15:18:02.989698 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_99f72014-50e8-4dd4-9764-1b2c7d546b30/cinder-api-log/0.log" Mar 12 15:18:03 crc kubenswrapper[4778]: I0312 15:18:03.066743 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_99f72014-50e8-4dd4-9764-1b2c7d546b30/cinder-api/0.log" Mar 12 15:18:03 crc kubenswrapper[4778]: I0312 15:18:03.154415 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_39ee2404-53a8-4598-8c4b-c3a34fbf3480/cinder-scheduler/0.log" Mar 12 15:18:03 crc kubenswrapper[4778]: I0312 15:18:03.257591 4778 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_39ee2404-53a8-4598-8c4b-c3a34fbf3480/probe/0.log" Mar 12 15:18:03 crc kubenswrapper[4778]: I0312 15:18:03.343416 4778 generic.go:334] "Generic (PLEG): container finished" podID="341f8c65-027d-48d0-b0c2-b843867c2413" containerID="27e746629157759d4e60a414cb672470c7ab54258b384fb1bc8e845de836c293" exitCode=0 Mar 12 15:18:03 crc kubenswrapper[4778]: I0312 15:18:03.343464 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555478-fcjb2" event={"ID":"341f8c65-027d-48d0-b0c2-b843867c2413","Type":"ContainerDied","Data":"27e746629157759d4e60a414cb672470c7ab54258b384fb1bc8e845de836c293"} Mar 12 15:18:03 crc kubenswrapper[4778]: I0312 15:18:03.359086 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-4szjl_5c5541f3-fb44-476b-91c2-b07dffe50894/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:18:03 crc kubenswrapper[4778]: I0312 15:18:03.473528 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-jg9z6_36bb4acd-fab3-4998-a8cd-a6ebcc800fc8/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:18:03 crc kubenswrapper[4778]: I0312 15:18:03.605289 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f89cfcd7f-vk6h4_46f34397-57fe-425d-b69d-040f4384ac69/init/0.log" Mar 12 15:18:03 crc kubenswrapper[4778]: I0312 15:18:03.763458 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f89cfcd7f-vk6h4_46f34397-57fe-425d-b69d-040f4384ac69/init/0.log" Mar 12 15:18:03 crc kubenswrapper[4778]: I0312 15:18:03.829319 4778 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-2xksx_96ba9a1b-ae5f-4b42-b8eb-1f0e3656ae61/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:18:04 crc kubenswrapper[4778]: I0312 15:18:04.083825 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_81c1a05c-5642-43d4-8a7b-229330168332/glance-log/0.log" Mar 12 15:18:04 crc kubenswrapper[4778]: I0312 15:18:04.119618 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_81c1a05c-5642-43d4-8a7b-229330168332/glance-httpd/0.log" Mar 12 15:18:04 crc kubenswrapper[4778]: I0312 15:18:04.210239 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f89cfcd7f-vk6h4_46f34397-57fe-425d-b69d-040f4384ac69/dnsmasq-dns/0.log" Mar 12 15:18:04 crc kubenswrapper[4778]: I0312 15:18:04.353637 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_7fa757af-1c91-4b93-8916-5bbd99b8522e/glance-httpd/0.log" Mar 12 15:18:04 crc kubenswrapper[4778]: I0312 15:18:04.367093 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_7fa757af-1c91-4b93-8916-5bbd99b8522e/glance-log/0.log" Mar 12 15:18:05 crc kubenswrapper[4778]: I0312 15:18:04.544819 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-bngcx_f69e6cfe-f7c2-4127-b4df-710725c52227/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:18:05 crc kubenswrapper[4778]: I0312 15:18:04.671623 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-g252n_29f8609b-4a3b-42ba-9450-a2b633bb4c2c/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:18:05 crc kubenswrapper[4778]: I0312 15:18:04.694380 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555478-fcjb2" Mar 12 15:18:05 crc kubenswrapper[4778]: I0312 15:18:04.847814 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ggbz\" (UniqueName: \"kubernetes.io/projected/341f8c65-027d-48d0-b0c2-b843867c2413-kube-api-access-2ggbz\") pod \"341f8c65-027d-48d0-b0c2-b843867c2413\" (UID: \"341f8c65-027d-48d0-b0c2-b843867c2413\") " Mar 12 15:18:05 crc kubenswrapper[4778]: I0312 15:18:04.856317 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/341f8c65-027d-48d0-b0c2-b843867c2413-kube-api-access-2ggbz" (OuterVolumeSpecName: "kube-api-access-2ggbz") pod "341f8c65-027d-48d0-b0c2-b843867c2413" (UID: "341f8c65-027d-48d0-b0c2-b843867c2413"). InnerVolumeSpecName "kube-api-access-2ggbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:18:05 crc kubenswrapper[4778]: I0312 15:18:04.952198 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ggbz\" (UniqueName: \"kubernetes.io/projected/341f8c65-027d-48d0-b0c2-b843867c2413-kube-api-access-2ggbz\") on node \"crc\" DevicePath \"\"" Mar 12 15:18:05 crc kubenswrapper[4778]: I0312 15:18:05.089333 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29555401-vjgkl_e4df6927-3452-4b36-b59a-a1fdcd4272a4/keystone-cron/0.log" Mar 12 15:18:05 crc kubenswrapper[4778]: I0312 15:18:05.316351 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29555461-lmqk9_ebdf3274-70cb-4083-bf12-5d1038a9b7ba/keystone-cron/0.log" Mar 12 15:18:05 crc kubenswrapper[4778]: I0312 15:18:05.359866 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555478-fcjb2" event={"ID":"341f8c65-027d-48d0-b0c2-b843867c2413","Type":"ContainerDied","Data":"dac606d016de3543c499198eb7ee631d3101d78cca57ba65c393a13f9eceedb7"} Mar 12 15:18:05 crc 
kubenswrapper[4778]: I0312 15:18:05.359913 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dac606d016de3543c499198eb7ee631d3101d78cca57ba65c393a13f9eceedb7" Mar 12 15:18:05 crc kubenswrapper[4778]: I0312 15:18:05.359931 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555478-fcjb2" Mar 12 15:18:05 crc kubenswrapper[4778]: I0312 15:18:05.507045 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_51f24fcd-aff5-4785-abf7-4936180cee78/kube-state-metrics/0.log" Mar 12 15:18:05 crc kubenswrapper[4778]: I0312 15:18:05.763703 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555472-k4f9h"] Mar 12 15:18:05 crc kubenswrapper[4778]: I0312 15:18:05.783585 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555472-k4f9h"] Mar 12 15:18:05 crc kubenswrapper[4778]: I0312 15:18:05.900803 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-4m9w8_8713b951-b516-42bd-9286-4343e5bcc955/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:18:06 crc kubenswrapper[4778]: I0312 15:18:06.175494 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-69b6dc4885-6lrlq_a56bb599-f10d-4564-b6bf-48128dc2c7f1/keystone-api/0.log" Mar 12 15:18:06 crc kubenswrapper[4778]: I0312 15:18:06.263150 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30c4e913-d163-4764-8738-ac336cd93df9" path="/var/lib/kubelet/pods/30c4e913-d163-4764-8738-ac336cd93df9/volumes" Mar 12 15:18:06 crc kubenswrapper[4778]: I0312 15:18:06.315917 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-69b6dc4885-z4h9m_16dea17b-eaa4-4bbf-8895-c077b3e28d66/keystone-api/0.log" Mar 12 15:18:07 crc kubenswrapper[4778]: I0312 15:18:07.056003 4778 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-566c4d5fc-dggmh_7596a69e-33c9-4a2b-89fc-e4c41252b3fd/neutron-httpd/0.log" Mar 12 15:18:07 crc kubenswrapper[4778]: I0312 15:18:07.294338 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-566c4d5fc-zx97x_8a67d4b7-d8eb-40f4-b51d-62e92c6042c1/neutron-httpd/0.log" Mar 12 15:18:07 crc kubenswrapper[4778]: I0312 15:18:07.483012 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-custom-edpm-deployment-openstack-edpm-ipawlfsg_5cc410de-5b42-44d1-8b29-37161475730e/neutron-metadata-custom-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:18:09 crc kubenswrapper[4778]: I0312 15:18:09.549047 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_ec63cc68-6fde-419b-973c-91fc982e6a49/memcached/0.log" Mar 12 15:18:10 crc kubenswrapper[4778]: I0312 15:18:10.279583 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_13b8e1df-5a8c-44de-b8e8-6c7efdb8bad4/nova-api-log/0.log" Mar 12 15:18:11 crc kubenswrapper[4778]: I0312 15:18:11.689409 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-566c4d5fc-zx97x_8a67d4b7-d8eb-40f4-b51d-62e92c6042c1/neutron-api/0.log" Mar 12 15:18:12 crc kubenswrapper[4778]: I0312 15:18:12.491856 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-1_f0341d80-4327-4c9e-bc11-0cddbc6eab66/nova-api-log/0.log" Mar 12 15:18:12 crc kubenswrapper[4778]: I0312 15:18:12.958263 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_929bb450-949d-4f4f-9c21-de6c3fe32927/nova-cell0-conductor-conductor/0.log" Mar 12 15:18:13 crc kubenswrapper[4778]: I0312 15:18:13.277368 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_1466aea3-fa10-49a6-a254-a96a52091aca/nova-cell1-conductor-conductor/0.log" Mar 12 15:18:13 crc 
kubenswrapper[4778]: I0312 15:18:13.369945 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_13b8e1df-5a8c-44de-b8e8-6c7efdb8bad4/nova-api-api/0.log" Mar 12 15:18:13 crc kubenswrapper[4778]: I0312 15:18:13.462260 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-566c4d5fc-dggmh_7596a69e-33c9-4a2b-89fc-e4c41252b3fd/neutron-api/0.log" Mar 12 15:18:13 crc kubenswrapper[4778]: I0312 15:18:13.536213 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-metadata-0_c289a520-78eb-433f-b7a4-0c03be917c18/nova-cell1-metadata-log/0.log" Mar 12 15:18:13 crc kubenswrapper[4778]: I0312 15:18:13.918925 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_2b43a8b1-b8bc-4ab5-af66-674fa7ff47d7/nova-cell1-novncproxy-novncproxy/0.log" Mar 12 15:18:13 crc kubenswrapper[4778]: I0312 15:18:13.943386 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-5tw6s_6ed77f87-e6b2-4c7a-8b0e-003106200dc8/nova-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:18:14 crc kubenswrapper[4778]: I0312 15:18:14.254140 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_fe52f8ba-9053-4733-b2e3-8f1becf437c8/mysql-bootstrap/0.log" Mar 12 15:18:14 crc kubenswrapper[4778]: I0312 15:18:14.409663 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_fe52f8ba-9053-4733-b2e3-8f1becf437c8/mysql-bootstrap/0.log" Mar 12 15:18:14 crc kubenswrapper[4778]: I0312 15:18:14.473081 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_fe52f8ba-9053-4733-b2e3-8f1becf437c8/galera/0.log" Mar 12 15:18:14 crc kubenswrapper[4778]: I0312 15:18:14.618158 4778 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell1-metadata-0_c289a520-78eb-433f-b7a4-0c03be917c18/nova-cell1-metadata-metadata/0.log" Mar 12 15:18:14 crc kubenswrapper[4778]: I0312 15:18:14.724737 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_663feb48-0ed1-4947-97c3-e0bac206fdb2/mysql-bootstrap/0.log" Mar 12 15:18:14 crc kubenswrapper[4778]: I0312 15:18:14.771680 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-1_f0341d80-4327-4c9e-bc11-0cddbc6eab66/nova-api-api/0.log" Mar 12 15:18:14 crc kubenswrapper[4778]: I0312 15:18:14.973260 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_663feb48-0ed1-4947-97c3-e0bac206fdb2/galera/0.log" Mar 12 15:18:14 crc kubenswrapper[4778]: I0312 15:18:14.998656 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_856cd6d1-db21-4503-94d7-cbf27ca96cc2/openstackclient/0.log" Mar 12 15:18:15 crc kubenswrapper[4778]: I0312 15:18:15.001874 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_663feb48-0ed1-4947-97c3-e0bac206fdb2/mysql-bootstrap/0.log" Mar 12 15:18:15 crc kubenswrapper[4778]: I0312 15:18:15.158318 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_f613745b-fe33-4918-9e0a-da2a59c55e33/nova-scheduler-scheduler/0.log" Mar 12 15:18:15 crc kubenswrapper[4778]: I0312 15:18:15.179754 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-vtt4z_a8484e5d-6f77-407c-81db-0d9b2a6b37fd/openstack-network-exporter/0.log" Mar 12 15:18:15 crc kubenswrapper[4778]: I0312 15:18:15.191121 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-4wct6_3b8efd1e-884d-4963-b69f-04ede0a92267/ovn-controller/0.log" Mar 12 15:18:15 crc kubenswrapper[4778]: I0312 15:18:15.403325 4778 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-p67vh_bd159b65-0c66-4809-949e-0f1babbaa8e6/ovsdb-server-init/0.log" Mar 12 15:18:15 crc kubenswrapper[4778]: I0312 15:18:15.516769 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-p67vh_bd159b65-0c66-4809-949e-0f1babbaa8e6/ovsdb-server-init/0.log" Mar 12 15:18:15 crc kubenswrapper[4778]: I0312 15:18:15.547092 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-p67vh_bd159b65-0c66-4809-949e-0f1babbaa8e6/ovsdb-server/0.log" Mar 12 15:18:15 crc kubenswrapper[4778]: I0312 15:18:15.556572 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-p67vh_bd159b65-0c66-4809-949e-0f1babbaa8e6/ovs-vswitchd/0.log" Mar 12 15:18:15 crc kubenswrapper[4778]: I0312 15:18:15.658941 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-9lbdq_3c0a2200-506d-4ac3-b08c-9b3156c9e573/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:18:15 crc kubenswrapper[4778]: I0312 15:18:15.733621 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_1b25f9c9-784a-4a52-9bb3-02c6c4592702/openstack-network-exporter/0.log" Mar 12 15:18:15 crc kubenswrapper[4778]: I0312 15:18:15.760526 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_1b25f9c9-784a-4a52-9bb3-02c6c4592702/ovn-northd/0.log" Mar 12 15:18:15 crc kubenswrapper[4778]: I0312 15:18:15.870798 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_7321e15e-673c-4e0d-80f8-6ac644c1940f/openstack-network-exporter/0.log" Mar 12 15:18:15 crc kubenswrapper[4778]: I0312 15:18:15.922795 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_7321e15e-673c-4e0d-80f8-6ac644c1940f/ovsdbserver-nb/0.log" Mar 12 15:18:15 crc kubenswrapper[4778]: I0312 15:18:15.948248 4778 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_7c951c6f-06fd-4793-a95b-26b5c1400d73/openstack-network-exporter/0.log" Mar 12 15:18:16 crc kubenswrapper[4778]: I0312 15:18:16.065681 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_7c951c6f-06fd-4793-a95b-26b5c1400d73/ovsdbserver-sb/0.log" Mar 12 15:18:16 crc kubenswrapper[4778]: I0312 15:18:16.230588 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03/setup-container/0.log" Mar 12 15:18:16 crc kubenswrapper[4778]: I0312 15:18:16.539590 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03/setup-container/0.log" Mar 12 15:18:16 crc kubenswrapper[4778]: I0312 15:18:16.542133 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_629c84c5-e6cf-4aa7-ba9a-5a5fe7f53a03/rabbitmq/0.log" Mar 12 15:18:16 crc kubenswrapper[4778]: I0312 15:18:16.562978 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-d4d765698-l7bjx_267e7df2-d35c-45c4-af65-e8af31f8f6cf/placement-api/0.log" Mar 12 15:18:16 crc kubenswrapper[4778]: I0312 15:18:16.705963 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_1e89dfcc-2ac3-444c-91e8-56991eae096b/setup-container/0.log" Mar 12 15:18:16 crc kubenswrapper[4778]: I0312 15:18:16.722373 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-d4d765698-l7bjx_267e7df2-d35c-45c4-af65-e8af31f8f6cf/placement-log/0.log" Mar 12 15:18:16 crc kubenswrapper[4778]: I0312 15:18:16.894445 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_1e89dfcc-2ac3-444c-91e8-56991eae096b/setup-container/0.log" Mar 12 15:18:16 crc kubenswrapper[4778]: I0312 15:18:16.963198 4778 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-wcdkc_43a3ffe4-8b64-4e26-b63a-5254a986e4a4/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:18:16 crc kubenswrapper[4778]: I0312 15:18:16.987999 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_1e89dfcc-2ac3-444c-91e8-56991eae096b/rabbitmq/0.log" Mar 12 15:18:17 crc kubenswrapper[4778]: I0312 15:18:17.099378 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-6nfzc_bd7ac6b4-5600-45ce-b0ea-199dd4baefcb/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:18:17 crc kubenswrapper[4778]: I0312 15:18:17.188483 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-gt58t_b0bb06df-44bb-4939-9492-a6ad3d6b5368/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:18:17 crc kubenswrapper[4778]: I0312 15:18:17.221667 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-8mmjm_c993b33e-6c36-4524-864a-65da461a8e0c/ssh-known-hosts-edpm-deployment/0.log" Mar 12 15:18:17 crc kubenswrapper[4778]: I0312 15:18:17.461172 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-77f887c49f-fw2qd_bbd76cb8-462f-4e60-b755-ef3170e70d11/proxy-server/0.log" Mar 12 15:18:17 crc kubenswrapper[4778]: I0312 15:18:17.552510 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-5knbg_2edc2c90-f91e-402d-809c-514e9d8a5e04/swift-ring-rebalance/0.log" Mar 12 15:18:17 crc kubenswrapper[4778]: I0312 15:18:17.671639 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-77f887c49f-fw2qd_bbd76cb8-462f-4e60-b755-ef3170e70d11/proxy-httpd/0.log" Mar 12 15:18:17 crc kubenswrapper[4778]: I0312 15:18:17.680468 4778 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_c01f943c-e09c-4727-8cf7-eec58a56b363/account-auditor/0.log" Mar 12 15:18:17 crc kubenswrapper[4778]: I0312 15:18:17.691430 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c01f943c-e09c-4727-8cf7-eec58a56b363/account-reaper/0.log" Mar 12 15:18:17 crc kubenswrapper[4778]: I0312 15:18:17.775822 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c01f943c-e09c-4727-8cf7-eec58a56b363/account-replicator/0.log" Mar 12 15:18:17 crc kubenswrapper[4778]: I0312 15:18:17.855236 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c01f943c-e09c-4727-8cf7-eec58a56b363/container-auditor/0.log" Mar 12 15:18:17 crc kubenswrapper[4778]: I0312 15:18:17.870381 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c01f943c-e09c-4727-8cf7-eec58a56b363/account-server/0.log" Mar 12 15:18:17 crc kubenswrapper[4778]: I0312 15:18:17.934903 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c01f943c-e09c-4727-8cf7-eec58a56b363/container-replicator/0.log" Mar 12 15:18:17 crc kubenswrapper[4778]: I0312 15:18:17.944820 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c01f943c-e09c-4727-8cf7-eec58a56b363/container-server/0.log" Mar 12 15:18:17 crc kubenswrapper[4778]: I0312 15:18:17.993356 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c01f943c-e09c-4727-8cf7-eec58a56b363/container-updater/0.log" Mar 12 15:18:18 crc kubenswrapper[4778]: I0312 15:18:18.089716 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c01f943c-e09c-4727-8cf7-eec58a56b363/object-auditor/0.log" Mar 12 15:18:18 crc kubenswrapper[4778]: I0312 15:18:18.112357 4778 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_c01f943c-e09c-4727-8cf7-eec58a56b363/object-expirer/0.log" Mar 12 15:18:18 crc kubenswrapper[4778]: I0312 15:18:18.170824 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c01f943c-e09c-4727-8cf7-eec58a56b363/object-replicator/0.log" Mar 12 15:18:18 crc kubenswrapper[4778]: I0312 15:18:18.176219 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c01f943c-e09c-4727-8cf7-eec58a56b363/object-server/0.log" Mar 12 15:18:18 crc kubenswrapper[4778]: I0312 15:18:18.198681 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c01f943c-e09c-4727-8cf7-eec58a56b363/object-updater/0.log" Mar 12 15:18:18 crc kubenswrapper[4778]: I0312 15:18:18.293704 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c01f943c-e09c-4727-8cf7-eec58a56b363/rsync/0.log" Mar 12 15:18:18 crc kubenswrapper[4778]: I0312 15:18:18.302630 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c01f943c-e09c-4727-8cf7-eec58a56b363/swift-recon-cron/0.log" Mar 12 15:18:18 crc kubenswrapper[4778]: I0312 15:18:18.432893 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-qrk5s_2bfaafaf-36fb-4f1a-99ed-abb8b7bb4ae1/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:18:18 crc kubenswrapper[4778]: I0312 15:18:18.555279 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_74897d0a-ca7b-4589-bd4c-75910c2d491c/tempest-tests-tempest-tests-runner/0.log" Mar 12 15:18:18 crc kubenswrapper[4778]: I0312 15:18:18.566997 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_82246f69-2112-44e9-a783-a4a5926188b4/test-operator-logs-container/0.log" Mar 12 15:18:18 crc kubenswrapper[4778]: I0312 
15:18:18.651928 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-9glvr_41583476-38cd-4c0d-a05a-96ddc5b330ca/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Mar 12 15:18:26 crc kubenswrapper[4778]: I0312 15:18:26.867131 4778 scope.go:117] "RemoveContainer" containerID="971c448e63690dd43ac1d65335a70f73b2547d4337b42531c9336354c82b33f3" Mar 12 15:18:39 crc kubenswrapper[4778]: I0312 15:18:39.289576 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zmsmk"] Mar 12 15:18:39 crc kubenswrapper[4778]: E0312 15:18:39.292986 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="341f8c65-027d-48d0-b0c2-b843867c2413" containerName="oc" Mar 12 15:18:39 crc kubenswrapper[4778]: I0312 15:18:39.293145 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="341f8c65-027d-48d0-b0c2-b843867c2413" containerName="oc" Mar 12 15:18:39 crc kubenswrapper[4778]: I0312 15:18:39.293561 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="341f8c65-027d-48d0-b0c2-b843867c2413" containerName="oc" Mar 12 15:18:39 crc kubenswrapper[4778]: I0312 15:18:39.295731 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zmsmk" Mar 12 15:18:39 crc kubenswrapper[4778]: I0312 15:18:39.330250 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zmsmk"] Mar 12 15:18:39 crc kubenswrapper[4778]: I0312 15:18:39.471522 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d07c4f66-dce0-41f8-8978-a88beb6bead5-utilities\") pod \"certified-operators-zmsmk\" (UID: \"d07c4f66-dce0-41f8-8978-a88beb6bead5\") " pod="openshift-marketplace/certified-operators-zmsmk" Mar 12 15:18:39 crc kubenswrapper[4778]: I0312 15:18:39.471887 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d07c4f66-dce0-41f8-8978-a88beb6bead5-catalog-content\") pod \"certified-operators-zmsmk\" (UID: \"d07c4f66-dce0-41f8-8978-a88beb6bead5\") " pod="openshift-marketplace/certified-operators-zmsmk" Mar 12 15:18:39 crc kubenswrapper[4778]: I0312 15:18:39.472041 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbmb8\" (UniqueName: \"kubernetes.io/projected/d07c4f66-dce0-41f8-8978-a88beb6bead5-kube-api-access-wbmb8\") pod \"certified-operators-zmsmk\" (UID: \"d07c4f66-dce0-41f8-8978-a88beb6bead5\") " pod="openshift-marketplace/certified-operators-zmsmk" Mar 12 15:18:39 crc kubenswrapper[4778]: I0312 15:18:39.573765 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbmb8\" (UniqueName: \"kubernetes.io/projected/d07c4f66-dce0-41f8-8978-a88beb6bead5-kube-api-access-wbmb8\") pod \"certified-operators-zmsmk\" (UID: \"d07c4f66-dce0-41f8-8978-a88beb6bead5\") " pod="openshift-marketplace/certified-operators-zmsmk" Mar 12 15:18:39 crc kubenswrapper[4778]: I0312 15:18:39.574736 4778 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d07c4f66-dce0-41f8-8978-a88beb6bead5-utilities\") pod \"certified-operators-zmsmk\" (UID: \"d07c4f66-dce0-41f8-8978-a88beb6bead5\") " pod="openshift-marketplace/certified-operators-zmsmk" Mar 12 15:18:39 crc kubenswrapper[4778]: I0312 15:18:39.575331 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d07c4f66-dce0-41f8-8978-a88beb6bead5-catalog-content\") pod \"certified-operators-zmsmk\" (UID: \"d07c4f66-dce0-41f8-8978-a88beb6bead5\") " pod="openshift-marketplace/certified-operators-zmsmk" Mar 12 15:18:39 crc kubenswrapper[4778]: I0312 15:18:39.575240 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d07c4f66-dce0-41f8-8978-a88beb6bead5-utilities\") pod \"certified-operators-zmsmk\" (UID: \"d07c4f66-dce0-41f8-8978-a88beb6bead5\") " pod="openshift-marketplace/certified-operators-zmsmk" Mar 12 15:18:39 crc kubenswrapper[4778]: I0312 15:18:39.575629 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d07c4f66-dce0-41f8-8978-a88beb6bead5-catalog-content\") pod \"certified-operators-zmsmk\" (UID: \"d07c4f66-dce0-41f8-8978-a88beb6bead5\") " pod="openshift-marketplace/certified-operators-zmsmk" Mar 12 15:18:39 crc kubenswrapper[4778]: I0312 15:18:39.597805 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbmb8\" (UniqueName: \"kubernetes.io/projected/d07c4f66-dce0-41f8-8978-a88beb6bead5-kube-api-access-wbmb8\") pod \"certified-operators-zmsmk\" (UID: \"d07c4f66-dce0-41f8-8978-a88beb6bead5\") " pod="openshift-marketplace/certified-operators-zmsmk" Mar 12 15:18:39 crc kubenswrapper[4778]: I0312 15:18:39.618909 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zmsmk" Mar 12 15:18:40 crc kubenswrapper[4778]: I0312 15:18:40.152974 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zmsmk"] Mar 12 15:18:40 crc kubenswrapper[4778]: I0312 15:18:40.641577 4778 generic.go:334] "Generic (PLEG): container finished" podID="d07c4f66-dce0-41f8-8978-a88beb6bead5" containerID="7237711c2c7b10d6207511e0a3ed4c19d27357e3abdfd78772a04042037f685f" exitCode=0 Mar 12 15:18:40 crc kubenswrapper[4778]: I0312 15:18:40.641689 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zmsmk" event={"ID":"d07c4f66-dce0-41f8-8978-a88beb6bead5","Type":"ContainerDied","Data":"7237711c2c7b10d6207511e0a3ed4c19d27357e3abdfd78772a04042037f685f"} Mar 12 15:18:40 crc kubenswrapper[4778]: I0312 15:18:40.641838 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zmsmk" event={"ID":"d07c4f66-dce0-41f8-8978-a88beb6bead5","Type":"ContainerStarted","Data":"587d5bcfefe3b906ce3068b4e7ec750fe1f4761ad2d6b33acd8486e1cc6df24b"} Mar 12 15:18:41 crc kubenswrapper[4778]: I0312 15:18:41.612004 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4d52f25c614d14ea9d555eaa2e62114c0c7d01977d30b495569716fcde5fmfr_e1d0ffee-229e-4da3-ac89-02bf6f6a439f/util/0.log" Mar 12 15:18:41 crc kubenswrapper[4778]: I0312 15:18:41.650961 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zmsmk" event={"ID":"d07c4f66-dce0-41f8-8978-a88beb6bead5","Type":"ContainerStarted","Data":"b8a4b13d36e704db61600a8494f1afac8230ce7b9ab5570b692d8a13cc92d8a2"} Mar 12 15:18:41 crc kubenswrapper[4778]: I0312 15:18:41.821131 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4d52f25c614d14ea9d555eaa2e62114c0c7d01977d30b495569716fcde5fmfr_e1d0ffee-229e-4da3-ac89-02bf6f6a439f/pull/0.log" 
Mar 12 15:18:41 crc kubenswrapper[4778]: I0312 15:18:41.821208 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4d52f25c614d14ea9d555eaa2e62114c0c7d01977d30b495569716fcde5fmfr_e1d0ffee-229e-4da3-ac89-02bf6f6a439f/util/0.log" Mar 12 15:18:41 crc kubenswrapper[4778]: I0312 15:18:41.822481 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4d52f25c614d14ea9d555eaa2e62114c0c7d01977d30b495569716fcde5fmfr_e1d0ffee-229e-4da3-ac89-02bf6f6a439f/pull/0.log" Mar 12 15:18:42 crc kubenswrapper[4778]: I0312 15:18:42.018149 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4d52f25c614d14ea9d555eaa2e62114c0c7d01977d30b495569716fcde5fmfr_e1d0ffee-229e-4da3-ac89-02bf6f6a439f/util/0.log" Mar 12 15:18:42 crc kubenswrapper[4778]: I0312 15:18:42.030692 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4d52f25c614d14ea9d555eaa2e62114c0c7d01977d30b495569716fcde5fmfr_e1d0ffee-229e-4da3-ac89-02bf6f6a439f/extract/0.log" Mar 12 15:18:42 crc kubenswrapper[4778]: I0312 15:18:42.039147 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4d52f25c614d14ea9d555eaa2e62114c0c7d01977d30b495569716fcde5fmfr_e1d0ffee-229e-4da3-ac89-02bf6f6a439f/pull/0.log" Mar 12 15:18:42 crc kubenswrapper[4778]: I0312 15:18:42.444412 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-66d56f6ff4-9n6jv_ad531191-d7c5-4ef6-9929-3a5869751d98/manager/0.log" Mar 12 15:18:42 crc kubenswrapper[4778]: I0312 15:18:42.796433 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5964f64c48-gknp2_db7f6b97-2903-44bf-803f-c00c337400b9/manager/0.log" Mar 12 15:18:43 crc kubenswrapper[4778]: I0312 15:18:43.071007 4778 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_heat-operator-controller-manager-77b6666d85-b7tkm_e290c1ea-a39d-451e-a24b-17a2b61ff6f0/manager/0.log" Mar 12 15:18:43 crc kubenswrapper[4778]: I0312 15:18:43.275409 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-6d9d6b584d-4jgt8_4c2bf703-ecc1-4bb1-aa03-a64e55dfdb71/manager/0.log" Mar 12 15:18:43 crc kubenswrapper[4778]: I0312 15:18:43.671290 4778 generic.go:334] "Generic (PLEG): container finished" podID="d07c4f66-dce0-41f8-8978-a88beb6bead5" containerID="b8a4b13d36e704db61600a8494f1afac8230ce7b9ab5570b692d8a13cc92d8a2" exitCode=0 Mar 12 15:18:43 crc kubenswrapper[4778]: I0312 15:18:43.671336 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zmsmk" event={"ID":"d07c4f66-dce0-41f8-8978-a88beb6bead5","Type":"ContainerDied","Data":"b8a4b13d36e704db61600a8494f1afac8230ce7b9ab5570b692d8a13cc92d8a2"} Mar 12 15:18:43 crc kubenswrapper[4778]: I0312 15:18:43.853836 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-5995f4446f-5d6qz_02bc06ca-f4e6-4fde-bd5d-882714d9652c/manager/0.log" Mar 12 15:18:43 crc kubenswrapper[4778]: I0312 15:18:43.906876 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6bbb499bbc-qb8s8_98a4cfbd-3037-48b5-9047-5d574dcc0aca/manager/0.log" Mar 12 15:18:44 crc kubenswrapper[4778]: I0312 15:18:44.230755 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-684f77d66d-7dxdh_7e02c37f-b9af-46c9-a743-03ead9b060db/manager/0.log" Mar 12 15:18:44 crc kubenswrapper[4778]: I0312 15:18:44.465704 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-68f45f9d9f-pn8tk_5e38a4fd-95f8-437b-923b-eca33b1387e6/manager/0.log" Mar 12 15:18:44 crc 
kubenswrapper[4778]: I0312 15:18:44.681613 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zmsmk" event={"ID":"d07c4f66-dce0-41f8-8978-a88beb6bead5","Type":"ContainerStarted","Data":"1b60f78bfe6e75b52d9353c4adc4bbfd22877bf19966b144cae0b72bfb0cec7d"}
Mar 12 15:18:44 crc kubenswrapper[4778]: I0312 15:18:44.710940 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zmsmk" podStartSLOduration=2.220049151 podStartE2EDuration="5.710917236s" podCreationTimestamp="2026-03-12 15:18:39 +0000 UTC" firstStartedPulling="2026-03-12 15:18:40.64417052 +0000 UTC m=+7739.092865916" lastFinishedPulling="2026-03-12 15:18:44.135038605 +0000 UTC m=+7742.583734001" observedRunningTime="2026-03-12 15:18:44.701038625 +0000 UTC m=+7743.149734081" watchObservedRunningTime="2026-03-12 15:18:44.710917236 +0000 UTC m=+7743.159612642"
Mar 12 15:18:44 crc kubenswrapper[4778]: I0312 15:18:44.712151 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-658d4cdd5-jlbft_2d577800-0ee1-4fe5-a7fb-8794fb8c4c6f/manager/0.log"
Mar 12 15:18:45 crc kubenswrapper[4778]: I0312 15:18:45.063946 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-776c5696bf-dd2ft_076835c9-352b-4e40-80c4-3bce3bb80594/manager/0.log"
Mar 12 15:18:45 crc kubenswrapper[4778]: I0312 15:18:45.449949 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-686d5f9fbd-vv9rc_d7288cc6-4247-4d03-bd37-9862243bf613/manager/0.log"
Mar 12 15:18:45 crc kubenswrapper[4778]: I0312 15:18:45.584604 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-984cd4dcf-xm4cc_c8818ac0-af8b-42c9-a923-425fe79ed203/manager/0.log"
Mar 12 15:18:45 crc kubenswrapper[4778]: I0312 15:18:45.606373 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4f55cb5c-cdgg9_1a01d06c-be6f-45de-a22d-c8f1058a3a84/manager/0.log"
Mar 12 15:18:45 crc kubenswrapper[4778]: I0312 15:18:45.903322 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-557ccf57b7qq9w6_4f7d316e-6896-4f84-8423-6f79778c1c6b/manager/0.log"
Mar 12 15:18:46 crc kubenswrapper[4778]: I0312 15:18:46.052787 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5bc4df7446-x9bsl_34bbdc16-4518-4ee5-9a70-3cedcc5f0159/operator/0.log"
Mar 12 15:18:46 crc kubenswrapper[4778]: I0312 15:18:46.174449 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-b2fsv_748546a6-1355-470f-b8d0-de395cf3f681/registry-server/0.log"
Mar 12 15:18:46 crc kubenswrapper[4778]: I0312 15:18:46.463083 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-bbc5b68f9-bbgmb_8d38fd7e-6fa1-4b0c-9c82-9c57290c7837/manager/0.log"
Mar 12 15:18:46 crc kubenswrapper[4778]: I0312 15:18:46.584085 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-574d45c66c-wvpf8_52524252-25bd-49e5-822e-3d4668aff2f9/manager/0.log"
Mar 12 15:18:46 crc kubenswrapper[4778]: I0312 15:18:46.736779 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-shf7b_034f39d8-a33e-4e37-bcde-51fb22debdd1/operator/0.log"
Mar 12 15:18:46 crc kubenswrapper[4778]: I0312 15:18:46.943682 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-677c674df7-84mps_64a36384-f2e6-4077-b2ca-de2a6ce6ea06/manager/0.log"
Mar 12 15:18:47 crc kubenswrapper[4778]: I0312 15:18:47.088412 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6cd66dbd4b-gfv5z_6ad9bf9f-7214-44bc-a65d-1dcbf385fc2c/manager/0.log"
Mar 12 15:18:47 crc kubenswrapper[4778]: I0312 15:18:47.155200 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5c5cb9c4d7-pcfrz_ed9b9271-4ae9-440a-9411-15d46267106e/manager/0.log"
Mar 12 15:18:47 crc kubenswrapper[4778]: I0312 15:18:47.358551 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6dd88c6f67-2tjsk_8c02ecb8-0e15-4672-823a-c4437ca5bf8c/manager/0.log"
Mar 12 15:18:47 crc kubenswrapper[4778]: I0312 15:18:47.684248 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5785b7957-7vdgw_d0784623-5f08-4109-9c7e-0a329210ce07/manager/0.log"
Mar 12 15:18:49 crc kubenswrapper[4778]: I0312 15:18:49.619640 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zmsmk"
Mar 12 15:18:49 crc kubenswrapper[4778]: I0312 15:18:49.620650 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zmsmk"
Mar 12 15:18:49 crc kubenswrapper[4778]: I0312 15:18:49.672486 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zmsmk"
Mar 12 15:18:49 crc kubenswrapper[4778]: I0312 15:18:49.788815 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zmsmk"
Mar 12 15:18:49 crc kubenswrapper[4778]: I0312 15:18:49.903684 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zmsmk"]
Mar 12 15:18:51 crc kubenswrapper[4778]: I0312 15:18:51.749286 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zmsmk" podUID="d07c4f66-dce0-41f8-8978-a88beb6bead5" containerName="registry-server" containerID="cri-o://1b60f78bfe6e75b52d9353c4adc4bbfd22877bf19966b144cae0b72bfb0cec7d" gracePeriod=2
Mar 12 15:18:52 crc kubenswrapper[4778]: I0312 15:18:52.269106 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zmsmk"
Mar 12 15:18:52 crc kubenswrapper[4778]: I0312 15:18:52.402366 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbmb8\" (UniqueName: \"kubernetes.io/projected/d07c4f66-dce0-41f8-8978-a88beb6bead5-kube-api-access-wbmb8\") pod \"d07c4f66-dce0-41f8-8978-a88beb6bead5\" (UID: \"d07c4f66-dce0-41f8-8978-a88beb6bead5\") "
Mar 12 15:18:52 crc kubenswrapper[4778]: I0312 15:18:52.402683 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d07c4f66-dce0-41f8-8978-a88beb6bead5-utilities\") pod \"d07c4f66-dce0-41f8-8978-a88beb6bead5\" (UID: \"d07c4f66-dce0-41f8-8978-a88beb6bead5\") "
Mar 12 15:18:52 crc kubenswrapper[4778]: I0312 15:18:52.402789 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d07c4f66-dce0-41f8-8978-a88beb6bead5-catalog-content\") pod \"d07c4f66-dce0-41f8-8978-a88beb6bead5\" (UID: \"d07c4f66-dce0-41f8-8978-a88beb6bead5\") "
Mar 12 15:18:52 crc kubenswrapper[4778]: I0312 15:18:52.404110 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d07c4f66-dce0-41f8-8978-a88beb6bead5-utilities" (OuterVolumeSpecName: "utilities") pod "d07c4f66-dce0-41f8-8978-a88beb6bead5" (UID: "d07c4f66-dce0-41f8-8978-a88beb6bead5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 15:18:52 crc kubenswrapper[4778]: I0312 15:18:52.419789 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d07c4f66-dce0-41f8-8978-a88beb6bead5-kube-api-access-wbmb8" (OuterVolumeSpecName: "kube-api-access-wbmb8") pod "d07c4f66-dce0-41f8-8978-a88beb6bead5" (UID: "d07c4f66-dce0-41f8-8978-a88beb6bead5"). InnerVolumeSpecName "kube-api-access-wbmb8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 15:18:52 crc kubenswrapper[4778]: I0312 15:18:52.492958 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d07c4f66-dce0-41f8-8978-a88beb6bead5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d07c4f66-dce0-41f8-8978-a88beb6bead5" (UID: "d07c4f66-dce0-41f8-8978-a88beb6bead5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 15:18:52 crc kubenswrapper[4778]: I0312 15:18:52.505558 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbmb8\" (UniqueName: \"kubernetes.io/projected/d07c4f66-dce0-41f8-8978-a88beb6bead5-kube-api-access-wbmb8\") on node \"crc\" DevicePath \"\""
Mar 12 15:18:52 crc kubenswrapper[4778]: I0312 15:18:52.505586 4778 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d07c4f66-dce0-41f8-8978-a88beb6bead5-utilities\") on node \"crc\" DevicePath \"\""
Mar 12 15:18:52 crc kubenswrapper[4778]: I0312 15:18:52.505597 4778 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d07c4f66-dce0-41f8-8978-a88beb6bead5-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 12 15:18:52 crc kubenswrapper[4778]: I0312 15:18:52.759384 4778 generic.go:334] "Generic (PLEG): container finished" podID="d07c4f66-dce0-41f8-8978-a88beb6bead5" containerID="1b60f78bfe6e75b52d9353c4adc4bbfd22877bf19966b144cae0b72bfb0cec7d" exitCode=0
Mar 12 15:18:52 crc kubenswrapper[4778]: I0312 15:18:52.759424 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zmsmk" event={"ID":"d07c4f66-dce0-41f8-8978-a88beb6bead5","Type":"ContainerDied","Data":"1b60f78bfe6e75b52d9353c4adc4bbfd22877bf19966b144cae0b72bfb0cec7d"}
Mar 12 15:18:52 crc kubenswrapper[4778]: I0312 15:18:52.759450 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zmsmk" event={"ID":"d07c4f66-dce0-41f8-8978-a88beb6bead5","Type":"ContainerDied","Data":"587d5bcfefe3b906ce3068b4e7ec750fe1f4761ad2d6b33acd8486e1cc6df24b"}
Mar 12 15:18:52 crc kubenswrapper[4778]: I0312 15:18:52.759465 4778 scope.go:117] "RemoveContainer" containerID="1b60f78bfe6e75b52d9353c4adc4bbfd22877bf19966b144cae0b72bfb0cec7d"
Mar 12 15:18:52 crc kubenswrapper[4778]: I0312 15:18:52.759600 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zmsmk"
Mar 12 15:18:52 crc kubenswrapper[4778]: I0312 15:18:52.784439 4778 scope.go:117] "RemoveContainer" containerID="b8a4b13d36e704db61600a8494f1afac8230ce7b9ab5570b692d8a13cc92d8a2"
Mar 12 15:18:52 crc kubenswrapper[4778]: I0312 15:18:52.795545 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zmsmk"]
Mar 12 15:18:52 crc kubenswrapper[4778]: I0312 15:18:52.805035 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zmsmk"]
Mar 12 15:18:52 crc kubenswrapper[4778]: I0312 15:18:52.822434 4778 scope.go:117] "RemoveContainer" containerID="7237711c2c7b10d6207511e0a3ed4c19d27357e3abdfd78772a04042037f685f"
Mar 12 15:18:52 crc kubenswrapper[4778]: I0312 15:18:52.864434 4778 scope.go:117] "RemoveContainer" containerID="1b60f78bfe6e75b52d9353c4adc4bbfd22877bf19966b144cae0b72bfb0cec7d"
Mar 12 15:18:52 crc kubenswrapper[4778]: E0312 15:18:52.865848 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b60f78bfe6e75b52d9353c4adc4bbfd22877bf19966b144cae0b72bfb0cec7d\": container with ID starting with 1b60f78bfe6e75b52d9353c4adc4bbfd22877bf19966b144cae0b72bfb0cec7d not found: ID does not exist" containerID="1b60f78bfe6e75b52d9353c4adc4bbfd22877bf19966b144cae0b72bfb0cec7d"
Mar 12 15:18:52 crc kubenswrapper[4778]: I0312 15:18:52.865890 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b60f78bfe6e75b52d9353c4adc4bbfd22877bf19966b144cae0b72bfb0cec7d"} err="failed to get container status \"1b60f78bfe6e75b52d9353c4adc4bbfd22877bf19966b144cae0b72bfb0cec7d\": rpc error: code = NotFound desc = could not find container \"1b60f78bfe6e75b52d9353c4adc4bbfd22877bf19966b144cae0b72bfb0cec7d\": container with ID starting with 1b60f78bfe6e75b52d9353c4adc4bbfd22877bf19966b144cae0b72bfb0cec7d not found: ID does not exist"
Mar 12 15:18:52 crc kubenswrapper[4778]: I0312 15:18:52.866105 4778 scope.go:117] "RemoveContainer" containerID="b8a4b13d36e704db61600a8494f1afac8230ce7b9ab5570b692d8a13cc92d8a2"
Mar 12 15:18:52 crc kubenswrapper[4778]: E0312 15:18:52.866632 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8a4b13d36e704db61600a8494f1afac8230ce7b9ab5570b692d8a13cc92d8a2\": container with ID starting with b8a4b13d36e704db61600a8494f1afac8230ce7b9ab5570b692d8a13cc92d8a2 not found: ID does not exist" containerID="b8a4b13d36e704db61600a8494f1afac8230ce7b9ab5570b692d8a13cc92d8a2"
Mar 12 15:18:52 crc kubenswrapper[4778]: I0312 15:18:52.866685 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8a4b13d36e704db61600a8494f1afac8230ce7b9ab5570b692d8a13cc92d8a2"} err="failed to get container status \"b8a4b13d36e704db61600a8494f1afac8230ce7b9ab5570b692d8a13cc92d8a2\": rpc error: code = NotFound desc = could not find container \"b8a4b13d36e704db61600a8494f1afac8230ce7b9ab5570b692d8a13cc92d8a2\": container with ID starting with b8a4b13d36e704db61600a8494f1afac8230ce7b9ab5570b692d8a13cc92d8a2 not found: ID does not exist"
Mar 12 15:18:52 crc kubenswrapper[4778]: I0312 15:18:52.866715 4778 scope.go:117] "RemoveContainer" containerID="7237711c2c7b10d6207511e0a3ed4c19d27357e3abdfd78772a04042037f685f"
Mar 12 15:18:52 crc kubenswrapper[4778]: E0312 15:18:52.867219 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7237711c2c7b10d6207511e0a3ed4c19d27357e3abdfd78772a04042037f685f\": container with ID starting with 7237711c2c7b10d6207511e0a3ed4c19d27357e3abdfd78772a04042037f685f not found: ID does not exist" containerID="7237711c2c7b10d6207511e0a3ed4c19d27357e3abdfd78772a04042037f685f"
Mar 12 15:18:52 crc kubenswrapper[4778]: I0312 15:18:52.867261 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7237711c2c7b10d6207511e0a3ed4c19d27357e3abdfd78772a04042037f685f"} err="failed to get container status \"7237711c2c7b10d6207511e0a3ed4c19d27357e3abdfd78772a04042037f685f\": rpc error: code = NotFound desc = could not find container \"7237711c2c7b10d6207511e0a3ed4c19d27357e3abdfd78772a04042037f685f\": container with ID starting with 7237711c2c7b10d6207511e0a3ed4c19d27357e3abdfd78772a04042037f685f not found: ID does not exist"
Mar 12 15:18:54 crc kubenswrapper[4778]: I0312 15:18:54.262328 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d07c4f66-dce0-41f8-8978-a88beb6bead5" path="/var/lib/kubelet/pods/d07c4f66-dce0-41f8-8978-a88beb6bead5/volumes"
Mar 12 15:18:56 crc kubenswrapper[4778]: I0312 15:18:56.312983 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-677bd678f7-6h2c2_ffb8a1f4-4533-4368-a900-95d37fe1d3ad/manager/0.log"
Mar 12 15:18:58 crc kubenswrapper[4778]: I0312 15:18:58.557429 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 12 15:18:58 crc kubenswrapper[4778]: I0312 15:18:58.557721 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 12 15:19:08 crc kubenswrapper[4778]: I0312 15:19:08.090168 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-zkrqr_f799c7e9-1c31-40bc-9ece-06a086683a98/control-plane-machine-set-operator/0.log"
Mar 12 15:19:08 crc kubenswrapper[4778]: I0312 15:19:08.260606 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-242cb_e2967620-e2ce-4763-8a6c-e5a37f3a1f98/kube-rbac-proxy/0.log"
Mar 12 15:19:08 crc kubenswrapper[4778]: I0312 15:19:08.293763 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-242cb_e2967620-e2ce-4763-8a6c-e5a37f3a1f98/machine-api-operator/0.log"
Mar 12 15:19:21 crc kubenswrapper[4778]: I0312 15:19:21.019632 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-2774s_92b29110-f478-42b5-9a5f-c9330a3973b2/cert-manager-controller/0.log"
Mar 12 15:19:21 crc kubenswrapper[4778]: I0312 15:19:21.205069 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-jxs4g_804d0b09-6fab-4277-936a-5e0324d76b3e/cert-manager-cainjector/0.log"
Mar 12 15:19:21 crc kubenswrapper[4778]: I0312 15:19:21.322568 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-ffh2x_45da07c5-bccb-4433-aa38-d9d2894f1b09/cert-manager-webhook/0.log"
Mar 12 15:19:28 crc kubenswrapper[4778]: I0312 15:19:28.558000 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 12 15:19:28 crc kubenswrapper[4778]: I0312 15:19:28.559446 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 12 15:19:33 crc kubenswrapper[4778]: I0312 15:19:33.932314 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-86f58fcf4-jbxx4_af2d568b-9719-4da9-b0e8-e28d314ed860/nmstate-console-plugin/0.log"
Mar 12 15:19:34 crc kubenswrapper[4778]: I0312 15:19:34.121135 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-rbsjl_d8309ffe-a26c-44a8-84e2-7b7ec10982a8/nmstate-handler/0.log"
Mar 12 15:19:34 crc kubenswrapper[4778]: I0312 15:19:34.146027 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-b2s5h_7855d7b1-c7cf-4b63-9313-051a391fcf43/kube-rbac-proxy/0.log"
Mar 12 15:19:34 crc kubenswrapper[4778]: I0312 15:19:34.210262 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-b2s5h_7855d7b1-c7cf-4b63-9313-051a391fcf43/nmstate-metrics/0.log"
Mar 12 15:19:34 crc kubenswrapper[4778]: I0312 15:19:34.314360 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-796d4cfff4-hxzd6_fb85eef5-01f9-4fa6-b9d8-9606d04b8cd3/nmstate-operator/0.log"
Mar 12 15:19:34 crc kubenswrapper[4778]: I0312 15:19:34.421205 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f558f5558-94rbc_ef796a94-b10d-4d18-ae88-f64bc3a6b87d/nmstate-webhook/0.log"
Mar 12 15:19:58 crc kubenswrapper[4778]: I0312 15:19:58.557346 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 12 15:19:58 crc kubenswrapper[4778]: I0312 15:19:58.557910 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 12 15:19:58 crc kubenswrapper[4778]: I0312 15:19:58.557977 4778 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2qx88"
Mar 12 15:19:58 crc kubenswrapper[4778]: I0312 15:19:58.558887 4778 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"28185bb0bf8713237bbead875f67f2cbfd250e5d39c0866c90d3e073957181fc"} pod="openshift-machine-config-operator/machine-config-daemon-2qx88" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 12 15:19:58 crc kubenswrapper[4778]: I0312 15:19:58.558984 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" containerID="cri-o://28185bb0bf8713237bbead875f67f2cbfd250e5d39c0866c90d3e073957181fc" gracePeriod=600
Mar 12 15:19:58 crc kubenswrapper[4778]: E0312 15:19:58.686173 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd"
Mar 12 15:19:59 crc kubenswrapper[4778]: I0312 15:19:59.369993 4778 generic.go:334] "Generic (PLEG): container finished" podID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerID="28185bb0bf8713237bbead875f67f2cbfd250e5d39c0866c90d3e073957181fc" exitCode=0
Mar 12 15:19:59 crc kubenswrapper[4778]: I0312 15:19:59.370038 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" event={"ID":"24438fc6-dab0-4a9e-8b97-2532da76d9cd","Type":"ContainerDied","Data":"28185bb0bf8713237bbead875f67f2cbfd250e5d39c0866c90d3e073957181fc"}
Mar 12 15:19:59 crc kubenswrapper[4778]: I0312 15:19:59.370353 4778 scope.go:117] "RemoveContainer" containerID="9c0ffa691d48b1023164222bd8c69a88e4e7a89d268ba03833dc6ae4ab4b44b3"
Mar 12 15:19:59 crc kubenswrapper[4778]: I0312 15:19:59.371157 4778 scope.go:117] "RemoveContainer" containerID="28185bb0bf8713237bbead875f67f2cbfd250e5d39c0866c90d3e073957181fc"
Mar 12 15:19:59 crc kubenswrapper[4778]: E0312 15:19:59.371478 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd"
Mar 12 15:20:00 crc kubenswrapper[4778]: I0312 15:20:00.149410 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555480-vwvb5"]
Mar 12 15:20:00 crc kubenswrapper[4778]: E0312 15:20:00.149795 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d07c4f66-dce0-41f8-8978-a88beb6bead5" containerName="extract-utilities"
Mar 12 15:20:00 crc kubenswrapper[4778]: I0312 15:20:00.149808 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="d07c4f66-dce0-41f8-8978-a88beb6bead5" containerName="extract-utilities"
Mar 12 15:20:00 crc kubenswrapper[4778]: E0312 15:20:00.149834 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d07c4f66-dce0-41f8-8978-a88beb6bead5" containerName="extract-content"
Mar 12 15:20:00 crc kubenswrapper[4778]: I0312 15:20:00.149840 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="d07c4f66-dce0-41f8-8978-a88beb6bead5" containerName="extract-content"
Mar 12 15:20:00 crc kubenswrapper[4778]: E0312 15:20:00.149854 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d07c4f66-dce0-41f8-8978-a88beb6bead5" containerName="registry-server"
Mar 12 15:20:00 crc kubenswrapper[4778]: I0312 15:20:00.149859 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="d07c4f66-dce0-41f8-8978-a88beb6bead5" containerName="registry-server"
Mar 12 15:20:00 crc kubenswrapper[4778]: I0312 15:20:00.150056 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="d07c4f66-dce0-41f8-8978-a88beb6bead5" containerName="registry-server"
Mar 12 15:20:00 crc kubenswrapper[4778]: I0312 15:20:00.150694 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555480-vwvb5"
Mar 12 15:20:00 crc kubenswrapper[4778]: I0312 15:20:00.153605 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 12 15:20:00 crc kubenswrapper[4778]: I0312 15:20:00.157287 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 12 15:20:00 crc kubenswrapper[4778]: I0312 15:20:00.157738 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl"
Mar 12 15:20:00 crc kubenswrapper[4778]: I0312 15:20:00.173545 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555480-vwvb5"]
Mar 12 15:20:00 crc kubenswrapper[4778]: I0312 15:20:00.262108 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5v8r\" (UniqueName: \"kubernetes.io/projected/9e32f842-16d7-484b-a241-e24ea8d3db45-kube-api-access-f5v8r\") pod \"auto-csr-approver-29555480-vwvb5\" (UID: \"9e32f842-16d7-484b-a241-e24ea8d3db45\") " pod="openshift-infra/auto-csr-approver-29555480-vwvb5"
Mar 12 15:20:00 crc kubenswrapper[4778]: I0312 15:20:00.364554 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5v8r\" (UniqueName: \"kubernetes.io/projected/9e32f842-16d7-484b-a241-e24ea8d3db45-kube-api-access-f5v8r\") pod \"auto-csr-approver-29555480-vwvb5\" (UID: \"9e32f842-16d7-484b-a241-e24ea8d3db45\") " pod="openshift-infra/auto-csr-approver-29555480-vwvb5"
Mar 12 15:20:00 crc kubenswrapper[4778]: I0312 15:20:00.385135 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5v8r\" (UniqueName: \"kubernetes.io/projected/9e32f842-16d7-484b-a241-e24ea8d3db45-kube-api-access-f5v8r\") pod \"auto-csr-approver-29555480-vwvb5\" (UID: \"9e32f842-16d7-484b-a241-e24ea8d3db45\") " pod="openshift-infra/auto-csr-approver-29555480-vwvb5"
Mar 12 15:20:00 crc kubenswrapper[4778]: I0312 15:20:00.469272 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555480-vwvb5"
Mar 12 15:20:00 crc kubenswrapper[4778]: I0312 15:20:00.976845 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555480-vwvb5"]
Mar 12 15:20:01 crc kubenswrapper[4778]: I0312 15:20:01.402379 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555480-vwvb5" event={"ID":"9e32f842-16d7-484b-a241-e24ea8d3db45","Type":"ContainerStarted","Data":"e22dd6e038748741ff00db0c5d6a48f31b87069c92a6e6fb50d913d1489dd215"}
Mar 12 15:20:03 crc kubenswrapper[4778]: I0312 15:20:03.427679 4778 generic.go:334] "Generic (PLEG): container finished" podID="9e32f842-16d7-484b-a241-e24ea8d3db45" containerID="7d34b9f856d96ac0b056ec93139556664e1582951a8e260f7792f49806a39777" exitCode=0
Mar 12 15:20:03 crc kubenswrapper[4778]: I0312 15:20:03.427776 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555480-vwvb5" event={"ID":"9e32f842-16d7-484b-a241-e24ea8d3db45","Type":"ContainerDied","Data":"7d34b9f856d96ac0b056ec93139556664e1582951a8e260f7792f49806a39777"}
Mar 12 15:20:03 crc kubenswrapper[4778]: I0312 15:20:03.705097 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-mnjql_14351deb-3286-4464-8eac-6bb116a9ebce/kube-rbac-proxy/0.log"
Mar 12 15:20:03 crc kubenswrapper[4778]: I0312 15:20:03.819534 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-mnjql_14351deb-3286-4464-8eac-6bb116a9ebce/controller/0.log"
Mar 12 15:20:03 crc kubenswrapper[4778]: I0312 15:20:03.942892 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-bcc4b6f68-x2n7f_2f214887-d638-42fa-aa86-1518cfae600d/frr-k8s-webhook-server/0.log"
Mar 12 15:20:04 crc kubenswrapper[4778]: I0312 15:20:04.006523 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zxv5p_b5f035ed-2e64-4000-908f-6d0ecab1fe8d/cp-frr-files/0.log"
Mar 12 15:20:04 crc kubenswrapper[4778]: I0312 15:20:04.178359 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zxv5p_b5f035ed-2e64-4000-908f-6d0ecab1fe8d/cp-metrics/0.log"
Mar 12 15:20:04 crc kubenswrapper[4778]: I0312 15:20:04.237642 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zxv5p_b5f035ed-2e64-4000-908f-6d0ecab1fe8d/cp-frr-files/0.log"
Mar 12 15:20:04 crc kubenswrapper[4778]: I0312 15:20:04.242066 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zxv5p_b5f035ed-2e64-4000-908f-6d0ecab1fe8d/cp-reloader/0.log"
Mar 12 15:20:04 crc kubenswrapper[4778]: I0312 15:20:04.248212 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zxv5p_b5f035ed-2e64-4000-908f-6d0ecab1fe8d/cp-reloader/0.log"
Mar 12 15:20:04 crc kubenswrapper[4778]: I0312 15:20:04.379970 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zxv5p_b5f035ed-2e64-4000-908f-6d0ecab1fe8d/cp-frr-files/0.log"
Mar 12 15:20:04 crc kubenswrapper[4778]: I0312 15:20:04.420434 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zxv5p_b5f035ed-2e64-4000-908f-6d0ecab1fe8d/cp-reloader/0.log"
Mar 12 15:20:04 crc kubenswrapper[4778]: I0312 15:20:04.478305 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zxv5p_b5f035ed-2e64-4000-908f-6d0ecab1fe8d/cp-metrics/0.log"
Mar 12 15:20:04 crc kubenswrapper[4778]: I0312 15:20:04.500904 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zxv5p_b5f035ed-2e64-4000-908f-6d0ecab1fe8d/cp-metrics/0.log"
Mar 12 15:20:04 crc kubenswrapper[4778]: I0312 15:20:04.715458 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zxv5p_b5f035ed-2e64-4000-908f-6d0ecab1fe8d/cp-reloader/0.log"
Mar 12 15:20:04 crc kubenswrapper[4778]: I0312 15:20:04.722287 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zxv5p_b5f035ed-2e64-4000-908f-6d0ecab1fe8d/controller/0.log"
Mar 12 15:20:04 crc kubenswrapper[4778]: I0312 15:20:04.761196 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zxv5p_b5f035ed-2e64-4000-908f-6d0ecab1fe8d/cp-frr-files/0.log"
Mar 12 15:20:04 crc kubenswrapper[4778]: I0312 15:20:04.767271 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zxv5p_b5f035ed-2e64-4000-908f-6d0ecab1fe8d/cp-metrics/0.log"
Mar 12 15:20:04 crc kubenswrapper[4778]: I0312 15:20:04.781890 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555480-vwvb5"
Mar 12 15:20:04 crc kubenswrapper[4778]: I0312 15:20:04.857254 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5v8r\" (UniqueName: \"kubernetes.io/projected/9e32f842-16d7-484b-a241-e24ea8d3db45-kube-api-access-f5v8r\") pod \"9e32f842-16d7-484b-a241-e24ea8d3db45\" (UID: \"9e32f842-16d7-484b-a241-e24ea8d3db45\") "
Mar 12 15:20:04 crc kubenswrapper[4778]: I0312 15:20:04.862106 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e32f842-16d7-484b-a241-e24ea8d3db45-kube-api-access-f5v8r" (OuterVolumeSpecName: "kube-api-access-f5v8r") pod "9e32f842-16d7-484b-a241-e24ea8d3db45" (UID: "9e32f842-16d7-484b-a241-e24ea8d3db45"). InnerVolumeSpecName "kube-api-access-f5v8r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 15:20:04 crc kubenswrapper[4778]: I0312 15:20:04.959426 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5v8r\" (UniqueName: \"kubernetes.io/projected/9e32f842-16d7-484b-a241-e24ea8d3db45-kube-api-access-f5v8r\") on node \"crc\" DevicePath \"\""
Mar 12 15:20:05 crc kubenswrapper[4778]: I0312 15:20:05.002925 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zxv5p_b5f035ed-2e64-4000-908f-6d0ecab1fe8d/kube-rbac-proxy/0.log"
Mar 12 15:20:05 crc kubenswrapper[4778]: I0312 15:20:05.003083 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zxv5p_b5f035ed-2e64-4000-908f-6d0ecab1fe8d/kube-rbac-proxy-frr/0.log"
Mar 12 15:20:05 crc kubenswrapper[4778]: I0312 15:20:05.026431 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zxv5p_b5f035ed-2e64-4000-908f-6d0ecab1fe8d/frr-metrics/0.log"
Mar 12 15:20:05 crc kubenswrapper[4778]: I0312 15:20:05.238298 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zxv5p_b5f035ed-2e64-4000-908f-6d0ecab1fe8d/reloader/0.log"
Mar 12 15:20:05 crc kubenswrapper[4778]: I0312 15:20:05.294924 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-54d5c4b6c7-gh4lx_a5a6d344-0a75-422d-acd9-fe8887b03110/manager/0.log"
Mar 12 15:20:05 crc kubenswrapper[4778]: I0312 15:20:05.443620 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-68f5db54d6-zstmq_6ac207b6-1710-47af-8fe9-b0c3adbce0ab/webhook-server/0.log"
Mar 12 15:20:05 crc kubenswrapper[4778]: I0312 15:20:05.443995 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555480-vwvb5" event={"ID":"9e32f842-16d7-484b-a241-e24ea8d3db45","Type":"ContainerDied","Data":"e22dd6e038748741ff00db0c5d6a48f31b87069c92a6e6fb50d913d1489dd215"}
Mar 12 15:20:05 crc kubenswrapper[4778]: I0312 15:20:05.444086 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e22dd6e038748741ff00db0c5d6a48f31b87069c92a6e6fb50d913d1489dd215"
Mar 12 15:20:05 crc kubenswrapper[4778]: I0312 15:20:05.444039 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555480-vwvb5"
Mar 12 15:20:05 crc kubenswrapper[4778]: I0312 15:20:05.856604 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555474-s5qjz"]
Mar 12 15:20:05 crc kubenswrapper[4778]: I0312 15:20:05.863964 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555474-s5qjz"]
Mar 12 15:20:05 crc kubenswrapper[4778]: I0312 15:20:05.867248 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-k7nvk_f2e1d11e-8f27-498d-8d45-ac0e14a796fe/kube-rbac-proxy/0.log"
Mar 12 15:20:06 crc kubenswrapper[4778]: I0312 15:20:06.269147 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0be289e-351f-4101-acbd-0127a4b295dc" path="/var/lib/kubelet/pods/c0be289e-351f-4101-acbd-0127a4b295dc/volumes"
Mar 12 15:20:06 crc kubenswrapper[4778]: I0312 15:20:06.544340 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-k7nvk_f2e1d11e-8f27-498d-8d45-ac0e14a796fe/speaker/0.log"
Mar 12 15:20:07 crc kubenswrapper[4778]: I0312 15:20:07.180496 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zxv5p_b5f035ed-2e64-4000-908f-6d0ecab1fe8d/frr/0.log"
Mar 12 15:20:10 crc kubenswrapper[4778]: I0312 15:20:10.254124 4778 scope.go:117] "RemoveContainer" containerID="28185bb0bf8713237bbead875f67f2cbfd250e5d39c0866c90d3e073957181fc"
Mar 12 15:20:10 crc kubenswrapper[4778]: E0312 15:20:10.254854 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd"
Mar 12 15:20:20 crc kubenswrapper[4778]: I0312 15:20:20.019133 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wxjhd_cb93062b-8387-4eb4-8662-ecaf93146d85/util/0.log"
Mar 12 15:20:20 crc kubenswrapper[4778]: I0312 15:20:20.178647 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wxjhd_cb93062b-8387-4eb4-8662-ecaf93146d85/util/0.log"
Mar 12 15:20:20 crc kubenswrapper[4778]: I0312 15:20:20.192672 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wxjhd_cb93062b-8387-4eb4-8662-ecaf93146d85/pull/0.log"
Mar 12 15:20:20 crc kubenswrapper[4778]: I0312 15:20:20.256433 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wxjhd_cb93062b-8387-4eb4-8662-ecaf93146d85/pull/0.log"
Mar 12 15:20:20 crc kubenswrapper[4778]: I0312 15:20:20.403920 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wxjhd_cb93062b-8387-4eb4-8662-ecaf93146d85/pull/0.log"
Mar 12 15:20:20 crc kubenswrapper[4778]: I0312 15:20:20.417533 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wxjhd_cb93062b-8387-4eb4-8662-ecaf93146d85/util/0.log"
Mar 12 15:20:20 crc kubenswrapper[4778]: I0312 15:20:20.420121 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874wxjhd_cb93062b-8387-4eb4-8662-ecaf93146d85/extract/0.log"
Mar 12 15:20:20 crc kubenswrapper[4778]: I0312 15:20:20.556584 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rdvw6_9090029d-2f37-457b-8425-3690da177434/util/0.log"
Mar 12 15:20:20 crc kubenswrapper[4778]: I0312 15:20:20.745547 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rdvw6_9090029d-2f37-457b-8425-3690da177434/util/0.log"
Mar 12 15:20:20 crc kubenswrapper[4778]: I0312 15:20:20.754090 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rdvw6_9090029d-2f37-457b-8425-3690da177434/pull/0.log"
Mar 12 15:20:20 crc kubenswrapper[4778]: I0312 15:20:20.757283 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rdvw6_9090029d-2f37-457b-8425-3690da177434/pull/0.log"
Mar 12 15:20:20 crc kubenswrapper[4778]: I0312 15:20:20.918686 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rdvw6_9090029d-2f37-457b-8425-3690da177434/pull/0.log"
Mar 12 15:20:20 crc kubenswrapper[4778]: I0312 15:20:20.940126 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rdvw6_9090029d-2f37-457b-8425-3690da177434/util/0.log"
Mar 12
15:20:20 crc kubenswrapper[4778]: I0312 15:20:20.963018 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1rdvw6_9090029d-2f37-457b-8425-3690da177434/extract/0.log" Mar 12 15:20:21 crc kubenswrapper[4778]: I0312 15:20:21.082808 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fhcbf_b5b1dff9-c32b-4a91-863c-10b5ea4bc4ef/extract-utilities/0.log" Mar 12 15:20:21 crc kubenswrapper[4778]: I0312 15:20:21.253842 4778 scope.go:117] "RemoveContainer" containerID="28185bb0bf8713237bbead875f67f2cbfd250e5d39c0866c90d3e073957181fc" Mar 12 15:20:21 crc kubenswrapper[4778]: E0312 15:20:21.254134 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 15:20:21 crc kubenswrapper[4778]: I0312 15:20:21.393616 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fhcbf_b5b1dff9-c32b-4a91-863c-10b5ea4bc4ef/extract-content/0.log" Mar 12 15:20:21 crc kubenswrapper[4778]: I0312 15:20:21.403449 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fhcbf_b5b1dff9-c32b-4a91-863c-10b5ea4bc4ef/extract-utilities/0.log" Mar 12 15:20:21 crc kubenswrapper[4778]: I0312 15:20:21.425368 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fhcbf_b5b1dff9-c32b-4a91-863c-10b5ea4bc4ef/extract-content/0.log" Mar 12 15:20:21 crc kubenswrapper[4778]: I0312 15:20:21.472293 4778 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-fhcbf_b5b1dff9-c32b-4a91-863c-10b5ea4bc4ef/extract-utilities/0.log" Mar 12 15:20:21 crc kubenswrapper[4778]: I0312 15:20:21.571841 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fhcbf_b5b1dff9-c32b-4a91-863c-10b5ea4bc4ef/extract-content/0.log" Mar 12 15:20:21 crc kubenswrapper[4778]: I0312 15:20:21.686801 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bthl5_9098edbc-6c4b-444b-8214-5848756ec94b/extract-utilities/0.log" Mar 12 15:20:21 crc kubenswrapper[4778]: I0312 15:20:21.909823 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bthl5_9098edbc-6c4b-444b-8214-5848756ec94b/extract-content/0.log" Mar 12 15:20:21 crc kubenswrapper[4778]: I0312 15:20:21.929169 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bthl5_9098edbc-6c4b-444b-8214-5848756ec94b/extract-utilities/0.log" Mar 12 15:20:21 crc kubenswrapper[4778]: I0312 15:20:21.990866 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bthl5_9098edbc-6c4b-444b-8214-5848756ec94b/extract-content/0.log" Mar 12 15:20:22 crc kubenswrapper[4778]: I0312 15:20:22.128788 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bthl5_9098edbc-6c4b-444b-8214-5848756ec94b/extract-utilities/0.log" Mar 12 15:20:22 crc kubenswrapper[4778]: I0312 15:20:22.141127 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bthl5_9098edbc-6c4b-444b-8214-5848756ec94b/extract-content/0.log" Mar 12 15:20:22 crc kubenswrapper[4778]: I0312 15:20:22.395892 4778 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-hvmk8_3b062c23-5acd-430d-aa6c-24b48a725594/marketplace-operator/0.log" Mar 12 15:20:22 crc kubenswrapper[4778]: I0312 15:20:22.621507 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-k57lm_1d67fa18-822d-4685-a7a1-5b8b8c39c96a/extract-utilities/0.log" Mar 12 15:20:22 crc kubenswrapper[4778]: I0312 15:20:22.838348 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-k57lm_1d67fa18-822d-4685-a7a1-5b8b8c39c96a/extract-utilities/0.log" Mar 12 15:20:22 crc kubenswrapper[4778]: I0312 15:20:22.891013 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-k57lm_1d67fa18-822d-4685-a7a1-5b8b8c39c96a/extract-content/0.log" Mar 12 15:20:23 crc kubenswrapper[4778]: I0312 15:20:23.021234 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bthl5_9098edbc-6c4b-444b-8214-5848756ec94b/registry-server/0.log" Mar 12 15:20:23 crc kubenswrapper[4778]: I0312 15:20:23.086728 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-k57lm_1d67fa18-822d-4685-a7a1-5b8b8c39c96a/extract-content/0.log" Mar 12 15:20:23 crc kubenswrapper[4778]: I0312 15:20:23.120653 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fhcbf_b5b1dff9-c32b-4a91-863c-10b5ea4bc4ef/registry-server/0.log" Mar 12 15:20:23 crc kubenswrapper[4778]: I0312 15:20:23.217481 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-k57lm_1d67fa18-822d-4685-a7a1-5b8b8c39c96a/extract-content/0.log" Mar 12 15:20:23 crc kubenswrapper[4778]: I0312 15:20:23.248646 4778 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-k57lm_1d67fa18-822d-4685-a7a1-5b8b8c39c96a/extract-utilities/0.log" Mar 12 15:20:23 crc kubenswrapper[4778]: I0312 15:20:23.427620 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lvp8p_ca67e14c-855d-473a-99b0-fe9dabb57916/extract-utilities/0.log" Mar 12 15:20:23 crc kubenswrapper[4778]: I0312 15:20:23.604030 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-k57lm_1d67fa18-822d-4685-a7a1-5b8b8c39c96a/registry-server/0.log" Mar 12 15:20:23 crc kubenswrapper[4778]: I0312 15:20:23.624564 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lvp8p_ca67e14c-855d-473a-99b0-fe9dabb57916/extract-content/0.log" Mar 12 15:20:23 crc kubenswrapper[4778]: I0312 15:20:23.640648 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lvp8p_ca67e14c-855d-473a-99b0-fe9dabb57916/extract-utilities/0.log" Mar 12 15:20:23 crc kubenswrapper[4778]: I0312 15:20:23.657353 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lvp8p_ca67e14c-855d-473a-99b0-fe9dabb57916/extract-content/0.log" Mar 12 15:20:23 crc kubenswrapper[4778]: I0312 15:20:23.844948 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lvp8p_ca67e14c-855d-473a-99b0-fe9dabb57916/extract-utilities/0.log" Mar 12 15:20:23 crc kubenswrapper[4778]: I0312 15:20:23.846469 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lvp8p_ca67e14c-855d-473a-99b0-fe9dabb57916/extract-content/0.log" Mar 12 15:20:23 crc kubenswrapper[4778]: I0312 15:20:23.999484 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lvp8p_ca67e14c-855d-473a-99b0-fe9dabb57916/registry-server/0.log" Mar 12 
15:20:26 crc kubenswrapper[4778]: I0312 15:20:26.973824 4778 scope.go:117] "RemoveContainer" containerID="deb89f96ad2640fa0674d82f73344504fdcc846f9e4815ae8eef2ce9a216dca5" Mar 12 15:20:35 crc kubenswrapper[4778]: I0312 15:20:35.254348 4778 scope.go:117] "RemoveContainer" containerID="28185bb0bf8713237bbead875f67f2cbfd250e5d39c0866c90d3e073957181fc" Mar 12 15:20:35 crc kubenswrapper[4778]: E0312 15:20:35.256584 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 15:20:46 crc kubenswrapper[4778]: I0312 15:20:46.254319 4778 scope.go:117] "RemoveContainer" containerID="28185bb0bf8713237bbead875f67f2cbfd250e5d39c0866c90d3e073957181fc" Mar 12 15:20:46 crc kubenswrapper[4778]: E0312 15:20:46.255009 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 15:20:47 crc kubenswrapper[4778]: E0312 15:20:47.195429 4778 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.32:34204->38.129.56.32:35979: write tcp 38.129.56.32:34204->38.129.56.32:35979: write: broken pipe Mar 12 15:20:47 crc kubenswrapper[4778]: E0312 15:20:47.349766 4778 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.32:34226->38.129.56.32:35979: write tcp 
38.129.56.32:34226->38.129.56.32:35979: write: broken pipe Mar 12 15:21:00 crc kubenswrapper[4778]: I0312 15:21:00.255307 4778 scope.go:117] "RemoveContainer" containerID="28185bb0bf8713237bbead875f67f2cbfd250e5d39c0866c90d3e073957181fc" Mar 12 15:21:00 crc kubenswrapper[4778]: E0312 15:21:00.256076 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 15:21:11 crc kubenswrapper[4778]: I0312 15:21:11.255002 4778 scope.go:117] "RemoveContainer" containerID="28185bb0bf8713237bbead875f67f2cbfd250e5d39c0866c90d3e073957181fc" Mar 12 15:21:11 crc kubenswrapper[4778]: E0312 15:21:11.255957 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 15:21:23 crc kubenswrapper[4778]: I0312 15:21:23.253941 4778 scope.go:117] "RemoveContainer" containerID="28185bb0bf8713237bbead875f67f2cbfd250e5d39c0866c90d3e073957181fc" Mar 12 15:21:23 crc kubenswrapper[4778]: E0312 15:21:23.254699 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 15:21:35 crc kubenswrapper[4778]: I0312 15:21:35.255376 4778 scope.go:117] "RemoveContainer" containerID="28185bb0bf8713237bbead875f67f2cbfd250e5d39c0866c90d3e073957181fc" Mar 12 15:21:35 crc kubenswrapper[4778]: E0312 15:21:35.256171 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 15:21:48 crc kubenswrapper[4778]: I0312 15:21:48.257067 4778 scope.go:117] "RemoveContainer" containerID="28185bb0bf8713237bbead875f67f2cbfd250e5d39c0866c90d3e073957181fc" Mar 12 15:21:48 crc kubenswrapper[4778]: E0312 15:21:48.257987 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 15:22:00 crc kubenswrapper[4778]: I0312 15:22:00.170798 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555482-j5w56"] Mar 12 15:22:00 crc kubenswrapper[4778]: E0312 15:22:00.171700 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e32f842-16d7-484b-a241-e24ea8d3db45" containerName="oc" Mar 12 15:22:00 crc kubenswrapper[4778]: I0312 15:22:00.171711 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e32f842-16d7-484b-a241-e24ea8d3db45" containerName="oc" 
Mar 12 15:22:00 crc kubenswrapper[4778]: I0312 15:22:00.171894 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e32f842-16d7-484b-a241-e24ea8d3db45" containerName="oc" Mar 12 15:22:00 crc kubenswrapper[4778]: I0312 15:22:00.172691 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555482-j5w56" Mar 12 15:22:00 crc kubenswrapper[4778]: I0312 15:22:00.175858 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 15:22:00 crc kubenswrapper[4778]: I0312 15:22:00.176503 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 15:22:00 crc kubenswrapper[4778]: I0312 15:22:00.177086 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 15:22:00 crc kubenswrapper[4778]: I0312 15:22:00.188304 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555482-j5w56"] Mar 12 15:22:00 crc kubenswrapper[4778]: I0312 15:22:00.314458 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn692\" (UniqueName: \"kubernetes.io/projected/343bef0b-4527-4d4b-a357-aa48cf3cbe98-kube-api-access-fn692\") pod \"auto-csr-approver-29555482-j5w56\" (UID: \"343bef0b-4527-4d4b-a357-aa48cf3cbe98\") " pod="openshift-infra/auto-csr-approver-29555482-j5w56" Mar 12 15:22:00 crc kubenswrapper[4778]: I0312 15:22:00.416810 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn692\" (UniqueName: \"kubernetes.io/projected/343bef0b-4527-4d4b-a357-aa48cf3cbe98-kube-api-access-fn692\") pod \"auto-csr-approver-29555482-j5w56\" (UID: \"343bef0b-4527-4d4b-a357-aa48cf3cbe98\") " pod="openshift-infra/auto-csr-approver-29555482-j5w56" Mar 12 15:22:00 crc kubenswrapper[4778]: I0312 
15:22:00.452110 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fn692\" (UniqueName: \"kubernetes.io/projected/343bef0b-4527-4d4b-a357-aa48cf3cbe98-kube-api-access-fn692\") pod \"auto-csr-approver-29555482-j5w56\" (UID: \"343bef0b-4527-4d4b-a357-aa48cf3cbe98\") " pod="openshift-infra/auto-csr-approver-29555482-j5w56" Mar 12 15:22:00 crc kubenswrapper[4778]: I0312 15:22:00.505104 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555482-j5w56" Mar 12 15:22:00 crc kubenswrapper[4778]: I0312 15:22:00.966995 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555482-j5w56"] Mar 12 15:22:00 crc kubenswrapper[4778]: I0312 15:22:00.978583 4778 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 15:22:01 crc kubenswrapper[4778]: I0312 15:22:01.660555 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555482-j5w56" event={"ID":"343bef0b-4527-4d4b-a357-aa48cf3cbe98","Type":"ContainerStarted","Data":"e0962b891e1a4f0b4147284e86bd94c320819fd05c0bc84f20fdfdf90831deb3"} Mar 12 15:22:02 crc kubenswrapper[4778]: I0312 15:22:02.674621 4778 generic.go:334] "Generic (PLEG): container finished" podID="343bef0b-4527-4d4b-a357-aa48cf3cbe98" containerID="5dbc873ca44737bada83a756ec7434fa60be6ec95b9ed80179e9560e37fb59ca" exitCode=0 Mar 12 15:22:02 crc kubenswrapper[4778]: I0312 15:22:02.674871 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555482-j5w56" event={"ID":"343bef0b-4527-4d4b-a357-aa48cf3cbe98","Type":"ContainerDied","Data":"5dbc873ca44737bada83a756ec7434fa60be6ec95b9ed80179e9560e37fb59ca"} Mar 12 15:22:03 crc kubenswrapper[4778]: I0312 15:22:03.254049 4778 scope.go:117] "RemoveContainer" containerID="28185bb0bf8713237bbead875f67f2cbfd250e5d39c0866c90d3e073957181fc" Mar 12 15:22:03 crc 
kubenswrapper[4778]: E0312 15:22:03.254604 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 15:22:04 crc kubenswrapper[4778]: I0312 15:22:04.127378 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555482-j5w56" Mar 12 15:22:04 crc kubenswrapper[4778]: I0312 15:22:04.307419 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fn692\" (UniqueName: \"kubernetes.io/projected/343bef0b-4527-4d4b-a357-aa48cf3cbe98-kube-api-access-fn692\") pod \"343bef0b-4527-4d4b-a357-aa48cf3cbe98\" (UID: \"343bef0b-4527-4d4b-a357-aa48cf3cbe98\") " Mar 12 15:22:04 crc kubenswrapper[4778]: I0312 15:22:04.317853 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/343bef0b-4527-4d4b-a357-aa48cf3cbe98-kube-api-access-fn692" (OuterVolumeSpecName: "kube-api-access-fn692") pod "343bef0b-4527-4d4b-a357-aa48cf3cbe98" (UID: "343bef0b-4527-4d4b-a357-aa48cf3cbe98"). InnerVolumeSpecName "kube-api-access-fn692". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:22:04 crc kubenswrapper[4778]: I0312 15:22:04.412879 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fn692\" (UniqueName: \"kubernetes.io/projected/343bef0b-4527-4d4b-a357-aa48cf3cbe98-kube-api-access-fn692\") on node \"crc\" DevicePath \"\"" Mar 12 15:22:04 crc kubenswrapper[4778]: I0312 15:22:04.700099 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555482-j5w56" event={"ID":"343bef0b-4527-4d4b-a357-aa48cf3cbe98","Type":"ContainerDied","Data":"e0962b891e1a4f0b4147284e86bd94c320819fd05c0bc84f20fdfdf90831deb3"} Mar 12 15:22:04 crc kubenswrapper[4778]: I0312 15:22:04.700151 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0962b891e1a4f0b4147284e86bd94c320819fd05c0bc84f20fdfdf90831deb3" Mar 12 15:22:04 crc kubenswrapper[4778]: I0312 15:22:04.700216 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555482-j5w56" Mar 12 15:22:05 crc kubenswrapper[4778]: I0312 15:22:05.212466 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555476-5dkdj"] Mar 12 15:22:05 crc kubenswrapper[4778]: I0312 15:22:05.223785 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555476-5dkdj"] Mar 12 15:22:06 crc kubenswrapper[4778]: I0312 15:22:06.267913 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e42af641-c33e-4b80-899f-98e5d4e78dad" path="/var/lib/kubelet/pods/e42af641-c33e-4b80-899f-98e5d4e78dad/volumes" Mar 12 15:22:14 crc kubenswrapper[4778]: I0312 15:22:14.255084 4778 scope.go:117] "RemoveContainer" containerID="28185bb0bf8713237bbead875f67f2cbfd250e5d39c0866c90d3e073957181fc" Mar 12 15:22:14 crc kubenswrapper[4778]: E0312 15:22:14.256003 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 15:22:26 crc kubenswrapper[4778]: I0312 15:22:26.253734 4778 scope.go:117] "RemoveContainer" containerID="28185bb0bf8713237bbead875f67f2cbfd250e5d39c0866c90d3e073957181fc" Mar 12 15:22:26 crc kubenswrapper[4778]: E0312 15:22:26.254583 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 15:22:27 crc kubenswrapper[4778]: I0312 15:22:27.080941 4778 scope.go:117] "RemoveContainer" containerID="9e696920a26a473d829a12f6bf276893531dc1bd498cf28bdf23c8b663c144ee" Mar 12 15:22:38 crc kubenswrapper[4778]: I0312 15:22:38.255590 4778 scope.go:117] "RemoveContainer" containerID="28185bb0bf8713237bbead875f67f2cbfd250e5d39c0866c90d3e073957181fc" Mar 12 15:22:38 crc kubenswrapper[4778]: E0312 15:22:38.258603 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 15:22:53 crc kubenswrapper[4778]: I0312 15:22:53.254146 4778 scope.go:117] "RemoveContainer" 
containerID="28185bb0bf8713237bbead875f67f2cbfd250e5d39c0866c90d3e073957181fc" Mar 12 15:22:53 crc kubenswrapper[4778]: E0312 15:22:53.255218 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 15:23:04 crc kubenswrapper[4778]: I0312 15:23:04.254281 4778 scope.go:117] "RemoveContainer" containerID="28185bb0bf8713237bbead875f67f2cbfd250e5d39c0866c90d3e073957181fc" Mar 12 15:23:04 crc kubenswrapper[4778]: E0312 15:23:04.255341 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 15:23:05 crc kubenswrapper[4778]: I0312 15:23:05.433116 4778 generic.go:334] "Generic (PLEG): container finished" podID="050e068e-c05a-4115-8a20-381ecb7747c6" containerID="91a765bb1f0c8a38e71fffab266d01f82b17250fe0665f225840c64771ac6346" exitCode=0 Mar 12 15:23:05 crc kubenswrapper[4778]: I0312 15:23:05.433317 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-t8fhv/must-gather-wpn7c" event={"ID":"050e068e-c05a-4115-8a20-381ecb7747c6","Type":"ContainerDied","Data":"91a765bb1f0c8a38e71fffab266d01f82b17250fe0665f225840c64771ac6346"} Mar 12 15:23:05 crc kubenswrapper[4778]: I0312 15:23:05.434208 4778 scope.go:117] "RemoveContainer" containerID="91a765bb1f0c8a38e71fffab266d01f82b17250fe0665f225840c64771ac6346" 
Mar 12 15:23:06 crc kubenswrapper[4778]: I0312 15:23:06.299020 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-t8fhv_must-gather-wpn7c_050e068e-c05a-4115-8a20-381ecb7747c6/gather/0.log" Mar 12 15:23:15 crc kubenswrapper[4778]: I0312 15:23:15.254073 4778 scope.go:117] "RemoveContainer" containerID="28185bb0bf8713237bbead875f67f2cbfd250e5d39c0866c90d3e073957181fc" Mar 12 15:23:15 crc kubenswrapper[4778]: E0312 15:23:15.255134 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 15:23:23 crc kubenswrapper[4778]: I0312 15:23:23.573387 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-t8fhv/must-gather-wpn7c"] Mar 12 15:23:23 crc kubenswrapper[4778]: I0312 15:23:23.575581 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-t8fhv/must-gather-wpn7c" podUID="050e068e-c05a-4115-8a20-381ecb7747c6" containerName="copy" containerID="cri-o://115cc12adee3c5d75407f3615123df70b34b4ee2bb750778748ea73d75b1e2c3" gracePeriod=2 Mar 12 15:23:23 crc kubenswrapper[4778]: I0312 15:23:23.587644 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-t8fhv/must-gather-wpn7c"] Mar 12 15:23:23 crc kubenswrapper[4778]: I0312 15:23:23.987882 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-t8fhv_must-gather-wpn7c_050e068e-c05a-4115-8a20-381ecb7747c6/copy/0.log" Mar 12 15:23:23 crc kubenswrapper[4778]: I0312 15:23:23.988806 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-t8fhv/must-gather-wpn7c" Mar 12 15:23:24 crc kubenswrapper[4778]: I0312 15:23:24.147951 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bskd9\" (UniqueName: \"kubernetes.io/projected/050e068e-c05a-4115-8a20-381ecb7747c6-kube-api-access-bskd9\") pod \"050e068e-c05a-4115-8a20-381ecb7747c6\" (UID: \"050e068e-c05a-4115-8a20-381ecb7747c6\") " Mar 12 15:23:24 crc kubenswrapper[4778]: I0312 15:23:24.148804 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/050e068e-c05a-4115-8a20-381ecb7747c6-must-gather-output\") pod \"050e068e-c05a-4115-8a20-381ecb7747c6\" (UID: \"050e068e-c05a-4115-8a20-381ecb7747c6\") " Mar 12 15:23:24 crc kubenswrapper[4778]: I0312 15:23:24.158551 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/050e068e-c05a-4115-8a20-381ecb7747c6-kube-api-access-bskd9" (OuterVolumeSpecName: "kube-api-access-bskd9") pod "050e068e-c05a-4115-8a20-381ecb7747c6" (UID: "050e068e-c05a-4115-8a20-381ecb7747c6"). InnerVolumeSpecName "kube-api-access-bskd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:23:24 crc kubenswrapper[4778]: I0312 15:23:24.252788 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bskd9\" (UniqueName: \"kubernetes.io/projected/050e068e-c05a-4115-8a20-381ecb7747c6-kube-api-access-bskd9\") on node \"crc\" DevicePath \"\"" Mar 12 15:23:24 crc kubenswrapper[4778]: I0312 15:23:24.452739 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/050e068e-c05a-4115-8a20-381ecb7747c6-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "050e068e-c05a-4115-8a20-381ecb7747c6" (UID: "050e068e-c05a-4115-8a20-381ecb7747c6"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:23:24 crc kubenswrapper[4778]: I0312 15:23:24.457024 4778 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/050e068e-c05a-4115-8a20-381ecb7747c6-must-gather-output\") on node \"crc\" DevicePath \"\"" Mar 12 15:23:24 crc kubenswrapper[4778]: I0312 15:23:24.650638 4778 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-t8fhv_must-gather-wpn7c_050e068e-c05a-4115-8a20-381ecb7747c6/copy/0.log" Mar 12 15:23:24 crc kubenswrapper[4778]: I0312 15:23:24.651015 4778 generic.go:334] "Generic (PLEG): container finished" podID="050e068e-c05a-4115-8a20-381ecb7747c6" containerID="115cc12adee3c5d75407f3615123df70b34b4ee2bb750778748ea73d75b1e2c3" exitCode=143 Mar 12 15:23:24 crc kubenswrapper[4778]: I0312 15:23:24.651069 4778 scope.go:117] "RemoveContainer" containerID="115cc12adee3c5d75407f3615123df70b34b4ee2bb750778748ea73d75b1e2c3" Mar 12 15:23:24 crc kubenswrapper[4778]: I0312 15:23:24.651251 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-t8fhv/must-gather-wpn7c" Mar 12 15:23:24 crc kubenswrapper[4778]: I0312 15:23:24.686346 4778 scope.go:117] "RemoveContainer" containerID="91a765bb1f0c8a38e71fffab266d01f82b17250fe0665f225840c64771ac6346" Mar 12 15:23:24 crc kubenswrapper[4778]: I0312 15:23:24.763086 4778 scope.go:117] "RemoveContainer" containerID="115cc12adee3c5d75407f3615123df70b34b4ee2bb750778748ea73d75b1e2c3" Mar 12 15:23:24 crc kubenswrapper[4778]: E0312 15:23:24.763554 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"115cc12adee3c5d75407f3615123df70b34b4ee2bb750778748ea73d75b1e2c3\": container with ID starting with 115cc12adee3c5d75407f3615123df70b34b4ee2bb750778748ea73d75b1e2c3 not found: ID does not exist" containerID="115cc12adee3c5d75407f3615123df70b34b4ee2bb750778748ea73d75b1e2c3" Mar 12 15:23:24 crc kubenswrapper[4778]: I0312 15:23:24.763587 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"115cc12adee3c5d75407f3615123df70b34b4ee2bb750778748ea73d75b1e2c3"} err="failed to get container status \"115cc12adee3c5d75407f3615123df70b34b4ee2bb750778748ea73d75b1e2c3\": rpc error: code = NotFound desc = could not find container \"115cc12adee3c5d75407f3615123df70b34b4ee2bb750778748ea73d75b1e2c3\": container with ID starting with 115cc12adee3c5d75407f3615123df70b34b4ee2bb750778748ea73d75b1e2c3 not found: ID does not exist" Mar 12 15:23:24 crc kubenswrapper[4778]: I0312 15:23:24.763608 4778 scope.go:117] "RemoveContainer" containerID="91a765bb1f0c8a38e71fffab266d01f82b17250fe0665f225840c64771ac6346" Mar 12 15:23:24 crc kubenswrapper[4778]: E0312 15:23:24.763891 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91a765bb1f0c8a38e71fffab266d01f82b17250fe0665f225840c64771ac6346\": container with ID starting with 
91a765bb1f0c8a38e71fffab266d01f82b17250fe0665f225840c64771ac6346 not found: ID does not exist" containerID="91a765bb1f0c8a38e71fffab266d01f82b17250fe0665f225840c64771ac6346" Mar 12 15:23:24 crc kubenswrapper[4778]: I0312 15:23:24.763912 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91a765bb1f0c8a38e71fffab266d01f82b17250fe0665f225840c64771ac6346"} err="failed to get container status \"91a765bb1f0c8a38e71fffab266d01f82b17250fe0665f225840c64771ac6346\": rpc error: code = NotFound desc = could not find container \"91a765bb1f0c8a38e71fffab266d01f82b17250fe0665f225840c64771ac6346\": container with ID starting with 91a765bb1f0c8a38e71fffab266d01f82b17250fe0665f225840c64771ac6346 not found: ID does not exist" Mar 12 15:23:26 crc kubenswrapper[4778]: I0312 15:23:26.266716 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="050e068e-c05a-4115-8a20-381ecb7747c6" path="/var/lib/kubelet/pods/050e068e-c05a-4115-8a20-381ecb7747c6/volumes" Mar 12 15:23:27 crc kubenswrapper[4778]: I0312 15:23:27.192358 4778 scope.go:117] "RemoveContainer" containerID="bbbdcd9b6771dbb5cbdf7ae29f037ff7ca335ae97a393f2a8c10c715cd6d06ac" Mar 12 15:23:30 crc kubenswrapper[4778]: I0312 15:23:30.255414 4778 scope.go:117] "RemoveContainer" containerID="28185bb0bf8713237bbead875f67f2cbfd250e5d39c0866c90d3e073957181fc" Mar 12 15:23:30 crc kubenswrapper[4778]: E0312 15:23:30.256200 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 15:23:45 crc kubenswrapper[4778]: I0312 15:23:45.254311 4778 scope.go:117] "RemoveContainer" 
containerID="28185bb0bf8713237bbead875f67f2cbfd250e5d39c0866c90d3e073957181fc" Mar 12 15:23:45 crc kubenswrapper[4778]: E0312 15:23:45.255774 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 15:23:49 crc kubenswrapper[4778]: I0312 15:23:49.503821 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5b52b"] Mar 12 15:23:49 crc kubenswrapper[4778]: E0312 15:23:49.505301 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="050e068e-c05a-4115-8a20-381ecb7747c6" containerName="gather" Mar 12 15:23:49 crc kubenswrapper[4778]: I0312 15:23:49.505332 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="050e068e-c05a-4115-8a20-381ecb7747c6" containerName="gather" Mar 12 15:23:49 crc kubenswrapper[4778]: E0312 15:23:49.505371 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="050e068e-c05a-4115-8a20-381ecb7747c6" containerName="copy" Mar 12 15:23:49 crc kubenswrapper[4778]: I0312 15:23:49.505384 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="050e068e-c05a-4115-8a20-381ecb7747c6" containerName="copy" Mar 12 15:23:49 crc kubenswrapper[4778]: E0312 15:23:49.505412 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="343bef0b-4527-4d4b-a357-aa48cf3cbe98" containerName="oc" Mar 12 15:23:49 crc kubenswrapper[4778]: I0312 15:23:49.505428 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="343bef0b-4527-4d4b-a357-aa48cf3cbe98" containerName="oc" Mar 12 15:23:49 crc kubenswrapper[4778]: I0312 15:23:49.505862 4778 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="050e068e-c05a-4115-8a20-381ecb7747c6" containerName="gather" Mar 12 15:23:49 crc kubenswrapper[4778]: I0312 15:23:49.505902 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="050e068e-c05a-4115-8a20-381ecb7747c6" containerName="copy" Mar 12 15:23:49 crc kubenswrapper[4778]: I0312 15:23:49.505935 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="343bef0b-4527-4d4b-a357-aa48cf3cbe98" containerName="oc" Mar 12 15:23:49 crc kubenswrapper[4778]: I0312 15:23:49.509227 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5b52b" Mar 12 15:23:49 crc kubenswrapper[4778]: I0312 15:23:49.517111 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5b52b"] Mar 12 15:23:49 crc kubenswrapper[4778]: I0312 15:23:49.694814 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0302835-ac50-40ca-bc24-02064f6720c0-utilities\") pod \"community-operators-5b52b\" (UID: \"d0302835-ac50-40ca-bc24-02064f6720c0\") " pod="openshift-marketplace/community-operators-5b52b" Mar 12 15:23:49 crc kubenswrapper[4778]: I0312 15:23:49.695156 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bldzq\" (UniqueName: \"kubernetes.io/projected/d0302835-ac50-40ca-bc24-02064f6720c0-kube-api-access-bldzq\") pod \"community-operators-5b52b\" (UID: \"d0302835-ac50-40ca-bc24-02064f6720c0\") " pod="openshift-marketplace/community-operators-5b52b" Mar 12 15:23:49 crc kubenswrapper[4778]: I0312 15:23:49.695210 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0302835-ac50-40ca-bc24-02064f6720c0-catalog-content\") pod \"community-operators-5b52b\" (UID: 
\"d0302835-ac50-40ca-bc24-02064f6720c0\") " pod="openshift-marketplace/community-operators-5b52b" Mar 12 15:23:49 crc kubenswrapper[4778]: I0312 15:23:49.796648 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0302835-ac50-40ca-bc24-02064f6720c0-utilities\") pod \"community-operators-5b52b\" (UID: \"d0302835-ac50-40ca-bc24-02064f6720c0\") " pod="openshift-marketplace/community-operators-5b52b" Mar 12 15:23:49 crc kubenswrapper[4778]: I0312 15:23:49.796793 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bldzq\" (UniqueName: \"kubernetes.io/projected/d0302835-ac50-40ca-bc24-02064f6720c0-kube-api-access-bldzq\") pod \"community-operators-5b52b\" (UID: \"d0302835-ac50-40ca-bc24-02064f6720c0\") " pod="openshift-marketplace/community-operators-5b52b" Mar 12 15:23:49 crc kubenswrapper[4778]: I0312 15:23:49.796826 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0302835-ac50-40ca-bc24-02064f6720c0-catalog-content\") pod \"community-operators-5b52b\" (UID: \"d0302835-ac50-40ca-bc24-02064f6720c0\") " pod="openshift-marketplace/community-operators-5b52b" Mar 12 15:23:49 crc kubenswrapper[4778]: I0312 15:23:49.797104 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0302835-ac50-40ca-bc24-02064f6720c0-utilities\") pod \"community-operators-5b52b\" (UID: \"d0302835-ac50-40ca-bc24-02064f6720c0\") " pod="openshift-marketplace/community-operators-5b52b" Mar 12 15:23:49 crc kubenswrapper[4778]: I0312 15:23:49.797299 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0302835-ac50-40ca-bc24-02064f6720c0-catalog-content\") pod \"community-operators-5b52b\" (UID: \"d0302835-ac50-40ca-bc24-02064f6720c0\") 
" pod="openshift-marketplace/community-operators-5b52b" Mar 12 15:23:49 crc kubenswrapper[4778]: I0312 15:23:49.820802 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bldzq\" (UniqueName: \"kubernetes.io/projected/d0302835-ac50-40ca-bc24-02064f6720c0-kube-api-access-bldzq\") pod \"community-operators-5b52b\" (UID: \"d0302835-ac50-40ca-bc24-02064f6720c0\") " pod="openshift-marketplace/community-operators-5b52b" Mar 12 15:23:49 crc kubenswrapper[4778]: I0312 15:23:49.853781 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5b52b" Mar 12 15:23:50 crc kubenswrapper[4778]: I0312 15:23:50.403259 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5b52b"] Mar 12 15:23:50 crc kubenswrapper[4778]: W0312 15:23:50.405386 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0302835_ac50_40ca_bc24_02064f6720c0.slice/crio-54a89ef0cd8b4fb093d17920ff45980516ffe111e030649f871975b6031a49aa WatchSource:0}: Error finding container 54a89ef0cd8b4fb093d17920ff45980516ffe111e030649f871975b6031a49aa: Status 404 returned error can't find the container with id 54a89ef0cd8b4fb093d17920ff45980516ffe111e030649f871975b6031a49aa Mar 12 15:23:51 crc kubenswrapper[4778]: I0312 15:23:51.229994 4778 generic.go:334] "Generic (PLEG): container finished" podID="d0302835-ac50-40ca-bc24-02064f6720c0" containerID="658f3579c17d5dce563f24c175a91939dba1f7d999430cb2ca1ce286f45d2937" exitCode=0 Mar 12 15:23:51 crc kubenswrapper[4778]: I0312 15:23:51.230062 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5b52b" event={"ID":"d0302835-ac50-40ca-bc24-02064f6720c0","Type":"ContainerDied","Data":"658f3579c17d5dce563f24c175a91939dba1f7d999430cb2ca1ce286f45d2937"} Mar 12 15:23:51 crc kubenswrapper[4778]: I0312 
15:23:51.230296 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5b52b" event={"ID":"d0302835-ac50-40ca-bc24-02064f6720c0","Type":"ContainerStarted","Data":"54a89ef0cd8b4fb093d17920ff45980516ffe111e030649f871975b6031a49aa"} Mar 12 15:23:59 crc kubenswrapper[4778]: I0312 15:23:59.324068 4778 generic.go:334] "Generic (PLEG): container finished" podID="d0302835-ac50-40ca-bc24-02064f6720c0" containerID="0439ee2cdec34f7a395d09cccb8281543088efaa6eab1fd3af09a3ffa44df02e" exitCode=0 Mar 12 15:23:59 crc kubenswrapper[4778]: I0312 15:23:59.324226 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5b52b" event={"ID":"d0302835-ac50-40ca-bc24-02064f6720c0","Type":"ContainerDied","Data":"0439ee2cdec34f7a395d09cccb8281543088efaa6eab1fd3af09a3ffa44df02e"} Mar 12 15:24:00 crc kubenswrapper[4778]: I0312 15:24:00.174222 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555484-jwvnw"] Mar 12 15:24:00 crc kubenswrapper[4778]: I0312 15:24:00.175487 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555484-jwvnw" Mar 12 15:24:00 crc kubenswrapper[4778]: I0312 15:24:00.180491 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 15:24:00 crc kubenswrapper[4778]: I0312 15:24:00.180527 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 15:24:00 crc kubenswrapper[4778]: I0312 15:24:00.180754 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 15:24:00 crc kubenswrapper[4778]: I0312 15:24:00.186754 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555484-jwvnw"] Mar 12 15:24:00 crc kubenswrapper[4778]: I0312 15:24:00.254168 4778 scope.go:117] "RemoveContainer" containerID="28185bb0bf8713237bbead875f67f2cbfd250e5d39c0866c90d3e073957181fc" Mar 12 15:24:00 crc kubenswrapper[4778]: E0312 15:24:00.254447 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 15:24:00 crc kubenswrapper[4778]: I0312 15:24:00.304500 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89vlq\" (UniqueName: \"kubernetes.io/projected/146dd143-ec48-4ae2-9989-08072e1c770f-kube-api-access-89vlq\") pod \"auto-csr-approver-29555484-jwvnw\" (UID: \"146dd143-ec48-4ae2-9989-08072e1c770f\") " pod="openshift-infra/auto-csr-approver-29555484-jwvnw" Mar 12 15:24:00 crc kubenswrapper[4778]: I0312 15:24:00.339990 4778 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-5b52b" event={"ID":"d0302835-ac50-40ca-bc24-02064f6720c0","Type":"ContainerStarted","Data":"e8daf4e328623f8f71fbf304d484442e97e1f45de64307cc9e97d44dc3c43211"} Mar 12 15:24:00 crc kubenswrapper[4778]: I0312 15:24:00.370930 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5b52b" podStartSLOduration=2.799641571 podStartE2EDuration="11.370901783s" podCreationTimestamp="2026-03-12 15:23:49 +0000 UTC" firstStartedPulling="2026-03-12 15:23:51.232251491 +0000 UTC m=+8049.680946887" lastFinishedPulling="2026-03-12 15:23:59.803511703 +0000 UTC m=+8058.252207099" observedRunningTime="2026-03-12 15:24:00.359489118 +0000 UTC m=+8058.808184564" watchObservedRunningTime="2026-03-12 15:24:00.370901783 +0000 UTC m=+8058.819597169" Mar 12 15:24:00 crc kubenswrapper[4778]: I0312 15:24:00.407610 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89vlq\" (UniqueName: \"kubernetes.io/projected/146dd143-ec48-4ae2-9989-08072e1c770f-kube-api-access-89vlq\") pod \"auto-csr-approver-29555484-jwvnw\" (UID: \"146dd143-ec48-4ae2-9989-08072e1c770f\") " pod="openshift-infra/auto-csr-approver-29555484-jwvnw" Mar 12 15:24:00 crc kubenswrapper[4778]: I0312 15:24:00.430636 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89vlq\" (UniqueName: \"kubernetes.io/projected/146dd143-ec48-4ae2-9989-08072e1c770f-kube-api-access-89vlq\") pod \"auto-csr-approver-29555484-jwvnw\" (UID: \"146dd143-ec48-4ae2-9989-08072e1c770f\") " pod="openshift-infra/auto-csr-approver-29555484-jwvnw" Mar 12 15:24:00 crc kubenswrapper[4778]: I0312 15:24:00.497569 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555484-jwvnw" Mar 12 15:24:00 crc kubenswrapper[4778]: I0312 15:24:00.970171 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555484-jwvnw"] Mar 12 15:24:01 crc kubenswrapper[4778]: I0312 15:24:01.352291 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555484-jwvnw" event={"ID":"146dd143-ec48-4ae2-9989-08072e1c770f","Type":"ContainerStarted","Data":"9150841d8ba3438fe4bc2bce5a696cc25787ef9bf5844155381b66cfc43c2fa8"} Mar 12 15:24:03 crc kubenswrapper[4778]: I0312 15:24:03.375070 4778 generic.go:334] "Generic (PLEG): container finished" podID="146dd143-ec48-4ae2-9989-08072e1c770f" containerID="d3066e8a82263799c1c48ebb3267168ff31eda2585e942011782a116e2d5918f" exitCode=0 Mar 12 15:24:03 crc kubenswrapper[4778]: I0312 15:24:03.375130 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555484-jwvnw" event={"ID":"146dd143-ec48-4ae2-9989-08072e1c770f","Type":"ContainerDied","Data":"d3066e8a82263799c1c48ebb3267168ff31eda2585e942011782a116e2d5918f"} Mar 12 15:24:04 crc kubenswrapper[4778]: I0312 15:24:04.753002 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555484-jwvnw" Mar 12 15:24:04 crc kubenswrapper[4778]: I0312 15:24:04.911928 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89vlq\" (UniqueName: \"kubernetes.io/projected/146dd143-ec48-4ae2-9989-08072e1c770f-kube-api-access-89vlq\") pod \"146dd143-ec48-4ae2-9989-08072e1c770f\" (UID: \"146dd143-ec48-4ae2-9989-08072e1c770f\") " Mar 12 15:24:04 crc kubenswrapper[4778]: I0312 15:24:04.918807 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/146dd143-ec48-4ae2-9989-08072e1c770f-kube-api-access-89vlq" (OuterVolumeSpecName: "kube-api-access-89vlq") pod "146dd143-ec48-4ae2-9989-08072e1c770f" (UID: "146dd143-ec48-4ae2-9989-08072e1c770f"). InnerVolumeSpecName "kube-api-access-89vlq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:24:05 crc kubenswrapper[4778]: I0312 15:24:05.014212 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89vlq\" (UniqueName: \"kubernetes.io/projected/146dd143-ec48-4ae2-9989-08072e1c770f-kube-api-access-89vlq\") on node \"crc\" DevicePath \"\"" Mar 12 15:24:05 crc kubenswrapper[4778]: I0312 15:24:05.402845 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555484-jwvnw" event={"ID":"146dd143-ec48-4ae2-9989-08072e1c770f","Type":"ContainerDied","Data":"9150841d8ba3438fe4bc2bce5a696cc25787ef9bf5844155381b66cfc43c2fa8"} Mar 12 15:24:05 crc kubenswrapper[4778]: I0312 15:24:05.403232 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9150841d8ba3438fe4bc2bce5a696cc25787ef9bf5844155381b66cfc43c2fa8" Mar 12 15:24:05 crc kubenswrapper[4778]: I0312 15:24:05.402912 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555484-jwvnw" Mar 12 15:24:05 crc kubenswrapper[4778]: I0312 15:24:05.836821 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555478-fcjb2"] Mar 12 15:24:05 crc kubenswrapper[4778]: I0312 15:24:05.848472 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555478-fcjb2"] Mar 12 15:24:06 crc kubenswrapper[4778]: I0312 15:24:06.274054 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="341f8c65-027d-48d0-b0c2-b843867c2413" path="/var/lib/kubelet/pods/341f8c65-027d-48d0-b0c2-b843867c2413/volumes" Mar 12 15:24:09 crc kubenswrapper[4778]: I0312 15:24:09.854722 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5b52b" Mar 12 15:24:09 crc kubenswrapper[4778]: I0312 15:24:09.855142 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5b52b" Mar 12 15:24:09 crc kubenswrapper[4778]: I0312 15:24:09.909887 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5b52b" Mar 12 15:24:10 crc kubenswrapper[4778]: I0312 15:24:10.537632 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5b52b" Mar 12 15:24:10 crc kubenswrapper[4778]: I0312 15:24:10.641774 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5b52b"] Mar 12 15:24:10 crc kubenswrapper[4778]: I0312 15:24:10.721583 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bthl5"] Mar 12 15:24:10 crc kubenswrapper[4778]: I0312 15:24:10.722439 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bthl5" 
podUID="9098edbc-6c4b-444b-8214-5848756ec94b" containerName="registry-server" containerID="cri-o://db9178efd3232af4d713b97808176864833cbcacd596ac79e639c4e1dcb27c64" gracePeriod=2 Mar 12 15:24:11 crc kubenswrapper[4778]: I0312 15:24:11.189688 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bthl5" Mar 12 15:24:11 crc kubenswrapper[4778]: I0312 15:24:11.254588 4778 scope.go:117] "RemoveContainer" containerID="28185bb0bf8713237bbead875f67f2cbfd250e5d39c0866c90d3e073957181fc" Mar 12 15:24:11 crc kubenswrapper[4778]: E0312 15:24:11.254858 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" Mar 12 15:24:11 crc kubenswrapper[4778]: I0312 15:24:11.361260 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9098edbc-6c4b-444b-8214-5848756ec94b-catalog-content\") pod \"9098edbc-6c4b-444b-8214-5848756ec94b\" (UID: \"9098edbc-6c4b-444b-8214-5848756ec94b\") " Mar 12 15:24:11 crc kubenswrapper[4778]: I0312 15:24:11.361374 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2cq2\" (UniqueName: \"kubernetes.io/projected/9098edbc-6c4b-444b-8214-5848756ec94b-kube-api-access-p2cq2\") pod \"9098edbc-6c4b-444b-8214-5848756ec94b\" (UID: \"9098edbc-6c4b-444b-8214-5848756ec94b\") " Mar 12 15:24:11 crc kubenswrapper[4778]: I0312 15:24:11.361476 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/9098edbc-6c4b-444b-8214-5848756ec94b-utilities\") pod \"9098edbc-6c4b-444b-8214-5848756ec94b\" (UID: \"9098edbc-6c4b-444b-8214-5848756ec94b\") " Mar 12 15:24:11 crc kubenswrapper[4778]: I0312 15:24:11.365781 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9098edbc-6c4b-444b-8214-5848756ec94b-utilities" (OuterVolumeSpecName: "utilities") pod "9098edbc-6c4b-444b-8214-5848756ec94b" (UID: "9098edbc-6c4b-444b-8214-5848756ec94b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:24:11 crc kubenswrapper[4778]: I0312 15:24:11.373592 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9098edbc-6c4b-444b-8214-5848756ec94b-kube-api-access-p2cq2" (OuterVolumeSpecName: "kube-api-access-p2cq2") pod "9098edbc-6c4b-444b-8214-5848756ec94b" (UID: "9098edbc-6c4b-444b-8214-5848756ec94b"). InnerVolumeSpecName "kube-api-access-p2cq2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:24:11 crc kubenswrapper[4778]: I0312 15:24:11.423794 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9098edbc-6c4b-444b-8214-5848756ec94b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9098edbc-6c4b-444b-8214-5848756ec94b" (UID: "9098edbc-6c4b-444b-8214-5848756ec94b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:24:11 crc kubenswrapper[4778]: I0312 15:24:11.465679 4778 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9098edbc-6c4b-444b-8214-5848756ec94b-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 15:24:11 crc kubenswrapper[4778]: I0312 15:24:11.465747 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2cq2\" (UniqueName: \"kubernetes.io/projected/9098edbc-6c4b-444b-8214-5848756ec94b-kube-api-access-p2cq2\") on node \"crc\" DevicePath \"\"" Mar 12 15:24:11 crc kubenswrapper[4778]: I0312 15:24:11.465767 4778 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9098edbc-6c4b-444b-8214-5848756ec94b-utilities\") on node \"crc\" DevicePath \"\"" Mar 12 15:24:11 crc kubenswrapper[4778]: I0312 15:24:11.487815 4778 generic.go:334] "Generic (PLEG): container finished" podID="9098edbc-6c4b-444b-8214-5848756ec94b" containerID="db9178efd3232af4d713b97808176864833cbcacd596ac79e639c4e1dcb27c64" exitCode=0 Mar 12 15:24:11 crc kubenswrapper[4778]: I0312 15:24:11.487902 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bthl5" Mar 12 15:24:11 crc kubenswrapper[4778]: I0312 15:24:11.487925 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bthl5" event={"ID":"9098edbc-6c4b-444b-8214-5848756ec94b","Type":"ContainerDied","Data":"db9178efd3232af4d713b97808176864833cbcacd596ac79e639c4e1dcb27c64"} Mar 12 15:24:11 crc kubenswrapper[4778]: I0312 15:24:11.488003 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bthl5" event={"ID":"9098edbc-6c4b-444b-8214-5848756ec94b","Type":"ContainerDied","Data":"7e43af4c8ac9f109aea2498c7d43bec693ffd79761be06aa8860c32373c46a08"} Mar 12 15:24:11 crc kubenswrapper[4778]: I0312 15:24:11.488031 4778 scope.go:117] "RemoveContainer" containerID="db9178efd3232af4d713b97808176864833cbcacd596ac79e639c4e1dcb27c64" Mar 12 15:24:11 crc kubenswrapper[4778]: I0312 15:24:11.518472 4778 scope.go:117] "RemoveContainer" containerID="f5209881605c74797474a49d590f6fd719f3b29aca37efbdd12b057d5f338a88" Mar 12 15:24:11 crc kubenswrapper[4778]: I0312 15:24:11.536835 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bthl5"] Mar 12 15:24:11 crc kubenswrapper[4778]: I0312 15:24:11.551590 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bthl5"] Mar 12 15:24:11 crc kubenswrapper[4778]: I0312 15:24:11.555587 4778 scope.go:117] "RemoveContainer" containerID="f6d9f45cc4cf5a401a409f223998e4fac1829a853cedf559f6486e5de7a6a682" Mar 12 15:24:11 crc kubenswrapper[4778]: I0312 15:24:11.591943 4778 scope.go:117] "RemoveContainer" containerID="db9178efd3232af4d713b97808176864833cbcacd596ac79e639c4e1dcb27c64" Mar 12 15:24:11 crc kubenswrapper[4778]: E0312 15:24:11.592423 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"db9178efd3232af4d713b97808176864833cbcacd596ac79e639c4e1dcb27c64\": container with ID starting with db9178efd3232af4d713b97808176864833cbcacd596ac79e639c4e1dcb27c64 not found: ID does not exist" containerID="db9178efd3232af4d713b97808176864833cbcacd596ac79e639c4e1dcb27c64"
Mar 12 15:24:11 crc kubenswrapper[4778]: I0312 15:24:11.592458 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db9178efd3232af4d713b97808176864833cbcacd596ac79e639c4e1dcb27c64"} err="failed to get container status \"db9178efd3232af4d713b97808176864833cbcacd596ac79e639c4e1dcb27c64\": rpc error: code = NotFound desc = could not find container \"db9178efd3232af4d713b97808176864833cbcacd596ac79e639c4e1dcb27c64\": container with ID starting with db9178efd3232af4d713b97808176864833cbcacd596ac79e639c4e1dcb27c64 not found: ID does not exist"
Mar 12 15:24:11 crc kubenswrapper[4778]: I0312 15:24:11.592480 4778 scope.go:117] "RemoveContainer" containerID="f5209881605c74797474a49d590f6fd719f3b29aca37efbdd12b057d5f338a88"
Mar 12 15:24:11 crc kubenswrapper[4778]: E0312 15:24:11.592849 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5209881605c74797474a49d590f6fd719f3b29aca37efbdd12b057d5f338a88\": container with ID starting with f5209881605c74797474a49d590f6fd719f3b29aca37efbdd12b057d5f338a88 not found: ID does not exist" containerID="f5209881605c74797474a49d590f6fd719f3b29aca37efbdd12b057d5f338a88"
Mar 12 15:24:11 crc kubenswrapper[4778]: I0312 15:24:11.592896 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5209881605c74797474a49d590f6fd719f3b29aca37efbdd12b057d5f338a88"} err="failed to get container status \"f5209881605c74797474a49d590f6fd719f3b29aca37efbdd12b057d5f338a88\": rpc error: code = NotFound desc = could not find container \"f5209881605c74797474a49d590f6fd719f3b29aca37efbdd12b057d5f338a88\": container with ID starting with f5209881605c74797474a49d590f6fd719f3b29aca37efbdd12b057d5f338a88 not found: ID does not exist"
Mar 12 15:24:11 crc kubenswrapper[4778]: I0312 15:24:11.592927 4778 scope.go:117] "RemoveContainer" containerID="f6d9f45cc4cf5a401a409f223998e4fac1829a853cedf559f6486e5de7a6a682"
Mar 12 15:24:11 crc kubenswrapper[4778]: E0312 15:24:11.593333 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6d9f45cc4cf5a401a409f223998e4fac1829a853cedf559f6486e5de7a6a682\": container with ID starting with f6d9f45cc4cf5a401a409f223998e4fac1829a853cedf559f6486e5de7a6a682 not found: ID does not exist" containerID="f6d9f45cc4cf5a401a409f223998e4fac1829a853cedf559f6486e5de7a6a682"
Mar 12 15:24:11 crc kubenswrapper[4778]: I0312 15:24:11.593360 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6d9f45cc4cf5a401a409f223998e4fac1829a853cedf559f6486e5de7a6a682"} err="failed to get container status \"f6d9f45cc4cf5a401a409f223998e4fac1829a853cedf559f6486e5de7a6a682\": rpc error: code = NotFound desc = could not find container \"f6d9f45cc4cf5a401a409f223998e4fac1829a853cedf559f6486e5de7a6a682\": container with ID starting with f6d9f45cc4cf5a401a409f223998e4fac1829a853cedf559f6486e5de7a6a682 not found: ID does not exist"
Mar 12 15:24:12 crc kubenswrapper[4778]: I0312 15:24:12.271892 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9098edbc-6c4b-444b-8214-5848756ec94b" path="/var/lib/kubelet/pods/9098edbc-6c4b-444b-8214-5848756ec94b/volumes"
Mar 12 15:24:13 crc kubenswrapper[4778]: I0312 15:24:13.164379 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-624v7"]
Mar 12 15:24:13 crc kubenswrapper[4778]: E0312 15:24:13.165155 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9098edbc-6c4b-444b-8214-5848756ec94b" containerName="extract-utilities"
Mar 12 15:24:13 crc kubenswrapper[4778]: I0312 15:24:13.165427 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="9098edbc-6c4b-444b-8214-5848756ec94b" containerName="extract-utilities"
Mar 12 15:24:13 crc kubenswrapper[4778]: E0312 15:24:13.165454 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9098edbc-6c4b-444b-8214-5848756ec94b" containerName="extract-content"
Mar 12 15:24:13 crc kubenswrapper[4778]: I0312 15:24:13.165464 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="9098edbc-6c4b-444b-8214-5848756ec94b" containerName="extract-content"
Mar 12 15:24:13 crc kubenswrapper[4778]: E0312 15:24:13.165479 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9098edbc-6c4b-444b-8214-5848756ec94b" containerName="registry-server"
Mar 12 15:24:13 crc kubenswrapper[4778]: I0312 15:24:13.165501 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="9098edbc-6c4b-444b-8214-5848756ec94b" containerName="registry-server"
Mar 12 15:24:13 crc kubenswrapper[4778]: E0312 15:24:13.165543 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="146dd143-ec48-4ae2-9989-08072e1c770f" containerName="oc"
Mar 12 15:24:13 crc kubenswrapper[4778]: I0312 15:24:13.165552 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="146dd143-ec48-4ae2-9989-08072e1c770f" containerName="oc"
Mar 12 15:24:13 crc kubenswrapper[4778]: I0312 15:24:13.165770 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="9098edbc-6c4b-444b-8214-5848756ec94b" containerName="registry-server"
Mar 12 15:24:13 crc kubenswrapper[4778]: I0312 15:24:13.165800 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="146dd143-ec48-4ae2-9989-08072e1c770f" containerName="oc"
Mar 12 15:24:13 crc kubenswrapper[4778]: I0312 15:24:13.167668 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-624v7"
Mar 12 15:24:13 crc kubenswrapper[4778]: I0312 15:24:13.196178 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-624v7"]
Mar 12 15:24:13 crc kubenswrapper[4778]: I0312 15:24:13.311652 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ece9e25-935e-4afe-90b9-91e6e2da84b5-utilities\") pod \"redhat-marketplace-624v7\" (UID: \"4ece9e25-935e-4afe-90b9-91e6e2da84b5\") " pod="openshift-marketplace/redhat-marketplace-624v7"
Mar 12 15:24:13 crc kubenswrapper[4778]: I0312 15:24:13.311908 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ece9e25-935e-4afe-90b9-91e6e2da84b5-catalog-content\") pod \"redhat-marketplace-624v7\" (UID: \"4ece9e25-935e-4afe-90b9-91e6e2da84b5\") " pod="openshift-marketplace/redhat-marketplace-624v7"
Mar 12 15:24:13 crc kubenswrapper[4778]: I0312 15:24:13.312012 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjjpp\" (UniqueName: \"kubernetes.io/projected/4ece9e25-935e-4afe-90b9-91e6e2da84b5-kube-api-access-fjjpp\") pod \"redhat-marketplace-624v7\" (UID: \"4ece9e25-935e-4afe-90b9-91e6e2da84b5\") " pod="openshift-marketplace/redhat-marketplace-624v7"
Mar 12 15:24:13 crc kubenswrapper[4778]: I0312 15:24:13.414521 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ece9e25-935e-4afe-90b9-91e6e2da84b5-utilities\") pod \"redhat-marketplace-624v7\" (UID: \"4ece9e25-935e-4afe-90b9-91e6e2da84b5\") " pod="openshift-marketplace/redhat-marketplace-624v7"
Mar 12 15:24:13 crc kubenswrapper[4778]: I0312 15:24:13.415041 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ece9e25-935e-4afe-90b9-91e6e2da84b5-catalog-content\") pod \"redhat-marketplace-624v7\" (UID: \"4ece9e25-935e-4afe-90b9-91e6e2da84b5\") " pod="openshift-marketplace/redhat-marketplace-624v7"
Mar 12 15:24:13 crc kubenswrapper[4778]: I0312 15:24:13.415293 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjjpp\" (UniqueName: \"kubernetes.io/projected/4ece9e25-935e-4afe-90b9-91e6e2da84b5-kube-api-access-fjjpp\") pod \"redhat-marketplace-624v7\" (UID: \"4ece9e25-935e-4afe-90b9-91e6e2da84b5\") " pod="openshift-marketplace/redhat-marketplace-624v7"
Mar 12 15:24:13 crc kubenswrapper[4778]: I0312 15:24:13.416851 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ece9e25-935e-4afe-90b9-91e6e2da84b5-utilities\") pod \"redhat-marketplace-624v7\" (UID: \"4ece9e25-935e-4afe-90b9-91e6e2da84b5\") " pod="openshift-marketplace/redhat-marketplace-624v7"
Mar 12 15:24:13 crc kubenswrapper[4778]: I0312 15:24:13.416934 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ece9e25-935e-4afe-90b9-91e6e2da84b5-catalog-content\") pod \"redhat-marketplace-624v7\" (UID: \"4ece9e25-935e-4afe-90b9-91e6e2da84b5\") " pod="openshift-marketplace/redhat-marketplace-624v7"
Mar 12 15:24:13 crc kubenswrapper[4778]: I0312 15:24:13.449065 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjjpp\" (UniqueName: \"kubernetes.io/projected/4ece9e25-935e-4afe-90b9-91e6e2da84b5-kube-api-access-fjjpp\") pod \"redhat-marketplace-624v7\" (UID: \"4ece9e25-935e-4afe-90b9-91e6e2da84b5\") " pod="openshift-marketplace/redhat-marketplace-624v7"
Mar 12 15:24:13 crc kubenswrapper[4778]: I0312 15:24:13.512255 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-624v7"
Mar 12 15:24:14 crc kubenswrapper[4778]: I0312 15:24:14.010370 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-624v7"]
Mar 12 15:24:14 crc kubenswrapper[4778]: W0312 15:24:14.019369 4778 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ece9e25_935e_4afe_90b9_91e6e2da84b5.slice/crio-292e0499b32d3169665fd737462df8f648748b2d8088208be447da320204f9e0 WatchSource:0}: Error finding container 292e0499b32d3169665fd737462df8f648748b2d8088208be447da320204f9e0: Status 404 returned error can't find the container with id 292e0499b32d3169665fd737462df8f648748b2d8088208be447da320204f9e0
Mar 12 15:24:14 crc kubenswrapper[4778]: I0312 15:24:14.526541 4778 generic.go:334] "Generic (PLEG): container finished" podID="4ece9e25-935e-4afe-90b9-91e6e2da84b5" containerID="add7ffa810e8138e4f5a0aab11a8dee89cf7c127a5c0de47ba3820090ae76b3f" exitCode=0
Mar 12 15:24:14 crc kubenswrapper[4778]: I0312 15:24:14.526646 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-624v7" event={"ID":"4ece9e25-935e-4afe-90b9-91e6e2da84b5","Type":"ContainerDied","Data":"add7ffa810e8138e4f5a0aab11a8dee89cf7c127a5c0de47ba3820090ae76b3f"}
Mar 12 15:24:14 crc kubenswrapper[4778]: I0312 15:24:14.526898 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-624v7" event={"ID":"4ece9e25-935e-4afe-90b9-91e6e2da84b5","Type":"ContainerStarted","Data":"292e0499b32d3169665fd737462df8f648748b2d8088208be447da320204f9e0"}
Mar 12 15:24:15 crc kubenswrapper[4778]: I0312 15:24:15.543064 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-624v7" event={"ID":"4ece9e25-935e-4afe-90b9-91e6e2da84b5","Type":"ContainerStarted","Data":"98452a35aa9cb5f952df88f711fde567dcc655a81afea23308d27415d8304261"}
Mar 12 15:24:16 crc kubenswrapper[4778]: I0312 15:24:16.557676 4778 generic.go:334] "Generic (PLEG): container finished" podID="4ece9e25-935e-4afe-90b9-91e6e2da84b5" containerID="98452a35aa9cb5f952df88f711fde567dcc655a81afea23308d27415d8304261" exitCode=0
Mar 12 15:24:16 crc kubenswrapper[4778]: I0312 15:24:16.557764 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-624v7" event={"ID":"4ece9e25-935e-4afe-90b9-91e6e2da84b5","Type":"ContainerDied","Data":"98452a35aa9cb5f952df88f711fde567dcc655a81afea23308d27415d8304261"}
Mar 12 15:24:17 crc kubenswrapper[4778]: I0312 15:24:17.571208 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-624v7" event={"ID":"4ece9e25-935e-4afe-90b9-91e6e2da84b5","Type":"ContainerStarted","Data":"bd78f932a13eccc34c3846d88901fc511de9ff51eed69ead88d60dde371edd5c"}
Mar 12 15:24:17 crc kubenswrapper[4778]: I0312 15:24:17.594331 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-624v7" podStartSLOduration=2.113434496 podStartE2EDuration="4.594312033s" podCreationTimestamp="2026-03-12 15:24:13 +0000 UTC" firstStartedPulling="2026-03-12 15:24:14.528474712 +0000 UTC m=+8072.977170158" lastFinishedPulling="2026-03-12 15:24:17.009352269 +0000 UTC m=+8075.458047695" observedRunningTime="2026-03-12 15:24:17.588661873 +0000 UTC m=+8076.037357299" watchObservedRunningTime="2026-03-12 15:24:17.594312033 +0000 UTC m=+8076.043007429"
Mar 12 15:24:23 crc kubenswrapper[4778]: I0312 15:24:23.512985 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-624v7"
Mar 12 15:24:23 crc kubenswrapper[4778]: I0312 15:24:23.514431 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-624v7"
Mar 12 15:24:23 crc kubenswrapper[4778]: I0312 15:24:23.564730 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-624v7"
Mar 12 15:24:23 crc kubenswrapper[4778]: I0312 15:24:23.687617 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-624v7"
Mar 12 15:24:23 crc kubenswrapper[4778]: I0312 15:24:23.835592 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-624v7"]
Mar 12 15:24:24 crc kubenswrapper[4778]: I0312 15:24:24.260012 4778 scope.go:117] "RemoveContainer" containerID="28185bb0bf8713237bbead875f67f2cbfd250e5d39c0866c90d3e073957181fc"
Mar 12 15:24:24 crc kubenswrapper[4778]: E0312 15:24:24.260447 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd"
Mar 12 15:24:25 crc kubenswrapper[4778]: I0312 15:24:25.653239 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-624v7" podUID="4ece9e25-935e-4afe-90b9-91e6e2da84b5" containerName="registry-server" containerID="cri-o://bd78f932a13eccc34c3846d88901fc511de9ff51eed69ead88d60dde371edd5c" gracePeriod=2
Mar 12 15:24:26 crc kubenswrapper[4778]: I0312 15:24:26.160340 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-624v7"
Mar 12 15:24:26 crc kubenswrapper[4778]: I0312 15:24:26.305803 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjjpp\" (UniqueName: \"kubernetes.io/projected/4ece9e25-935e-4afe-90b9-91e6e2da84b5-kube-api-access-fjjpp\") pod \"4ece9e25-935e-4afe-90b9-91e6e2da84b5\" (UID: \"4ece9e25-935e-4afe-90b9-91e6e2da84b5\") "
Mar 12 15:24:26 crc kubenswrapper[4778]: I0312 15:24:26.306044 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ece9e25-935e-4afe-90b9-91e6e2da84b5-utilities\") pod \"4ece9e25-935e-4afe-90b9-91e6e2da84b5\" (UID: \"4ece9e25-935e-4afe-90b9-91e6e2da84b5\") "
Mar 12 15:24:26 crc kubenswrapper[4778]: I0312 15:24:26.306083 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ece9e25-935e-4afe-90b9-91e6e2da84b5-catalog-content\") pod \"4ece9e25-935e-4afe-90b9-91e6e2da84b5\" (UID: \"4ece9e25-935e-4afe-90b9-91e6e2da84b5\") "
Mar 12 15:24:26 crc kubenswrapper[4778]: I0312 15:24:26.307097 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ece9e25-935e-4afe-90b9-91e6e2da84b5-utilities" (OuterVolumeSpecName: "utilities") pod "4ece9e25-935e-4afe-90b9-91e6e2da84b5" (UID: "4ece9e25-935e-4afe-90b9-91e6e2da84b5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 15:24:26 crc kubenswrapper[4778]: I0312 15:24:26.323158 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ece9e25-935e-4afe-90b9-91e6e2da84b5-kube-api-access-fjjpp" (OuterVolumeSpecName: "kube-api-access-fjjpp") pod "4ece9e25-935e-4afe-90b9-91e6e2da84b5" (UID: "4ece9e25-935e-4afe-90b9-91e6e2da84b5"). InnerVolumeSpecName "kube-api-access-fjjpp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 15:24:26 crc kubenswrapper[4778]: I0312 15:24:26.354589 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ece9e25-935e-4afe-90b9-91e6e2da84b5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4ece9e25-935e-4afe-90b9-91e6e2da84b5" (UID: "4ece9e25-935e-4afe-90b9-91e6e2da84b5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 15:24:26 crc kubenswrapper[4778]: I0312 15:24:26.408510 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjjpp\" (UniqueName: \"kubernetes.io/projected/4ece9e25-935e-4afe-90b9-91e6e2da84b5-kube-api-access-fjjpp\") on node \"crc\" DevicePath \"\""
Mar 12 15:24:26 crc kubenswrapper[4778]: I0312 15:24:26.408564 4778 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ece9e25-935e-4afe-90b9-91e6e2da84b5-utilities\") on node \"crc\" DevicePath \"\""
Mar 12 15:24:26 crc kubenswrapper[4778]: I0312 15:24:26.408576 4778 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ece9e25-935e-4afe-90b9-91e6e2da84b5-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 12 15:24:26 crc kubenswrapper[4778]: I0312 15:24:26.666101 4778 generic.go:334] "Generic (PLEG): container finished" podID="4ece9e25-935e-4afe-90b9-91e6e2da84b5" containerID="bd78f932a13eccc34c3846d88901fc511de9ff51eed69ead88d60dde371edd5c" exitCode=0
Mar 12 15:24:26 crc kubenswrapper[4778]: I0312 15:24:26.666143 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-624v7" event={"ID":"4ece9e25-935e-4afe-90b9-91e6e2da84b5","Type":"ContainerDied","Data":"bd78f932a13eccc34c3846d88901fc511de9ff51eed69ead88d60dde371edd5c"}
Mar 12 15:24:26 crc kubenswrapper[4778]: I0312 15:24:26.666174 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-624v7" event={"ID":"4ece9e25-935e-4afe-90b9-91e6e2da84b5","Type":"ContainerDied","Data":"292e0499b32d3169665fd737462df8f648748b2d8088208be447da320204f9e0"}
Mar 12 15:24:26 crc kubenswrapper[4778]: I0312 15:24:26.666206 4778 scope.go:117] "RemoveContainer" containerID="bd78f932a13eccc34c3846d88901fc511de9ff51eed69ead88d60dde371edd5c"
Mar 12 15:24:26 crc kubenswrapper[4778]: I0312 15:24:26.666238 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-624v7"
Mar 12 15:24:26 crc kubenswrapper[4778]: I0312 15:24:26.700218 4778 scope.go:117] "RemoveContainer" containerID="98452a35aa9cb5f952df88f711fde567dcc655a81afea23308d27415d8304261"
Mar 12 15:24:26 crc kubenswrapper[4778]: I0312 15:24:26.716629 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-624v7"]
Mar 12 15:24:26 crc kubenswrapper[4778]: I0312 15:24:26.726220 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-624v7"]
Mar 12 15:24:26 crc kubenswrapper[4778]: I0312 15:24:26.744689 4778 scope.go:117] "RemoveContainer" containerID="add7ffa810e8138e4f5a0aab11a8dee89cf7c127a5c0de47ba3820090ae76b3f"
Mar 12 15:24:26 crc kubenswrapper[4778]: I0312 15:24:26.774784 4778 scope.go:117] "RemoveContainer" containerID="bd78f932a13eccc34c3846d88901fc511de9ff51eed69ead88d60dde371edd5c"
Mar 12 15:24:26 crc kubenswrapper[4778]: E0312 15:24:26.775327 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd78f932a13eccc34c3846d88901fc511de9ff51eed69ead88d60dde371edd5c\": container with ID starting with bd78f932a13eccc34c3846d88901fc511de9ff51eed69ead88d60dde371edd5c not found: ID does not exist" containerID="bd78f932a13eccc34c3846d88901fc511de9ff51eed69ead88d60dde371edd5c"
Mar 12 15:24:26 crc kubenswrapper[4778]: I0312 15:24:26.775378 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd78f932a13eccc34c3846d88901fc511de9ff51eed69ead88d60dde371edd5c"} err="failed to get container status \"bd78f932a13eccc34c3846d88901fc511de9ff51eed69ead88d60dde371edd5c\": rpc error: code = NotFound desc = could not find container \"bd78f932a13eccc34c3846d88901fc511de9ff51eed69ead88d60dde371edd5c\": container with ID starting with bd78f932a13eccc34c3846d88901fc511de9ff51eed69ead88d60dde371edd5c not found: ID does not exist"
Mar 12 15:24:26 crc kubenswrapper[4778]: I0312 15:24:26.775410 4778 scope.go:117] "RemoveContainer" containerID="98452a35aa9cb5f952df88f711fde567dcc655a81afea23308d27415d8304261"
Mar 12 15:24:26 crc kubenswrapper[4778]: E0312 15:24:26.775729 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98452a35aa9cb5f952df88f711fde567dcc655a81afea23308d27415d8304261\": container with ID starting with 98452a35aa9cb5f952df88f711fde567dcc655a81afea23308d27415d8304261 not found: ID does not exist" containerID="98452a35aa9cb5f952df88f711fde567dcc655a81afea23308d27415d8304261"
Mar 12 15:24:26 crc kubenswrapper[4778]: I0312 15:24:26.775766 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98452a35aa9cb5f952df88f711fde567dcc655a81afea23308d27415d8304261"} err="failed to get container status \"98452a35aa9cb5f952df88f711fde567dcc655a81afea23308d27415d8304261\": rpc error: code = NotFound desc = could not find container \"98452a35aa9cb5f952df88f711fde567dcc655a81afea23308d27415d8304261\": container with ID starting with 98452a35aa9cb5f952df88f711fde567dcc655a81afea23308d27415d8304261 not found: ID does not exist"
Mar 12 15:24:26 crc kubenswrapper[4778]: I0312 15:24:26.775800 4778 scope.go:117] "RemoveContainer" containerID="add7ffa810e8138e4f5a0aab11a8dee89cf7c127a5c0de47ba3820090ae76b3f"
Mar 12 15:24:26 crc kubenswrapper[4778]: E0312 15:24:26.776104 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"add7ffa810e8138e4f5a0aab11a8dee89cf7c127a5c0de47ba3820090ae76b3f\": container with ID starting with add7ffa810e8138e4f5a0aab11a8dee89cf7c127a5c0de47ba3820090ae76b3f not found: ID does not exist" containerID="add7ffa810e8138e4f5a0aab11a8dee89cf7c127a5c0de47ba3820090ae76b3f"
Mar 12 15:24:26 crc kubenswrapper[4778]: I0312 15:24:26.776149 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"add7ffa810e8138e4f5a0aab11a8dee89cf7c127a5c0de47ba3820090ae76b3f"} err="failed to get container status \"add7ffa810e8138e4f5a0aab11a8dee89cf7c127a5c0de47ba3820090ae76b3f\": rpc error: code = NotFound desc = could not find container \"add7ffa810e8138e4f5a0aab11a8dee89cf7c127a5c0de47ba3820090ae76b3f\": container with ID starting with add7ffa810e8138e4f5a0aab11a8dee89cf7c127a5c0de47ba3820090ae76b3f not found: ID does not exist"
Mar 12 15:24:27 crc kubenswrapper[4778]: I0312 15:24:27.256655 4778 scope.go:117] "RemoveContainer" containerID="27e746629157759d4e60a414cb672470c7ab54258b384fb1bc8e845de836c293"
Mar 12 15:24:28 crc kubenswrapper[4778]: I0312 15:24:28.265555 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ece9e25-935e-4afe-90b9-91e6e2da84b5" path="/var/lib/kubelet/pods/4ece9e25-935e-4afe-90b9-91e6e2da84b5/volumes"
Mar 12 15:24:36 crc kubenswrapper[4778]: I0312 15:24:36.626900 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5cghz"]
Mar 12 15:24:36 crc kubenswrapper[4778]: E0312 15:24:36.627573 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ece9e25-935e-4afe-90b9-91e6e2da84b5" containerName="registry-server"
Mar 12 15:24:36 crc kubenswrapper[4778]: I0312 15:24:36.627587 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ece9e25-935e-4afe-90b9-91e6e2da84b5" containerName="registry-server"
Mar 12 15:24:36 crc kubenswrapper[4778]: E0312 15:24:36.627625 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ece9e25-935e-4afe-90b9-91e6e2da84b5" containerName="extract-content"
Mar 12 15:24:36 crc kubenswrapper[4778]: I0312 15:24:36.627631 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ece9e25-935e-4afe-90b9-91e6e2da84b5" containerName="extract-content"
Mar 12 15:24:36 crc kubenswrapper[4778]: E0312 15:24:36.627647 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ece9e25-935e-4afe-90b9-91e6e2da84b5" containerName="extract-utilities"
Mar 12 15:24:36 crc kubenswrapper[4778]: I0312 15:24:36.627654 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ece9e25-935e-4afe-90b9-91e6e2da84b5" containerName="extract-utilities"
Mar 12 15:24:36 crc kubenswrapper[4778]: I0312 15:24:36.627863 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ece9e25-935e-4afe-90b9-91e6e2da84b5" containerName="registry-server"
Mar 12 15:24:36 crc kubenswrapper[4778]: I0312 15:24:36.629215 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5cghz"
Mar 12 15:24:36 crc kubenswrapper[4778]: I0312 15:24:36.652108 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5cghz"]
Mar 12 15:24:36 crc kubenswrapper[4778]: I0312 15:24:36.725410 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84fd7f25-a437-4377-bbfc-e01ce99102f5-catalog-content\") pod \"redhat-operators-5cghz\" (UID: \"84fd7f25-a437-4377-bbfc-e01ce99102f5\") " pod="openshift-marketplace/redhat-operators-5cghz"
Mar 12 15:24:36 crc kubenswrapper[4778]: I0312 15:24:36.725680 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcvrq\" (UniqueName: \"kubernetes.io/projected/84fd7f25-a437-4377-bbfc-e01ce99102f5-kube-api-access-mcvrq\") pod \"redhat-operators-5cghz\" (UID: \"84fd7f25-a437-4377-bbfc-e01ce99102f5\") " pod="openshift-marketplace/redhat-operators-5cghz"
Mar 12 15:24:36 crc kubenswrapper[4778]: I0312 15:24:36.725758 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84fd7f25-a437-4377-bbfc-e01ce99102f5-utilities\") pod \"redhat-operators-5cghz\" (UID: \"84fd7f25-a437-4377-bbfc-e01ce99102f5\") " pod="openshift-marketplace/redhat-operators-5cghz"
Mar 12 15:24:36 crc kubenswrapper[4778]: I0312 15:24:36.827218 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84fd7f25-a437-4377-bbfc-e01ce99102f5-catalog-content\") pod \"redhat-operators-5cghz\" (UID: \"84fd7f25-a437-4377-bbfc-e01ce99102f5\") " pod="openshift-marketplace/redhat-operators-5cghz"
Mar 12 15:24:36 crc kubenswrapper[4778]: I0312 15:24:36.827420 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcvrq\" (UniqueName: \"kubernetes.io/projected/84fd7f25-a437-4377-bbfc-e01ce99102f5-kube-api-access-mcvrq\") pod \"redhat-operators-5cghz\" (UID: \"84fd7f25-a437-4377-bbfc-e01ce99102f5\") " pod="openshift-marketplace/redhat-operators-5cghz"
Mar 12 15:24:36 crc kubenswrapper[4778]: I0312 15:24:36.827468 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84fd7f25-a437-4377-bbfc-e01ce99102f5-utilities\") pod \"redhat-operators-5cghz\" (UID: \"84fd7f25-a437-4377-bbfc-e01ce99102f5\") " pod="openshift-marketplace/redhat-operators-5cghz"
Mar 12 15:24:36 crc kubenswrapper[4778]: I0312 15:24:36.827761 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84fd7f25-a437-4377-bbfc-e01ce99102f5-catalog-content\") pod \"redhat-operators-5cghz\" (UID: \"84fd7f25-a437-4377-bbfc-e01ce99102f5\") " pod="openshift-marketplace/redhat-operators-5cghz"
Mar 12 15:24:36 crc kubenswrapper[4778]: I0312 15:24:36.827828 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84fd7f25-a437-4377-bbfc-e01ce99102f5-utilities\") pod \"redhat-operators-5cghz\" (UID: \"84fd7f25-a437-4377-bbfc-e01ce99102f5\") " pod="openshift-marketplace/redhat-operators-5cghz"
Mar 12 15:24:36 crc kubenswrapper[4778]: I0312 15:24:36.854483 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcvrq\" (UniqueName: \"kubernetes.io/projected/84fd7f25-a437-4377-bbfc-e01ce99102f5-kube-api-access-mcvrq\") pod \"redhat-operators-5cghz\" (UID: \"84fd7f25-a437-4377-bbfc-e01ce99102f5\") " pod="openshift-marketplace/redhat-operators-5cghz"
Mar 12 15:24:36 crc kubenswrapper[4778]: I0312 15:24:36.951628 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5cghz"
Mar 12 15:24:37 crc kubenswrapper[4778]: I0312 15:24:37.254423 4778 scope.go:117] "RemoveContainer" containerID="28185bb0bf8713237bbead875f67f2cbfd250e5d39c0866c90d3e073957181fc"
Mar 12 15:24:37 crc kubenswrapper[4778]: E0312 15:24:37.254769 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd"
Mar 12 15:24:37 crc kubenswrapper[4778]: I0312 15:24:37.391327 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5cghz"]
Mar 12 15:24:37 crc kubenswrapper[4778]: I0312 15:24:37.795521 4778 generic.go:334] "Generic (PLEG): container finished" podID="84fd7f25-a437-4377-bbfc-e01ce99102f5" containerID="f43ae3082451efc1a94dfd26db7124dd7b4e7024343af8a61f26a1798fa14bc4" exitCode=0
Mar 12 15:24:37 crc kubenswrapper[4778]: I0312 15:24:37.795579 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5cghz" event={"ID":"84fd7f25-a437-4377-bbfc-e01ce99102f5","Type":"ContainerDied","Data":"f43ae3082451efc1a94dfd26db7124dd7b4e7024343af8a61f26a1798fa14bc4"}
Mar 12 15:24:37 crc kubenswrapper[4778]: I0312 15:24:37.795612 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5cghz" event={"ID":"84fd7f25-a437-4377-bbfc-e01ce99102f5","Type":"ContainerStarted","Data":"df127f8faf420587aa69391568c947c876a1979be6d8f6373823e337e12688b1"}
Mar 12 15:24:38 crc kubenswrapper[4778]: I0312 15:24:38.807643 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5cghz" event={"ID":"84fd7f25-a437-4377-bbfc-e01ce99102f5","Type":"ContainerStarted","Data":"17642ed29425aaa0855b90af2c32bcd15bb32608dd5f50cc19c672c2b335f6ff"}
Mar 12 15:24:41 crc kubenswrapper[4778]: I0312 15:24:41.840446 4778 generic.go:334] "Generic (PLEG): container finished" podID="84fd7f25-a437-4377-bbfc-e01ce99102f5" containerID="17642ed29425aaa0855b90af2c32bcd15bb32608dd5f50cc19c672c2b335f6ff" exitCode=0
Mar 12 15:24:41 crc kubenswrapper[4778]: I0312 15:24:41.840532 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5cghz" event={"ID":"84fd7f25-a437-4377-bbfc-e01ce99102f5","Type":"ContainerDied","Data":"17642ed29425aaa0855b90af2c32bcd15bb32608dd5f50cc19c672c2b335f6ff"}
Mar 12 15:24:42 crc kubenswrapper[4778]: I0312 15:24:42.852857 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5cghz" event={"ID":"84fd7f25-a437-4377-bbfc-e01ce99102f5","Type":"ContainerStarted","Data":"a22972203933d9845965e1329ce37e119f2e59e6ad314814e87c355e00869e24"}
Mar 12 15:24:42 crc kubenswrapper[4778]: I0312 15:24:42.876766 4778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5cghz" podStartSLOduration=2.386085587 podStartE2EDuration="6.876744415s" podCreationTimestamp="2026-03-12 15:24:36 +0000 UTC" firstStartedPulling="2026-03-12 15:24:37.797701426 +0000 UTC m=+8096.246396832" lastFinishedPulling="2026-03-12 15:24:42.288360264 +0000 UTC m=+8100.737055660" observedRunningTime="2026-03-12 15:24:42.868882432 +0000 UTC m=+8101.317577838" watchObservedRunningTime="2026-03-12 15:24:42.876744415 +0000 UTC m=+8101.325439821"
Mar 12 15:24:46 crc kubenswrapper[4778]: I0312 15:24:46.952080 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5cghz"
Mar 12 15:24:46 crc kubenswrapper[4778]: I0312 15:24:46.952557 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5cghz"
Mar 12 15:24:48 crc kubenswrapper[4778]: I0312 15:24:48.009931 4778 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5cghz" podUID="84fd7f25-a437-4377-bbfc-e01ce99102f5" containerName="registry-server" probeResult="failure" output=<
Mar 12 15:24:48 crc kubenswrapper[4778]: timeout: failed to connect service ":50051" within 1s
Mar 12 15:24:48 crc kubenswrapper[4778]: >
Mar 12 15:24:49 crc kubenswrapper[4778]: I0312 15:24:49.253728 4778 scope.go:117] "RemoveContainer" containerID="28185bb0bf8713237bbead875f67f2cbfd250e5d39c0866c90d3e073957181fc"
Mar 12 15:24:49 crc kubenswrapper[4778]: E0312 15:24:49.253927 4778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2qx88_openshift-machine-config-operator(24438fc6-dab0-4a9e-8b97-2532da76d9cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd"
Mar 12 15:24:57 crc kubenswrapper[4778]: I0312 15:24:57.012364 4778 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5cghz"
Mar 12 15:24:57 crc kubenswrapper[4778]: I0312 15:24:57.083467 4778 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5cghz"
Mar 12 15:24:57 crc kubenswrapper[4778]: I0312 15:24:57.266449 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5cghz"]
Mar 12 15:24:59 crc kubenswrapper[4778]: I0312 15:24:59.008009 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5cghz" podUID="84fd7f25-a437-4377-bbfc-e01ce99102f5" containerName="registry-server" containerID="cri-o://a22972203933d9845965e1329ce37e119f2e59e6ad314814e87c355e00869e24" gracePeriod=2
Mar 12 15:24:59 crc kubenswrapper[4778]: I0312 15:24:59.475007 4778 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5cghz"
Mar 12 15:24:59 crc kubenswrapper[4778]: I0312 15:24:59.484762 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcvrq\" (UniqueName: \"kubernetes.io/projected/84fd7f25-a437-4377-bbfc-e01ce99102f5-kube-api-access-mcvrq\") pod \"84fd7f25-a437-4377-bbfc-e01ce99102f5\" (UID: \"84fd7f25-a437-4377-bbfc-e01ce99102f5\") "
Mar 12 15:24:59 crc kubenswrapper[4778]: I0312 15:24:59.484863 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84fd7f25-a437-4377-bbfc-e01ce99102f5-catalog-content\") pod \"84fd7f25-a437-4377-bbfc-e01ce99102f5\" (UID: \"84fd7f25-a437-4377-bbfc-e01ce99102f5\") "
Mar 12 15:24:59 crc kubenswrapper[4778]: I0312 15:24:59.484996 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84fd7f25-a437-4377-bbfc-e01ce99102f5-utilities\") pod \"84fd7f25-a437-4377-bbfc-e01ce99102f5\" (UID: \"84fd7f25-a437-4377-bbfc-e01ce99102f5\") "
Mar 12 15:24:59 crc kubenswrapper[4778]: I0312 15:24:59.485708 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84fd7f25-a437-4377-bbfc-e01ce99102f5-utilities" (OuterVolumeSpecName: "utilities") pod "84fd7f25-a437-4377-bbfc-e01ce99102f5" (UID: "84fd7f25-a437-4377-bbfc-e01ce99102f5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 12 15:24:59 crc kubenswrapper[4778]: I0312 15:24:59.491944 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84fd7f25-a437-4377-bbfc-e01ce99102f5-kube-api-access-mcvrq" (OuterVolumeSpecName: "kube-api-access-mcvrq") pod "84fd7f25-a437-4377-bbfc-e01ce99102f5" (UID: "84fd7f25-a437-4377-bbfc-e01ce99102f5"). InnerVolumeSpecName "kube-api-access-mcvrq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 12 15:24:59 crc kubenswrapper[4778]: I0312 15:24:59.586658 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcvrq\" (UniqueName: \"kubernetes.io/projected/84fd7f25-a437-4377-bbfc-e01ce99102f5-kube-api-access-mcvrq\") on node \"crc\" DevicePath \"\""
Mar 12 15:24:59 crc kubenswrapper[4778]: I0312 15:24:59.586690 4778 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84fd7f25-a437-4377-bbfc-e01ce99102f5-utilities\") on node \"crc\" DevicePath \"\""
Mar 12 15:24:59 crc kubenswrapper[4778]: I0312 15:24:59.614478 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84fd7f25-a437-4377-bbfc-e01ce99102f5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "84fd7f25-a437-4377-bbfc-e01ce99102f5" (UID: "84fd7f25-a437-4377-bbfc-e01ce99102f5"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 12 15:24:59 crc kubenswrapper[4778]: I0312 15:24:59.688781 4778 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84fd7f25-a437-4377-bbfc-e01ce99102f5-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 12 15:25:00 crc kubenswrapper[4778]: I0312 15:25:00.020322 4778 generic.go:334] "Generic (PLEG): container finished" podID="84fd7f25-a437-4377-bbfc-e01ce99102f5" containerID="a22972203933d9845965e1329ce37e119f2e59e6ad314814e87c355e00869e24" exitCode=0 Mar 12 15:25:00 crc kubenswrapper[4778]: I0312 15:25:00.020406 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5cghz" event={"ID":"84fd7f25-a437-4377-bbfc-e01ce99102f5","Type":"ContainerDied","Data":"a22972203933d9845965e1329ce37e119f2e59e6ad314814e87c355e00869e24"} Mar 12 15:25:00 crc kubenswrapper[4778]: I0312 15:25:00.020495 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5cghz" event={"ID":"84fd7f25-a437-4377-bbfc-e01ce99102f5","Type":"ContainerDied","Data":"df127f8faf420587aa69391568c947c876a1979be6d8f6373823e337e12688b1"} Mar 12 15:25:00 crc kubenswrapper[4778]: I0312 15:25:00.020533 4778 scope.go:117] "RemoveContainer" containerID="a22972203933d9845965e1329ce37e119f2e59e6ad314814e87c355e00869e24" Mar 12 15:25:00 crc kubenswrapper[4778]: I0312 15:25:00.020426 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5cghz" Mar 12 15:25:00 crc kubenswrapper[4778]: I0312 15:25:00.052866 4778 scope.go:117] "RemoveContainer" containerID="17642ed29425aaa0855b90af2c32bcd15bb32608dd5f50cc19c672c2b335f6ff" Mar 12 15:25:00 crc kubenswrapper[4778]: I0312 15:25:00.077159 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5cghz"] Mar 12 15:25:00 crc kubenswrapper[4778]: I0312 15:25:00.088337 4778 scope.go:117] "RemoveContainer" containerID="f43ae3082451efc1a94dfd26db7124dd7b4e7024343af8a61f26a1798fa14bc4" Mar 12 15:25:00 crc kubenswrapper[4778]: I0312 15:25:00.091910 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5cghz"] Mar 12 15:25:00 crc kubenswrapper[4778]: I0312 15:25:00.148486 4778 scope.go:117] "RemoveContainer" containerID="a22972203933d9845965e1329ce37e119f2e59e6ad314814e87c355e00869e24" Mar 12 15:25:00 crc kubenswrapper[4778]: E0312 15:25:00.149222 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a22972203933d9845965e1329ce37e119f2e59e6ad314814e87c355e00869e24\": container with ID starting with a22972203933d9845965e1329ce37e119f2e59e6ad314814e87c355e00869e24 not found: ID does not exist" containerID="a22972203933d9845965e1329ce37e119f2e59e6ad314814e87c355e00869e24" Mar 12 15:25:00 crc kubenswrapper[4778]: I0312 15:25:00.149268 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a22972203933d9845965e1329ce37e119f2e59e6ad314814e87c355e00869e24"} err="failed to get container status \"a22972203933d9845965e1329ce37e119f2e59e6ad314814e87c355e00869e24\": rpc error: code = NotFound desc = could not find container \"a22972203933d9845965e1329ce37e119f2e59e6ad314814e87c355e00869e24\": container with ID starting with a22972203933d9845965e1329ce37e119f2e59e6ad314814e87c355e00869e24 not found: ID does 
not exist" Mar 12 15:25:00 crc kubenswrapper[4778]: I0312 15:25:00.149302 4778 scope.go:117] "RemoveContainer" containerID="17642ed29425aaa0855b90af2c32bcd15bb32608dd5f50cc19c672c2b335f6ff" Mar 12 15:25:00 crc kubenswrapper[4778]: E0312 15:25:00.149862 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17642ed29425aaa0855b90af2c32bcd15bb32608dd5f50cc19c672c2b335f6ff\": container with ID starting with 17642ed29425aaa0855b90af2c32bcd15bb32608dd5f50cc19c672c2b335f6ff not found: ID does not exist" containerID="17642ed29425aaa0855b90af2c32bcd15bb32608dd5f50cc19c672c2b335f6ff" Mar 12 15:25:00 crc kubenswrapper[4778]: I0312 15:25:00.149899 4778 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17642ed29425aaa0855b90af2c32bcd15bb32608dd5f50cc19c672c2b335f6ff"} err="failed to get container status \"17642ed29425aaa0855b90af2c32bcd15bb32608dd5f50cc19c672c2b335f6ff\": rpc error: code = NotFound desc = could not find container \"17642ed29425aaa0855b90af2c32bcd15bb32608dd5f50cc19c672c2b335f6ff\": container with ID starting with 17642ed29425aaa0855b90af2c32bcd15bb32608dd5f50cc19c672c2b335f6ff not found: ID does not exist" Mar 12 15:25:00 crc kubenswrapper[4778]: I0312 15:25:00.149920 4778 scope.go:117] "RemoveContainer" containerID="f43ae3082451efc1a94dfd26db7124dd7b4e7024343af8a61f26a1798fa14bc4" Mar 12 15:25:00 crc kubenswrapper[4778]: E0312 15:25:00.150380 4778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f43ae3082451efc1a94dfd26db7124dd7b4e7024343af8a61f26a1798fa14bc4\": container with ID starting with f43ae3082451efc1a94dfd26db7124dd7b4e7024343af8a61f26a1798fa14bc4 not found: ID does not exist" containerID="f43ae3082451efc1a94dfd26db7124dd7b4e7024343af8a61f26a1798fa14bc4" Mar 12 15:25:00 crc kubenswrapper[4778]: I0312 15:25:00.150400 4778 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f43ae3082451efc1a94dfd26db7124dd7b4e7024343af8a61f26a1798fa14bc4"} err="failed to get container status \"f43ae3082451efc1a94dfd26db7124dd7b4e7024343af8a61f26a1798fa14bc4\": rpc error: code = NotFound desc = could not find container \"f43ae3082451efc1a94dfd26db7124dd7b4e7024343af8a61f26a1798fa14bc4\": container with ID starting with f43ae3082451efc1a94dfd26db7124dd7b4e7024343af8a61f26a1798fa14bc4 not found: ID does not exist" Mar 12 15:25:00 crc kubenswrapper[4778]: I0312 15:25:00.267015 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84fd7f25-a437-4377-bbfc-e01ce99102f5" path="/var/lib/kubelet/pods/84fd7f25-a437-4377-bbfc-e01ce99102f5/volumes" Mar 12 15:25:01 crc kubenswrapper[4778]: I0312 15:25:01.254712 4778 scope.go:117] "RemoveContainer" containerID="28185bb0bf8713237bbead875f67f2cbfd250e5d39c0866c90d3e073957181fc" Mar 12 15:25:02 crc kubenswrapper[4778]: I0312 15:25:02.041612 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" event={"ID":"24438fc6-dab0-4a9e-8b97-2532da76d9cd","Type":"ContainerStarted","Data":"2f264e55384fd404f3791d452c96ce0c6814e91ce77fcb670a6b1d8e59491c19"} Mar 12 15:26:00 crc kubenswrapper[4778]: I0312 15:26:00.157700 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555486-8vkxc"] Mar 12 15:26:00 crc kubenswrapper[4778]: E0312 15:26:00.160778 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84fd7f25-a437-4377-bbfc-e01ce99102f5" containerName="extract-utilities" Mar 12 15:26:00 crc kubenswrapper[4778]: I0312 15:26:00.161096 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="84fd7f25-a437-4377-bbfc-e01ce99102f5" containerName="extract-utilities" Mar 12 15:26:00 crc kubenswrapper[4778]: E0312 15:26:00.161230 4778 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="84fd7f25-a437-4377-bbfc-e01ce99102f5" containerName="extract-content" Mar 12 15:26:00 crc kubenswrapper[4778]: I0312 15:26:00.161350 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="84fd7f25-a437-4377-bbfc-e01ce99102f5" containerName="extract-content" Mar 12 15:26:00 crc kubenswrapper[4778]: E0312 15:26:00.161465 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84fd7f25-a437-4377-bbfc-e01ce99102f5" containerName="registry-server" Mar 12 15:26:00 crc kubenswrapper[4778]: I0312 15:26:00.161564 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="84fd7f25-a437-4377-bbfc-e01ce99102f5" containerName="registry-server" Mar 12 15:26:00 crc kubenswrapper[4778]: I0312 15:26:00.161955 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="84fd7f25-a437-4377-bbfc-e01ce99102f5" containerName="registry-server" Mar 12 15:26:00 crc kubenswrapper[4778]: I0312 15:26:00.163201 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555486-8vkxc" Mar 12 15:26:00 crc kubenswrapper[4778]: I0312 15:26:00.166545 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555486-8vkxc"] Mar 12 15:26:00 crc kubenswrapper[4778]: I0312 15:26:00.166664 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 15:26:00 crc kubenswrapper[4778]: I0312 15:26:00.171574 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 15:26:00 crc kubenswrapper[4778]: I0312 15:26:00.172093 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 15:26:00 crc kubenswrapper[4778]: I0312 15:26:00.304229 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8xgd\" (UniqueName: 
\"kubernetes.io/projected/702a0ec3-2c1a-40be-a56d-7e905acbf6b9-kube-api-access-s8xgd\") pod \"auto-csr-approver-29555486-8vkxc\" (UID: \"702a0ec3-2c1a-40be-a56d-7e905acbf6b9\") " pod="openshift-infra/auto-csr-approver-29555486-8vkxc" Mar 12 15:26:00 crc kubenswrapper[4778]: I0312 15:26:00.406776 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8xgd\" (UniqueName: \"kubernetes.io/projected/702a0ec3-2c1a-40be-a56d-7e905acbf6b9-kube-api-access-s8xgd\") pod \"auto-csr-approver-29555486-8vkxc\" (UID: \"702a0ec3-2c1a-40be-a56d-7e905acbf6b9\") " pod="openshift-infra/auto-csr-approver-29555486-8vkxc" Mar 12 15:26:00 crc kubenswrapper[4778]: I0312 15:26:00.436084 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8xgd\" (UniqueName: \"kubernetes.io/projected/702a0ec3-2c1a-40be-a56d-7e905acbf6b9-kube-api-access-s8xgd\") pod \"auto-csr-approver-29555486-8vkxc\" (UID: \"702a0ec3-2c1a-40be-a56d-7e905acbf6b9\") " pod="openshift-infra/auto-csr-approver-29555486-8vkxc" Mar 12 15:26:00 crc kubenswrapper[4778]: I0312 15:26:00.492078 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555486-8vkxc" Mar 12 15:26:00 crc kubenswrapper[4778]: I0312 15:26:00.987219 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555486-8vkxc"] Mar 12 15:26:01 crc kubenswrapper[4778]: I0312 15:26:01.757689 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555486-8vkxc" event={"ID":"702a0ec3-2c1a-40be-a56d-7e905acbf6b9","Type":"ContainerStarted","Data":"9506c4ffad08560fc3f076fdd1a3ab68fec4f6c6c90f0edc5304acbb1150191e"} Mar 12 15:26:02 crc kubenswrapper[4778]: I0312 15:26:02.765743 4778 generic.go:334] "Generic (PLEG): container finished" podID="702a0ec3-2c1a-40be-a56d-7e905acbf6b9" containerID="8c1ffd9f3e762d8d56ae9588cf1e6a97e4b2779167ed83f673b3bea58bf7a686" exitCode=0 Mar 12 15:26:02 crc kubenswrapper[4778]: I0312 15:26:02.765895 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555486-8vkxc" event={"ID":"702a0ec3-2c1a-40be-a56d-7e905acbf6b9","Type":"ContainerDied","Data":"8c1ffd9f3e762d8d56ae9588cf1e6a97e4b2779167ed83f673b3bea58bf7a686"} Mar 12 15:26:04 crc kubenswrapper[4778]: I0312 15:26:04.124454 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555486-8vkxc" Mar 12 15:26:04 crc kubenswrapper[4778]: I0312 15:26:04.191995 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8xgd\" (UniqueName: \"kubernetes.io/projected/702a0ec3-2c1a-40be-a56d-7e905acbf6b9-kube-api-access-s8xgd\") pod \"702a0ec3-2c1a-40be-a56d-7e905acbf6b9\" (UID: \"702a0ec3-2c1a-40be-a56d-7e905acbf6b9\") " Mar 12 15:26:04 crc kubenswrapper[4778]: I0312 15:26:04.198723 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/702a0ec3-2c1a-40be-a56d-7e905acbf6b9-kube-api-access-s8xgd" (OuterVolumeSpecName: "kube-api-access-s8xgd") pod "702a0ec3-2c1a-40be-a56d-7e905acbf6b9" (UID: "702a0ec3-2c1a-40be-a56d-7e905acbf6b9"). InnerVolumeSpecName "kube-api-access-s8xgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:26:04 crc kubenswrapper[4778]: I0312 15:26:04.295329 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8xgd\" (UniqueName: \"kubernetes.io/projected/702a0ec3-2c1a-40be-a56d-7e905acbf6b9-kube-api-access-s8xgd\") on node \"crc\" DevicePath \"\"" Mar 12 15:26:04 crc kubenswrapper[4778]: I0312 15:26:04.789887 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555486-8vkxc" event={"ID":"702a0ec3-2c1a-40be-a56d-7e905acbf6b9","Type":"ContainerDied","Data":"9506c4ffad08560fc3f076fdd1a3ab68fec4f6c6c90f0edc5304acbb1150191e"} Mar 12 15:26:04 crc kubenswrapper[4778]: I0312 15:26:04.789944 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9506c4ffad08560fc3f076fdd1a3ab68fec4f6c6c90f0edc5304acbb1150191e" Mar 12 15:26:04 crc kubenswrapper[4778]: I0312 15:26:04.790020 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555486-8vkxc" Mar 12 15:26:05 crc kubenswrapper[4778]: I0312 15:26:05.222126 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555480-vwvb5"] Mar 12 15:26:05 crc kubenswrapper[4778]: I0312 15:26:05.233266 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555480-vwvb5"] Mar 12 15:26:06 crc kubenswrapper[4778]: I0312 15:26:06.266067 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e32f842-16d7-484b-a241-e24ea8d3db45" path="/var/lib/kubelet/pods/9e32f842-16d7-484b-a241-e24ea8d3db45/volumes" Mar 12 15:26:27 crc kubenswrapper[4778]: I0312 15:26:27.418769 4778 scope.go:117] "RemoveContainer" containerID="7d34b9f856d96ac0b056ec93139556664e1582951a8e260f7792f49806a39777" Mar 12 15:27:28 crc kubenswrapper[4778]: I0312 15:27:28.558269 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 15:27:28 crc kubenswrapper[4778]: I0312 15:27:28.558863 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 15:27:58 crc kubenswrapper[4778]: I0312 15:27:58.558306 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 15:27:58 crc kubenswrapper[4778]: 
I0312 15:27:58.559249 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 15:28:00 crc kubenswrapper[4778]: I0312 15:28:00.152423 4778 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29555488-4kkmd"] Mar 12 15:28:00 crc kubenswrapper[4778]: E0312 15:28:00.153144 4778 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="702a0ec3-2c1a-40be-a56d-7e905acbf6b9" containerName="oc" Mar 12 15:28:00 crc kubenswrapper[4778]: I0312 15:28:00.153156 4778 state_mem.go:107] "Deleted CPUSet assignment" podUID="702a0ec3-2c1a-40be-a56d-7e905acbf6b9" containerName="oc" Mar 12 15:28:00 crc kubenswrapper[4778]: I0312 15:28:00.153385 4778 memory_manager.go:354] "RemoveStaleState removing state" podUID="702a0ec3-2c1a-40be-a56d-7e905acbf6b9" containerName="oc" Mar 12 15:28:00 crc kubenswrapper[4778]: I0312 15:28:00.153974 4778 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555488-4kkmd" Mar 12 15:28:00 crc kubenswrapper[4778]: I0312 15:28:00.157559 4778 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-6c6gl" Mar 12 15:28:00 crc kubenswrapper[4778]: I0312 15:28:00.157608 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 12 15:28:00 crc kubenswrapper[4778]: I0312 15:28:00.160485 4778 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 12 15:28:00 crc kubenswrapper[4778]: I0312 15:28:00.170370 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555488-4kkmd"] Mar 12 15:28:00 crc kubenswrapper[4778]: I0312 15:28:00.320212 4778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9xgt\" (UniqueName: \"kubernetes.io/projected/42dbfe18-8da1-4bec-930a-e3b7bcd92c51-kube-api-access-k9xgt\") pod \"auto-csr-approver-29555488-4kkmd\" (UID: \"42dbfe18-8da1-4bec-930a-e3b7bcd92c51\") " pod="openshift-infra/auto-csr-approver-29555488-4kkmd" Mar 12 15:28:00 crc kubenswrapper[4778]: I0312 15:28:00.422977 4778 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9xgt\" (UniqueName: \"kubernetes.io/projected/42dbfe18-8da1-4bec-930a-e3b7bcd92c51-kube-api-access-k9xgt\") pod \"auto-csr-approver-29555488-4kkmd\" (UID: \"42dbfe18-8da1-4bec-930a-e3b7bcd92c51\") " pod="openshift-infra/auto-csr-approver-29555488-4kkmd" Mar 12 15:28:00 crc kubenswrapper[4778]: I0312 15:28:00.448161 4778 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9xgt\" (UniqueName: \"kubernetes.io/projected/42dbfe18-8da1-4bec-930a-e3b7bcd92c51-kube-api-access-k9xgt\") pod \"auto-csr-approver-29555488-4kkmd\" (UID: \"42dbfe18-8da1-4bec-930a-e3b7bcd92c51\") " 
pod="openshift-infra/auto-csr-approver-29555488-4kkmd" Mar 12 15:28:00 crc kubenswrapper[4778]: I0312 15:28:00.478630 4778 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29555488-4kkmd" Mar 12 15:28:00 crc kubenswrapper[4778]: I0312 15:28:00.961746 4778 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29555488-4kkmd"] Mar 12 15:28:00 crc kubenswrapper[4778]: I0312 15:28:00.969915 4778 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 12 15:28:01 crc kubenswrapper[4778]: I0312 15:28:01.957429 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555488-4kkmd" event={"ID":"42dbfe18-8da1-4bec-930a-e3b7bcd92c51","Type":"ContainerStarted","Data":"8a4da0e264af878e82189df331aca7ee06cc91e92b71212b7b24a5e11eed7748"} Mar 12 15:28:02 crc kubenswrapper[4778]: I0312 15:28:02.969157 4778 generic.go:334] "Generic (PLEG): container finished" podID="42dbfe18-8da1-4bec-930a-e3b7bcd92c51" containerID="61f54057485efda2212c8748fb1d2f18f6df1a4a883303d5a0ce51c30f43d43c" exitCode=0 Mar 12 15:28:02 crc kubenswrapper[4778]: I0312 15:28:02.969242 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555488-4kkmd" event={"ID":"42dbfe18-8da1-4bec-930a-e3b7bcd92c51","Type":"ContainerDied","Data":"61f54057485efda2212c8748fb1d2f18f6df1a4a883303d5a0ce51c30f43d43c"} Mar 12 15:28:04 crc kubenswrapper[4778]: I0312 15:28:04.302434 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555488-4kkmd" Mar 12 15:28:04 crc kubenswrapper[4778]: I0312 15:28:04.405521 4778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9xgt\" (UniqueName: \"kubernetes.io/projected/42dbfe18-8da1-4bec-930a-e3b7bcd92c51-kube-api-access-k9xgt\") pod \"42dbfe18-8da1-4bec-930a-e3b7bcd92c51\" (UID: \"42dbfe18-8da1-4bec-930a-e3b7bcd92c51\") " Mar 12 15:28:04 crc kubenswrapper[4778]: I0312 15:28:04.426435 4778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42dbfe18-8da1-4bec-930a-e3b7bcd92c51-kube-api-access-k9xgt" (OuterVolumeSpecName: "kube-api-access-k9xgt") pod "42dbfe18-8da1-4bec-930a-e3b7bcd92c51" (UID: "42dbfe18-8da1-4bec-930a-e3b7bcd92c51"). InnerVolumeSpecName "kube-api-access-k9xgt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 12 15:28:04 crc kubenswrapper[4778]: I0312 15:28:04.511567 4778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9xgt\" (UniqueName: \"kubernetes.io/projected/42dbfe18-8da1-4bec-930a-e3b7bcd92c51-kube-api-access-k9xgt\") on node \"crc\" DevicePath \"\"" Mar 12 15:28:04 crc kubenswrapper[4778]: I0312 15:28:04.992682 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29555488-4kkmd" event={"ID":"42dbfe18-8da1-4bec-930a-e3b7bcd92c51","Type":"ContainerDied","Data":"8a4da0e264af878e82189df331aca7ee06cc91e92b71212b7b24a5e11eed7748"} Mar 12 15:28:04 crc kubenswrapper[4778]: I0312 15:28:04.993441 4778 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a4da0e264af878e82189df331aca7ee06cc91e92b71212b7b24a5e11eed7748" Mar 12 15:28:04 crc kubenswrapper[4778]: I0312 15:28:04.992969 4778 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29555488-4kkmd" Mar 12 15:28:05 crc kubenswrapper[4778]: I0312 15:28:05.404221 4778 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29555482-j5w56"] Mar 12 15:28:05 crc kubenswrapper[4778]: I0312 15:28:05.418898 4778 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29555482-j5w56"] Mar 12 15:28:06 crc kubenswrapper[4778]: I0312 15:28:06.276251 4778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="343bef0b-4527-4d4b-a357-aa48cf3cbe98" path="/var/lib/kubelet/pods/343bef0b-4527-4d4b-a357-aa48cf3cbe98/volumes" Mar 12 15:28:27 crc kubenswrapper[4778]: I0312 15:28:27.550468 4778 scope.go:117] "RemoveContainer" containerID="5dbc873ca44737bada83a756ec7434fa60be6ec95b9ed80179e9560e37fb59ca" Mar 12 15:28:28 crc kubenswrapper[4778]: I0312 15:28:28.558043 4778 patch_prober.go:28] interesting pod/machine-config-daemon-2qx88 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 12 15:28:28 crc kubenswrapper[4778]: I0312 15:28:28.558330 4778 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 12 15:28:28 crc kubenswrapper[4778]: I0312 15:28:28.558371 4778 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" Mar 12 15:28:28 crc kubenswrapper[4778]: I0312 15:28:28.559050 4778 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"2f264e55384fd404f3791d452c96ce0c6814e91ce77fcb670a6b1d8e59491c19"} pod="openshift-machine-config-operator/machine-config-daemon-2qx88" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 12 15:28:28 crc kubenswrapper[4778]: I0312 15:28:28.559095 4778 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" podUID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerName="machine-config-daemon" containerID="cri-o://2f264e55384fd404f3791d452c96ce0c6814e91ce77fcb670a6b1d8e59491c19" gracePeriod=600 Mar 12 15:28:29 crc kubenswrapper[4778]: I0312 15:28:29.257768 4778 generic.go:334] "Generic (PLEG): container finished" podID="24438fc6-dab0-4a9e-8b97-2532da76d9cd" containerID="2f264e55384fd404f3791d452c96ce0c6814e91ce77fcb670a6b1d8e59491c19" exitCode=0 Mar 12 15:28:29 crc kubenswrapper[4778]: I0312 15:28:29.257836 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" event={"ID":"24438fc6-dab0-4a9e-8b97-2532da76d9cd","Type":"ContainerDied","Data":"2f264e55384fd404f3791d452c96ce0c6814e91ce77fcb670a6b1d8e59491c19"} Mar 12 15:28:29 crc kubenswrapper[4778]: I0312 15:28:29.258739 4778 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2qx88" event={"ID":"24438fc6-dab0-4a9e-8b97-2532da76d9cd","Type":"ContainerStarted","Data":"887ae2a81a8eb1736aebba9abce0a10eca55df7862c12afecb5321377d470c54"} Mar 12 15:28:29 crc kubenswrapper[4778]: I0312 15:28:29.258791 4778 scope.go:117] "RemoveContainer" containerID="28185bb0bf8713237bbead875f67f2cbfd250e5d39c0866c90d3e073957181fc"